University of York Library

The Student Communications Audit

This post brings together two articles from the Lib-Innovation blog, where colleagues from the University of York and I write about what's happening in the Library.

At York there are audits every few years around student communication. They're conducted centrally by the University, rather than being library-specific. The most recent one was shared with the Library Marketing and Comms Group (on which I sit) by York's Internal Communications Manager, and she's kindly given me permission to share the findings here because I think they're absolutely fascinating. They challenge some conventional wisdom, and reveal a lot.

There was a lot about email, so part 1 of this post is devoted to that area; lower down, part 2 covers social media, lecture capture, the VLE and other types of comms.

It's important to note the information was gleaned through focus groups rather than surveys, in order to properly capture nuance - so it's not a giant sample size (under 100 people) and inevitably the views reflected will only be representative of students engaged enough to turn up for a focus group... But personally I find the findings more useful than generic articles in the Higher Ed press about students of today.

Throughout this article I'll be using phrases like "Students prefer to do X" - the obvious caveat is that I mean "Students at York prefer to do X" but I'm not going to write that every time...

How do students communicate? The main findings around email 

1) Email remains the primary and preferred channel of communication with the University

I like this one because it confirms something I've thought for a while - that email is NOT dead. It gets a bad press and it's definitely far less cool than social media, but it still has a function. It's not that students especially love email, it's that they want US - the University and its key services - to communicate key info this way. 

Your users are triaging your emails, checking first on their phones...


2) Email mainly gets checked on phones, and this happens very frequently

Students check email primarily on their phones, sometimes moving on to a PC / laptop later (see point 3 below). 

Students check their phones for emails first thing when they wake up, last thing at night, and several times in between - many students have 'new message' alerts set up to go to their lock-screens, and will check new emails as they come in even whilst doing other things such as attending lectures.

3) Students triage messages according to 5 criteria

Students make quick decisions on whether an email gets read there and then, binned, or deferred. They consider (in order of importance):

Relevance - title, sender, and the opening part of the message visible on their phone before they press to open the message; 

Familiarity - do they know and trust the person sending the email? Trusted senders include tutors, supervisors, departmental administrators, the Library, Careers Service, and Timetabling; 

Urgency - does it relate to something important that day or a pending deadline; 

Action - do they need to do something?  (Notice this is 4th on the list of importance...) Interestingly if they do need to do something they'll star the email and find a PC or laptop to log onto and action it;

Time - if it looks like it can be dealt with quickly they'll read it right away and then delete, file or just mark as read. 

4) The high volume of email they receive is okay, on one condition... it MUST be targeted

Students get a huge volume of emails but they don't mind this as long as the emails are targeted. They object to irrelevant emails and perhaps more so to emails that appear to be relevant but turn out not to be - one example given was an invitation to an employers' event for the wrong subject area or year / status. The sender of that email lost the trust of the students and future emails were deleted upon arrival, unread. 

Any sense of emails happening automatically or without proper thought as to their relevance was met with dissatisfaction. A particular type of email came at the same time each day, suggesting it was automated - this too became one to delete automatically. 

Newsletters and digest emails were read, but often only the first part (too much scrolling and the email was abandoned), and these are the first to go - deleted unread - on days with an overly high volume of email.


What can libraries change about the way we email students? 

The first thing is don't give up on email. Students expect communication from us to be via this medium, and it was strongly expressed that important information should come this way - key info can be shared via social media but must ALSO be shared via email because it's the one channel everyone checks. The reports of email's death have been greatly exaggerated. 

The second thing is, small details - like titles - really matter. The Library appears to be on the list of trusted senders, but in order to get read you need a decent subject line. (This didn't come up in the audit but I'd argue time of day is important too - if students get a truckload of emails between 9am and 10am, it may be better to join a shorter queue for their attention later in the day at 11am.) Also, because students primarily read email on their phone, you need a very strong opening line. Open your email client on your phone right now - how much can you read without opening a specific email? The way my phone is set up I get to see about 40 words. So your first few words need to go straight to the heart of the matter - no long intros.

This is obvious, right? We all check emails ourselves on mobiles, we know what it's like. But how many times do we craft emails specifically with the receiver on their phone in mind? I can't speak for anyone else but in my case the answer is: not nearly often enough. 
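If you want to sanity-check your own drafts, here's a throwaway Python sketch - entirely hypothetical, nothing our systems actually use - that trims an email down to roughly what a phone's preview shows. The 40-word figure is just what my own phone happens to display, so treat it as a placeholder:

```python
# A rough sketch, not a real tool: show roughly what a recipient sees
# on their phone before opening the email. The 40-word cut-off is a
# placeholder - adjust it for whichever client your users have.

def mobile_preview(subject: str, body: str, visible_words: int = 40) -> str:
    """Return approximately what a recipient sees before opening the email."""
    words = body.split()
    preview = " ".join(words[:visible_words])
    if len(words) > visible_words:
        preview += "..."
    return subject + "\n" + preview

# Hypothetical example email - does the key message survive the cut?
print(mobile_preview(
    "Library: reading lists for your new module",
    "Your reading list is ready to publish. Log in, check the citations "
    "are correct, and press publish before the start of term. "
    "Everything below this line is detail you can read later.",
))
```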

Thirdly, segment your audience and target them with relevant emails - never include a group in a mass email unless they are directly relevant and would benefit from the info. If an email isn't essential to anyone, does it even need to be sent at all? There are too many emails that are sent out not just to the relevant people but to a smattering of less relevant people too. Every time we do that we diminish our value as communicators - our currency - and get closer to joining the dreaded auto-delete list. 
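To labour that point with a sketch: the student records and field names below are invented for the example (your student-record system will look different), but the logic we want is nothing more complicated than this - filter first, send second.

```python
# An illustrative sketch of segmenting before sending a mass email.
# All records and field names here are made up for the example.

students = [
    {"email": "abc500@york.ac.uk", "department": "History of Art", "year": 2},
    {"email": "def501@york.ac.uk", "department": "Chemistry",      "year": 1},
    {"email": "ghi502@york.ac.uk", "department": "History of Art", "year": 1},
]

def recipients_for(department: str, year: int) -> list[str]:
    """Only the people the message is directly relevant to."""
    return [s["email"] for s in students
            if s["department"] == department and s["year"] == year]

# An employers' event for second-year History of Art students goes to
# second-year History of Art students - and nobody else.
print(recipients_for("History of Art", 2))
```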

And related to that, reduce automation, because it suggests we're not trying hard enough to avoid wasting their time. It's very hard to think carefully about whether or not to send an email if it's automated. I've always said we shouldn't send newsletters out at the same time each week or month - it should be because we have a critical mass of useful things to tell our audience, not because 'it's the time we send out the newsletter'. So anything automated should at least be reviewed to make sure it's still serving a worthwhile purpose and not alienating our users.

The surprising popularity of the VLE and the unsurprising popularity of Lecture Capture

I must admit I was a little surprised to read that Blackboard was popular with the students. They actually say it is difficult to navigate, but once they've mastered it, and provided their tutors use it well, they are generally very positive about the VLE.

In particular the students liked the discussion forums where the lecturer takes the time to get involved. The opportunity to ask questions and clarify parts of the lecture they didn't understand is very much appreciated, and they highlighted the public availability of all the questions and answers - as opposed to a private conversation between student and lecturer which is seen as less fair and transparent.

The other things noted as positives were the email notifications when new content is added, and the posting of lecture materials and supporting information.

The most popular part of the VLE, however, was Replay, the lecture capture system that allows students to re-watch lectures (or catch-up if they were ill - lecture capture has been shown time and time again not to negatively impact on attendance, so it's not used as a way to avoid having to actually go to lectures...). To quote the report:

"At degree level they find it difficult to take in the level of detail and complexity in one sitting and so the opportunity to re-visit the lecture to listen and learn again, to take better notes and to revise is something they really, really value"

It is particularly valuable in conjunction with the discussion forums mentioned above, and reduces the need to seek out the tutor for extra guidance.

Not all students in the focus groups were on courses which use lecture capture extensively - when those who weren't heard from those who were, they made it clear they'd very much like this facility on their modules too.

You can read more about Replay on the E-Learning Development Team blog.

Students, social media and the University

As mentioned above the students would expect anything essential to be communicated by email. Social media can be used as well, but shouldn't ever be used exclusively for key info such as timetable changes and so on.

They're happy for Facebook to be used for 'fun stuff' but not serious stuff - they use it more than any other medium between themselves, but there's a mixed reaction to the University joining in. WhatsApp, Messenger and Snapchat are used a lot for peer-to-peer communication, and they really don't associate this kind of platform with the University and its communication channels at the moment. Yik Yak is known primarily as a place for cruelty and harm - students don't tend to use it unless there is a particular scandal they want to hear the gossip about.

Interestingly to me, Twitter was reported as not being used abundantly, and is considered a tool for 'old people'. The main downside noted was about control - or the lack of it - over who sees what. At the Library we actually find Twitter to have quite high levels of engagement, the most of any social media platform we use, and the comms audit contradicting this chimes with other anecdotal evidence I've heard online of students being reluctant to admit they use Twitter but nevertheless using it anyway. It's also a lesson in trusting the stats, but interrogating them to make sure the engagement is actually coming from your users rather than your peers (and in our case, it is predominantly our actual users who interact with us and benefit from our Twitter account).

Webpages and Google

Students prefer to use Google to find information, even if it's info they know exists on the University website. I do this too - I Google my query plus the word 'York' even for stuff on the library website because it's quicker and more reliable. The students don't always trust the University's search function... They also don't expect news via the website - they feel they'll get any updates they need via email and social media. 

Perhaps the most interesting theme for me which came up in this section was one of relevance - students feel a lot of University comms are aimed at potential students, rather than at them, the current incumbents. There's an opportunity here for libraries: we are predominantly focused on existing users, and we can pull in other content for an internal audience, for example via Twitter, and share this with the students too.

5 stages to processing and acting on 100+ hours of ethnographic study

This post is reblogged from the Lib-Innovation blog, to tie up and follow on from the previous post on THIS blog about the Understanding Academics Project.

Understanding Academics, introduced in the last blog post, is far and away the biggest UX project we’ve attempted at York, and the processing and analysis of the data has been very different to our previous ethnographic studies. This is due to a number of factors: primarily the sheer size of the study (over 100 hours’ worth of interviews), the subject matter (in-depth and open-ended conversations with academics, with far-ranging implications for our library services), and the results themselves (we suspected they’d be interesting, but initial analysis showed they were SO insightful we needed to absolutely make the most of the opportunity).

Whereas for example the first UX project we ran conformed almost exactly to the expected 4:1 ratio of processing to study – in other words for every 1 hour of ethnography it took four hours to analyse and process – the total time spent on Understanding Academics will comfortably be in excess of 400 hours, and in fact has probably exceeded that already. 

UX is an umbrella term which has come to mean a multi-stage process – first the ethnography to understand the users, then the design to change how the library works based on what you learned. In order to ensure we don’t drown in the ethnographic data from this project and never get as far as turning it into ‘proper’ UX with recommendations and changes, Michelle Blake and Vanya Gallimore came up with a 5 stage method of delivering the project. 

Two particular aspects of this I think are really useful, and not things we’ve done in our two previous UX projects: one is assigning themes to specific teams or individuals to create recommendations from, and the other is producing and publicising recommendations as soon as possible rather than waiting until the end of the whole project. 

As you can imagine the 5 stage method is very detailed but here’s a summary:

Coloured pens used in cognitive mapping (in this case with the interviewer's reminder about the order in which to use them)


      1)  Conduct and write up the ethnography. Academic Liaison Librarians (ALLs) spoke to around 4 academics from each of ‘their’ Departments, usually asking the subject to draw a cognitive map relating to their working practice, and then conducting a semi-structured interview based on the results.

The ALLs then wrote up their notes from the interviews, if necessary referring to the audio (all interviews were recorded) to transcribe sections where the notes written during the process didn’t adequately capture what was said. The interviews happened over a 2 month period, with a further month to complete the writing up. 

      2)   Initial coding and analysis. A member of the Teaching and Learning Team (also based in the library) who has a PhD and experience of large research projects then conducted initial analysis of the entire body of 100+ interviews, using NVivo software. The idea here was to look for trends and themes within the interviews. The theming was done based on the data, rather than pre-existing categories – a template was refined based on an initial body of analysis. In the end, 23 over-arching themes emerged – for example Teaching, Digital Tools and Social Media Use, Collaborations, Research, Working Spaces. This process took around 2 months.

      3)   Assigning of themes for further analysis and recommendations. Vanya then took all of the themes and assigned them (and their related data) to members of the Relationship Management Team – this consists of the Academic Liaison and Teaching and Learning teams already mentioned, and the Research Support team. This is the stage we are at now with the project – each of us in the team has been assigned one or more themes and will be doing further analysis at various times over the next 8 to 10 months, based on our other commitments. A Gantt chart has been produced of who is analysing what, and when. The preparation and assigning of themes took around 2 weeks.

      4)   Outcomes and recommendations. There are three primary aims here. To come up with a set of practical recommendations for each of the themes of the project, which are then taken forward and implemented across the library. To come up with an evidence-based synthesis of what it means to be an academic at the University of York: a summary of how academics go about research and teaching, and what their key motivations, frustrations and aspirations are. (From this we’ll also aim to create personas to help articulate life for academics at York.) And finally to provide Information Services staff with access to data and comments on several areas in order to help inform their work – for example members of the Research Support team will have access to a wealth of views on how academics think about Open Access or the repository.

These aims will be achieved with a combination of devolved analysis assigned to different groups, and top-down analysis of everything by one individual. Due to other projects happening within the teams involved, this stage will take up to 7 months, although results will emerge sooner than that, which leads us neatly to...

      5)  Distribution and Dissemination. Although this is last on the list, we’re aiming to do it as swiftly as possible and where appropriate we’ll publicise results before the end of the project, so stages 4 and 5 will run simultaneously at times. The total duration from the first interview to the final report will be around 18 months, but we don’t want to wait that long to start making changes and to start telling people what we’ve learned. So, once an evidence-based recommendation has been fully realised, we’ll attempt to design the change and make it happen, and tell people what we’re doing - and in fact the hope is to have a lot of this work completed by Christmas (half a year or so before the Summer 2017 intended end date for the final report). 

The full methods of dissemination are yet to be decided, because it’s such a massive project and has (at a minimum) three interested audiences: York’s academic community, the rest of Information Services here, and the UX Community in Libraries more widely. We know there will be a final report of some sort, but are trying to ensure people aren’t left wading through a giant tome in order to learn about what we’ve changed. We do know that we want to use face-to-face briefings where possible (for example to the central University Learning and Teaching Forum), and that we’ll feed back to the 100 or so academics involved in the study before we feed back to the community more widely.
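As an aside on stage 2: if you've never come across thematic coding, here's a deliberately toy Python sketch of the output shape. Real coding in NVivo is a human judgement process, and in our project the themes emerged from the data rather than from keyword lists like these - this is only to illustrate the idea of tagging interview text with themes. The transcripts, themes and keywords below are all invented:

```python
# A toy illustration of coding interview text to themes. Real
# qualitative coding is human judgement; this keyword tagger only
# sketches what the coded output looks like.

transcripts = {
    "interview_01": "I record my lectures and share slides on the VLE...",
    "interview_02": "Most of my collaborations start at conferences...",
}

# Hypothetical keyword lists per theme - in the real project the
# themes came out of the data, not a pre-defined list like this.
themes = {
    "Teaching": ["lecture", "module", "slides", "VLE"],
    "Collaborations": ["collaboration", "co-author", "conference"],
}

coded = {
    name: [theme for theme, keywords in themes.items()
           if any(k.lower() in text.lower() for k in keywords)]
    for name, text in transcripts.items()
}
print(coded)  # e.g. {'interview_01': ['Teaching'], ...}
```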

Above all, Understanding Academics has been one of the most exciting and insightful projects any of us have ever attempted in a library context. 

Embedding Ethnography Part 5: Understanding Academics with UX

This is the 5th post in a series about using UX and ethnography as regular tools at the University of York. We're treating these techniques as 'business as usual' - in other words part of a selection of tools we'd call upon regularly in appropriate situations, rather than a stand-alone or siloed special project. If you're interested you can read Part 1: Long term UX, and two guest posts from our UX Interns in Part 2 and Part 4, plus my take on planning and delivering a UX-led project in Part 3.

Having focused our first two uses of UX on students - specifically postgraduates - the third time we've used it in earnest has been with the academic community.

One of the consent forms from the project


The Understanding Academics Project

The project to better understand the lives and needs of our academics was an existing one in the Library: we knew we wanted to tweak our services to suit them better.  After finding the UX techniques so useful we decided to apply them here and make them the driving force behind the project. All other useful sources of info have been considered too - for example feedback to Academic Liaison Librarians, comments from the LibQual+ survey etc - but the body of the project would involve using ethnography to increase our understanding.

We've used five main ethnographic techniques at York (six if you count the feedback wall we now have near the exit of the library) but decided to limit ourselves to two of them for this project: cognitive maps, and semi-structured interviews. We aimed to meet 4 academics per Department, and ask them to draw a cognitive map of either their research process or the process for designing a new module - so unlike our previous UX projects which involved maps of physical spaces, this was literally 'mapping' the way they worked. Some interpreted this very visually, others in a more straightforward textual way. In all cases though, it proved an absolutely fascinating insight into how things really work in academia, and provided a brilliant jumping off point for the interviews themselves.

These interviews were semi-structured rather than structured or unstructured; in other words they were based largely on the map and a natural flow of conversation rather than having any pre-set questions, but there were areas which we'd bring up at the end if they didn't come up in the conversation without prompting. So for example most people drawing the teaching-related map mentioned our reading list system, either in the map or in conversation - if after 50 minutes of chat it hadn't come up at all, we'd ask as open a question as possible to prompt some insight into their thoughts on it.

Vanya Gallimore has written a great overview of the project on the Lib-Innovation Blog, which we set up in the library to document our UX work among other things. In it she writes about the background to the project, the methods used, staffing it (in other words, who was doing the interviews) and then briefly about processing the data. It's the most popular post on our new blog and I'd recommend giving it a read.

For now I want to focus on something that post doesn't cover so much: actually doing the ethnography.

Ethnography fieldwork in practice

What is the verb for ethnography? Is it just 'doing' ethnography, or performing ethnography? Ethnographising? Whatever it is, I hadn't done it in earnest until this project. In the two previous projects I'd been involved in setting things up, helping with the direction, interpreting the data and a few other things, but we'd had interns out in the field, talking to people and asking them to draw maps etc. For Understanding Academics, it was agreed that the Academic Liaison Librarians (of which I am one) should be doing the fieldwork, for various reasons described by Vanya in her post linked above - ultimately it came down to two things: the need for a proper familiarity with the HE context and our systems in the Library in order to understand everything the academics were saying; and the sheer opportunity of talking in amazing depth with people in our departments.

One of the most common questions about the project is: how did you get the academics to take part? The answer is, we asked them all individually, by email. No mass emails to the whole department, but no incentives either (we've offered postgraduates cake vouchers and the like in previous UX projects) - just an email to a person selected with care, often in conjunction with the Library Rep and / or Head of Department, explaining what we were doing, why we were doing it, and our reasons for approaching them specifically. We asked around 110 academics this way, and 97 said yes: the other 13 either didn't want to do it or couldn't make time within the duration of the project.

There was a roughly even split of research focused and teaching focused conversations (although in either case there were no limits to the conversation, so some interviews ended up mentioning both). I look after three Departments from the Library: I interviewed three academics from one, and four from each of the other two, plus I did two of the three 'warm-up' interviews.

Prep

The warm-up interviews were just like the regular interviews, and their data is included in the project, but they were with partners of library staff who happened to be academics... The idea was to refine our processes and see how things worked in practice, with an audience who wouldn't mind being subjected to our first attempts at ethnographic fieldwork. This was really useful, and we changed things as a result - for example the instructions written at the top of the paper used for the cognitive maps were made clearer, and we extended the time set aside for each interview after the try-outs used up their 60-minute slots before the conversations had reached a natural conclusion.

For the remainder of my interviews the prep consisted of reading up on each academic on their staff profile page, printing out the various bits of paper required, and charging devices. 

Accoutrements

There were a lot of things we had to bring with us to each interview.

  • a device to audio-record the whole thing on (my phone);
  • a device to write on (iPad with keyboard, or laptop);
  • the paper with the map explanation on;
  • the paper listing the areas to cover if they didn't arise naturally;
  • two copies of the consent form - one for us to keep and one for the subject to keep;
  • a set of four pens (we ask users to draw cognitive maps over a period of 6 minutes, giving them a different colour of pen every 2 minutes - there's a sketch of this below).

Of the above, the cognitive map, conversation topics and consent forms were all either teaching specific or research specific - largely the same but with subtly different wording in places. 
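On the pens: you could even automate the pen-change prompts if you wanted to. Here's a trivial sketch - the colour order is hypothetical (in practice we just used a written reminder), and the colours and interval are parameters rather than gospel:

```python
# A trivial timer sketch for the cognitive mapping exercise. The
# colour order is hypothetical; the 2-minute interval is the one
# mentioned in the list above.
import time

PENS = ["blue", "green", "red", "black"]  # hypothetical order
INTERVAL_SECONDS = 120                    # new pen every 2 minutes

def run_mapping_session(pens=PENS, interval=INTERVAL_SECONDS):
    for pen in pens:
        print(f"Please switch to the {pen} pen.")
        time.sleep(interval)
    print("That's the allotted time - but don't hurry anyone still drawing.")

# run_mapping_session()  # uncomment during a real session
```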

The Map

Each session began with an explanation of what we were doing. The emails sent to invite each academic had covered some of that, but it's always good to talk it over. We discussed what the library wanted to do (change things for the better) but that we didn't have specific changes in mind - we wanted to be led by the data. Then we talked about the format of the interview, the fact it would be recorded, and went through the consent forms. I particularly stressed the fact they could withdraw at any time - in other words, an academic could decide even now, several months later, that they wished they hadn't been so candid, and we'd take all their data out of the study.

Finally we explained the map, the use of the different colours of pen, and the fact it didn't have to be remotely artistic. None of my interviewees seemed particularly put off or fazed by drawing the map. Then there was a period of silence as they drew the map (not everyone needed all six minutes; if people took longer than six minutes I didn't hurry them), after which I turned on the recorder and said 'Now if you can talk me through what you've drawn...'

The Interview

Once the subject had described their map - with me trying not to interrupt unless I didn't understand something, but jotting down potential questions as they talked - the interview commenced. I can't recommend highly enough using either a cognitive map or another ethnographic technique such as a love/break-up letter or touchstone tour as a jumping off point for an interview. It means you instantly have context, you're in their world, and there's no shortage of meaningful ideas to talk about. 

I have to say that during the main body of the interview, I didn't actively try and think about what the project was trying to achieve, I just asked questions I was interested in. Sometimes this meant spending a long time discussing things which weren't library related at all - but that's part of what this project is all about, to understand the academic world more holistically rather than in a library-centric way. 

Some interviews came to a natural end after around 40 minutes; others I felt like we could have gone much longer but I didn't want to take up more of their time than I said I would.

Writing up

One of the changes we made after the initial interviews was to just listen, and not try to write notes, whilst the participants described their map. We didn't have time to transcribe each interview (that would mean we'd have spent more than 500 hours on the project before a single piece of analysis) but we did feel the map description was key, so we listened without writing during that bit and transcribed it fully later. We then wrote notes as we conducted the interview, using the recording to go back and fill any holes or make clear anything from our notes that didn't make sense. Sometimes during a particularly long and involved answer I'd just write 'go back and listen to this' in my notes and stop writing until the next question.

We blocked out time after each interview to write it up immediately while it was fresh in our minds - so in my case this was mainly going through and correcting all the mistakes from my high-speed typing, then referring to the recording where necessary, then noting down any immediate conclusions I could draw outside of the project framework - things I could learn from and change the way I worked because of. I didn't write these down as part of the notes from the interview because I didn't want to bias the analysis in any way - I just wrote ideas down elsewhere for my own use. 

Conclusions

I absolutely loved doing the fieldwork for this project. It was fantastic. I learned so much, I deepened existing relationships, and I got to know staff really well who I'd barely met before. Every time I came away from an interview I was absolutely buzzing. 

I don't think everyone enjoyed it as much as I did. Some people felt like they didn't know enough about a subject's research project to be able to ask intelligent questions about it - personally I just asked unintelligent questions until I got it - and there was the odd instance of the conversation being stilted or awkward. For me and a lot of my colleagues, though, it was eye-opening and actually really exciting. 

The question of what we do next - how we process all the data, and then act on what we learn - is covered in the following post.

Library ratio of online versus on-site visits and visitors

Interesting tweet from #uxlibucc, above. Can you answer that question? I'd wager most of us can't, but we should be able to. Someone in our orgs should be able to, right? We need to make strategic decisions about where to prioritise resources, based on evidence wherever we can.

As it happens, I can check this figure at my own institution, because it's something I've been thinking about a lot. I realised that a) I had no idea which were the most popular online parts of the library presence and b) I had no idea how this compared with actual footfall. So I got access to Google Analytics, and I repeatedly ask my colleague Steph for turnstile statistics...

So below is a chart to compare physical and online use of the Library, for Monday of this week (all 24 hours of it). A couple of caveats:

1) This is not presented as a pie chart because we're not dealing in percentages. Many of the users of the building will also be using the catalogue at the same time.

2) I've taken the actual numbers off it in case I shouldn't be giving that info out publicly, but to give you a rough idea the total number of visits to the building is well over 10,000.

3) This is just one day. I have not gone into the data to try and find an average day or a representative day, I just chose the first day of this week. (Although we compared it with the previous Monday and none of the data was atypical.)

Chart comparing visits to the library building, subject guide, website and catalogue on one day. The building gets the most visits, the catalogue the most visitors.


So while building visits outstrip online visits (because each student is coming in almost exactly 3 times on average), it appears more people use the catalogue overall, though that figure could be skewed by people using the catalogue multiple times on different devices. Clearly the catalogue is MUCH more popular than the website, which makes me think: should we work even harder than we already do on the system, as it's the way more people interface with us than any other? Should we be trying to get more info onto it, as people go there so much more than they go to our other online places, or should we try and strip it down so the usability is as good as possible?

If you combine the online stats into one figure, the graph looks like this:

Chart showing there are more overall online visitors compared to building visitors, but more uses of the building than the combined online spaces.


Keeping in mind that I'm not including any social media in the online figure - so our YouTube, Facebook, Twitter, Slideshare, Blog(s) and Instagram views aren't represented - you can see the online sphere is a huge factor in people's daily use of the library, as indeed we'd expect.
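If you want to line your own figures up this way, here's roughly how the chart could be assembled in Python with matplotlib - every number below is a placeholder rather than our real data, which I've deliberately left out:

```python
# A sketch of a visits-versus-visitors comparison chart.
# All numbers are placeholders, not the Library's actual figures.
import matplotlib.pyplot as plt

channels = ["Building", "Catalogue", "Website", "Subject guides"]
visits   = [12000, 9000, 3000, 1500]  # placeholder totals for one day
visitors = [4000, 5000, 2000, 1000]   # placeholder unique visitors

x = range(len(channels))
width = 0.4
plt.bar([i - width / 2 for i in x], visits, width, label="Visits")
plt.bar([i + width / 2 for i in x], visitors, width, label="Visitors")
plt.xticks(list(x), channels)
plt.ylabel("Count (one 24-hour period)")
plt.legend()
plt.show()
```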

To go back to the question in the tweet at the start of the post, does our staff allocation reflect this ratio? No it doesn't. The staffing at York is so labyrinthine I can't work the ratio out, but suffice to say we devote far more staff time to face-to-face interaction than we do to the website and catalogue.

This is as it should be. I'm not advocating for the staff ratio to exactly reflect the virtual/physical ratio, because the nature of the use is very different. But I do wonder if we were starting from scratch but knew the ratio in the graphs above, would we do things a little differently?


UPDATE: Since I posted this yesterday I've had some interesting discussion on Twitter, and a couple of people mentioned that if we included the stats for the resources themselves (JSTOR for example) then the online side of things would be even higher.

This is a good point. I didn't include them because I didn't think of doing so (rather than it being a position I'd deliberately taken), but on reflection I think it skews the picture too much to put them in - because a lot of the catalogue views, and probably the majority of the SubjectGuide views, will be people on their way to the licensed e-resources. I'd argue that in the same way one visit to the building results in lots of potential uses of the library, one use of the catalogue may result in several e-resources being consulted.

That said, there will always be lots of people on campus going direct to the resources without following our links, so that would increase the online views somewhat. Ultimately the reason I find this interesting is comparing, for want of a better word, the different interfaces of the library and being able to see explicitly which area engages the most users. So although our databases and journals are hugely important, they aren't 'our' interfaces in quite the same way as the catalogue, website, libguides and building.

Using Kahoot in Library Induction and Teaching Sessions

A colleague at York, Tony Wilson, brought Kahoot! to our attention recently for possible use in teaching and orientation sessions: it's a really nice quiz tool. There is nothing new about using quizzes in library sessions and there are about a million and one tools out there for making them, but Kahoot differs in its execution of the idea. It's so much slicker and just more FUN than anything like this I've looked at before. And interestingly, it already seems to have currency with students.

One of the most useful aspects of a quiz is that people are asked to actively engage with the information rather than passively receive it. I'm absolutely convinced the students are remembering more this way than if we just presented them with the complete information.

4 reasons Kahoot works so well

It's really, really nice, for these reasons in reverse order of importance:

The music. It has cool, retro, 8-bit music in the background.
The aesthetics. It has bright colours and looks generally nice. Here's what a question looks like:

An example of a question as it looks on the screen during the quiz


The leaderboard. Oh yes. It has a LEADERBOARD. This is the key thing, really: people put in their nicknames and after each question the top 5 is displayed (based, obviously, on how accurate their answers are but also how quick). Even completely non-competitive people get excited when they see their name in the top 5... I tweeted about using Kahoot and Diana Caulfied chimed in about the tension the leaderboard brings.

The mobile view from the student perspective


It's VERY easy to use. These things have to be SO simple to justify using them. In the case of Kahoot, you load up the quiz, and the students go to kahoot.it and put in the PIN the quiz gives you on the screen. It works perfectly on phones, tablets, or PCs. There's only one thing on the screen - the box to put the PIN in; and only one thing to do - put the PIN in. This simplicity and intuitive interface means everyone can get on board right away. There's no hunting around.

You can also use it on an epic scale - one colleague just came back from using it with 95 undergraduates today, who responded really well; another used it with over 100 who were absolutely buzzing after each question. You can actually have up to 4,000 players at once.

Here's what the students are presented with when they go to the (very simple) URL:

An example from York

So here's the quiz I made for Induction - click here if you want to have a go. This particular link is (I think) in ghost mode, where you're competing with a previous group of players. So if you do the quiz now, you'll be up against my History of Art postgraduates, and will only show up in the Top 5 leaderboard if you get better scores than at least 25 of them! But normally in a session I'd use a completely blank slate.

Possible uses

In this example the questions I chose are basically just a way to show off our resources and services: it's all stuff I'd be telling them as part of a regular induction talk anyway:

My Kahoot quiz questions


The students I've used it with so far have really enjoyed it (as far as I can tell!). It's much more interesting than listing things, and, intriguingly, I think that asking people to guess between possible options actually makes the answer seem more impressive than just telling them the fact outright. So for example in the Google Apps question above, there were gasps when I revealed they get unlimited storage and the majority had chosen one of the lower options (the results screen shows how many people have chosen each option) - I'm fairly sure if I'd just told them they get unlimited storage, not one person would have gasped.

But there are plenty of other possibilities for Kahoot that are a bit more pedagogical in nature: using it to measure how much of the session has sunk in at the end; using it at the start and end to measure a difference in knowledge; and using it to establish the level of student understanding.
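On that middle idea - measuring the difference in knowledge - here's a rough sketch of the sums. The nicknames and scores below are invented; in practice you'd read the scores from Kahoot's downloadable results rather than typing them in:

```python
# A sketch of the pre/post idea: run the same quiz at the start and
# end of a session and compare scores. All names and numbers here
# are invented for the example.

pre  = {"norse_ninja": 2, "bookworm99": 4, "quiet_owl": 1}  # questions right, start
post = {"norse_ninja": 7, "bookworm99": 8, "quiet_owl": 6}  # questions right, end

gains = {name: post[name] - pre[name] for name in pre if name in post}
average_gain = sum(gains.values()) / len(gains)

print(f"Average improvement: {average_gain:.1f} questions")
for name, gain in sorted(gains.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: +{gain}")
```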

There's also a Discussion mode rather than a Quiz mode. You pose a question and students type their answers in (rather than selecting from multiple choice) and their words come up on the screen. Anything rude or offensive can be deleted with one click. It would be a great way to find out what students felt unsure of or wanted to learn about, or to discuss the merits of a particular approach.

In summary

So I'd recommend taking a look at Kahoot and seeing if you can incorporate it into your teaching. As well as using it throughout Induction I'm planning on using different kinds of quizzes as part of infolit sessions and am excited to see how that works. You can easily incorporate your own library's images and videos and the tool is free, very easy to use, nicely made, and FUN.