5 stages to processing and acting on 100+ hours of ethnographic study

This post is reblogged from the Lib-Innovation blog, to tie up and follow on from the previous post on THIS blog about the Understanding Academics Project.

Understanding Academics, introduced in the last blog post, is far and away the biggest UX project we’ve attempted at York, and the processing and analysis of the data has been very different to our previous ethnographic studies. This is due to a number of factors: primarily the sheer size of the study (over 100 hours’ worth of interviews), the subject matter (in-depth and open-ended conversations with academics, with far-ranging implications for our library services), and the results themselves (we suspected they’d be interesting, but initial analysis showed they were SO insightful we needed to make the absolute most of the opportunity).

Whereas, for example, the first UX project we ran conformed almost exactly to the expected 4:1 ratio of processing to study – in other words, for every 1 hour of ethnography it took four hours to analyse and process – the total time spent on Understanding Academics will comfortably be in excess of 400 hours, and in fact has probably exceeded that already.
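As a back-of-the-envelope check, the 4:1 ratio makes the scale easy to see. The only figures below taken from the project are the 100+ interview hours and the 4:1 ratio; the rest is simple arithmetic:

```python
# Rough estimate of processing load from the 4:1 ratio described above.
ethnography_hours = 100   # 100+ hours of recorded interviews
processing_ratio = 4      # ~4 hours of processing per hour of study

processing_hours = ethnography_hours * processing_ratio
total_hours = ethnography_hours + processing_hours

print(f"Processing alone: {processing_hours}+ hours")    # 400+
print(f"Study plus processing: {total_hours}+ hours")    # 500+
```

Which is why transcribing every interview in full was never on the table: the study would have passed 500 hours before any analysis began.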

UX is an umbrella term which has come to mean a multi-stage process – first the ethnography to understand the users, then the design to change how the library works based on what you learned. In order to ensure we don’t drown in the ethnographic data from this project and never get as far as turning it into ‘proper’ UX with recommendations and changes, Michelle Blake and Vanya Gallimore came up with a 5 stage method of delivering the project. 

Two particular aspects of this I think are really useful, and not things we’ve done in our two previous UX projects: one is assigning themes to specific teams or individuals to create recommendations from, and the other is producing and publicising recommendations as soon as possible rather than waiting until the end of the whole project. 

As you can imagine the 5 stage method is very detailed but here’s a summary:

Coloured pens used in cognitive mapping (in this case with the interviewer's reminder about the order in which to use them)

      1)  Conduct and write up the ethnography. Academic Liaison Librarians (ALLs) spoke to around 4 academics from each of ‘their’ Departments, usually asking the subject to draw a cognitive map relating to their working practice, and then conducting a semi-structured interview based on the results.

The ALLs then wrote up their notes from the interviews, if necessary referring to the audio (all interviews were recorded) to transcribe sections where the notes written during the process didn’t adequately capture what was said. The interviews happened over a 2 month period, with a further month to complete the writing up. 

      2)   Initial coding and analysis. A member of the Teaching and Learning Team (also based in the library), who has a PhD and experience of large research projects, then conducted initial analysis of the entire body of 100+ interviews using NVivo software. The idea here was to look for trends and themes within the interviews. The theming was done based on the data rather than pre-existing categories – a template was refined based on an initial body of analysis. In the end, 23 over-arching themes emerged – for example Teaching, Digital Tools and Social Media Use, Collaborations, Research, and Working Spaces. This process took around 2 months.

      3)   Assigning of themes for further analysis and recommendations. Vanya then took all of the themes and assigned them (and their related data) to members of the Relationship Management Team – this consists of the Academic Liaison and Teaching and Learning teams already mentioned, and the Research Support team. This is the stage we are at now with the project – each of us in the team has been assigned one or more themes and will be doing further analysis at various times over the next 8 to 10 months based on our other commitments. A Gantt chart has been produced of who is analysing what, and when. The preparation and assigning of themes took around 2 weeks.

      4)   Outcomes and recommendations. There are three primary aims here. To come up with a set of practical recommendations for each of the themes of the project, which are then taken forward and implemented across the library. To come up with an evidence-based synthesis of what it means to be an academic at the University of York: a summary of how academics go about research and teaching, and what their key motivations, frustrations and aspirations are. (From this we’ll also aim to create personas to help articulate life for academics at York.) And finally to provide Information Services staff with access to data and comments on several areas in order to help inform their work – for example members of the Research Support team will have access to a wealth of views on how academics think about Open Access or the repository.

These aims will be achieved with a combination of devolved analysis assigned to different groups, and top-down analysis of everything by one individual. Due to other projects happening within the teams involved, this stage will take up to 7 months, although results will emerge sooner than that, which leads us neatly to...

      5)  Distribution and Dissemination. Although this is last on the list, we’re aiming to do it as swiftly as possible and where appropriate we’ll publicise results before the end of the project, so stages 4 and 5 will run simultaneously at times. The total duration from the first interview to the final report will be around 18 months, but we don’t want to wait that long to start making changes and to start telling people what we’ve learned. So, once an evidence-based recommendation has been fully realised, we’ll attempt to design the change and make it happen, and tell people what we’re doing - and in fact the hope is to have a lot of this work completed by Christmas (half a year or so before the Summer 2017 intended end date for the final report). 

The full methods of dissemination are yet to be decided, because it’s such a massive project and has (at a minimum) three interested audiences: York’s academic community, the rest of Information Services here, and the UX community in libraries more widely. We know there will be a final report of some sort, but we’re trying to ensure people aren’t left wading through a giant tome in order to learn about what we’ve changed. We do know that we want to use face-to-face briefings where possible (for example to the central University Learning and Teaching Forum), and that we’ll feed back to the 100 or so academics involved in the study before we feed back to the community more widely.

Above all, Understanding Academics has been one of the most exciting and insightful projects any of us have ever attempted in a library context. 

Embedding Ethnography Part 5: Understanding Academics with UX

This is the 5th post in a series about using UX and ethnography as regular tools at the University of York. We're treating these techniques as 'business as usual' - in other words part of a selection of tools we'd call upon regularly in appropriate situations, rather than a stand-alone or siloed special project. If you're interested you can read Part 1: Long term UX, and two guest posts from our UX Interns in Part 2 and Part 4, plus my take on planning and delivering a UX-led project in Part 3.

Having focused our first two uses of UX on students - specifically postgraduates - the third time we've used it in earnest has been with the academic community.

One of the consent forms from the project

The Understanding Academics Project

The project to better understand the lives and needs of our academics was an existing one in the Library: we knew we wanted to tweak our services to suit them better. After finding the UX techniques so useful, we decided to apply them here and make them the driving force behind the project. All other useful sources of info have been considered too - for example feedback to Academic Liaison Librarians, comments from the LibQUAL+ survey, etc. - but the body of the project would involve using ethnography to increase our understanding.

We've used five main ethnographic techniques at York (six if you count the feedback wall we now have near the exit of the library) but decided to limit ourselves to two of them for this project: cognitive maps, and semi-structured interviews. We aimed to meet 4 academics per Department, and ask them to draw a cognitive map of either their research process or the process for designing a new module - so unlike our previous UX projects, which involved maps of physical spaces, this was literally 'mapping' the way they worked. Some interpreted this very visually, others in a more straightforward textual way. In all cases though, it proved an absolutely fascinating insight into how things really work in academia, and provided a brilliant jumping off point for the interviews themselves.

These interviews were semi-structured rather than structured or unstructured; in other words, they were based largely on the map and a natural flow of conversation rather than on any pre-set questions, but there were areas we'd bring up at the end if they hadn't come up in the conversation without prompting. So for example, most people drawing the teaching-related map mentioned our reading list system, either in the map or in conversation - if after 50 minutes of chat it hadn't come up at all, we'd ask as open a question as possible to prompt some insight into their thoughts on it.

Vanya Gallimore has written a great overview of the project on the Lib-Innovation Blog, which we set up in the library to document our UX work among other things. In it she writes about the background to the project, the methods used, staffing it (in other words, who was doing the interviews) and then briefly about processing the data. It's the most popular post on our new blog and I'd recommend giving it a read.

For now I want to focus on something that post doesn't cover so much: actually doing the ethnography.

Ethnography fieldwork in practice

What is the verb for ethnography? Is it just 'doing' ethnography, or performing ethnography? Ethnographising? Whatever it is, I hadn't done it in earnest until this project. In the two previous projects I'd been involved in setting things up, helping with the direction, interpreting the data and a few other things, but we'd had interns out in the field, talking to people and asking them to draw maps etc. For Understanding Academics, it was agreed that the Academic Liaison Librarians (of which I am one) should be doing the fieldwork, for various reasons described by Vanya in her post linked above - ultimately it came down to two things: the need for a proper familiarity with the HE context and our systems in the Library in order to understand everything the academics were saying; and the sheer opportunity of talking in amazing depth with people in our departments.

One of the most common questions about the project is: how did you get the academics to take part? The answer is, we asked them all individually, by email. No mass emails to the whole department, but no incentives either (we've offered postgraduates cake vouchers and the like in previous UX projects) - just an email to a person selected with care, often in conjunction with the Library Rep and/or Head of Department, explaining what we were doing, why we were doing it, and our reasons for approaching them specifically. We asked around 110 academics this way, and 97 said yes: the other 13 either didn't want to do it or couldn't make time within the duration of the project.

There was a roughly even split of research focused and teaching focused conversations (although in either case there were no limits to the conversation, so some interviews ended up mentioning both). I look after three Departments from the Library: I interviewed three academics from one, and four from each of the other two, plus I did two of the three 'warm-up' interviews.

Prep

The warm-up interviews were just like the regular interviews, and their data was included in the project, but they were with partners of library staff who happened to be academics... The idea was to refine our processes and see how things worked in practice, with an audience who wouldn't mind being subjected to our first attempts at ethnographic fieldwork. This was really useful, and we changed things as a result - for example the message written at the top of the piece of paper for drawing cognitive maps on was made clearer, and we extended the time we'd set aside for each interview after the try-outs used up their 60-minute slots before the conversations had reached a natural conclusion.

For the remainder of my interviews the prep consisted of reading up on each academic on their staff profile page, printing out the various bits of paper required, and charging devices. 

Accoutrements

There were a lot of things we had to bring with us to each interview.

  • a device to audio-record the whole thing on (my phone);
  • a device to write on (ipad with keyboard, or laptop); 
  • the paper with the map explanation on; 
  • the paper with the areas to cover if they didn't arise naturally listed; 
  • two copies of the consent form - one for us to keep and one for the subject to keep; 
  • a set of four pens (we ask users to draw cognitive maps over a period of 6 minutes, giving them a different colour of pen every 2 minutes).

Of the above, the cognitive map, conversation topics and consent forms were all either teaching specific or research specific - largely the same but with subtly different wording in places. 

The Map

Each session began with an explanation of what we were doing here. The emails sent to invite each academic had covered some of that, but it's always good to talk it over. We discussed what the library wanted to do (change things for the better) but that we didn't have specific things in mind - we wanted to be led by the data. Then we talked about the format of the interview, the fact it would be recorded, and went through the consent forms. I particularly stressed the fact they could withdraw at any time - in other words, an academic could decide now, several months later, that they wished they hadn't been so candid, and we'd take all their data out of the study.

Finally we explained the map, the use of the different colours of pen, and the fact it didn't have to be remotely artistic. None of my interviewees seemed particularly put off or fazed by drawing the map. Then there was a period of silence as they drew the map (not everyone needed all six minutes; if people took longer than six minutes I didn't hurry them), after which I turned on the recorder and said 'Now if you can talk me through what you've drawn...'

The Interview

Once the subject had described their map - with me trying not to interrupt unless I didn't understand something, but jotting down potential questions as they talked - the interview commenced. I can't recommend highly enough using either a cognitive map or another ethnographic technique such as a love/break-up letter or touchstone tour as a jumping off point for an interview. It means you instantly have context, you're in their world, and there's no shortage of meaningful ideas to talk about. 

I have to say that during the main body of the interview, I didn't actively try and think about what the project was trying to achieve, I just asked questions I was interested in. Sometimes this meant spending a long time discussing things which weren't library related at all - but that's part of what this project is all about, to understand the academic world more holistically rather than in a library-centric way. 

Some interviews came to a natural end after around 40 minutes; others I felt like we could have gone much longer but I didn't want to take up more of their time than I said I would.

Writing up

One of the changes we made after the initial interviews was to just listen, and not try to write notes, whilst the participants described their map. We didn't have time to transcribe each interview (that would mean we'd have spent more than 500 hours on the project before a single piece of analysis) but we did feel the map description was key, so we listened without writing during that bit and transcribed it fully later. We then wrote notes as we conducted the interview, using the recording to go back and fill any holes or make clear anything from our notes that didn't make sense. Sometimes during a particularly long and involved answer I'd just write 'go back and listen to this' in my notes and stop writing until the next question.

We blocked out time after each interview to write it up immediately while it was fresh in our minds - so in my case this was mainly going through and correcting all the mistakes from my high-speed typing, then referring to the recording where necessary, then noting down any immediate conclusions I could draw outside of the project framework - things I could learn from and change the way I worked because of. I didn't write these down as part of the notes from the interview because I didn't want to bias the analysis in any way - I just wrote ideas down elsewhere for my own use. 

Conclusions

I absolutely loved doing the fieldwork for this project. It was fantastic. I learned so much, I deepened existing relationships, and I got to know staff really well who I'd barely met before. Every time I came away from an interview I was absolutely buzzing. 

I don't think everyone enjoyed it as much as I did. Some people felt like they didn't know enough about a subject's research project to be able to ask intelligent questions about it - personally I just asked unintelligent questions until I got it - and there was the odd instance of the conversation being stilted or awkward. For me and a lot of my colleagues, though, it was eye-opening and actually really exciting. 

The question of what we do next - how we process all the data, and then act on what we learn - is covered in the following post.

Library ratio of online versus on site visits and visitors

Interesting tweet from #uxlibucc, above. Can you answer that question? I'd wager most of us can't, but we should be able to. Someone in our orgs should be able to, right? We need to make strategic decisions about where to prioritise resources based on evidence wherever we can.

As it happens, I can check this figure at my own institution, because it's something I've been thinking about a lot. I realised that a) I had no idea which were the most popular online parts of the library presence and b) I had no idea how this compared with actual footfall. So I got access to Google Analytics, and I repeatedly ask my colleague Steph for turnstile statistics...

So below is a chart to compare physical and online use of the Library, for Monday of this week (all 24 hours of it). A couple of caveats:

1) This is not presented as a pie chart because we're not dealing in percentages. Many of the users of the building will also be using the catalogue at the same time.

2) I've taken the actual numbers off it in case I shouldn't be giving that info out publicly, but to give you a rough idea, the total number of visits to the building is well over 10,000.

3) This is just one day. I have not gone into the data to try and find an average day or a representative day, I just chose the first day of this week. (Although we compared it with the previous Monday and none of the data was atypical.)

Chart comparing visits to the library building, subject guide, website and catalogue on one day. The building gets the most visits, the catalogue the most visitors.

So while building visits outstrip online visits (because each student is coming in almost exactly 3 times on average), it appears more people use the catalogue overall, though that figure could be skewed by people using the catalogue multiple times on different devices. Clearly the catalogue is MUCH more popular than the website, which makes me think: should we work even harder than we already do on that system, given it's how more people interface with us than via any other route? Should we be trying to get more info onto it, as people go there so much more than our other online places, or should we try and strip it down so the usability is as good as possible?
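The visits-versus-visitors distinction in the charts boils down to one calculation. The numbers below are invented placeholders (I've deliberately withheld the real figures, as noted in the caveats above); the sketch just shows the visits-per-visitor ratio being described:

```python
# Hypothetical daily figures for each 'interface' of the library:
# (visits, unique visitors). These are illustrative, not our real data.
channels = {
    "building":  (12000, 4000),   # each visitor enters ~3 times a day
    "catalogue": (9000, 6000),    # more people, fewer repeat visits each
    "website":   (3000, 2500),
}

for name, (visits, visitors) in channels.items():
    print(f"{name}: {visits / visitors:.1f} visits per visitor")
```

The point being that a channel can win on visits while losing on visitors, which is exactly why a pie chart (implying shares of one total) would mislead here.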

If you combine the online stats into one figure, the graph looks like this:

Chart showing there are more overall online visitors compared to building visitors, but more uses of the building than the combined online spaces.

Keeping in mind that I'm not including any social media in the online figure - so our YouTube, Facebook, Twitter, Slideshare, Blog(s) and Instagram views aren't represented - you can see the online sphere is a huge factor in people's daily use of the library, as indeed we'd expect.

To go back to the question in the tweet at the start of the post, does our staff allocation reflect this ratio? No it doesn't. The staffing at York is so labyrinthine I can't work the ratio out, but suffice to say we devote far more staff to face-to-face interaction than we do to the website and catalogue.

This is as it should be. I'm not advocating for the staff ratio to exactly reflect the virtual/physical ratio, because the nature of the use is very different. But I do wonder if we were starting from scratch but knew the ratio in the graphs above, would we do things a little differently?


UPDATE: Since I posted this yesterday I've had some interesting discussion on Twitter, and a couple of people mentioned that if we included the stats for the resources themselves (JSTOR for example) then the online side of things would be even higher.

This is a good point. I didn't include them because I didn't think of doing so (rather than it being a position I'd deliberately taken), but on reflection I think it skews the picture too much to put them in - because a lot of the catalogue views, and probably the majority of the SubjectGuide views, will be people on their way to the licensed e-resources. I'd argue that in the same way one visit to the building results in lots of potential uses of the library, one use of the catalogue may result in several e-resources being consulted.

That said, there will always be lots of people on campus going direct to the resources without following our links, so that would increase the online views somewhat. Ultimately the reason I find this interesting is comparing, for want of a better word, the different interfaces of the library and being able to see explicitly which area engages the most users. So although our databases and journals are hugely important, they aren't 'our' interfaces in quite the same way as the catalogue, website, libguides and building.

For the last time, Google is not our competition in libraries...

There's a very famous Neil Gaiman quote among librarians and lovers of libraries: "Google will bring you back, you know, a hundred thousand answers. A librarian will bring you back the right one."

I found this on Jennie Stolz's Pinterest page. Click the pic to go there.

You see it on social media. You hear it used as a soothing balm at library conferences. More than one library has it printed on their floors.

There are various different versions of the quote - often people credit Gaiman with having said a million answers from Google, and pretty much no one puts in the 'you know'. In order that I, a librarian, could use the RIGHT quote for this article I...

Well, I Googled it. Obvs.

Because how else would I find it? I don't want to put us as libraries and librarians in competition with Google for loads of reasons, but we still do it a lot. I contributed to a SWOT analysis on libraries in LibFocus and someone put the Gaiman quote in there too:

An excerpt from a crowd-sourced LibFocus article - click the image to read the full thing

The thing is, most people aren't seeking 'right answers' on Google. They just want basic or general info. Here's what SiegeMedia discovered were the top 15 searches on Google in 2015 in the US, if you exclude brands and porn (the top 5 if you don't exclude them are Gmail, Craigslist, Amazon, Yahoo [why?!] and porn).

Click the pic to read the full article on SiegeMedia

How many of those have a right answer a librarian could bring back? The weather, obviously - but you'd find that out by Googling it. Perhaps a librarian could find you a more reliable dictionary, that could be a 'right' answer. What's on at the Movies, cheap flights - again we'd at least go online and search, if not specifically Google.

In the UK in 2015 according to Google itself, the top 5 searches were 1) Cilla Black, 2) Lady Colin Campbell, 3) Rugby World Cup, 4) Jeremy Clarkson and 5) Paris. Is there a right answer to 'Cilla Black'? Right answers are not what Google is for. More broadly, people aren't searching Google for things they used to come and find at libraries.

The reason the Gaiman quote includes a 'you know' is that this wasn't some grand written statement; it was part of an answer to an interview question he was asked upon becoming honorary chair of National Libraries Week in 2010. The full answer, with the Google part right at the end, can be seen in this video:

What a great quote that whole thing is! Fantastic. He GETS it. This isn't some well-meaning but misguided celeb talking about how much they loved the smell of books as a child in their local library. This is someone who understands how libraries are about social inclusion. I love the full answer. I think Gaiman is brilliant. I just wish we, as a library community, hadn't quite latched onto the Google part of it so much, as the dichotomy isn't helpful.

Also, I don't personally think I can find the 'right' answer in most of the situations I find myself in, even as an academic librarian helping people who ARE actually after very specific information. Our role is more about helping people find answers for themselves - not in all cases and branches of the profession, but in most - as a couple of people pointed out on Twitter:

Of course this dichotomy isn't somehow Gaiman's fault or exclusive to him, you see it everywhere among librarians. This tweet from Internet Librarian International encapsulates a sentence you hear a lot about libraries and competition:

(It was reflecting something the speaker had said rather than Martin's own opinion.) I find this rhetoric troubling for lots of reasons, many of which I've spoken about before but the idea of Google as competitor just won't go away...

Here are my issues with it:

  1. As discussed above, people aren't using Google for the things they previously used libraries for
  2. Google doesn't do what we do. It is precisely the human element of libraries that will ensure they endure
  3. If Google IS our competitor then we will lose every battle, forever
  4. Ultimately, to pit libraries against Google is to reduce libraries to their most basic function (provider of information) and indeed the one which IS most easily replaced...
  5. ... and then try and convince people not to replace us with Google by telling them Google is not any good, when in fact - for all the troubling things about Google, and there are many - it IS pretty good at bringing back info of sufficient quality that most people who are non-specialists find it to be excellent for their needs
  6. Related: no one ever won any friends by slagging off something useful (that they themselves use every day)

The fact that Google and the internet more widely have made it so easy for people to access information without needing to physically visit a library is a GOOD thing. So I'd like us all to agree to stop trying to make it into us versus them, and focus more on the things we can do to cater for the needs of our users and potential users. They don't need us to find them info on Cilla Black but they DO need us for plenty else.

Google could find 100,000 things for libraries to do next, but only our communities can find us the right one...

Using Kahoot in Library Induction and Teaching Sessions

A colleague at York, Tony Wilson, brought Kahoot! to our attention recently for possible use in teaching and orientation sessions: it's a really nice quiz tool. There is nothing new about using quizzes in library sessions and there's about a million and one tools out there for making them, but Kahoot differs in its execution of the idea. It's so much slicker and just more FUN than anything like this I've looked at before. And interestingly, it already seems to have currency with students:

One of the most useful aspects of a quiz is that people are asked to actively engage with the information rather than passively receive it. I'm absolutely convinced the students are remembering more this way than if we just presented them with the complete information.

4 reasons Kahoot works so well

It's really, really nice, for these reasons in reverse order of importance:

The music. It has cool retro sort of 8-bit music in the background.
The aesthetics. It has bright colours and looks generally nice. Here's what a question looks like:

An example of a question as it looks on the screen during the quiz

The leaderboard. Oh yes. It has a LEADERBOARD. This is the key thing, really: people put in their nicknames and after each question the top 5 is displayed (based, obviously, on how accurate their answers are but also how quick). Even completely non-competitive people get excited when they see their name in the top 5... I tweeted about using Kahoot and Diana Caulfied chimed in about the tension the leaderboard brings:

The mobile view from the student perspective

It's VERY easy to use. These things have to be SO simple to justify using them. In the case of Kahoot, you load up the quiz, and the students go to kahoot.it and put in the PIN the quiz gives you on the screen. It works perfectly on phones, tablets, or PCs. There's only one thing on the screen - the box for the PIN - and only one thing to do - enter it. This simplicity and intuitive interface means everyone can get on board right away. There's no hunting around.

You can also use it on an epic scale - one colleague just came back from using it with 95 undergraduates today, who responded really well; another used it with over 100 who were absolutely buzzing after each question. You can actually have up to 4,000 players at once.

Here's what the students are presented with when they go to the (very simple) URL:

An example from York

So here's the quiz I made for Induction - click here if you want to have a go. This particular link is (I think) in ghost mode, where you're competing with a previous group of players. So if you do the quiz now, you'll be up against my History of Art postgraduates and will only show up in the Top 5 leaderboard if you get better scores than at least 25 of them! But normally in a session I'd use a completely blank slate.

Possible uses

In this example the questions I chose are basically just a way to show off our resources and services: it's all stuff I'd be telling them as part of a regular induction talk anyway:

My Kahoot quiz questions

The students I've used it with so far have really enjoyed it (as far as I can tell!). It's much more interesting than listing things, and, intriguingly, asking people to guess between possible options actually seems to make the answer more impressive than just telling them the fact outright. So for example in the Google Apps question above, there were gasps when I revealed they get unlimited storage, and the majority had chosen one of the lower options (the results screen shows how many people have chosen each option) - I'm fairly sure if I'd just told them they get unlimited storage, not one person would have gasped.

But there are plenty of other possibilities for Kahoot that are a bit more pedagogical in nature. Using it to measure how much of the session has sunk in at the end; using it at the start and end to measure a difference in knowledge; and using it to establish the level of student understanding:

There's also a Discussion mode rather than a Quiz mode. You pose a question and students type their answers in (rather than selecting from multiple choice) and their words come up on the screen. Anything rude or offensive can be deleted with one click. It would be a great way to find out what students felt unsure of or wanted to learn about, or to discuss the merits of a particular approach.

In summary

So I'd recommend taking a look at Kahoot and seeing if you can incorporate it into your teaching. As well as using it throughout Induction I'm planning on using different kinds of quizzes as part of infolit sessions and am excited to see how that works. You can easily incorporate your own library's images and videos and the tool is free, very easy to use, nicely made, and FUN.