Information Professional

Type Genius: the joys of font-pairing

I'm a little bit obsessed with nice fonts - I love how they can shape a design and help tell your story. An aspect of design which is often undervalued is the combination of fonts: pairing up fonts (or sometimes mixing groups of three, ideally no more than three in one design) for posters, social media campaigns, or PowerPoint presentations.

I've just found a great site called Type Genius that helps out with choosing fonts, more on which below.

Here are four font combinations I like, three of which I've used, and all the fonts for which can be downloaded individually from Fontsquirrel:

BEBAS NEUE AND MONTSERRAT

The first combination is the one used for the blog and much of the rest of the site. The blog title is Bebas Neue and the body text is Montserrat. (Whenever I use Heading 2 in the formatting that's also Montserrat, but in all-caps - the Heading 3 used in this post is back to Bebas Neue again.) I chose them mainly because when I rejigged the design of the site recently I wanted a thicker body-text font, so chose Montserrat, which I've been using since I saw Matthew Reidsma use it for his UXLibs I keynote. Bebas complements it for titles because it's a tall, narrow font, in contrast with Montserrat being thick and more rounded.

LATO AND ROBOTO SLAB

The second combination I've not used at the time of writing, but got from Type Genius - which you can find at typegenius.com. You tell it which font you want to use, and it gives you a number of potential companions to pair with it (as it happens, when you put in Montserrat it suggests Bebas Neue, as used on this site).

In the case of Lato and Roboto Slab, I've actually not used the Regular Lato in the example above at all - I used Lato Thin for the first part and Lato Heavy for 'titles'. I do like the contrast of light and heavy.

RALEWAY AND... RALEWAY

Which brings us to the third combination, which isn't technically a pair as it's just Raleway used in three different ways. I love Raleway beyond all other fonts. As long as you have both Raleway Regular and Raleway Bold installed (although PowerPoint will try to embolden non-standard fonts when you highlight them and click the Bold button, it's not the same as actually installing the Bold version the typographers intended) they work so beautifully together, especially in all caps. The intro to UX presentation I blogged about recently used Raleway in all possible combinations (Regular and Bold, in both lower and upper case) with no other fonts involved:

The other joy of Raleway is that it renders perfectly on Slideshare. Some other fonts, even when you save and upload your presentations as a PDF, go a bit blocky on Slideshare - for example my LIANZA Marketing Manifesto slides, which use Raleway along with ChunkFive Roman: the latter looked great at the conference but not so good on Slideshare, whereas Raleway was perfect in both situations.

MATHLETE BULKY AND CAVIAR DREAMS

I used this combination for my Tuning Out The White Noise presentation, which became the most popular thing I've ever put on Slideshare (despite Mathlete Bulky not rendering properly on the site), and I use it in some of my training materials, so I've become slightly bored with it due to over-exposure! I also over-used Mathlete and have since changed things round so it gets much less use in my slides, because it's a little too quirky for any kind of long-form reading. I like the way it looks but usability has to come first.

Further reading

For more info and guidance on font-pairing, check out this article from CreativeBloq, and Canva's Ultimate Guide to font-pairing.

If you have a particular pairing you'd recommend I'd love to hear it in a comment below.

What is UX and how can it help your organisation?

User Experience - UX - is still relatively new to libraries. I've been writing about it a lot on here of late: there have now been four posts in the Embedding Ethnography series about what we're doing at York.

I thought it would be useful to take a step back and create a slide-deck to introduce UX - ethnography and design - in this context. Here it is:

One of the most popular pages on this site is the Structured Introduction to UX and Ethnography and I wanted something to go on there, and also for a new blog from the University of York.

Introducing Lib-Innovation

The Lib-Innovation blog is an attempt to capture some of the more creative stuff we do at York, and especially a channel to disseminate ideas and results around our UX activities. I'm reposting my own articles from Lib-Innovation on here, but not those written by my colleagues: if you're interested in the results of the UX studies I've written about on here so far, the Head of Relationship Management at York, Michelle Blake, has written about the projects on Lib-Innovation. What we learned was absolutely fascinating, and we've already started to make changes to help both students and staff.

More on UX

Here is a (continually updated) list of the latest posts on this blog that feature User Experience in some way.

Planning and delivering an Intern-led UX Library Project (Embedding ethnography Part 3)

This post originally appeared on the Lib-Innovation blog.

Last time out, as part of the Embedding Ethnography series, Emma Gray wrote about what it was like to be a UX Intern here at York, and the techniques she employed while she worked with us. Next time I'll write about what the study discovered.

If it's in any way possible to get an intern to help out with your ethnographic project I'd highly recommend it, so this post is about our process for setting the project up and working with Emma.

Here's a summary of the project:

Recruiting an intern

This was the part of the whole project we struggled with most, and we were fortunate in how it worked out.

Five of our staff went to the first UXLibs Conference in 2015, and came back wanting to immediately implement some of the things we'd learned. But none of us had nearly enough day-to-day time in our roles to do any serious amount of ethnographic observation and interaction. So I submitted a proposal to a University-wide Intern scheme - but despite making it as attractive as I could, all the applicants chose to go for other Internships on offer from the University. If anyone has any tips on writing a great UX Intern job spec and advert, I'd love to hear them in a comment below...

We then got an email from the Head of HR in the Library saying a student at Durham University who lived locally wanted to work for the library over the summer, and did anyone have any suitable work? Naturally I jumped at this and sent Emma the existing job spec; she agreed it looked interesting and came in for an interview.

Emma Gray talks to a colleague

It was a very informal interview, just me and my manager and Emma, without a huge list of pre-prepared questions. Emma didn't have any UX knowledge prior to coming in, but that didn't matter. As it happened she did have experience of working in a public library, but that wasn't essential either. For us, the essential qualities were to show some initiative (Emma ticked this box, having found my website and read my reviews of the UXLibs conference...) and above all to be a good communicator. UX work involves a LOT of dialogue with users, so if that isn't something you enjoy it's going to be a slog... Emma was a natural communicator, so we had no doubts about offering her the role. As it turned out she was much more brilliant than we could have anticipated.

Pre-arrival set up

As Emma's manager I set about doing several things before she started at the Library:

1) Putting together a resource list on UX in Libraries to get her up to speed with an area she was unfamiliar with - I made that publicly available here

2) Putting together a document that outlined the aims of the internship so Emma would know exactly what she was working towards - I've put this on Google Drive here for anyone interested. I've not edited this from the original so there's some York-centric language - also I said 'emoji' when I meant 'emoticons' so you'll have to forgive me. Here's a preview:

Part of the Aims of the Internship document I put together for Emma

(It's worth noting that we didn't achieve some of the aims - for example visiting Cambridge and Sheffield Hallam, or trying out group interviews.) 

Essentially, the thing that made this project different to future UX projects we'd undertake is that this one was at least partly about understanding UX processes as well as our actual users - so Emma was tasked with setting up a UX Toolkit for our future use.

3) Sorting out all the admin stuff associated with a new member of staff - entry card, username and password, where Emma would sit, which PC she'd use, access to Google Drive folders etc

4) Putting together a timetable for the first week or so of her employment, after which she would become more self-directed. This included inviting Emma to a number of meetings and a couple of teaching sessions, so she could go away with a more rounded impression of what life in an academic library, and particularly in the Academic Liaison Team, was like. We wanted it to be as rewarding and CV-enhancing as possible for her, as well as focusing on our project.

All of this took AGES. Any work you can put in beforehand is worth it though, otherwise it quickly takes up most of your job generating work and things to do for the intern. (This is something I've heard echoed across other sectors too.) 

[Feel free to re-purpose the reading list or the aims document if they help at your own organisation.]

Planning the project

As mentioned, part of the aim was to build a UX toolkit - a suite of information and resources to call upon for future projects. As such we decided Emma would use, as far as possible, all four of the interactive ethnographic techniques we'd learned (cognitive mapping, unstructured interviews, touchstone tours, love/break-up letters) with each participant, as well as doing behavioural mapping. My explanations of how to do these are in the 'Aims of the Internship' document, or see Emma's own post for her description of each of these.

This meant that a) Emma could start on the behavioural mapping and general observation while we recruited participants, and b) we'd need at least an hour of each participant's time. This would in turn mean a large amount of time spent interpreting and analysing the results; as a rule of thumb, UX work takes 4 hours of analysis and reporting for every 1 hour of ethnographic fieldwork - a 4:1 ratio.

The UX Team (the five conference attendees) met to discuss what sort of thing we should focus on in the project - I found this tricky because you want to provide a framework and guidance for an intern, but part of the spirit of UX is to let the data tell you the story, not to go in with preconceptions or even seeking answers to specific questions. In the end we settled on using the project to better understand Postgraduate students, simply because - this being the summer holidays - there were many more of them around. There were various things we hoped to learn - or various aspects we hoped to learn about - but we didn't put these into the project documentation or ask Emma to focus on them (or even tell her about them); we wanted the process to be as neutral as possible.

We agreed that the five of us would meet during Emma's 6 weeks with us to discuss progress, look at the results, steer the further direction and so on.

During the project

Once Emma arrived and worked her way through the reading list, we started with observation and behavioural mapping. Observation is a great way for an intern to settle in because it's a relatively low-pressure environment - it's a break from ingesting huge chunks of written information, a chance to be in your own head-space and actually DO ethnography, where the stakes are much lower if you're not familiar with it yet. Not being sure how to label a map of someone's path through the lobby is less intimidating than not being sure how to ask someone to write a love-letter to a library service!

The biggest problem we had was recruitment. We put requests for participants on social media, e.g.

...and we put similar info on a giant whiteboard in the Postgraduate Lounge area. We also approached people face to face and left them with info about the project and Emma's contact details. All in all these approaches yielded just three participants.

So all the Academic Liaison Librarians emailed their PostGrad cohorts via Departmental administrators: this was much more successful and yielded lots of emails to Emma, most of whose senders went on to book appointments with her. 23 people in total were recruited this way. The students were a mixture of PGTs and PGRs, from a variety of Departments and a variety of nationalities.

As it happened this project conformed to the 4:1 analysis-to-fieldwork ratio almost EXACTLY: Emma was with us for 125 hours in total, engaged with 26 participants in that time for around an hour each, and spent the other 99 hours doing everything else: analysing, interpreting, transcribing, and writing up (and getting to grips with UX in the first place). It must be said that Emma was an incredibly proficient transcriber, having done this kind of work before: for mere mortals (me, for instance) the 4:1 ratio would not be remotely possible with transcription included - in fact transcription itself often comes with a 4:1 ratio of its own, before you even get as far as analysis.
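The figures above make the rule of thumb easy to sanity-check. Here's a quick back-of-envelope sketch (the variable names are mine; the hour and participant counts are the ones quoted in this post):

```python
# Back-of-envelope check of the 4:1 analysis-to-fieldwork rule of thumb,
# using the figures quoted in this post.
total_hours = 125       # Emma's total time on the internship
participants = 26       # each engaged for roughly an hour of fieldwork
fieldwork_hours = participants * 1
analysis_hours = total_hours - fieldwork_hours  # analysing, transcribing, writing up

ratio = analysis_hours / fieldwork_hours
print(f"{analysis_hours} hours analysis : {fieldwork_hours} hours fieldwork = {ratio:.1f}:1")
```

99 hours against 26 comes out at roughly 3.8:1 - close enough to the 4:1 rule of thumb, and that's with an unusually fast transcriber on the team.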

In general we consider ourselves incredibly lucky to have got Emma as our first ever UX intern: she was extremely bright and showed a great deal of initiative and confidence, as well as working extremely hard. She produced a brilliant report detailing her experiences across the 26 participants, with the findings clustered around the areas of: study space, noise levels, the catalogue, the library building, and facilities. We learned more about those 26 students than we'd ever learned about any students before.

Working with an intern is a brilliant way to free up enough time to actually start the process of UX and ethnography, although it still takes existing staff time to manage the project.

Michelle Blake is going to blog about the results of this and the next UX project we did; I'll add a link here when it's online.

The next post on here in this series will be another guest slot from an Intern, Oliver Ramirez, who undertook our second UX project at York.

Ask yourselves, libraries: are surveys a bit bobbins?

We all agree we need data on the needs and wants of our users.

We all agree that asking our users what they want and need has traditionally been a good way of finding that out.

But do we all agree surveys really work? Are they really getting the job done - providing us with the info we need to make changes to our services?

Personally I wouldn't do away with surveys entirely, but I would like to see their level of importance downgraded and the way they're often administered changed. Because I know what it's like to fill in a survey, especially the larger ones. Sometimes you just tick boxes without really thinking too much about it. Sometimes you tell people what they want to hear.  Sometimes you can't get all the way through it. Sometimes by the end you're just clicking answers so you can leave the survey.

I made this. CC-BY.

How can we de-bobbins* our surveys? Let me know below. Here are some ideas for starters:

  1. Have a very clear goal of what the survey is helping to achieve before it is launched. What's the objective here? ('It's the time of year we do the survey' does not count as an objective)
     
  2. Spend as much time interpreting and analysing and ACTING ON the results as we do formatting, preparing and promoting the survey (ideally, more time)
     
  3. Acknowledge that surveys don't tell the whole story, and then do something about it. Use surveys for the big picture, and use UX techniques to zoom in on the details. It doesn't have to be pointless data. We can collect meaningful, insightful data.
     
  4. Run them less frequently. LibQual every 2 years max, anyone?
     
  5. Only ever ask questions that give answers you can act on
     
  6. Run smaller surveys more frequently rather than large surveys annually: 3 questions a month, FOCUSED on one theme per month; that allows you to tweak the user experience based on what you learn
     
  7. Speak the language of the user. Avoid confusion by referring to our stuff in the terms our users refer to our stuff
     
  8. [**MANAGEMENT-SPEAK KLAXON**] Complete the feedback loop. When you make changes based on what you learn, tell people you've done it. People need to know their investment of time in the survey is worth it.

Any more?


*International readers! Bobbins is a UK term for 'not very good'.

The problem with peer review (by @LibGoddess)

 

I am ridiculously excited to introduce a new guest post.

I've been wrestling for a while with the validity or otherwise of the peer review process, and where that leaves us as librarians teaching information literacy. I can't say 'if you use databases you'll find good quality information' because that isn't necessarily true - but nor is it true to say that what one finds on Google is always just as good as what one finds in a journal.

There was only one person I thought of turning to in order to make sense of this: Emma Coonan. She writes brilliantly about teaching and information on her blog and elsewhere - have a look at her fantastic post on post-Brexit infolit, here.


The Problem With Peer Review | Emma Coonan

Well, peer review is broken. Again. Or, if you prefer, still.

The problems are well known and often repeated: self-serving reviewers demanding citations to their own work, however irrelevant, or dismissing competing research outright; bad data not being picked up; completely fake articles sailing through review. A recent discussion on the ALA Infolit mailing list centred on a peer-reviewed article in a reputable journal (indexed, indeed, in an expensive academic database) whose references consisted solely of Wikipedia entries. This wonderfully wry PNIS article - one of the most approachable and most entertaining overviews of the issues with scholarly publishing - claims that peer reviewers are “terrible at spotting weaknesses and errors in papers”.

As for how peer review makes authors feel, well, there’s a Tumblr for that. This cartoon by Jason McDermott sums it up:

Click the pic to open the original on jasonya.com in a new window

- and that’s from a self-proclaimed fan of peer review.

For teaching librarians, the problems with peer review have a particularly troubling dimension, because we spend so much of our time telling students of the vital need to evaluate information for quality, reliability, validity and authority. We stress the importance of using scholarly sources over open web ones. What's more, our discovery services even have a little tickbox that limits searches to peer reviewed articles, because they're the ones you can rely on. Right? …

So what do we do if peer review fails to act as the guarantee of scholarly quality that we expect and need it to be? Where does it leave us if “peer review is a joke”?

The purpose of peer review

From my point of view as a journal editor, peer review is far from being a joke. On the contrary, it has a number of very useful functions:

·        It lets me see how the article will be received by the community

The reviewers act as trial readers who have certain expectations about the kind of material they’re going to find in any given journal. This means I can get an idea of how relevant the work is to the journal’s audience, and whether this particular journal is the best place for it to appear and be appreciated.

·        It tests the flow of the argument

Because peer reviewers read actively and critically, they are alert to any breaks in the logical construction of the work. They’ll spot any discontinuities in the argument, any assumptions left unquestioned, and any disconnection between the method, the results and the conclusions, and will suggest ways to fix them.

·        It suggests new literature or different viewpoints that add to the research context

One of the hardest aspects of academic writing is reconciling variant views on a topic, but a partial – in any sense – approach does no service to research. Every argument will have its counter-position, just as every research method has its limitations. Ignoring these doesn’t make them go away; it just makes for an unbalanced article. Reviewers can bring a complementary perspective on the literature that will make for a more considered background to the research.

·        It helps refine and clarify a writing style which is governed by rigid conventions and in which complex ideas are discussed

If you’ve ever written an essay, you’ll know that the scholarly register can work a terrible transformation on our ability to articulate things clearly. The desire to sound objective, knowledgeable, or just plain ‘academic’ can completely obscure what we’re trying to say. When this happens (and it does to us all) the best service anyone can do is to ask (gently) “What the heck does this mean?”

In my journal’s guidelines for authors and reviewers we put all this a bit more succinctly:

The role of the peer reviewer is twofold: Firstly, to advise the editor as to whether the paper is suitable for publication and, if so, what stage of development it has reached. […] Secondly, the peer reviewer will act as a constructively critical friend to the author, providing detailed and practical feedback on all the aspects of the article.

But you’ll notice that these functions aren’t to do with the research as such, but with the presentation of the research. Scholarly communication always, necessarily, happens after the fact. It’s worth remembering that the reviewers weren’t there when the research was designed, or when the participants were selected, or when the audio recorder didn’t work properly, or the coding frame got covered in coffee stains. The reviewers aren’t responsible for the design of the research, or its outputs: all they can do is help authors make the best possible communication of the work after the research process itself is concluded.

Objective incredulity

Despite this undeniable fact, many of the “it’s a joke” articles seem to suggest that reviewers should take personal responsibility for the bad datasets, the faulty research design, or the inflated results. However, you can’t necessarily locate and expose those problems on reading alone. The only way to truly test the quality and validity of a research study is to replicate it.

Replication - the principle of reproducibility - is the whole point of the scientific method, which is basically a highly refined and very polite form of disbelief. Scholarly thinking never accepts assertions at face value, but always tests the evidence and asks uncomfortable, probing questions: is that really the case? Is it always the case? Supposing we changed the population, the dosage, one of the experimental conditions: what would the findings, and the implications we draw from them, look like then?

And here’s the nub of the whole problem: it’s not the peer reviewer’s job to replicate the research and tell us whether it’s valid or not. It’s our job - the job of the academic community as a whole, the researcher, the reader. In fact, you and me. Peer reviewers can’t certify an article as ‘true’ so that we know it meets all those criteria of authority, validity, reliability and the rest of them. All a reviewer can do is warrant that the report of a study has been composed in the appropriate register and carries the signifiers of academic authority, and that the study itself - seen only through this linguistic lens - appears to have been designed and executed in accordance with the methodological and analytical standards of the discipline. Publication in a peer-reviewed journal isn’t a simple binary qualifier that will tell you whether an article is good or bad, true or false; it’s only one of many nuanced and contextual evaluative factors we must weigh up for ourselves.

So when we talk to our students about sources and databases, we should also talk about peer review; and when we talk about peer review, we need to talk about where the authority for deciding whether something is true really rests.

Tickboxing truth

This brings us to one of the biggest challenges about learning in higher education: the need to rethink how we conceive of truth.

We generally start out by imagining that the goal of research is to discover the truth or find the answer - as though 'Truth' is a simple, singular entity that's lying concealed out there, waiting for us to unearth it. And many of us experience frustration and dismay at university as a direct result of this way of thinking. We learn, slowly, that the goal of a research study is not to 'find out the truth', nor even to find out 'a' truth. It's to test the validity of a hypothesis under certain conditions. Research will never let us say "This is what we know", but only "This is what we believe - for now".

Research doesn’t solve problems and say we can close the book on them. Rather it frames problems in new ways, which give rise to further questions, debate, discussion and further research. Occasionally these new ways of framing problems can painfully disrupt our entire understanding of the world. Yet once we understand that knowledge is a fluid construct created by communities, not a buried secret waiting for us to discover, then we also come to understand that there can be no last word in research: it is, rather, an ongoing conversation.

The real problem with peer review is that we’ve elevated it to a status it can’t legitimately occupy. We’ve tried to turn it into a truth guarantee, a kind of kitemark of veracity, but in doing so we’ve shut our eyes to the reality that truth in research is a shifting and slippery beast.

Ultimately, we don’t get to outsource evaluation: it’s up to each one of us to make the judgement on how far a study is valid, authoritative, and relevant. As teaching librarians, it’s our job to help our learners develop a critical mindset - that same objective incredulity that underlies scientific method, that challenges assertions and questions authority. And that being so, it’s imperative that we not only foster certain attitudes to information in our students, but model them ourselves in our own behaviour. In particular, our own approach to information should never be a blind acceptance of any rubber-stamp, any external warrant, any authority - no matter how eminent.

This means that the little tickbox that says 'peer reviewed' may be the greatest disservice we do to the thoughtful scepticism we seek to help develop in our students, and in our society at large. Encouraging people to think that the job of assessing quality happens somewhere else, by someone else, leads to a populace which is alternately complacent and outraged, and in both states unwilling to undertake the critical engagement with information that leads us to be able to speak truth to power.

The only joke is to think that peer review can stand in for that.