Onboarding Qualitative Research

Charter Communications’ Business Enterprise team had been courting me for months. It was an excellent match, but it ultimately came down to location. The following was the research plan I presented to them for integrating me as a qualitative researcher serving their business clients.

Qual-AgendaMR-DAPr

Senior-level Job Interview

I was really excited at the potential to work with a Collective Intelligence SaaS company. The challenge behind the product was to build engagement and participation through psychological safety and global collaboration.

As the Senior UX Researcher at the company, I would be responsible for building the research program. It was no surprise that the “design test” was to build and explain a research program for a fictional app.

The sample budget data provided for the design test

Given this minimal information, certain items stood out in the data. There are goals set in this budget, but I do not know how or why they were set. What is the purpose of this software and why did the user select this program? Did she create the categories? If not, did the data get parsed properly?

I started writing all these questions, issues, and ideas on sticky notes. After a while, I saw patterns emerging, so I moved to affinity mapping them into clusters.

Affinity Mapping to organize ideas

To build on these questions, I considered which methods and tools would be most appropriate to answer them. From there, it was fairly straightforward to decide on potential company KPIs and regular deliverables based on those tools. I included those in their own swimlanes with color-coded stickies.

What it looked like when I presented 45 min later
What those stickies actually say!

I called the interviewers back in and we spent some time going over my ideas. They asked questions, “ooh’d and aah’d,” and then we headed downstairs for happy hour.

It was one of the best teams I’ve ever worked on and I miss them.

 

Sharing Research Data

Reposted from Inna Kouper’s blog

 

Toni Rosati is a data curator and usability researcher at the National Snow and Ice Data Center (NSIDC). Toni is involved in several projects at NSIDC, including the Advanced Cooperative Arctic Data and Information Service (ACADIS) and the Science-Driven Cyberinfrastructure: Integrating Permafrost Data, Services, and Research Applications (PermaData).

Toni and I met at the 5th Research Data Alliance plenary and sat down during one of the coffee breaks to talk about qualitative data and the challenges of sharing them. As a usability researcher and a member of the ACADIS team, Toni conducts tests of the Arctic Data Explorer (ADE), a federated search tool for interdisciplinary Arctic science data. Her research results in recommendations to ADE managers and software developers to improve appearance, functionality, and quality of search results of the tool. Diving deep into the metadata and the code that make the ADE possible, Toni is also, ultimately, looking to improve data management and data sharing practices.

The usability tests generate a wealth of qualitative data, including interview recordings and written transcripts. But can the raw data be shared? Not at this point, says Toni, for the following reasons:

  • Toni has worked closely with her institutional review board (IRB), the ethics committee that reviews and approves human-subjects research in the U.S., to reach this conclusion. To protect privacy, raw identifiable data must be anonymized before it can be shared.
  • The raw data are collected in the context of an organization and are most valuable for the organization itself rather than for an outside scholarly community; however, the methods and results are extremely valuable to the outside community and will be shared.

Can anything be shared in this situation, and does anything need to be?

In short, yes. As Toni pointed out, the most valuable sharing of qualitative data in an organizational context is the sharing of data that has undergone expert interpretation. “I’m collecting a lot of qualitative data, but to be most useful to my teammates, they have to be distilled into quick actionable items,” says Toni. Qualitative data are sometimes hard to communicate, and building trust in their validity takes time.

Toni and ADE Principal Investigator Lynn Yarmey are writing a paper outlining the user experience / usability research methods and processes they undertook, and the results and lessons learned. Ultimately, usability research is intended to create software with an end user focus that is intuitive, complete, and pleasurable to use. Ms. Rosati is passionate about such research and welcomes your questions.

Search Pathways

Wayfinding refers to the ways people orient themselves and find their way around. The word is usually used in a physical context, such as a city or building. It can also describe how people give directions in physical space (e.g., research suggests that women tend to rely more on landmarks, whereas men more often give cardinal directions).

My research team and I decided to use this concept when testing our online data search tools (Arctic Data Explorer and NSIDC Search). Our audiences have similar characteristics, but Arctic Data Explorer users tended to be more general searchers whereas NSIDC Search users sought out very specific data sets.

Rather than base our hypothesis on gender, we suspected that a user’s level of experience in the science world influenced how they searched for data sets. It turns out we were partially right. It isn’t necessarily how long the user has been in science, but how familiar they are with that particular branch of science.

Users’ search pathways through the two tools

Both of these search tools share underlying technology and have the same goal: get users to the data they want! But given the slight difference in audiences, some interface adjustments were required. The most noticeable difference is that NSIDC Search has lots of very specific facets to help users drill down, whereas the Arctic Data Explorer lets users do a freeform text search. What other differences do you see?
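As a rough illustration (this is not the actual NSIDC or ADE code, and the catalog records are made up), the two search styles can be sketched as exact facet filtering versus term matching over the whole record:

```python
# Hypothetical toy catalog of data set records.
CATALOG = [
    {"title": "Sea Ice Extent, Arctic", "discipline": "Cryosphere", "region": "Arctic"},
    {"title": "Permafrost Temperature Profiles", "discipline": "Cryosphere", "region": "Alaska"},
    {"title": "Greenland Ice Sheet Velocity", "discipline": "Glaciology", "region": "Greenland"},
]

def faceted_search(catalog, **facets):
    """NSIDC-Search style: drill down by exact facet values."""
    return [r for r in catalog
            if all(r.get(k) == v for k, v in facets.items())]

def freeform_search(catalog, query):
    """Arctic-Data-Explorer style: match every query term anywhere in the record."""
    terms = query.lower().split()
    return [r for r in catalog
            if all(t in " ".join(r.values()).lower() for t in terms)]

# Both paths can reach the same record, but facets require knowing the
# vocabulary ("Cryosphere"), while freeform search only requires keywords.
print(faceted_search(CATALOG, discipline="Cryosphere", region="Arctic"))
print(freeform_search(CATALOG, "ice arctic"))
```

The trade-off mirrors the audience difference above: facets reward users who already know a field’s vocabulary, while freeform search serves more general searchers.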

28 Days Later

Context

Our assignment for CSCI 5839, User Centered Design at the University of Colorado Boulder, was to work with a team to design a mobile app using UX research techniques.

Team

Mia Fuhrman  (msfuhrman @ gmail.com)
Skatje Myers  (skatje.myers @ colorado.edu)
Janeen Neri  (jane0320 @ colorado.edu)
Toni Rosati (uxtoni @ gmail.com)

Process, Research Design

  1. Ethnographic research on how mobile devices are used in public
  2. Pitch an app idea that is interesting and has a viable market
  3. Draw comics of users interacting with the app in various scenarios
  4. Survey and cultural probe research about our app idea
  5. Develop prototype personas and conduct a competitive analysis
  6. Create 3-4 initial designs (drawings, sketches)
  7. In-class user testing
  8. Select 2 designs to develop further for testing (digital mockups)
  9. Moderated user testing (in-person interviews)
  10. Select one design for further testing using an “animated” paper prototype
  11. Moderated user testing (in-person interviews)
  12. Iterate on the design and create a working Axure prototype
  13. Moderated user testing (in-person interviews)
  14. Storyboard for a pitch video

Key Findings

There are many period tracking apps out there. They all glamorize “that time of the month” with user interfaces involving bubbles, flowers, and lots of pink. Unfortunately, most women do not delight in “Aunt Flo’s” monthly visit.

Only 25% of survey respondents took note (mentally, physically, or digitally) of when they would be ovulating. Yet every respondent reported at least one menstrual/PMS symptom within the last 90 days. Of the 48 survey respondents, 71% were using some sort of tracker app, and 84% of those tracker users were able to log data in less than one minute. We decided to move forward with the idea and focus on biological females who are not trying to conceive.
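As a back-of-the-envelope check on those percentages (the original survey reported only percentages, so these head counts are approximations, not figures from the study):

```python
# Approximate head counts implied by the reported survey percentages (n = 48).
total = 48
tracker_users = round(0.71 * total)          # ~34 respondents used a tracker app
quick_loggers = round(0.84 * tracker_users)  # ~29 of those logged data in under a minute
ovulation_noters = round(0.25 * total)       # ~12 took note of ovulation

print(tracker_users, quick_loggers, ovulation_noters)
```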

One surprising benefit of using a cultural probe was the sheer range of responses it encouraged. It paired very well with the strict, easily quantifiable results of the survey. No two people chose the same three symptoms, and the symptoms weren’t straight off the Wikipedia page for “menstruation”.

Visual communication and cues are needed alongside text labels. People were more confident in making a selection when they were quickly able to confirm their assumptions with multiple cues.

Results from the Cultural Probe qualitative research study
Comic of how and when a user might open the app
My initial design
My early digital design