

Researching Students’ Information Choices: Determining Identity and Judging Credibility in Digital Spaces

Research Methods

Prescreen survey:

For each cohort (UF graduate students, UF undergraduate students, Santa Fe College students, and Alachua County fourth through twelfth graders), our team distributed a prescreen survey. At the college level, we provided a link to a Qualtrics survey on both the UF and Santa Fe College library websites for students to follow. For our grade school cohorts (4-5, 6-8, 9-12), we distributed paper surveys within the schools and at local public libraries, as well as postcards with a link to the Qualtrics survey hosted on our project website.

Scheduling:

Potential participants were contacted using the contact information provided on the prescreen surveys. The adult cohorts (UF and Santa Fe students) were contacted via email, and grade school cohorts were contacted through their parents via phone call/voicemail. Sessions were scheduled in two-hour blocks. One research team member facilitated sessions with participants from the adult cohorts, and two research team members facilitated any session with a participant who was under 18 years of age.

Once a session was scheduled, each participant was issued an anonymous Participant ID and that session was tracked using Trello, an online scheduling and tracking tool. Any participant information that was not yet anonymized was stored in a secure file. Research sessions were held either at Marston Science Library or at the Santa Fe College Tyree Library.

Research simulation session:

The simulation was designed to be a controlled Google search environment, with identical search results/resources for each participant across a particular cohort. The simulation was created by an Instructional Designer using Articulate Storyline.

For all cohorts, each session consisted of an identical set of tasks, with the set of resources available tailored to participant grade, age, and perceived research experience. After a brief pre-simulation interview, the facilitator read clear instructions to the participant that explained what s/he would see on the screen and what s/he was expected to do for the duration of the session, and that described the think-aloud protocol. Each session was recorded using Mediasite; only the screen activity and the voices of each facilitator and participant were recorded. The session tasks were as follows:

  • Search task: After watching a short news clip introducing the topic of invasive Burmese pythons in the Florida Everglades and being read an age-appropriate research prompt on the same topic, the participant was directed to the simulated Google search screen, where s/he was instructed to perform a Google search based on the research topic. S/he was then presented with search results.
  • Helpful task: Using the Google search results, each participant was asked to assess each resource and choose a set number of resources deemed Helpful to his/her research on this topic (10 or 20, depending on the cohort). This was often the longest task; participants were encouraged to "think aloud" and explain what they were looking at and what their thought processes were regarding each resource.
  • Cite task: The participant was presented with the resources s/he had just chosen as Helpful, and then asked to determine whether or not s/he would Cite each resource in the final research paper. Again, the facilitator encouraged the think-aloud protocol in order to capture the richest possible information regarding the participant's assessment of the resources.
  • Not Helpful task: The participant was presented with the list of items that s/he did not choose in the Helpful task, and asked to explain why those items were not chosen.
  • Credible task: Presented with the items that were deemed Helpful, the participant was then asked to rate the credibility of each resource on a one-to-five scale, with one being not credible and five being highly credible. Again, the participant was encouraged to provide assessment and reasoning through the think-aloud protocol.
  • Container task: The participant was presented with a list of 15 of the original Google search results, and asked to select the container that s/he thought best fit each resource:
    • Blog
    • Book
    • Conference Proceeding
    • Journal
    • Magazine
    • News
    • Preprint
    • Website

Once the tasks were complete, the facilitator conducted a brief post-simulation interview. At the conclusion of the session, each participant was given an Amazon gift card ($50 for college cohorts and $25 for grade school cohorts) for his/her time and effort.

Simulation Demo

Draft Codebook


Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.