Mike Caulfield’s SIFT + AI: Fact-Checking Nursing Pay

Male, African American nurse assists a patient in a wheelchair

I used to be among the people who thought that privacy wasn’t really much of a thing to be overly concerned about. What did I have to hide anyway? What did “good” people have to hide if what they’re doing is all on the up and up? I hope I don’t lose potential readers with the naiveté of that mindset. I have very much changed my mind over many decades now and do what I can to help students, friends, family members, and anyone who might otherwise be persuaded by what I share through my podcast and writing to recognize the issues surrounding privacy that affect all of us and what it means to be a free nation.

I was listening to The Ezra Klein Show as he discussed the “internet none of us asked for” with two experts on matters of ethics. I’m teaching business ethics right now, so my ears perked up even more than they might have otherwise. The episode is titled We Didn’t Ask for This Internet and features Cory Doctorow and Tim Wu. From the episode description:

Ragebait, sponcon, A.I. slop — the internet of 2026 makes a lot of us nostalgic for the internet of 10 or 15 years ago.

What exactly went wrong here? How did the early promise of the internet get so twisted? What kinds of policies could actually make our digital lives meaningfully better?

Cory Doctorow and Tim Wu have two different theories of the case, which I thought would be interesting to put in conversation together. Doctorow is a science fiction writer, an activist with the Electronic Frontier Foundation and the author of “Enshittification: Why Everything Suddenly Got Worse and What to Do About It.” Wu is a law professor who worked on technology policy in the Biden White House; his latest book is “The Age of Extraction: How Tech Platforms Conquered the Economy and Threaten Our Future Prosperity.”

In this conversation, we discuss their different frameworks, and how they connect to all kinds of issues that plague the modern internet: the feeling that we’re being manipulated; the deranging of our politics; the squeezing of small businesses and creators; the deluge of spam and fraud; the constant surveillance and privacy risks; the quiet rise of algorithmic pricing; and the dehumanization of work. And they lay out the policies that they think would go furthest in making all these different aspects of our digital lives better.

I thought that a claim made during this episode would be a good one to use in my continued efforts to grow my own information literacy, as well as to pass on what I can to the faculty and students I get to teach and learn alongside…

The Claim: Contract Nurses Are Discriminated Against, Based on Their Likely Desperation to Accept Lower Pay

When I teach Mike Caulfield’s SIFT framework, one of the most challenging hurdles for students is learning to assess the claim being made. They often think that the article’s headline is the claim. In the example I’m using today, there’s the claim that was made, combined with my feelings about what I was hearing (or what I interpreted as being said, as I listened to the podcast in the middle of doing other things).

Here’s how I remember the claim:

Contract nurses are discriminated against, based on their likely desperation to accept lower pay. Their credit scores and other indicators of just how desperate they might be to take less compensation than someone else competing for the same job allow potential employers to discriminate against them or otherwise game the system toward a race to the bottom for pay.

While listening, I was in the middle of cleaning out our refrigerator and had my hands covered in muck, so I wasn’t able to capture notes about this scholar and her work. Once I got back to my computer, I was able to find the name of the researcher they mentioned (Deborah Rhode). Tim shared an example from her scholarship regarding the ways in which nurses’ financial data is mined and analyzed to predict how low a wage they will accept in an hourly contract arrangement.
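To make the mechanism behind the claim concrete: what’s being described is a predictive model that uses financial-distress signals to estimate the lowest wage a worker might accept, and then discounts the offer accordingly. Here is a deliberately toy sketch of that idea. Everything in it — the feature names, the weights, the numbers — is invented by me for illustration; it is not drawn from the actual research or any real staffing platform.

```python
# Hypothetical sketch of the practice described in the claim: a staffing
# platform inferring a "wage floor" from financial-distress signals.
# All features and coefficients are invented for illustration only.

def predicted_wage_floor(base_rate, credit_score, has_overdue_debt):
    """Toy linear model: worse finances -> lower predicted acceptable wage.

    base_rate: market hourly rate for the shift (USD)
    credit_score: FICO-style score, 300-850
    has_overdue_debt: bool, e.g. inferred from mined credit data
    """
    # Normalize credit score to 0..1 (higher = healthier finances)
    financial_health = (credit_score - 300) / 550
    # Discount the offer more for workers who look more financially desperate
    discount = 0.25 * (1 - financial_health) + (0.10 if has_overdue_debt else 0.0)
    return round(base_rate * (1 - discount), 2)

# Two nurses competing for the same $50/hr shift receive different offers
print(predicted_wage_floor(50, 800, False))  # strong finances -> near-market offer
print(predicted_wage_floor(50, 520, True))   # distressed finances -> much lower offer
```

The point of the sketch is only that such a system needs no explicit intent to discriminate: once financial-distress variables enter the model, lower offers to more desperate workers fall out of the math automatically.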

Two Methods of Fact Checking

I thought it would be helpful to document the process I would go through of fact checking this in two ways:

  1. Using the SIFT fact checking framework
  2. Via Mike Caulfield’s emerging “Critical Thinking/Doing with AI” experimentation

So two ways of assessing how likely it is that what I heard was true. I am going to start with SIFT and then move on to the AI tools that Mike Caulfield has been working on.

Fact Checking the Claim via SIFT

If you’re not familiar with the name Mike Caulfield, he created the fact checking framework known as SIFT. Here’s what that might look like in testing this claim:

  • STOP // “S” stands for stop, as in we shouldn’t immediately pass along what we hear when we’re listening to The Ezra Klein Show with our hands covered in food waste. We should hang on for a moment and wait to see if it is actually accurate.
  • INVESTIGATE // The “I” stands for investigate the source. In this case, I would be thinking about Ezra Klein, his podcast, and the fact-checking process The New York Times uses. The show credits a fact checker. I don’t know much about that process, but I do know that in the credits they always list the fact checker, as well as any researchers whose work was mentioned.
  • FIND // “F” stands for find trusted coverage. I would want to look at other news organizations and what they may have shared to support the claim that nurses are being discriminated against in this way regarding their compensation.
  • TRACE // And finally, T for trace back to the original source. In this case, I imagine the researcher would be fairly easy to find and would be likely to have done quite a bit of scholarship assessing this claim.

If you would like to see me walk through how I approached this fact checking using SIFT, watch the Using Mike Caulfield’s SIFT Framework to Test a Claim About Wage Discrimination Against Nurses video on the Teaching in Higher Ed YouTube channel.

Watch: Using Mike Caulfield’s SIFT Framework to Test a Claim About Wage Discrimination Against Nurses

Fact Checking the Claim via Mike Caulfield’s Critical Thinking/Doing with AI Experimentation

Some of you may know that Mike Caulfield has been experimenting with what artificial intelligence can and cannot currently do to support our fact-checking efforts. The short version: the standard AI summary that appears when you type a question into a Google search is not particularly helpful for an individual’s fact checking. However, he has built a custom GPT and other tools that put some parameters around the prompts, and he also encourages us to have more of a back and forth as we consider whether what we are looking at is what we think we’re looking at, and whether or not it is accurate.

This is the second of two videos exploring different approaches to fact-checking a claim I heard on The Ezra Klein Show (“We Didn’t Ask for This Internet,” featuring Cory Doctorow and Tim Wu). In the first video, I used Mike Caulfield’s SIFT framework. In this one, I experiment with his emerging work on how artificial intelligence can — and cannot — support fact-checking.

Watch: Fact-Checking w/ AI: Testing Claims Using Mike Caulfield’s New Critical Thinking with AI Approach

Some of the resources and references mentioned include:

Learning Out Loud

As I wrap up this post, I’m reminded of how challenging we can make it for ourselves when we commit to a life filled with learning out loud (or maybe that’s just me?). I’ll admit that part of why I went down a less-than-helpful rabbit trail not once but twice was because I am afraid of looking foolish (or dare I say outright wrong?) in my experimentation with this stuff.

Mike Caulfield reminds us that we should always remember what our aim is in our fact checking and overall information literacy efforts. In this case, I’m an average person who knows hardly anything about how nurses are paid (except for at the university where I work). I’m pretty much the perfect candidate to kick the tires on these tools and resources to see what it looks like when we check claims we see online (or, in this case, hear on a podcast).

My goal is to equip others to be better able to assess if what they’re looking at is what they think it is and to determine the credibility of what’s being shared. Given how quickly AI is changing the fact-checking landscape and the consequences of living in a society in which lies are so blatantly propagated, continuing to get better at this stuff and share with others seems an important and necessary thing to do.
