Tag: Divide

  • Class Divide, Debt, and the Search for a Future


    For Generation Z, the old story of social mobility—study hard, go to college, work your way up—has lost its certainty. The class divide that once seemed bridgeable through education now feels entrenched, as debt, precarious work, and economic volatility blur the promise of progress.

    The new economy—dominated by artificial intelligence, speculative assets like cryptocurrency, and inflated housing markets—has not delivered stability for most. Instead, it’s widened gaps between those who own and those who owe. Many young Americans feel locked out of wealth-building entirely. Some have turned to riskier bets—digital assets, gig work, or start-ups powered by AI tools—to chase opportunities that traditional institutions no longer provide. Others have succumbed to despair. Suicide rates among young adults have climbed sharply in recent years, correlating with financial stress, debt, and social isolation.

    And echoing through this uncertain landscape is a song that first rose from the coalfields of Kentucky during the Great Depression—Florence Reece’s 1931 protest hymn, “Which Side Are You On?”

    Come all you good workers,

    Good news to you I’ll tell,

    Of how the good old union

    Has come in here to dwell.

    Which side are you on?

    Which side are you on?

    Nearly a century later, those verses feel newly urgent—because Gen Z is again being forced to pick a side: between solidarity and survival, between reforming a broken system and resigning themselves to it.


    The Class Divide and the Broken Ladder

    Despite record levels of education, Gen Z faces limited social mobility. College remains a class marker, not an equalizer. Students from affluent families attend better-funded universities, graduate on time, and often receive help with housing or job placement. Working-class and first-generation students, meanwhile, navigate under-resourced campuses, heavier debt, and weaker professional networks.

    The Pew Research Center found that first-generation college graduates have nearly $100,000 less in median wealth than peers whose parents also hold degrees. For many, the degree no longer guarantees a secure foothold in the middle class—it simply delays financial independence.

    They say in Harlan County,

    There are no neutrals there,

    You’ll either be a union man,

    Or a thug for J. H. Blair.

    The metaphor still fits: there are no neutrals in the modern class struggle over debt, housing, and automation.


    Debt, Doubt, and the New Normal

    Gen Z borrowers owe an average of around $23,000 in student loans, a figure growing faster than any other generation’s debt load. Over half regret taking on those loans. Many delay buying homes, having children, or even seeking medical care. Those who drop out without degrees are burdened with debt and little to show for it.

    The debt-based model has become a defining feature of American life—especially for the working class. The price of entry to a better future is borrowing against one’s own.

    Don’t scab for the bosses,

    Don’t listen to their lies,

    Us poor folks haven’t got a chance

    Unless we organize.

    If Reece’s song once called miners to unionize against coal barons, its spirit now calls borrowers, renters, adjuncts, and gig workers to collective resistance against financial systems that profit from their precarity.


    AI and the Erosion of Work

    Artificial intelligence promises efficiency, but it also threatens to hollow out the entry-level job market Gen Z depends on. Automation in journalism, design, law, and customer service cuts off rungs of the career ladder just as young workers reach for them.

    While elite graduates may move into roles that supervise or profit from AI, working-class Gen Zers are more likely to face displacement. AI amplifies the class divide: it rewards those who already have capital, coding skills, or connections—and sidelines those who don’t.


    Crypto Dreams and Financial Desperation

    Locked out of traditional wealth paths, many young people turned to cryptocurrency during the pandemic. Platforms like Robinhood and Coinbase promised quick gains and independence from the “rigged” economy. But when crypto markets crashed in 2022, billions in speculative wealth evaporated. Some who had borrowed or used student loan refunds to invest lost everything.

    Online forums chronicled not only the financial losses but also the psychological fallout—stories of panic, shame, and in some tragic cases, suicide. The new “digital gold rush” became another mechanism for transferring wealth upward.


    The Real Estate Wall

    While digital markets rise and fall, real estate remains the ultimate symbol of exclusion. Home prices have climbed over 40 percent since 2020, while mortgage rates hover near 8 percent. For most of Gen Z, ownership is out of reach.

    Older generations built equity through housing; Gen Z rents indefinitely, enriching landlords and institutional investors. Without intergenerational help, the “starter home” has become a myth. In America’s new class order, those who inherit property inherit mobility.


    Despair and the Silent Crisis

    Behind the data lies a mental health emergency. The CDC reports that suicide among Americans aged 10–24 has risen nearly 60 percent in the past decade. Economic precarity, debt, housing insecurity, and climate anxiety all contribute.

    Therapists describe “financial trauma” as a defining condition for Gen Z—chronic anxiety rooted in systemic instability. Universities respond with mindfulness workshops, but few confront the deeper issue: a society that privatized risk and monetized hope.

    They say in Harlan County,

    There are no neutrals there—

    Which side are you on, my people,

    Which side are you on?

    The question lingers like a challenge to policymakers, educators, and investors alike.


    A Two-Tier Future

    Today’s economy is splitting into two distinct realities:

    • The secure class, buffered by family wealth, education, AI-driven income, and real estate assets.

    • The precarious class, burdened by loans, high rents, unstable work, and psychological strain.

    The supposed democratization of opportunity through technology and education has in practice entrenched a new feudalism—one coded in algorithms and contracts instead of coal and steel.


    Repairing the System, Not the Student

    For Generation Z, the American Dream has become a high-interest loan. Education, technology, and financial innovation—once tools of liberation—now function as instruments of control.

    Reforming higher education is necessary, but not sufficient. The deeper work lies in redistributing power: capping predatory interest rates, investing in affordable housing, curbing speculative bubbles, ensuring that AI’s gains benefit labor as well as capital, and confronting the mental health crisis that shadows all of it.

    Florence Reece’s song endures because its question has never been answered—only updated. As Gen Z stands at the intersection of debt and digital capitalism, that question rings louder than ever:

    Which side are you on?


    Sources

    • Florence Reece, “Which Side Are You On?” (1931).

    • Pew Research Center, “First-Generation College Graduates Lag Behind Their Peers on Key Economic Outcomes,” 2021.

    • Dēmos, The Debt Divide: How Student Debt Impacts Opportunities for Black and White Borrowers, 2016.

    • EducationData.org, “Student Loan Debt by Generation,” 2024.

    • Federal Reserve Bank of St. Louis, Gen Z Student Debt and Wealth Data Brief, 2022.

    • CNBC, “Gen Z vs. Their Parents: How the Generations Stack Up Financially,” 2024.

    • WUSF, “Generation Z’s Net Worth Is Being Undercut by College Debt,” 2024.

    • Newsweek, “Student Loan Update: Gen Z Hit with Highest Payments,” 2024.

    • The Kaplan Group, “How Student Debt Is Locking Millennials and Gen Z Out of Homeownership,” 2024.

    • CDC, Suicide Mortality in the United States, 2001–2022, National Center for Health Statistics, 2023.

    • Brookings Institution, “The Impact of AI on Labor Markets: Inequality and Automation,” 2024.

    • CNBC, “Crypto Crash Wipes Out Billions in Investor Wealth, Gen Z Most Exposed,” 2023.

    • Zillow, “U.S. Housing Affordability Reaches Lowest Point Since 1989,” 2024.


  • More states adopt laws defining ‘man’ and ‘woman,’ adding to Title IX divide



    Dive Brief:

    • More states are defining what it means to be a man and woman in state law, with Texas poised to become the 14th Republican-leaning state to do so since 2023. The state’s sex definition bill was approved last week and now awaits Gov. Greg Abbott’s signature. 
    • Two additional states — Nebraska and Indiana — regulate the definition of sex through state executive orders, according to the Movement Advancement Project, a nonprofit that tracks legislation related to LGBTQ+ issues. 
    • While the impact of these laws may vary from state to state, they set the stage to prevent transgender students from accessing facilities and joining athletic teams aligning with their gender identities.

    Dive Insight:

    Proponents of sex definition legislation say it protects women and girls from sex discrimination based on “immutable biological differences” that can be seen before or at birth. Advocates have used the same argument in recent years to interpret Title IX, the federal civil rights law preventing sex discrimination in education programs, to separate transgender students from girls’ and women’s athletic teams and spaces.

    The Texas legislation, for example, says “biological differences between the sexes mean that only females are able to get pregnant, give birth, and breastfeed children” and that “males are, on average, bigger, stronger, and faster than females.” These differences, it says, “are enduring and may, in some circumstances, warrant the creation of separate social, educational, athletic, or other spaces in order to ensure individuals’ safety and allow members of each sex to succeed and thrive.”

    The language closely mirrors an executive order issued by President Donald Trump upon his return to the Oval Office in January. That order established that “it is the policy of the United States to recognize two sexes, male and female.” The order said “these sexes are not changeable and are grounded in fundamental and incontrovertible reality,” and that the concept of “gender identity” is “disconnected from biological reality and sex and existing on an infinite continuum.”

    The language was also reflected in a draft resolution agreement proposed to the Maine Department of Education by the U.S. Department of Education after a short, one-month investigation by the federal agency’s Office for Civil Rights found the state was violating Title IX in its policy allowing transgender students to participate in girls’ and women’s sports teams.

    The agreement, which Maine refused to sign, would have had the state department and public schools define “females” and “males” in their policies and require the state to publicize the definitions on its website.

    The Maine agency would have been required to notify schools that “there are only two sexes (female and male) because there are only two types of gametes (eggs and sperm); and the sex of a human — female or male — is determined genetically at conception (fertilization), observable before birth, and unchangeable.”

    “Gender” would be the same as “sex” under the agreement.

    The case is currently pending with the U.S. Department of Justice, which took over enforcement of the investigation and its findings after the state refused to sign the agreement.

    The agreement would have also required the state to change its records to erase transgender girls’ athletic accomplishments on girls’ sports teams, which is also a potential side effect of the legislation in 13 states defining sex.

    Those opposing recent sex definition laws say they are transphobic, as they don’t recognize transgender people’s gender identity. 

    “These laws could have dangerous implications for transgender people when it comes to bathrooms, identity documents, and other areas of law or policy,” MAP said, “but because these government gender regulation laws are often vaguely written, the actual impact of these laws remains to be seen in each state.”


  • Bridging the Skills Divide: Higher Education’s Role in Delivering the UK’s Plan for Change


    • Dr Ismini Vasileiou is Associate Professor at De Montfort University, Director of the East Midlands Cyber Security Cluster and Director and Co-Chair of UKC3.

    Higher education has always played a critical role in skills development, from professional fields like Medicine, Dentistry, and Engineering to more recent models such as degree apprenticeships. However, as the UK’s digital economy evolves at an unprecedented pace, there is a growing need to rebalance provision, ensuring that universities continue to equip graduates with both theoretical expertise and industry-ready capabilities in areas such as AI, cybersecurity, and automation.

    The government’s strategic focus on workforce development underscores the importance of these changes, with higher education well-placed to lead the transformation. As industries adapt, the need for a highly skilled workforce has never been greater. The UK Government’s Plan for Jobs outlines a strategic vision for workforce development, placing skills at the heart of economic growth, national security, and regional resilience.

    With the new higher education reform expected in Summer 2025, the sector faces a pivotal moment. The Department for Education has announced that the upcoming changes will focus on improving student outcomes, employment pathways, and financial sustainability in HE. While universities are autonomous institutions, government policy and funding mechanisms are key drivers influencing institutional priorities. The increasing emphasis on workforce development – particularly in cybersecurity, AI, and other high-demand sectors – suggests that universities will likely need to adapt, particularly as new regulatory and funding structures emerge under the forthcoming HE reform.

    The National Skills Agenda: Why Higher Education Matters

    The skills gap is no longer an abstract policy concern; it is a pressing challenge with economic and security implications. The introduction of Degree Apprenticeships in 2015 was a landmark shift towards integrating academic learning with industry needs. Subsequent initiatives, including MSc conversion courses in AI and Data Science, Level 6 apprenticeships, and the Lifelong Learning Entitlement (LLE), serve as policy levers designed to encourage and facilitate a more skills-oriented higher education landscape, rather than as evidence of an inherent need for change. Through these mechanisms, the government is actively shaping pathways that incentivise greater emphasis on employability and applied learning within universities.

    The Plan for Change accelerates this momentum, funding over 30 regional projects designed to enhance cyber resilience and workforce readiness. One example is the CyberLocal programme, a government-backed initiative (Department for Science, Innovation and Technology) focused on upskilling local authorities, SMEs, and community organisations in cybersecurity. CyberLocal connects universities, businesses, and local governments to deliver tailored cyber resilience training, addressing the increasing threats to national digital security. More information can be found through CyberLocal’s page.

    Financial Pressures and the Case for Skills-Based Education

    At the same time, the financial landscape of HE is shifting. Declining student enrolments in traditional subjects, increasing operational costs, and a competitive global market have left many institutions reassessing their sustainability strategies. The upcoming higher education reform will shape policy from 2025 onwards, and universities must determine how best to adapt to new funding models and student expectations.

    While skills-based education is often positioned as a solution, it is not an immediate financial fix. Many Degree Apprenticeships are run at a loss due to administrative complexities, employer engagement challenges, and high operational costs. Several articles, including those previously published at HEPI, highlight that while demand is growing, institutions face significant challenges in delivering these programmes at scale.

    Government-backed funding in AI training and cybersecurity resilience offers targeted opportunities, but these remain limited in scope. Some universities have found success in co-designed upskilling and reskilling initiatives, particularly where regional economic growth strategies align with HE capabilities. The Institute of Coding, a national collaboration between universities and employers funded by the Office for Students, has developed industry-focused digital skills training, particularly in software development and cybersecurity. Additionally, the Office for Students Short Course trial has enabled universities to develop flexible, modular programmes that respond directly to employer demand in areas such as AI, digital transformation, and cybersecurity. Other examples include the National Centre for AI in Tertiary Education, which supports universities in embedding AI skills into their curricula to meet the growing demand for AI literacy across multiple sectors. However, a broader financial model that enables sustainable, scalable skills education is still required.

    Regional Collaboration and Workforce Development

    Since 2018, the Department for Education (DfE) has supported the creation of Institutes of Technology (IoTs), with 19 now operational across England and Wales. These institutions prioritise digital and cyber education, aligning with local skills needs and economic strategies. Strengthening collaboration between HE and IoTs could enable universities to support regionally tailored workforce development.

    Examples such as the East Midlands Freeport, the Leicester and Leicestershire Local Skills Observatory, and CyberLocal illustrate the power of localised approaches. The Collective Skills Observatory, a joint initiative between De Montfort University and the East Midlands Chamber, is leveraging real-time workforce data to ensure that training provision matches employer demand. These initiatives could provide a blueprint for future HE collaboration with regional skills networks, particularly as the UK government reviews post-2025 skills policy.

    Cyber Resilience, AI, and the Challenge of Adaptive Curricula

    The government’s focus on cyber resilience and AI-driven industries underscores the urgent need for skills development in these areas. With AI poised to reshape global industries, universities must ensure graduates are prepared for rapidly evolving job roles. However, one of the biggest challenges is the slow pace of curriculum development in higher education.

    Traditional course approval processes mean new degrees can take two to three years to develop. In fields like AI, where breakthroughs happen on a monthly rather than yearly basis, this presents a serious risk of curricula becoming outdated before they are even launched. Universities must explore faster, more flexible course design models, such as shorter accreditation cycles, modular learning pathways, and micro-credentials.

    Government-backed initiatives, such as the Institute of Coding, have demonstrated alternative models for responsive skills training. As the HE reform unfolds, universities will need to consider how existing governance structures can adapt to the demands of an AI-driven economy.

    A New Skills Ecosystem: HE’s Role in the Post-2025 Landscape

    The forthcoming higher education reform is expected to introduce significant policy changes, including revised funding structures, greater emphasis on employability and skills-based education, and stronger incentives for industry partnerships, particularly in STEM and digital sectors.  

    Higher education must position itself as a leader in skills development. The recent Universities UK (UUK) blueprint calls for deeper collaboration between the further and higher education sectors, recognising their complementary strengths. Further education offers agility and vocational expertise, while higher education provides advanced research and higher-level skills training – together, they can create a seamless learner journey.

    At the same time, national initiatives such as Skills England, the Digital Skills Partnerships, and Degree Apprenticeships present opportunities for universities to engage in long-term skills planning. The integration of Lifelong Learning Entitlement (LLE) loans will further support continuous upskilling and career transitions, reinforcing the role of HE in lifelong workforce development.

    Conclusion: Shaping the Future of HE Through Skills and Collaboration

    With the HE reform announcement expected in Summer 2025, universities must act now to align with the government’s long-term skills agenda. The future of HE is being written now, and skills must be at the heart of it.


  • Will GenAI narrow or widen the digital divide in higher education?


    by Lei Fang and Xue Zhou

    This blog is based on our recent publication: Zhou, X, Fang, L, & Rajaram, K (2025) ‘Exploring the digital divide among students of diverse demographic backgrounds: a survey of UK undergraduates’ Journal of Applied Learning and Teaching, 8(1).

    Introduction – the widening digital divide

    Our recent study (Zhou et al, 2025) surveyed 595 undergraduate students across the UK to examine the evolving digital divide across all forms of digital technologies. Although higher education is expected to narrow this divide and build students’ digital confidence, our findings revealed the opposite. We found that the gap in digital confidence and skills between widening participation (WP) and non-WP students widened progressively throughout the undergraduate journey. While students reported peak confidence in Year 2, this was followed by a notable decline in Year 3, when the digital divide became most pronounced. This drop coincides with a critical period when students begin applying their digital skills in real-world contexts, such as job applications and final-year projects.

    Our study (Zhou et al, 2025) also found that while universities offer a wide range of support to WP students, such as laptop loans, free access to remote systems, extracurricular digital skills training, and targeted funding, these students often do not make use of these resources. The core issue lies not in the absence of support, but in its uptake. WP students are often excluded from the peer networks and digital communities where emerging technologies are introduced, shared, and discussed. From a Connectivist perspective (Siemens, 2005), this lack of connection to digital, social, and institutional networks limits their awareness, confidence, and ability to engage meaningfully with available digital tools.

    Building on these findings, this blog asks a timely question: as Generative Artificial Intelligence (GenAI) becomes embedded in higher education, will it help bridge this divide or deepen it further?

    GenAI may widen the digital divide — without proper strategies

    While the digital divide in higher education is already well-documented in relation to general technologies, the emergence of GenAI introduces new risks that may further widen this gap (Cachat-Rosset & Klarsfeld, 2023). This matters because students who are GenAI-literate often experience better academic performance (Sun & Zhou, 2024), making the divide not just about access but also about academic outcomes.

    Unlike traditional digital tools, GenAI often demands more advanced infrastructure — including powerful devices, high-speed internet, and in many cases, paid subscriptions to unlock full functionality. WP students, who already face barriers to accessing basic digital infrastructure, are likely to be disproportionately excluded. This divide is not only student-level but also institutional. A few well-funded universities are able to subscribe to GenAI platforms such as ChatGPT, invest in specialised GenAI tools, and secure campus-wide licenses. In contrast, many institutions, particularly those under financial pressure, cannot afford such investments. These disparities risk creating a new cross-sector digital divide, where students’ access to emerging technologies depends not only on their background, but also on the resources of the university they attend.

    In addition, the adoption of GenAI currently occurs primarily through informal channels, via peers, online communities, or individual experimentation, rather than through structured teaching (Shailendra et al, 2024). WP students, who may lack access to these digital and social learning networks (Krstić et al, 2021), are therefore less likely to become aware of new GenAI tools, let alone develop the confidence and skills to use them effectively. Even when they do engage with GenAI, students may experience uncertainty, confusion, or fear about using it appropriately, especially in the absence of clear guidance around academic integrity, ethical use, or institutional policy. This ambiguity can lead to increased anxiety and stress, contributing to wider concerns around mental health in GenAI learning environments.

    Another concern is the risk of impersonal learning environments (Berei & Pusztai, 2022). When GenAI tools are implemented without inclusive design, the experience can feel detached and isolating, particularly for WP students, who often already feel marginalised. While GenAI tools may streamline administrative and learning processes, they can also weaken the sense of connection and belonging that is essential for student engagement and success.

    GenAI can narrow the divide — with the right strategies

    Although WP students are often excluded from digital networks, which Connectivism highlights as essential for learning (Goldie, 2016), GenAI, if used thoughtfully, can help reconnect them by offering personalised support, reducing geographic barriers, and expanding access to educational resources.

    To achieve this, we propose five key strategies:

    • Invest in infrastructure and access: Universities must ensure that all students have the tools to participate in the AI-enabled classroom including access to devices, core software, and free versions of widely used GenAI platforms. While there is a growing variety of GenAI tools on the market, institutions facing financial pressures must prioritise tools that are both widely used and demonstrably effective. The goal is not to adopt everything, but to ensure that all students have equitable access to the essentials.
    • Rethink training with inclusion in mind: GenAI literacy training must go beyond traditional models. It should reflect Equality, Diversity and Inclusion principles, recognising the different starting points students bring and offering flexible, practical formats. Micro-credentials on platforms like LinkedIn Learning or university-branded short courses can provide just-in-time, accessible learning opportunities. These resources are available anytime and from anywhere, enabling students who were previously excluded, such as those in rural or under-resourced areas, to access learning on their own terms.
    • Build digital communities and peer networks: Social connection is a key enabler of learning (Siemens, 2005). Institutions should foster GenAI learning communities where students can exchange ideas, offer peer support, and normalise experimentation. Mental readiness is just as important as technical skill and being part of a supportive network can reduce anxiety and stigma around GenAI use.
    • Design inclusive GenAI policies and ensure ongoing evaluation: Institutions must establish clear, inclusive policies around GenAI use that balance innovation with ethics (Schofield & Zhang, 2024). These policies should be communicated transparently and reviewed regularly, informed by diverse student feedback and ongoing evaluation of impact.
    • Adopt a human-centred approach to GenAI integration: Following UNESCO’s human-centred approach to AI in education (UNESCO, 2024; 2025), GenAI should be used to enhance, not replace, the human elements of teaching and learning. While GenAI can support personalisation and reduce administrative burdens, the presence of academic and pastoral staff remains essential. By freeing staff from routine tasks, GenAI can enable them to focus more fully on high-impact, relational work, such as the mentoring, guidance, and personalised support that WP students often benefit from most.

    Conclusion

    Generative AI alone will not determine the future of equity in higher education; our actions will. Without intentional, inclusive strategies, GenAI risks amplifying existing digital inequalities, further disadvantaging WP students. However, by proactively addressing access barriers, delivering inclusive and flexible training, building supportive digital communities, embedding ethical policies, and preserving meaningful human interaction, GenAI can become a powerful tool for inclusion. The digital divide doesn’t close itself; institutions must embed equity into every stage of GenAI adoption. The time to act is not once systems are already in place; it is now.

    Dr Lei Fang is a Senior Lecturer in Digital Transformation at Queen Mary University of London. Her research interests include AI literacy, digital technology adoption, the application of AI in higher education, and risk management. [email protected]

    Professor Xue Zhou is a Professor in AI in Business Education at the University of Leicester. Her research interests fall in the areas of digital literacy, digital technology adoption, cross-cultural adjustment and online professionalism. [email protected]

    Author: SRHE News Blog

    An international learned society, concerned with supporting research and researchers into Higher Education


  • The Growing Gender Divide in STEM Education


    Title: The Hidden STEM Gender Gap: Why Progress at Top Universities Masks a Growing Crisis

    Source: Brookings Institution

    Authors: Joseph R. Cimpian and Jo R. King

    A recent Brookings Institution article, “The Hidden STEM Gender Gap: Why Progress at Top Universities Masks a Growing Crisis,” paints a complex picture of the state of gender equity in STEM higher education. While top universities have made notable progress in narrowing the gender gap in physics, engineering, and computer science (PECS) majors, institutions serving students with lower math achievement are falling further behind.

    Over the past two decades, the male-to-female ratio in PECS majors decreased from 2.2:1 to 1.5:1 at universities with the highest average math SAT scores. However, at institutions with the lowest average scores, the gender gap has dramatically widened from 3.5:1 to 7.1:1. This disparity persists even when accounting for differences in math ability, confidence, interests, and academic preparation. The findings point to institutional barriers that disproportionately impact women at less selective schools.

    The institutions struggling most with gender equity serve the majority of American students, particularly students of color and those from lower-income families. PECS degrees offer a path to high-paying careers, and research suggests women may see an even greater earnings premium from these majors at less selective institutions compared to their more selective counterparts. By failing to recruit and retain women in PECS programs, we are denying millions the opportunity to benefit from these rewarding fields.

    The authors propose several strategies to shrink this gap:

    • Allocate resources strategically, directing support to the institutions facing the greatest challenges rather than those already making progress.
    • Adapt proven practices like undergraduate research and peer mentoring to the unique needs and constraints of less-resourced institutions, forging creative partnerships to ensure successful implementation at scale.
    • Mobilize external partners, from nonprofit organizations to industry groups, to strategically focus their outreach and pathway-building efforts on the schools and communities with the most severe gender imbalances.

    Achieving gender equity in STEM will require acknowledging where we are falling short and building the collective determination to change. The success of top universities shows that progress is possible, but it will take targeted interventions and a sustained commitment to extending opportunities to all students. Until then, our celebrations of narrowing gaps will ring hollow for the women left behind.

    The full article is available from the Brookings Institution, and the complete research is also published in the journal Science.

    Alex Zhao




  • The Trickiness of AI Bootcamps and the Digital Divide


    As readers of this series know, I’ve developed a six-session design/build workshop series for learning design teams to create an AI Learning Design Assistant (ALDA). In my last post in this series, I provided an elaborate ChatGPT prompt that can be used as a rapid prototype and that everyone can try out and experiment with. In this post, I’d like to focus on how to address the challenges of AI literacy effectively and equitably.

    We’re in a tricky moment with generative AI. In some ways, it’s as if writing has just been invented, but printing presses are already everywhere. The problem of mass distribution has already been solved. But nobody’s invented the novel yet. Or the user manual. Or the newspaper. Or the financial ledger. We don’t know what this thing is good for yet, either as producers or as consumers. We don’t know how, for example, the invention of the newspaper will affect the ways in which we understand and navigate the world.

    And, as with all technologies, there will be haves and have-nots. We tend to talk about economic and digital divides in terms of our students. But the divide among educational institutions (and workplaces) can be equally stark and has a cascading effect. We can’t teach literacy unless we are literate.

    This post examines the literacy challenge in light of a study published by Harvard Business School and reported on by Boston Consulting Group (BCG). BCG’s report and the original paper are both worth reading because they emphasize different findings. But the crux is the same:

    • Using AI does enhance the productivity of knowledge workers.
    • Weaker knowledge workers improve more than stronger ones.
    • AI is helpful for some kinds of tasks but can actually harm productivity for others.
    • Training workers in AI can hurt rather than help their performance if they learn the wrong lessons from it.

    The ALDA workshop series is intended to be a kind of AI literacy boot camp. Yes, it aspires to deliver an application that solves a serious institutional process problem by the end. But the real, important, lasting goal is literacy in techniques that can improve worker performance while avoiding the pitfalls identified in the study.

    In other words, the ALDA BootCamp is a case study and an experiment in literacy. And, unfortunately, it also has implications for the digital divide due to the way in which it needs to be funded. While I believe it will show ways to scale AI literacy effectively, it does so at the expense of increasing the digital divide. I will address that concern as well.

    The study

    The headline of the study is that AI usage increased the performance of consultants—especially less effective consultants—on “creative tasks” while decreasing their performance on “business tasks.” The study, in contrast, refers to “frontier” tasks, meaning tasks that generative AI currently does well, and “outside the frontier” tasks, meaning the opposite. While the study provides the examples used, it never clearly defines the characteristics of what makes a task “outside the frontier.” (More on that in a bit.) At any rate, the study shows gains for all knowledge workers on a variety of tasks, with particularly impressive gains from knowledge workers in the lower half of the range of work performance:

    As I said, we’ll get to the red part in a bit. Let’s focus on the performance gains and, in particular, the ability for ChatGPT to equalize performance gains among workers:

    Looking at these graphs reminds me of the benefits we’ve seen from adaptive learning in the domains where it works. Adaptive learning can help many students, but it is particularly useful in helping students who get stuck. Once they are helped, they tend to catch up to their peers in performance. This isn’t quite the same since the support is ongoing. It’s more akin to spreadsheet formulas for people who are good at analyzing patterns in numbers (like a pro forma, for example) but aren’t great at writing those formulas.

    The bad news

    For some tasks, AI made the workers worse. The paper refers to these areas as outside “the jagged frontier.” Why “jagged”? While the authors aren’t explicit, I’d say that (1) the boundaries of AI capabilities are not obviously or evenly bounded, (2) the boundary moves as the technology evolves, and (3) it can be hard to tell even in the moment which side of the boundary you’re on. On this last point, the BCG report highlights that some training made workers perform worse. They speculate it might be because of overconfidence.

    What are those tasks in the red zone of the study? The Harvard paper gives us a clue that has implications for how we approach teaching AI literacy. They write:

    In our study, since AI proved surprisingly capable, it was difficult to design a task in this experiment outside the AI’s frontier where humans with high human capital doing their job would consistently outperform AI. However, navigating AI’s jagged capabilities frontier remains challenging. Even for experienced professionals engaged in tasks akin to some of their daily responsibilities, this demarcation is not always evident. As the boundaries of AI capabilities continue to expand, often exponentially, it becomes incumbent upon human professionals to recalibrate their understanding of the frontier and for organizations to prepare for a new world of work combining humans and AI.

    Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality

    The experimental conditions that the authors created suggest to me that challenges can arise from critical context or experience that is not obviously missing. Put another way, the AI may perform poorly on synthetic thinking tasks that are partly based on experience rather than just knowledge. But that’s both a guess and somewhat beside the point. The real issue is that AI makes knowledge workers better except when it makes them worse, and it’s hard to know what it will do in a given situation.

    The BCG report includes a critical detail that I believe is likely related to the problem of the invisible jagged frontier:

    The strong connection between performance and the context in which generative AI is used raises an important question about training: Can the risk of value destruction be mitigated by helping people understand how well-suited the technology is for a given task? It would be rational to assume that if participants knew the limitations of GPT-4, they would know not to use it, or would use it differently, in those situations.

    Our findings suggest that it may not be that simple. The negative effects of GPT-4 on the business problem-solving task did not disappear when subjects were given an overview of how to prompt GPT-4 and of the technology’s limitations….

    Even more puzzling, they did considerably worse on average than those who were not offered this simple training before using GPT-4 for the same task. (See Exhibit 3.) This result does not imply that all training is ineffective. But it has led us to consider whether this effect was the result of participants’ overconfidence in their own abilities to use GPT-4—precisely because they’d been trained.

    How People Create—And Destroy—Value With Generative AI

    BCG speculates this may be due to overconfidence, which is a reasonable guess. If even the experts don’t know when the AI will perform poorly, then the average knowledge worker should be worse than the experts at predicting. If the training didn’t improve their intuitions about when to be careful, then it could easily exacerbate a sense of overconfidence.

    Let’s be clear about what this means: The AI prompt engineering workshops you’re conducting may actually be causing your people to perform worse rather than better. Sometimes. But you’re not sure when or how often.

    While I don’t have a confident answer to this problem, the ALDA project will pilot a relatively novel approach to it.

    Two-sided prompting and rapid prototype projects

    The ALDA project employs two approaches that I believe may help with the frontier invisibility problem and its effects. One is in the process, while the other is in the product.

    The process is simple: Pick a problem that’s a bit more challenging than a solo prompt engineer could take on or that you want to standardize across your organization. Deliberately pick a problem that’s on the jagged edge where you’re not sure where the problems will be. Run through a series of rapid prototype cycles using cheap and easy-to-implement methods like prompt engineering supported by Retrieval Augmented Generation. Have groups of practitioners test the application on a real-world problem with each iteration. Develop a lightweight assessment tool like a rubric. Your goal isn’t to build a perfect app or conduct a journal-worthy study. Instead, you want to build a minimum viable product while sharpening and updating the instincts of the participants regarding where the jagged line is at the moment. This practice could become habitual and pervasive in moderately resource-rich organizations.
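
    To make that cycle concrete, here is a minimal, hypothetical Python sketch of the kind of cheap prompt-engineering-plus-retrieval prototype described above, paired with a lightweight rubric check. The toy document store, the keyword-overlap retrieval, the call_llm stub, and the rubric criteria are all illustrative assumptions rather than the actual ALDA implementation; in practice you would substitute your organization's own materials, your preferred chat-completion API, and a rubric written by your practitioners.

    ```python
    # Hypothetical sketch of one rapid-prototype cycle:
    # retrieve context, assemble a prompt, call a model, score against a rubric.
    # call_llm() is a stand-in for whatever chat-completion API you use.

    DOCUMENTS = [  # toy stand-in for your institution's own guidance documents
        "Course outcomes should be measurable and aligned with assessments.",
        "Accessibility guidelines require alt text and captioned video.",
        "Rubrics should describe observable performance at each level.",
    ]

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Crude retrieval: rank documents by keyword overlap with the query."""
        q_words = set(query.lower().split())
        ranked = sorted(DOCUMENTS, key=lambda d: -len(q_words & set(d.lower().split())))
        return ranked[:k]

    def build_prompt(task: str, context: list[str]) -> str:
        """Assemble a retrieval-augmented prompt for the prototype."""
        context_block = "\n".join(f"- {c}" for c in context)
        return (
            "You are assisting a learning designer.\n"
            f"Relevant institutional guidance:\n{context_block}\n\n"
            f"Task: {task}\n"
            "Draft a response grounded in the guidance above."
        )

    def call_llm(prompt: str) -> str:
        """Stub standing in for a real model call."""
        return f"[model output for a prompt of {len(prompt)} characters]"

    # Lightweight rubric: in this sketch each criterion is just a keyword check;
    # in a real cycle, practitioners would score drafts against a written rubric.
    RUBRIC = {"mentions alignment": "align", "mentions accessibility": "access"}

    def score(output: str) -> dict[str, bool]:
        return {name: key in output.lower() for name, key in RUBRIC.items()}

    if __name__ == "__main__":
        task = "Draft learning outcomes for an intro data literacy module."
        draft = call_llm(build_prompt(task, retrieve(task)))
        print(draft)
        print(score(draft))
    ```

    Each iteration of such a cycle would tighten the prompt, the retrieval corpus, and the rubric based on where the test group finds the output falling on the wrong side of the frontier.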

    On the product side, the ALDA prototype I released in my last post demonstrates what I call “two-sided prompting.” By enabling the generative AI to take the lead in the conversation at times, asking questions rather than giving answers, I effectively created a fluid UX in which the application guides the knowledge worker toward the areas where she can make her most valuable contributions without unduly limiting the creative flow. The user can always start a digression or answer a question with a question. A conversation between experts with complementary skills often takes the form of a series of turn-taking prompts between the two, each one offering analysis or knowledge and asking for a reciprocal contribution. This pattern should invoke all the lifelong skills we develop when having conversations with human experts who can surprise us with their knowledge, their limitations, their self-awareness, and their lack thereof.
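
    As a rough illustration of what this two-sided pattern can look like in code, the sketch below uses a system prompt that tells the model to interview the user one question at a time, plus a simple turn-taking loop. The system prompt wording and the call_llm stub are my own assumptions for illustration, not the actual ALDA prototype.

    ```python
    # Hypothetical sketch of a two-sided prompting loop: the model is instructed
    # to lead by asking one question per turn instead of only answering.
    # call_llm() stands in for whatever chat-completion API you use.

    SYSTEM_PROMPT = (
        "You are a learning design assistant. Lead the conversation by asking the "
        "user ONE focused question at a time about their course design goals. "
        "After each answer, briefly summarize what you have learned so far, then "
        "ask the next question. If the user asks you a question, answer it "
        "concisely and then resume the interview."
    )

    def call_llm(messages: list[dict]) -> str:
        """Stub for a real chat-completion call; returns a canned question here."""
        return "What are the top two outcomes learners should achieve in this module?"

    def run_interview(max_turns: int = 3) -> list[dict]:
        messages = [{"role": "system", "content": SYSTEM_PROMPT}]
        for _ in range(max_turns):
            assistant_turn = call_llm(messages)   # the model prompts the human
            messages.append({"role": "assistant", "content": assistant_turn})
            print(f"Assistant: {assistant_turn}")
            user_turn = input("You: ")            # the human prompts the model back
            messages.append({"role": "user", "content": user_turn})
        return messages

    if __name__ == "__main__":
        run_interview()
    ```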

    I’d like to see the BCG study compared to the literature on how often we listen to expert colleagues or consultants—our doctors, for example—how effective we are at knowing when to trust our own judgment, and how people who are good at it learn their skills. At the very least, we’d have a mental model that is old, widely used, and offers a more skeptical counterbalance to our idea of the all-knowing machine. (I’m conducting an informal literature review on this topic and may write something about it if I find anything provocative.)

    At any rate, the process and UX features of AI “BootCamps”—or, more accurately, AI hackathon-as-a-practice—are not ones I’ve seen in other generative AI training course designs I’ve encountered so far.

    The equity problem

    I mentioned that relatively resource-rich organizations could run these exercises regularly. They need to be able to clear time for the knowledge workers, provide light developer support, and have the expertise necessary to design these workshops.

    Many organizations struggle with the first requirement and lack the second one. Very few have the third one yet because designing such workshops requires a combination of skills that is not yet common.

    The ALDA project is meant to be a model. When I’ve conducted public good projects like these in the past, I’ve raised vendor sponsorship and made participation free for the organizations. But this is an odd economic time. The sponsors who have paid $25,000 or more into such projects in the past have usually been either publicly traded or PE-owned. Most such companies in the EdTech sector have had to tighten their belts. So I’ve been forced to fund the ALDA project as a workshop paid for by the participants at a price that is out of reach of many community colleges and other access-oriented institutions, where this literacy training could be particularly impactful. I’ve been approached by a number of smart, talented, dedicated learning designers at such institutions that have real needs and real skills to contribute but no money.

    So I’m calling out to EdTech vendors and other funders: Sponsor an organization. A community college. A non-profit. A local business. We need their perspective in the ALDA project if we’re going to learn how to tackle the thorny AI literacy problem. If you want, pick a customer you already work with. That’s fine. You can ride along with them and help.

    Contact me at [email protected] if you want to contribute and participate.
