Tag: good

  • Faculty Say AI Is Impactful, but Not In a Good Way

    Faculty overwhelmingly agree that generative artificial intelligence will have an impact on teaching and learning in higher education, but whether that impact is positive or negative is still up for debate.

    Nine in 10 faculty members say that generative AI will diminish students’ critical thinking skills, and 95 percent say it will increase students’ overreliance on AI tools over time, according to a report out today from the American Association of Colleges and Universities and Elon University.

    In November, the groups surveyed 1,057 faculty members at U.S. institutions about their thoughts on generative AI’s impact. Eighty-three percent of faculty said the technology will decrease students’ attention spans, and 79 percent said they think the typical teaching model in their department will be affected by AI.

    Most professors—86 percent—said that the impact of AI on teachers will be “significant and transformative or at least noticeable,” the report states. Only 4 percent said that AI’s effect on teaching will “not amount to much.” About half of faculty respondents said AI will have a negative effect on students’ careers over the next five years, while 20 percent said it will have a positive effect and another 20 percent said it will be equally negative and positive.

    Faculty are largely unprepared for AI in the classroom, the report shows. About 68 percent of faculty said their institutions have not prepared them to use AI in teaching, student mentorship and scholarship. Recent graduates are underprepared, too: Sixty-three percent of professors said that last spring’s graduates were not very or not at all prepared to use generative AI at work, and 71 percent said the graduates were not prepared to understand ethical issues related to AI use.

    About a quarter of faculty don’t use any AI tools at all, and about a third don’t use them in teaching, according to the report. This faculty resistance is a challenge, survey respondents say. About 82 percent of faculty said that resistance to AI or unfamiliarity with AI are hurdles in adopting the tools in their departments.

    “These findings explain why nearly half of surveyed faculty view the future impact of GenAI in their fields as more negative than positive, while only one in five see it as more positive than negative,” Lynn Pasquerella, president of the AAC&U, wrote in her introduction to the report. “Yet, this is not a story of simple resistance to change. It is, instead, a portrait of a profession grappling seriously with how to uphold educational values in a rapidly shifting technological landscape.”

    While most professors—78 percent—said AI-driven cheating is on the rise, they are split about what exactly constitutes cheating. Just over half of faculty said it’s cheating for a student to follow a detailed AI-generated outline when writing a paper, while just under half said it is either a legitimate use of AI or they’re not sure. Meanwhile, 45 percent of faculty said that using generative AI to edit a paper is a legitimate use of the tool, while the remaining 55 percent said it was illegitimate or were unsure.

    Despite their agreement on generative AI’s overall impact, faculty are split on whether AI literacy is important for students. About half of professors said AI literacy is “extremely or very important” to their students’ success, while 11 percent said it’s slightly important and 13 percent said it’s irrelevant.

    Professors held a few hopeful predictions about generative AI. Sixty-one percent of respondents said it will improve and customize learning in the future. Four in 10 professors said it will increase the ability of students to write clearly, and 41 percent said it will improve students’ research skills.

  • House hearing: Is now a good time to regulate AI in schools?

    Dive Brief:

    • House lawmakers shared bipartisan concerns over the risks of students using artificial intelligence — from overreliance on the technology to security of student data — during a House Committee on Education and Workforce hearing on Wednesday.
    • Whether and how AI is regulated and safeguarded in the K-12 classroom at the federal level continues to stir debate. Democrats at the hearing said more guardrails are necessary, but the Trump administration has made adding them harder, both through executive orders aiming to block state-level regulations and through its efforts to dismantle the U.S. Department of Education.
    • Republicans, however, cautioned against rushing new regulations on AI to make sure innovation in education and the workforce isn’t stymied.

    Dive Insight:

    The House hearing — the first in a series to be held by the Education and Workforce Committee — came a month after President Donald Trump signed an executive order calling for the preemption of state laws regulating AI with exceptions for child safety protections.

    During the hearing’s opening remarks, the committee’s ranking member, Rep. Bobby Scott, D-Va., said Congress should not stand idly by while the Trump administration “may be ingratiating itself to big tech CEOs and preventing states from protecting Americans against” the dangers of AI.

    Instead, Scott said, Congress should take an active role in developing thoughtful regulations to “balance protecting students, workers and families” while also “fostering economic growth.”

    The ability to study and regulate AI’s impacts on education has been hindered under the Trump administration, Scott added, through the shuttering of the Education Department’s Office of Educational Technology, federal funding cuts at the Institute of Education Sciences, and attempts to significantly reduce staffing at the Office for Civil Rights.

    At the same time, the Trump administration is strongly encouraging schools to integrate AI tools in the classroom. Committee Chair Tim Walberg, R-Mich., praised the administration’s initiatives to support AI innovation in his opening statement.

    For Congress, Walberg said, “the goal should not be to rush into sweeping new rules and regulations, but to ensure schools, employers and training providers can keep pace with innovation while maintaining trust and prioritizing safety and privacy.”

    Some hearing witnesses also called for more transparency and guardrails for ed tech companies that roll out AI tools for students.  

    Because a lot of ed tech products lack transparency about their AI models, it’s more difficult for teachers and school administrators to make informed decisions about what AI tools to use in the classroom, said Alexandra Reeve Givens, president and CEO at the Center for Democracy & Technology.

    Key questions these companies need to publicly answer — but typically won’t disclose — include whether their tools are grounded in learning science and whether the tools have been tested for bias, she said. “Do they have appropriate guardrails for use by young people? What are their security and privacy protections?” Reeve Givens asked.

    Adeel Khan, founder and CEO of MagicSchool AI, also said in his testimony that shared standards and guardrails for AI tools in the classroom are necessary to protect students and understand which tools actually work.

    While AI education policy is primarily driven by state and local initiatives, Khan said that “the most constructive federal role is to support capacity and protections for children while investing in educator training, evidence building, procurement guidance and funding so districts can adopt responsibly.”

    The Brookings Institution also released a report Wednesday on AI in K-12 schools based on its analysis of over 400 research articles and hundreds of interviews with education stakeholders. 

    The institution’s report warns that AI’s risks currently outweigh its benefits for students. AI can threaten students in cognitive, emotional and social ways, the report said.

    To mitigate those risks, Brookings recommends a framework for K-12 schools as they continue to implement AI:

    • Teachers and students should be trained on when to instruct and learn with and without AI. The technology should also be used with evidence-based practices that engage students in deeper learning.
    • There needs to be holistic AI literacy that develops an understanding about AI’s capabilities, limitations and implications. Educators must also have robust professional development to use AI, and there should be clear plans for ethically using the technology while expanding equitable access to those tools in school communities.
    • Technology companies, governments, education systems, teachers and parents need to promote ethical and trustworthy design in AI tools as well as responsible regulatory frameworks to protect students.

  • Decoder Replay: Isn’t all for one and one for all a good thing?

    Under NATO, 32 countries have pledged to defend each other. Is the United States the glue that holds it all together?

  • The good and bad of roaming the world

    In six months I will move again. It will be my seventh move in less than two years.

    I’m not homesick for Calgary, Canada, where I started this journey. But I am tired of searching for new friendships and, sometimes, of carrying more clothes — and emotions — than I can fit into two suitcases.

    When I try to describe what moving around is like, I remember one moment. It was my second day in Peru, and everywhere around me were mountains of sand. Not a single plant in sight, not even a cactus.

    The sun was strong and I felt the beginnings of a sunburn. After multiple stops and a wild dune buggy ride through the desert where I held on for dear life, I made it to the top of one of the sand dunes. I moved around to the other side and looked down. There it was: Oasis de la Huacachina, a shimmering pool of water surrounded by palm trees.

    The wind was blowing harshly. In that moment, I was grateful that my face was covered with the brightly coloured bandana I had bought from a vendor who was upset I could pay only in American dollars and warned me he would charge me more. I hadn’t had enough time to convert money to Peruvian soles.

    This is the cost of not being prepared with cash in the right currency for unexpected purchases that happen on a trip.

    Preparing for the unexpected

    Being a nomad is like going through a desert, trying to be as prepared as possible only to be faced with the unexpected — strong winds blowing sand in your face and getting overcharged for the things you didn’t know you would need.

    But once you get to the top of the sand dune and look down at the oasis, you appreciate the journey you’ve made.

    The nomadic life isn’t as romantic as the internet paints it to be. Between the excitement of new places and adventures is the challenge of creating and maintaining a sense of community.

    This journey started back in 2023 in Calgary. I was having dinner with friends and talking about the awful job market and how I’d managed to land only remote work on a temporary contract.

    “You know, I think I’m probably going to leave Calgary soon,” I heard myself say.

    Embarking on a journey

    I had absolutely no plan for how that was going to happen. But almost a year later I got married. My husband had finished his first year of residency and needed to move for training opportunities in various cities. We would spend only one to three months in each city before moving on.

    Like most young adults, I left my hometown to start something new. The packing part was easy. The hard part was saying goodbye to the familiarity of Calgary — my family, friends, the parks I visited regularly and my favourite cafés.

    The journey began in Calgary and carried us to Kingston, a town on Lake Ontario, for four months. From there we relocated to Montréal for one month, then moved to Toronto for three months, before trading the snowy weather for the warm desert of Lima, Peru, for just over two months. We then returned to Toronto for another three months before arriving in Baltimore in the U.S. state of Maryland.

    As we moved from place to place over the course of 13 months, I realized I wasn’t homesick. Instead, I was weighed down by the things I’d grown so attached to. With each move, I faced this dilemma: Pack them up again or let them go before starting over again.

    After almost four months in Kingston, the time had come to pack up again. There was stuff everywhere. Bags of clothes sat on the living-room floor and overfilled boxes of household items covered the kitchen floor.

    What you pack and what you leave

    I couldn’t take everything with me, yet as I folded clothes I found that my suitcases weren’t filling up. The donation bags seemed to be getting bigger and bigger. At that point, I was repulsed by the number of clothes I had. Did I really need four pairs of jeans? In normal circumstances, my answer would have been yes. Now, though, I needed functionality, and I didn’t know how to achieve it.

    What was replaceable if I later changed my mind?

    I was nostalgic as I sifted through the piles — recalling the memories attached to those items. “They’re just things,” I told myself. I found a folder filled with cards from friends and family. I didn’t have it in me to throw them out, so I stuffed them in my backpack. It wasn’t like they were replaceable items you can buy at a store.

    The worst part of moving so frequently is that distance doesn’t make the heart grow fonder. It makes communication challenging, and if you can’t catch someone by phone, many things — life updates and check-ins — get lost in text messages.

    The best part of moving so frequently is you get to be a tourist while living like a local: You have the best of both worlds. You learn your neighbourhood so well you find shortcuts to get to your favourite places. You earn the right to learn about local gems and can still visit the cliché tourist spots without feeling the embarrassment a local would. That was the highlight of my month in Montreal — I’d finish work and hop on the subway to explore. Every day was its own adventure, trying new restaurants, shopping at local grocery stores and catching up with work colleagues in the area.

    Finding meaning in new places

    In Montreal, I celebrated my birthday outside of my hometown for the first time. I’m not much of a birthday person, but I was disappointed that many of my friends had forgotten my birthday. On a positive note, some friends did remember, and those birthday text messages were special. I decided to celebrate with some “restaurant hopping,” trying a savoury meal at one restaurant, going for dessert at another and sampling interesting snacks, all in the same night. It was the first time I tried ramen, a Japanese noodle soup, and the first time I ordered in French.

    The month flew by and it was time to move to Toronto. The good news is I hadn’t fully unpacked, because I knew that my time in Montreal was short. I somehow did make friends, but we didn’t keep in touch after I moved.

    In some ways, surface-level friendships were easier: I didn’t have to worry about whether people would want to keep in touch, feel the pressure of having to reach out or go through the cycle of disappointment when they didn’t get back to me. I was still grieving how many of my close friendships in Calgary had gone static.

    A few weeks later, we moved to Toronto and I joined a running club. I was shy at first, but I slowly warmed up and made friends. I didn’t bother to get my new friends’ phone numbers or make plans outside of the running club, because I knew I would soon be leaving. One of my best friends in Calgary had a baby girl during my time in Toronto, and I couldn’t visit over the holidays to celebrate because I was preparing for my next move.

    For some reason, deciding what stays and what goes never gets easier. You just get better at the time management part of it and start earlier — or stay up later getting the job done. We were packing until 3 a.m. on the day we were leaving for Lima, Peru, where my husband was going to take a tropical medicine course. A few hours later, we boarded the flight.

    Meeting people in Peru

    Lima is one of the most beautiful cities I’ve ever visited. It’s a desert that sits on the Pacific coast, offering the best of both worlds: an ocean and a stunning oasis.

    By this point, my work contract had ended and could not be renewed due to budget changes. I was initially worried that I would be bored or miss out on professional growth. I decided that it would instead be a once-in-a-lifetime opportunity to try new activities, travel and reflect on what I wanted my professional career to look like.

    I expected to encounter many English speakers in Lima, because it was the capital, but I was mistaken. I didn’t want to rely on Google Translate for basic conversations because I wanted to immerse myself in the culture and everyday life. So I enrolled in Spanish-language classes.

    I met people from all over the world who had come to Peru for all kinds of reasons, including business, backpacking across South America and simply to learn Spanish.

    This is probably my favourite part of moving around: You get to meet people from all walks of life, with various backgrounds and experiences, who teach you things you never otherwise would have learned.

    Nomads find each other

    I made friends with a girl my age who worked in marketing in London and was visiting her father, who had a business in Peru. One American man in his late 60s had married a Peruvian woman and was planning to retire in Lima.

    Another was a businessman who opened restaurants all around the world and was looking to break into the Peruvian market.

    And I met a Canadian from the Greater Toronto Area whom I probably would never have crossed paths with had it not been that we were in Peru at the same time. I had wonderful conversations with her during our walks in the Miraflores neighbourhood.

    While learning Spanish, I also stepped out of my comfort zone and tried new activities. I went sandboarding down a dune in the desert south of Lima, surfed on the Pacific Ocean, hiked to the famous Machu Picchu — an ancient Incan citadel in the Andes Mountains — and took a chocolate-making class during which I roasted my own cacao beans.

    It was through enjoying all of these adventures and writing about my experiences to family and friends that I decided to try journalism. A few months later, I applied for a fellowship in Journalism and Health Impact at the University of Toronto’s Dalla Lana School of Public Health.

    Accepting the changes that take place

    I returned to Toronto months later. It was spring and I got to see cherry trees in blossom, enjoy walks by the harbour and prepare for my next move, this one to Baltimore, Md. I reconnected with old friends, shared my adventures in South America and realized that although we don’t talk as much as we used to, living far apart does change the dynamic of a friendship. It’s not a bad thing, it’s just different and that’s okay. It’s fine to keep in touch with friends on a semi-annual basis and meet in person when given the chance.

    I discovered that it’s not fair to assume things will stay the same when I was the one who moved away.

    Shortly after moving to Baltimore, my childhood best friend got married in Calgary. The timing was difficult and I had to miss it. My friends who did attend FaceTimed me during the reception. It was like I was there, but I also wasn’t.

    It was difficult, but I came to learn that the way I conducted friendships also changed. Distance created challenges in the way I showed up and, although my friends never called me out on it, I’m certain now that they probably felt emotions similar to mine. Long-distance friendships are not easy and that’s part of the baggage that comes with nomadic living. My best advice is to show up when you can and reach out when you miss them.

    Flash forward to today: I did apply to the journalism fellowship and was accepted. I’m glad I did because I’m enjoying writing and reporting on health topics I’m interested in.

    In the meantime, I have another six months until I move again. I don’t know where I’m going next. I’m riding the wave and ready to embark on my next adventure when the time comes. I have a community of people with whom I meet regularly and, although I’m not sure how those relationships will change when I move again, I know these are the kinds of feelings that can fit in my suitcase.


    Questions to consider:

    1. What was one thing the author learned after moving from place to place?

    2. What is one disadvantage of moving every few months?

    3. If you were to move from the country you now live in, what would you miss?

  • Some stories that bring good cheer

    Not everyone around the world celebrates Christmas. But it does seem that on December 25 of each year, much of the world takes a bit of a breather. In many countries, everything shuts down so that even those who don’t celebrate Christmas take the day off. 

    For this December 25, we give you some stories from across the world from our correspondents and students that you might have missed this past year and that might leave you feeling better about the world you inhabit. Wait till after the new year to begin working on those resolutions and worrying about obligations.

  • Why the UK should commercialise research for social good – not just profit

    This blog was kindly authored by Huw Vasey, a Principal Consultant at Oxentia.

    For the past six years, I’ve worked across sectors to build an ecosystem that supports the commercialisation of research for social impact – not just profit. While existing schemes don’t exclude social outcomes, they’re primarily designed to attract funding for expensive technological or medical innovations. This often sidelines social value, which rarely offers a high financial return.

    Focusing on SHAPE disciplines – Social Sciences, Humanities, and the Arts for People and the Economy – has opened new possibilities. Unlike tech or biomedical innovations, SHAPE commercialisation typically involves service and process innovation, rarely includes protectable IP, and is often rooted in the deep expertise of a small group of researchers. These ventures are quicker to bring to market and require far fewer resources. This creates a unique opportunity: innovations with high social impact can scale sustainably, as long as they generate enough revenue to support themselves, without needing the kind of mega-investment required for a new drug or device.

    A common counterargument is that SHAPE academics aren’t interested in commercialisation. They see their work as a public good, not something to be monetised. However, recent programmes have shown that interest grows when incentives shift. Initiatives like the UK Research and Innovation (UKRI) Healthy Ageing Catalyst, the ARC Accelerator, the SHAPE Catalyst, and the Economic and Social Research Council (ESRC) Food Systems Catalyst have drawn hundreds of SHAPE academics into commercialisation by offering a pathway to scale and sustain the impact of their research.

    So, we have a growing pipeline. But why should society at large embrace research commercialisation for social value?

    The case for SHAPE commercialisation: real-world impact at speed and scale

    • Sustaining and scaling impact beyond grants: Academic projects often deliver significant impact while funded, only to fade when grants end. Commercialisation offers a way to extend and grow that impact. For example, Cardiff University spin-out Nisien provides ethical online safeguarding services, and evolved from the ESRC-funded HateLab, a global hub for data and insight into hate speech and crime. The original lab had great success using AI to both measure and counter hate offline and online, but it faced a familiar problem: how could it sustain its impact after the funding had ended? In particular, how could it retain key staff members who didn’t have university contracts? The answer was to commercialise – bringing in paying customers as well as conducting public research. Whilst this was great for the HateLab team, it was also a big win for both public funders of science and the wider public. Why? Because they get the benefits of research impact (better identification and countering of hate) without being saddled with the costs in the long term, or losing the impact when a project closes down.
    • Fixing broken systems: Social ventures can address market failures or dysfunctional systems. For example, One World Together (OWT), a University of Manchester spin-out, aims to reform charitable giving and reshape the aid industry. Its aims are radical, and it addresses system-level change, which is rarely an attractive proposition for businesses. Furthermore, it required the deep knowledge and connections that only come from a long immersion in a problem space. Few outside academia would be able to achieve the type of change OWT seeks.
    • Bottom-up social innovation: Other ventures tackle tangible local issues with scalable solutions – like Arcade, which repurposes disused spaces for community development, or Thin Ice Press, which revives forgotten industries to foster creativity and engagement. Developing such initiatives through commercialisation, rather than solely via grant funding, provides social benefits with a lower associated cost to the taxpayer. Furthermore, it brings academic knowledge and networks into bottom-up social innovation, helping to break down persistent barriers between universities and the communities they serve.

    Why this matters now

    This is a powerful mechanism for translating research into real-world change, both at scale and sustainably. Yet, it remains undervalued.

    Policy makers and social scientists often focus on influencing policy as the primary mode of impact. While important, this is an indirect second or third-order influence. Commercialisation, by contrast, allows researchers to do rather than merely influence. It provides the practical demonstration that policy makers often demand: “How do I know this will work in practice?”

    So why aren’t we harnessing this potential to meet our social challenges? Why isn’t it embedded in the UK Government’s missions or industrial strategy?

    We overlook this opportunity at our peril.

    How could we better support SHAPE commercialisation?

    So, what could be done at a practical and policy level? Here are three recommendations on how to keep the sector developing.

    Firstly, we need to keep funding SHAPE commercialisation. Few universities have the resources or staff to do this themselves, so this needs to come from elsewhere. That may be funders like UKRI, or it might be utilising models such as shared technology transfer offices (TTOs) to de-risk the cost of SHAPE commercialisation for smaller or less expert institutions. It also means growing and developing the community of scholars and professional support staff who provide the blood, sweat and tears that get these enterprises off the ground. Whilst the growth potential for SHAPE commercialisation is very high, as demonstrated by Abdul Rahman et al.’s latest work, the ecosystem is still at an early stage in its life cycle and is unlikely to grow successfully without nurture.

    Secondly, policymakers and practitioners need to keep celebrating SHAPE commercialisation and focusing its power on societal challenges. Events like RE:SHAPE are a great way of bringing attention to the potential of SHAPE commercialisation and showcasing its successes. Aligning commercialisation programmes to societal missions helps focus the power of SHAPE on our most pressing concerns. Not doing so was a glaring omission from the current configuration of the UK Government’s mission agenda.

    Finally, we need to truly understand the value of commercialisation for social impact, by which I mean we all (researchers, senior university leaders, funders and policymakers) need to start to see social impact as being on a par with income when thinking about research commercialisation. That’s not just a mindset change: it also means thinking about how we measure and demonstrate social as well as financial impact. Whilst some may be uncomfortable with yet more metricisation in research, history and experience teach that, in order for a new approach to be valued in policy circles, it needs to demonstrate its worth in a way that is comprehensible to policymakers, and that will likely require some sort of impact measurement.

  • OPINION: Workforce Pell can lead to good jobs for students if they get the support needed for long-term success

    by Alexander Mayer, The Hechinger Report
    December 16, 2025

    Ohio resident Megan Cutright lost her hospitality job during the pandemic. At her daughter’s urging, she found her way to Lorain County Community College and onto a new career path.

    Community colleges will soon have a new opportunity to help more students like Megan achieve their career goals. Starting next summer, federal funds will be available through a program known as Workforce Pell, which extends federal aid to career-focused education and training programs that last between eight and 15 weeks. 

    Members of Congress advocating for Pell Grants to cover shorter programs have consistently highlighted Workforce Pell’s potential, noting that the extension will lead to “good-paying jobs.”  

    That could happen. But it will only happen if states and colleges thoughtfully consider the supports students need for success.  

    This is important, because helping students pay for workforce programs is not enough. They also need support and wraparound services, much like the kind Megan was offered at Lorain, where her program followed an evidence-based model known as ASAP that assigns each student a career adviser. 

    Megan’s adviser “helped me from day one,” she said, in a story posted on the college’s website. “I told her I was interested in the radiologic technology program but that I had no idea where to start. We just did everything together.”  

    Megan went on to secure a job as an assistant in the radiology department at her local hospital, where she had interned as a student. She knew what steps to take because her community college supported and advised her using an evidence-backed practice. That illustrates something we have learned from the experience of the community colleges that use the ASAP model: Support is invaluable.

    Megan also knew that her path to a full-time position in radiologic technology required her to pass a licensure test — scheduled for four days after graduation.  

    The students who will enroll in Workforce Pell programs deserve the same careful attention. To ensure that Workforce Pell is effective for students, we should follow the same three critical steps that helped drive the expansion of ASAP and brought it to Megan’s college: (1) experiment to see what works, (2) collect and follow the data and (3) ensure that colleges learn from each other to apply what works. 

    Before ASAP was developed, the higher education community had some ideas about what might work to help students complete their degrees and get good jobs. When colleges and researchers worked together to test these ideas and gathered reliable data, though, they learned that those strategies only helped students at the margins. 

    There was no solid evidence about what worked to make big, lasting improvements in college completion until the City University of New York (CUNY) worked with researchers at MDRC to test ASAP and its combination of longer-lasting strategies. They kept a close eye on the data and learned that while some strategies didn’t produce big effects on their own, the combined ASAP approach resulted in significant improvements in student outcomes, nearly doubling the three-year college completion rate.  

    CUNY and MDRC shared what they learned with higher education leaders and policymakers, inspiring other community colleges to try out the model. Those colleges started seeing results too, and the model kept spreading. Today, ASAP is used in more than 50 colleges in seven states. And it’s paying off — in Ohio, for example, students who received ASAP services ended up earning significantly more than those who did not. 

    That same experimentation and learning mindset will be needed for Workforce Pell, because while short-term training can lead to good careers, it’s far from guaranteed.  

    For example, phlebotomy technician programs are popular, but without additional training or credentials they often don’t lead to jobs that pay well. Similarly, students who complete short-term programs in information technology, welding and construction-related skills can continue to acquire stackable credentials that substantially increase their earning potential, although that also doesn’t happen automatically. The complexity of the credentialing marketplace can make it impossible for students and families to assess programs and make good decisions without help.  

    A big question for Workforce Pell will be how to make sure students understand how to get onto a career path and continue advancing their wider career aspirations. Workforce Pell grants are designed to help students with low incomes overcome financial barriers, but these same students often face other barriers.  

    That’s why colleges should experiment with supports like career advising to help students identify stepping-stones to a good career, along with placement services to help them navigate the job market. In addition, states must expand their data collection efforts to formally include noncredit programs. Some, including Iowa, Louisiana and Virginia, have already made considerable progress linking their education and workforce systems.  

    Offering student support services and setting up data systems requires resources, but Workforce Pell will bring new funds to states and colleges that are currently financing job training programs. Philanthropy can also help by providing resources to test out what works best to get students through short-term programs and onto solid career paths.  

    Sharing what works — and what doesn’t — will be critical to the success of Workforce Pell in the long term. The same spirit of learning that fueled innovation around the ASAP model should be embedded in Workforce Pell from the start.

    Alexander Mayer is director of postsecondary education at MDRC, the nonprofit research organization.

    Contact the opinion editor at [email protected]. 

    This story about Workforce Pell was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter. 

    This article first appeared on The Hechinger Report and is republished here under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

  • Actually, It’s a Good Time to Be an English Prof (opinion)

    It may sound perverse to say so. Our profession is under attack, our students are reading less, jobs are scarce and the humanities are first on the chopping block. But precisely because the outlook is dire, this is also a moment of clarity and possibility. The campaign against higher education, the AI gold rush and the dismantling of our public schools have made the stakes of humanistic teaching unmistakable. For those of us with the privilege of relative job security, there has never been a more urgent—or more opportune—time to do what we were trained to do.

    I am an English professor, so let me first address my own. Colleagues, this is the moment to make the affirmative case for our existence. This is our chance to demonstrate the worth of person-to-person pedagogy; to speak the language of knowledge formation and the pursuit of truth; to reinvigorate the canon while developing new methods for the study of ethnic, postcolonial, feminist, queer and minority literatures and cultural texts; to stand for the value of human intelligence. Now is when we seize the mantle and opportunity of “English” as both a privileged signifier and a sign of humility as we fight alongside our colleagues in the non-Western languages and literatures who are even more endangered than we are — and for our students, without whom we have no future.

    I’m not being Pollyannaish. Between Trump 1 and Trump 2 sit the tumultuous COVID years, which means U.S. universities have been reeling, under direct attacks and pressures, for a decade. I started my first job in 2016, so that is the entirety of the time that I have worked as an academic. I spent six years in public universities in purple-red states, where austerity was the name of the game—and then I moved to Texas.

    There have been years of insults and incursions into the profession. We have been scapegoated as an out-of-touch elite and called enemies of the state. And no, we haven’t always responded well. In the face of austerity, we let our colleagues be sacrificed. Despite the bad-faith weaponization of “CRT,” “DEI” and “identity politics,” we disavowed identity. Against our better judgment, we assimilated wave after wave of new educational technologies, from MOOCs to course management platforms to Zoom.

    Now, we face a new onslaught: the supposedly unstoppable and inevitable rise of generative AI—a deliberately misleading misnomer for the climate-destroying linguistic probability machines that can automate and simulate numerous high-level tasks, but stop short of demonstrating human levels of intelligence, consciousness and imagination. “The ultimate unaccountability machine,” as Audrey Watters puts it.

    From Substack to The New York Times to new collaborative projects Against AI, humanities professors are sounding the alarm. At the start of this semester, philosopher Kate Manne reflected that her “job just got an awful lot harder.”

    Actually, I think our jobs just got a whole lot easier, because our purpose is sharper than ever. Where others see AI as the end of our profession, I see a clarifying opportunity to recommit to who we are. No LLM can reproduce the deep reading, careful dialogue and shared meaning-making of the humanities classroom. We college professors stand alongside primary and secondary school teachers who have already faced decades of deprofessionalization, deskilling and disrespect.

    There is a war on public education in this country. Statehouses in places like Texas are rapidly dismantling the infrastructure and independence of public institutions at all levels, from disbanding faculty senates to handing over curriculum development to technologists who have no understanding of the dialogical, improvisatory nature of teaching. These are folks who gleefully predict that robots with the capacity to press “play” on AI-generated slide decks can replace human teachers with years of experience. We need them out of our schools at every level.

    Counter to what university administrators and mainstream pundits seem to believe, students are not clamoring to use AI tools. Tech companies are aggressively pushing them. All over the country, school districts and universities are partnering with companies like Microsoft and OpenAI for fear of being left behind. My own institution has partnered with Google. Earlier this semester, “Google product experts” came to campus to instruct our students on how to “supercharge [their] creativity” and “boost [their] productivity” using Gemini and NotebookLM tools. Faculty have been invited to join AI-focused learning communities and enroll in trainings and workshops (or even a whole online class) on integrating AI tools into our teaching; funds have been allotted for new grant programs in AI exploration and course development.

    I didn’t spend seven years earning a doctorate to learn how to teach from Google product experts. And my students didn’t come to university to learn how to learn from Google product experts, either. Those folks have their work, motivations and areas of expertise. We have ours, and it is past time to defend them. We are keepers of canon and critique, of traditions and interventions, of discipline-specific discourses and a robust legacy of public engagement. The whole point of education is to hand over what we know to the next generation, not to chase fads alongside the students we are meant to equip with enduring skills. It is our job to strengthen minds, to resist what Rebecca Solnit calls the “technological invasion of consciousness, community, and culture.”

    Many of us have been trying to do this for some time, but it’s hard to swim against the tides. In 2024, I finally banned all electronics from my English literature classes. I realized that sensitivity to accessibility need not prevent us from exercising simple common sense. We know that students learn more and better when they take notes by hand, annotate texts and read in hard copy. Because my students do not have access to free printing, and because a university librarian told me that “we only go from print to digital, not the other way around,” I printed copies of every reading for every student. With the words on paper before them, they retained more, they made eye contact, they took marginal notes, they really responded to each other’s interpretations of the texts.

    That’s the easy part. As we college professors plan our return to blue books, in-class midterms and oral exams, the challenge is how to intervene before our students come to class. If AI is antithetical to the project of higher education, it’s even more insidious and damaging in the elementary, middle and high schools.

    My children attend Texas public schools in the particularly embattled Houston Independent School District, so I have seen firsthand the app-ification of education. Log in to the middle school student platform—which some “innovator” had the audacity to name “Clever”—and you’ll get a page with more than three dozen apps. Not just the usual suspects like Khan Academy and Epic, but also ABC-CLIO, Accelerate Learning, Active Classroom, Amplify, Britannica, BrainPOP, Canva, Carnegie Learning, CK-12 Foundation, Digital Theatre Plus, Discover Magazine, Edgenuity, Edmentum, eSebco, everfi, Gale Databases, Gizmos, IPC, i-Ready, iScience, IXL, JASON Learning, Language! Live, Learning Ally Audiobook, MackinVIA, McGraw Hill, myPLTW, Newsela, Raise, Read to Achieve, Savvas EasyBridge, STEMscopes, Summit K12, TeachingBooks, Vocabulary.com, World Book Online, Zearn …

    As both a professor and a parent, I have decided to intervene directly. Last year, I started leading a reading group for my 12-year-old daughter and a group of her classmates. They call it a book club. Really, it’s a seminar. Once a month, they convene around our dining table for 90 minutes, paperbacks in hand, to engage in close reading and analysis. They do all the stuff we English professors want our college students to do: They examine specific passages, which illuminate broader themes; they draw connections to other books we’ve read; they ask questions about the historical context; they make motivated references to current social, cultural and political issues; they plumb the space between their individual readings and the author’s intentions.

    No phones, no computers, no apps. We have books (and snacks). And conversation. After each meeting, my daughter and I debrief. About four months in, she said, “You know, a lot of the previous meetings I felt like we were each just giving our own takes. But this time, I feel like we arrived at a new understanding of the book by talking about it together.” The club members had challenged and pushed each other’s interpretations, and together exposed facets of the text they wouldn’t have seen alone.

    The literature classroom is a space of collaborative meaning-making — one of the last remaining potentially tech-free spaces out there. A precious space that we need to renew and defend, not give up to the anti-intellectual mob or transform at the behest of tech oligarchs. We have an opportunity here to stand up for who we are, for the mission of humanistic education, in affirmative, unapologetic terms — while finding ways to build new alliances and enact solidarity beyond the walls of our college classrooms.

    This moment is clarifying, motivating, energizing. It’s time to remember what we already know.

  • Helping students to make good choices isn’t about more faulty search filters

    A YouTube video about Spotify popped into my feed this weekend, and it’s been rattling around my head ever since.

    Partly because it’s about music streaming, but mostly because it’s all about what’s wrong with how we think about student choice in higher education.

    The premise runs like this. A guy decides to do “No Stream November” – a month without Spotify, using only physical media instead.

    His argument, backed by Barry Schwartz’s paradox of choice research and a raft of behavioural economics, is that unlimited access to millions of songs has made us less satisfied, not more.

    We skip tracks every 20 to 30 seconds. We never reach the guitar solo. We’re treating music like a discount buffet – trying a bit of everything but never really savouring anything. And then going back to the playlists we created earlier.

    The video’s conclusion is that scarcity creates satisfaction. Ritual and effort (opening the album, dropping the needle, sitting down to actually listen) make music meaningful.

    Six carefully chosen options produce more satisfaction than 24, let alone millions. It’s the IKEA effect applied to music – we value what we labour over.

    I’m interested in choice. Notwithstanding the debate over what a “course” is, Unistats data shows that there were 36,421 of them on offer in 2015/16. This year that figure is 30,801.

    That still feels like a lot, given that the University of Helsinki only offers 34 bachelor’s degree programmes.

    Of course, a lot of the entries on DiscoverUni separately list “with a foundation year” variants, and there are plenty of subject combinations.

    But nevertheless, the UK’s bewildering range of programmes must be quite a nightmare for applicants to pick through – it’s just that once they’re on them, job cuts and switches to block teaching are delivering less and less choice in elective pathways.

    We appear to have a system that combines overwhelming choice at the point of least knowledge (age 17, alongside A-levels, with imperfect information) with rigid narrowness at the point of most knowledge (once enrolled, when students actually understand what they want to study and why). It’s the worst of both worlds.

    What the white paper promises

    The government’s vision for improving student choice runs to a couple of paragraphs in the Skills White Paper, and it’s worth quoting in full:

    “We will work with UCAS, the Office for Students and the sector to improve the quality of information for individuals, informed by the best evidence on the factors that influence the choices people make as they consider their higher education options. Providing applicants with high-quality, impartial, personalised and timely information is essential to ensuring they can make informed decisions when choosing what to study. Recent UCAS reforms aimed at increasing transparency and improving student choice include historic entry grades data, allowing students, along with their teachers and advisers, to see both offer rates and the historic grades of previous successful applicants admitted to a particular course, in addition to the entry requirements published by universities and colleges.

    As we see more students motivated by career prospects, we will work with UCAS and Universities UK to ensure that graduate outcomes information spanning employment rates, earnings and the design and nature of work (currently available on Discover Uni) are available on the UCAS website. We will also work with the Office for Students to ensure their new approach to assessing quality produces clear ratings which will help prospective students understand the quality of the courses on offer, including clear information on how many students successfully complete their courses.”

    The implicit theory of change is straightforward – if we just give students more data about each of the courses, they’ll make better choices, and everyone wins. It’s the same logic that says if Spotify added more metadata to every track (BPM, lyrical themes, engineer credits), you’d finally find the perfect song. I doubt it.

    Pump up the Jam

    If the Department for Education (DfE) was serious about deploying the best evidence on the factors that influence the choices people make, it would know about the research showing that more information doesn’t solve choice overload, because choice overload is a cognitive capacity problem, not an information quality problem.

    Sheena Iyengar and Mark Lepper’s foundational 2000 study in the Journal of Personality and Social Psychology found that when students faced 30 essay topic options versus six options, completion rates dropped from 74 per cent to 60 per cent, and essay quality declined significantly on both content and form measures. That’s a 14 percentage point completion drop from excessive choice alone, and objectively worse work from those who did complete.

    The famous jam study showed customers were ten times more likely to buy when presented with six flavours rather than 24, even though more shoppers initially stopped at the extensive display (60 per cent versus 40 per cent). More choice is simultaneously more appealing and more demotivating. That’s the paradox.

    CFE Research’s 2018 study for the Office for Students (back when providing useful research for the sector was something it did) laid this all out explicitly for higher education contexts.

    Decision making about HE is challenging because the system is complex and there are lots of alternatives and attributes to consider. Those considering HE are making decisions in conditions of uncertainty, and in these circumstances, individuals tend to rely on convenient but flawed mental shortcuts rather than solely rational criteria. There’s no “one size fits all” information solution, nor is there a shortlist of criteria that those considering HE use.

    The study found that students rely heavily on family, friends, and university visits, and many choices ultimately come down to whether a decision “feels right” rather than rational analysis of data. When asked to explain their decisions retrospectively, students’ explanations differ from their actual decision-making processes – we’re not reliable informants about why we made certain choices.

    A 2015 meta-analysis by Chernev, Böckenholt, and Goodman in the Journal of Consumer Psychology identified the conditions under which choice overload occurs – it’s moderated by choice set complexity, decision task difficulty, and individual differences in decision-making style. Working memory capacity limits humans to processing approximately seven items simultaneously. When options exceed this cognitive threshold, students experience decision paralysis.

    Maximiser students (those seeking the absolute best option) make objectively better decisions but feel significantly worse about them. They selected jobs with 20 per cent higher salaries yet felt less satisfied, more stressed, frustrated, anxious, and regretful than satisficers (those accepting “good enough”). For UK applicants facing tens of thousands of courses, maximisers face a nearly impossible optimisation problem, leading to chronic second-guessing and regret.

    The equality dimension is especially stark. Bailey, Jaggars, and Jenkins’s research found that students in “cafeteria college” systems with abundant disconnected choices “often have difficulty navigating these choices and end up making poor decisions about what programme to enter, what courses to take, and when to seek help.” Only 30 per cent completed three-year degrees within three years.

    First-generation students, students from lower socioeconomic backgrounds, and students of colour are systematically disadvantaged by overwhelming choice because they lack the cultural capital and family knowledge to navigate it effectively.

    The problem once in

    But if unlimited choice at entry is a cognitive overload problem, what happens once students enrol should balance that with flexibility and breadth. Students gain expertise, develop clearer goals, and should have more autonomy to explore and specialise as they progress.

    Except that’s not what’s happening. Financial pressures across the sector are driving institutions to reduce module offerings – exactly when research suggests students need more flexibility, not less.

    The Benefits of Hindsight research on graduate regret says it all. A sizeable share of applicants later wish they’d chosen differently – not usually to avoid higher education, but to pick a different subject or provider. The regret grows once graduates hit the labour market.

    Many students who felt mismatched would have liked to change course or university once enrolled – about three in five undergraduates and nearly two in three graduates among those expressing regret – but didn’t, often because they didn’t know how, thought it was too late, or feared the cost and disruption.

    The report argues there’s “inherent rigidity” in UK provision – a presumption that the initial choice should stick despite evolving interests, new information, and labour-market realities. Students described courses being less practical or less aligned to work than expected, or modules being withdrawn as finances tightened. That dynamic narrows options precisely when students are learning what they do and don’t want.

    Career options become the dominant reason graduates cite for wishing they’d chosen differently. But that’s not because they lacked earnings data at 17. It’s because their interests evolved, they discovered new fields, labour market signals changed, and the rigid structure gave them no way to pivot without starting again.

    The Competition and Markets Authority now explicitly lists among misleading actions those “where an HE provider gives a misleading impression about the number of optional modules that will be available.” Students have contractual rights to the module catalogue promised during recruitment. Yet redundancy rounds repeatedly reduce the size and scope of optional module catalogues for the students who remain.

    There’s also an emerging consensus from the research on what actually works for module choice. An LSE analysis found that adding core modules within the home department was associated with higher satisfaction, whereas mandatory modules outside the home department depressed it. Students want depth and coherence in their chosen subject. They also value autonomous choice over breadth options.

    Research repeatedly shows that elective modules are evaluated more positively than required ones (autonomy effects), and interdisciplinary breadth is associated with stronger cross-disciplinary skills and higher post-HE earnings when it’s purposeful and scaffolded.

    What would actually work

    So what does this all suggest?

    As I’ve discussed on the site before, at the University of Helsinki – Finland’s flagship institution with 40,000 students – there are 32 undergraduate programmes. Within each programme, students must take 90 ECTS credits in their major subject, while a further 75 ECTS credits must come from other programmes’ modules. That’s 42 per cent of the degree as mandatory breadth, but students choose which modules from clear disciplinary categories.

    The structure is simple – six five-credit introductory courses in your subject, then 60 credits of intermediate study with substantial module choice, including proseminars, thesis work, and electives. Add 15 credits for general studies (study planning, digital skills, communication) alongside the 75 credits of breadth, and you’ve got a 180-credit degree. The two “modules” (what we’d call stages) get a single grade each on a one-to-five scale, producing a simple, legible transcript.
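
    For anyone tallying it up, here’s a minimal back-of-the-envelope sketch of that arithmetic, using only the figures quoted above (the category labels are mine, not Helsinki’s official terminology):

    ```python
    # Helsinki-style bachelor's degree, 180 ECTS in total (figures as quoted above;
    # the category labels are illustrative, not official terminology).
    ects = {
        "major: introductory (6 x 5-credit courses)": 6 * 5,             # 30
        "major: intermediate (proseminars, thesis, electives)": 60,
        "general studies (planning, digital skills, communication)": 15,
        "breadth from other programmes' modules": 75,
    }

    total = sum(ects.values())
    assert total == 180  # the standard three-year ECTS bachelor's load

    breadth = ects["breadth from other programmes' modules"] / total
    print(f"total: {total} ECTS; mandatory breadth: {breadth:.0%}")  # ~42%
    ```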

    Helsinki runs this on a student-staff ratio of 22.2 to one, significantly worse than the UK average, after Finland faced €500 million in higher education cuts. It’s not lavishly resourced – it’s structurally efficient.

    Maynooth University in Ireland reduced CAO (their UCAS) entry routes from about 50 to roughly 20 specifically to “ease choice and deflate points inflation.” Students can start with up to four subjects in year one, then move to single major, double major, or major with minor. Switching options are kept open through first year. It’s progressive specialisation – broad exploration early when students have least context, increasing focus as they develop expertise.

    Also elsewhere on the site, Técnico in Lisbon – the engineering and technology faculty of the University of Lisbon – rationalised to 18 undergraduate courses following a student-led reform process. Those 18 courses contain hundreds of what the UK system would call “courses” via module combinations, but without the administrative overhead. They require nine ECTS credits (of 180) in social sciences and humanities for all engineering programmes because “engineers need to be equipped not just to build systems, but to understand the societies they shape.”

    Crucially, students themselves pushed for this structure. They conducted structured interviews, staged debates, and developed reform positions. They wanted shared first years, fewer concurrent modules to reduce cognitive load, more active learning methods, and more curricular flexibility including free electives and minors.

    Vilnius University allows up to 25 per cent of the degree as “individual studies” – but structured into clear categories: minors (30 to 60 credits in a secondary field, potentially leading to a double diploma), languages (20-plus options with specific registration windows), interdisciplinary modules (curated themes), and cross-institution courses (formal cooperation with arts and music academies). Not unlimited chaos, just structured exploration within categorical choices.

    What all these models share is a recognition that you can have both depth and breadth, structure and flexibility, coherence and exploration – if you design programmes properly. You need roughly 60 to 70 per cent core pathway in the major for depth and satisfaction, 20 to 30 per cent guided electives organised into three to five clear categories per decision point, and maybe 10 to 15 per cent completely free electives.
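
    As a rough illustration – assuming a standard 360-credit, three-year UK bachelor’s degree, and treating those proportions as indicative bands rather than a formal standard – the split would look something like this:

    ```python
    # Indicative credit bands for a standard 360-credit UK bachelor's degree,
    # applying the rough proportions suggested above (illustrative, not
    # prescriptive; the bands deliberately overlap around a 100 per cent total).
    TOTAL_CREDITS = 360

    bands = {
        "core pathway in the major": (0.60, 0.70),
        "guided electives, in three to five clear categories": (0.20, 0.30),
        "completely free electives": (0.10, 0.15),
    }

    for name, (low, high) in bands.items():
        print(f"{name}: {low * TOTAL_CREDITS:.0f}-{high * TOTAL_CREDITS:.0f} credits")

    # core pathway in the major: 216-252 credits
    # guided electives, in three to five clear categories: 72-108 credits
    # completely free electives: 36-54 credits
    ```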

    The UK’s subject benchmark statements, if properly refreshed (and consolidated down a bit), could provide the regulatory infrastructure for it all. Australia undertook a version of this in 2010 through its Learning and Teaching Academic Standards project, which defined threshold learning outcomes for major discipline groupings through extensive sector consultation (over 420 meetings with more than 6,100 attendees). Those TLOs now underpin TEQSA’s quality regime and enable programme-level approval while protecting autonomy.

    Bigger programmes, better choice

    The white paper’s information provision agenda isn’t wrong – it’s just addressing the wrong problem at the wrong end of the process. Publishing earnings data doesn’t solve cognitive overload from tens of thousands of courses, quality ratings don’t help students whose interests evolve and who need flexibility to pivot, and historic entry grades don’t fix the rigidity that manufactures regret.

    What would actually help is structural reform that the international evidence consistently supports – consolidation to roughly 20 to 40 programmes per institution (aligned with subject benchmark statement areas), with substantial protected module choice within those programmes, organised into clear categories like minors, languages, and interdisciplinary options.

    Some of those groups of individual modules might struggle to recruit if they were whole courses – think music and languages. But they may well sustain research-active academics (and across Europe, they do) if they can exist within broader structures. Fewer, clearer programmes at entry, when students have least context; more structured flexibility during the degree, when students have the expertise to choose wisely.

    The efficiency argument is real – maintaining thousands of separate course codes, each with its own approval processes, quality assurance, marketing materials, and UCAS coordination, is absurd overhead for what’s often just different permutations of the same modules. See also hundreds of “programme leaders” each having to be chased to fill a form in.

    A model with fewer programme directors and more module convenors beneath them is far more rational. And crucially, modules can serve multiple student populations (what other systems would call majors and minors, plus students taking breadth from elsewhere), making specialist provision viable even with smaller cohorts.

    The equality case is compelling – guided pathways with structured choice demonstrably improve outcomes for first-in-family students, students of colour, and low-income students, populations that regulators are charged with protecting. If current choice architecture systematically disadvantages exactly these students, that’s not pedagogical preference – it’s a regulatory failure.

    And the evidence on what students actually want once enrolled validates it all – they value depth in their chosen subject, they want autonomous choice over breadth options (not forced generic modules), they benefit from interdisciplinary exposure when it’s purposeful, and they need flexibility to correct course when their goals evolve.

    The white paper could have engaged with any of this. Instead, we get promises to publish more data on UCAS. It’s more Spotify features when what students need is a curated record collection and the freedom to build their own mixtape once they know what they actually like.

    What little reform is coming is informed by the assumption that if students just had better search filters, unlimited streaming would finally work. It won’t.


  • When Was Higher Education Truly a Public Good? (Glen McGhee)

    Like staring at the sun too long, that brief window in time when higher ed was a public good has left a permanent after-image: a hole for nostalgia to leak through, a black hole swallowing trillions of dollars, and a blind spot for misguided national policies and scholars alike.

    The notion that American higher education was ever a true public good is largely a myth. From the colonial colleges to the neoliberal university of today, higher education has functioned primarily as a mechanism of class reproduction and elite consolidation—with one brief, historically anomalous exception during the Cold War.


    Colonial Roots: Elite Reproduction in the New World (1636–1787)

    The first American colleges—Harvard, William and Mary, Yale, Princeton, and a handful of others—were founded not for the benefit of the public, but to serve narrow elite interests. Their stated missions were to train Protestant clergy and prepare the sons of wealthy white families for leadership. They operated under monopoly charters and drew funding from landowners, merchants, and slave traders.

    Elihu Yale, namesake of Yale University, derived wealth from his commercial ties to the East India Company and the slave trade. Harvard’s early trustees owned enslaved people. These institutions functioned as “old boys’ clubs,” perpetuating privilege rather than promoting equality. Their educational mission was to cultivate “gentlemen fit to govern,” not citizens of a democracy.


    Private Enterprise in the Republic (1790–1860)

    After independence, the number of colleges exploded—from 19 in 1790 to more than 800 by 1880—but not because of any commitment to the public good. Colleges became tools for two private interests: religious denominations seeking influence, and land speculators eager to raise property values.

    Ministers often doubled as land dealers, founding small, parochial colleges to anchor towns and boost prices. State governments played a minimal role, providing funding only in times of crisis. The Supreme Court’s 1819 Dartmouth College decision enshrined institutional autonomy, shielding private colleges from state interference. Even state universities were created mainly out of interstate competition—every state needed its own to “keep up with its neighbors.”


    Gilded Age and Progressive Era: Credential Capitalism (1880–1940)

    By the late 19th century, industrial capitalism had transformed higher education into a private good—something purchased for individual advancement. As family farms and small businesses disappeared, college credentials became the ticket to white-collar respectability.

    Sociologist Burton Bledstein called this the “culture of professionalism.” Families invested in degrees to secure middle-class futures for their children. By the 1920s, most students attended college not to seek enlightenment, but “to get ready for a particular job.”

    Elite universities such as Harvard, Yale, and Princeton solidified their dominance through exclusive networks. C. Wright Mills later observed that America’s “power elite” circulated through these same institutions and their associated clubs. Pierre Bourdieu’s concept of cultural capital helps explain this continuity: elite universities convert inherited privilege into certified merit, preserving hierarchy under the guise of meritocracy.


    The Morrill Acts: Public Promise, Private Gains (1862–1890)

    The Morrill Act of 1862 established land-grant colleges to promote “practical education” in agriculture and engineering. While often cited as a triumph of public-minded policy, the act’s legacy is ambivalent.

    Land-grant universities were built on land expropriated from Indigenous peoples—often without compensation—and the 1890 Morrill Act entrenched segregation by mandating separate institutions for Black Americans in the Jim Crow South. Even as these colleges expanded access for white working-class men, they simultaneously reinforced racial and economic hierarchies.


    Cold War Universities: The Brief Public Good (1940–1970)

    For roughly thirty years, during World War II and the Cold War, American universities functioned as genuine public goods—but only because national survival seemed to depend on them.

    The GI Bill opened college to millions of veterans, stabilizing the economy and expanding the middle class. Massive federal investments in research transformed universities into engines of technological and scientific innovation. The university, for a moment, was understood as a public instrument for national progress.

    Yet this golden age was marred by exclusion. Black veterans were often denied GI Bill benefits, particularly in the South, where discriminatory admissions and housing policies blocked their participation. The “military-industrial-academic complex” that emerged from wartime funding created a new elite network centered on research universities like MIT, Stanford, and Berkeley.


    Neoliberal Regression: Education as a Private Commodity (1980–Present)

    After 1970, the system reverted to its long-standing norm: higher education as a private good. The Cold War’s end, the tax revolt, and the rise of neoliberal ideology dismantled the postwar consensus.

    Ronald Reagan led the charge—first as California governor, cutting higher education funding by 20%, then as president, slashing federal support. He argued that tuition should replace public subsidies, casting education as an individual investment rather than a social right.

    Since 1980, state funding per student has fallen sharply while tuition at public universities has tripled. Students are now treated as “customers,” and universities as corporations—complete with branding departments, executive pay packages, and relentless tuition hikes.


    The Circuit of Elite Network Capital

    Today, the benefits of higher education flow through a closed circuit of power that links elite universities, corporations, government agencies, and wealthy families.

    1. Elite Universities consolidate wealth and prestige through research funding, patents, and endowments.

    2. Corporations recruit talent and license discoveries, feeding the same institutions that produce their executives.

    3. Government and Military Agencies are staffed by alumni of elite universities, reinforcing a revolving door of privilege.

    4. Elite Professions—law, medicine, finance, consulting—use degrees as gatekeeping mechanisms, driving credential inflation.

    5. Wealthy Families invest in elite education as a means of preserving status across generations.

    What the public receives are only residual benefits—technologies and medical innovations that remain inaccessible without money or insurance.


    Elite Network Capital, Not Public Good

    The idea of higher education as a public good has always been more myth than reality. For most of American history, colleges and universities have functioned as institutions of elite reproduction, not engines of democratic uplift.

    Only during the extraordinary conditions of the mid-20th century—when global war and ideological conflict made mass education a national imperative—did higher education briefly align with the public interest.

    Today’s universities continue to speak the language of “public good,” but their actions reveal a different truth. They serve as factories of credentialism and as nodes in an elite network that translates privilege into prestige. What masquerades as a public good is, in practice, elite network capital—a system designed not to democratize opportunity, but to manage and legitimize inequality.


    Sources:

    Labaree (2017), Bledstein (1976), Bourdieu (1984, 1986), Mills (1956), Geiger (2015), Thelin (2019), and McGhee (2025).
