  • What really shapes the future of AI in education?

    This post originally appeared on the Christensen Institute’s blog and is reposted here with permission.

    A few weeks ago, MIT’s Media Lab put out a study on how AI affects the brain. The study ignited a firestorm of posts and comments on social media, given its provocative finding that students who relied on ChatGPT for writing tasks showed lower brain engagement on EEG scans, hinting that offloading thinking to AI can literally dull our neural activity. For anyone who has used AI, it’s not hard to see how AI systems can become learning crutches that encourage mental laziness.

    But I don’t think a simple “AI harms learning” conclusion tells the whole story. In this blog post (adapted from a recent series of posts I shared on LinkedIn), I want to add to the conversation by tackling the potential impact of AI in education from four angles. I’ll explore how AI’s unique adaptability can reshape rigid systems, how it both fights and fuels misinformation, how AI can be both good and bad depending on how it is used, and why its funding model may ultimately determine whether AI serves learners or short-circuits their growth.

    What if the most transformative aspect of AI for schools isn’t its intelligence, but its adaptability?

    Most technologies make us adjust to them. We have to learn how they work and adapt our behavior. Industrial machines, enterprise software, even a basic thermostat—they all come with instructions and patterns we need to learn and follow.

    Education highlights this dynamic in a different way. How does education’s “factory model” work when students don’t come to school as standardized raw inputs? In many ways, schools expect students to conform to the requirements of the system—show up on time, sharpen your pencil before class, sit quietly while the teacher is talking, raise your hand if you want to speak. Those social norms are expectations we place on students so that standardized education can work. But as anyone who has tried to manage a group of six-year-olds knows, a class of students is full of complicated humans who never fully conform to what the system expects. So, teachers serve as the malleable middle layer. They adapt standardized systems to make them work for real students. Without that human adaptability, the system would collapse.

    Same thing in manufacturing. Edgar Schein notes that engineers aim to design systems that run themselves. But operators know systems never work perfectly. Their job—and often their sense of professional identity—is about having the expertise to adapt and adjust when things inevitably go off-script. Human adaptability in the face of rigid systems keeps everything running.

    So, how does this relate to AI? AI breaks the mold of most machines and systems humans have designed and dealt with throughout history. It doesn’t just follow its algorithm and expect us to learn how to use it. It adapts to us, like how teachers or factory operators adapt to the realities of the world to compensate for the rigidity of standardized systems.

    You don’t need a coding background or a manual. You just speak to it. (I literally hit the voice-to-text button and talk to it like I’m explaining something to a person.) Messy, natural human language—the age-old human-to-human interface that our brains are wired to pick up on as infants—has become the interface for large language models. In other words, what makes today’s AI models amazing is their ability to use our interface, rather than asking us to learn theirs.

    For me, the early hype about “prompt engineering” never really made sense. It assumed that success with AI required becoming an AI whisperer who knew how to speak AI’s language. But in my experience, working well with AI is less about learning special ways to talk to AI and more about just being a clear communicator, just like a good teacher or a good manager.

    Now imagine this: what if AI becomes the new malleable middle layer across all kinds of systems? Not just a tool, but an adaptive bridge that makes other rigid, standardized systems work well together. If AI can make interoperability nearly frictionless—adapting to each system and context, rather than forcing people to adapt to it—that could be transformative. It’s not hard to see how this shift might ripple far beyond technology into how we organize institutions, deliver services, and design learning experiences.

    Consider three concrete examples of how this might transform schools. First, our current system heavily relies on the written word as the medium for assessing students’ learning. To be clear, writing is an important skill that students need to develop to help them navigate the world beyond school. Yet at the same time, schools’ heavy reliance on writing as the medium for demonstrating learning creates barriers for students with learning disabilities, neurodivergent learners, or English language learners—all of whom may have a deep understanding but struggle to express it through writing in English. AI could serve as that adaptive layer, allowing students to demonstrate their knowledge and receive feedback through speech, visual representations, or even their native language, while still ensuring rigorous assessment of their actual understanding.

    Second, it’s obvious that students don’t all learn at the same pace—yet we’ve forced learning to happen on a uniform timeline because individualized pacing quickly becomes unmanageable when teachers are on their own to cover material and provide feedback to their students. So instead, everyone spends the same number of weeks on each unit of content and then moves to the next course or grade level together, regardless of individual readiness. Here again, AI could serve as that adaptive layer for keeping track of students’ individual learning progressions and then serving up customized feedback, explanations, and practice opportunities based on students’ individual needs.

    Third, success in school isn’t just about academics—it’s about knowing how to navigate the system itself. Students need to know how to approach teachers for help, track announcements for tryouts and auditions, fill out paperwork for course selections, and advocate for themselves to get into the classes they want. These navigation skills become even more critical for college applications and financial aid. But there are huge inequities here because much of this knowledge comes from social capital—having parents or peers who already understand how the system works. AI could help level the playing field by serving as that adaptive coaching layer, guiding any student through the bureaucratic maze rather than expecting them to figure it out on their own or rely on family connections to decode the system.

    Can AI help solve the problem of misinformation?

    Most people I talk to are skeptical of the idea in this subhead—and understandably so.

    We’ve all seen the headlines: deep fakes, hallucinated facts, bots that churn out clickbait. AI, many argue, will supercharge misinformation, not solve it. Others worry that overreliance on AI could make people less critical and more passive, outsourcing their thinking instead of sharpening it.

    But what if that’s not the whole story?

    Here’s what gives me hope: AI’s ability to spot falsehoods and surface truth at scale might be one of its most powerful—and underappreciated—capabilities.

    First, consider what makes misinformation so destructive. It’s not just that people believe wrong facts. It’s that people build vastly different mental models of what’s true and real. They lose any shared basis for reasoning through disagreements. Once that happens, dialogue breaks down. Facts don’t matter because facts aren’t shared.

    Traditionally, countering misinformation has required human judgment and painstaking research, both time-consuming and limited in scale. But AI changes the equation.

    Unlike any single person, a large language model (LLM) can draw from an enormous base of facts, concepts, and contextual knowledge. LLMs know far more facts from their training data than any person can learn in a lifetime. And when paired with tools like a web browser or citation database, they can investigate claims, check sources, and explain discrepancies.

    Imagine reading a social media post and getting a sidebar summary—courtesy of AI—that flags misleading statistics, offers missing context, and links to credible sources. Not months later, not buried in the comments—instantly, as the content appears. The technology to do this already exists.

    Of course, AI is not a perfect fact-checker. When large language models generate text, they aren’t running precise queries against a database of facts; they’re making probabilistic guesses at the right response based on their training, and sometimes those guesses are wrong, much as human experts drawing on accumulated expertise sometimes are. AI also has blind spots and biases inherited from its training data.

    But in many ways, both hallucinations and biases in AI are easier to detect and address than the false statements and biases that come from millions of human minds across the internet. AI’s decision rules can be audited. Its output can be tested. Its propensity to hallucinate can be curtailed. That makes it a promising foundation for improving trust, at least compared to the murky, decentralized mess of misinformation we’re living in now.

    This doesn’t mean AI will eliminate misinformation. But it could dramatically increase the accessibility of accurate information, and reduce the friction it takes to verify what’s true. Of course, most platforms don’t yet include built-in AI fact-checking, and even if they did, that approach would raise important concerns. Do we trust the sources that those companies prioritize? The rules their systems follow? The incentives that guide how their tools are designed?

    But beyond questions of trust, there’s a deeper concern: when AI passively flags errors or supplies corrections, it risks turning users into passive recipients of “answers” rather than active seekers of truth. Learning requires effort. It’s not just about having the right information—it’s about asking good questions, thinking critically, and grappling with ideas.

    That’s why I think one of the most important things to teach young people about how to use AI is to treat it as a tool for interrogating the information and ideas they encounter, both online and from AI itself. Just like we teach students to proofread their writing or double-check their math, we should help them develop habits of mind that use AI to spark their own inquiry—to question claims, explore perspectives, and dig deeper into the truth.

    Still, this focuses on just one side of the story. As powerful as AI may be for fact-checking, it will inevitably be used to generate deepfakes and spin persuasive falsehoods.

    AI isn’t just good or bad—it’s both. The future of education depends on how we use it.

    Much of the commentary around AI takes a strong stance: either it’s an incredible force for progress or it’s a terrifying threat to humanity. These bold perspectives make for compelling headlines and persuasive arguments. But in reality, the world is messy. And most transformative innovations—AI included—cut both ways.

    History is full of examples of technologies that have advanced society in profound ways while also creating new risks and challenges. The Industrial Revolution made it possible to mass-produce goods that have dramatically improved the quality of life for billions. It has also fueled pollution and environmental degradation. The internet connects communities, opens access to knowledge, and accelerates scientific progress—but it also fuels misinformation, addiction, and division. Nuclear energy can power cities—or obliterate them.

    AI is no different. It will do amazing things. It will do terrible things. The question isn’t whether AI will be good or bad for humanity—it’s how the choices of its users and developers will determine the directions it takes. 

    Because I work in education, I’ve been especially focused on the impact of AI on learning. AI can make learning more engaging, more personalized, and more accessible. It can explain concepts in multiple ways, adapt to your level, provide feedback, generate practice exercises, or summarize key points. It’s like having a teaching assistant on demand to accelerate your learning.

    But it can also short-circuit the learning process. Why wrestle with a hard problem when AI will just give you the answer? Why wrestle with an idea when you can ask AI to write the essay for you? And even when students have every intention of learning, AI can create the illusion of learning while leaving understanding shallow.

    This double-edged dynamic isn’t limited to learning. It’s also apparent in the world of work. AI is already making it easier for individuals to take on entrepreneurial projects that would have previously required whole teams. A startup no longer needs to hire a designer to create its logo, a marketer to build its brand assets, or an editor to write its press releases. In the near future, you may not even need to know how to code to build a software product. AI can help individuals turn ideas into action with far fewer barriers. And for those who feel overwhelmed by the idea of starting something new, AI can coach them through it, step by step. We may be on the front end of a boom in entrepreneurship unlocked by AI.

    At the same time, however, AI is displacing many of the entry-level knowledge jobs that people have historically relied on to get their careers started. Tasks like drafting memos, doing basic research, or managing spreadsheets—once done by junior staff—can increasingly be handled by AI. That shift is making it harder for new graduates to break into the workforce and develop their skills on the job.

    One way to mitigate these challenges is to build AI tools that are designed to support learning, not circumvent it. For example, Khan Academy’s Khanmigo helps students think critically about the material they’re learning rather than just giving them answers. It encourages ideation, offers feedback, and prompts deeper understanding—serving as a thoughtful coach, not a shortcut.

    But the deeper issue AI brings into focus is that our education system often treats learning as a means to an end—a set of hoops to jump through on the way to a diploma. To truly prepare students for a world shaped by AI, we need to rethink that approach. First, we should focus less on teaching only the skills AI can already do well. And second, we should make learning more about pursuing goals students care about—goals that require curiosity, critical thinking, and perseverance. Rather than training students to follow a prescribed path, we should be helping them learn how to chart their own. That’s especially important in a world where career paths are becoming less predictable, and opportunities often require the kind of initiative and adaptability we associate with entrepreneurs.

    In short, AI is just the latest technological double-edged sword. It can support learning, or short-circuit it. Boost entrepreneurship—or displace entry-level jobs. The key isn’t to declare AI good or bad, but to recognize that it’s both, and then to be intentional about how we shape its trajectory. 

    That trajectory won’t be determined by technical capabilities alone. Who pays for AI, and what they pay it to do, will influence whether it evolves to support human learning, expertise, and connection, or to exploit our attention, take our jobs, and replace our relationships.

    What actually determines whether AI helps or harms?

    When people talk about the opportunities and risks of artificial intelligence, the conversation tends to focus on the technology’s capabilities—what it might be able to do, what it might replace, what breakthroughs lie ahead. But just focusing on what the technology does—both good and bad—doesn’t tell the whole story. The business model behind a technology influences how it evolves.

    For example, when advertisers are the paying customer, as they are for many social media platforms, products tend to evolve to maximize user engagement and time-on-platform. That’s how we ended up with doomscrolling—endless content feeds optimized to occupy our attention so companies can show us more ads, often at the expense of our well-being.

    That incentive could be particularly dangerous with AI. If you combine superhuman persuasion tools with an incentive to monopolize users’ attention, the results will be deeply manipulative. And this gets at a concern my colleague Julia Freeland Fisher has been raising: What happens if AI systems start to displace human connection? If AI becomes your go-to for friendship or emotional support, it risks crowding out the real relationships in your life.

    Whether or not AI ends up undermining human relationships depends a lot on how it’s paid for. An AI built to hold your attention and keep you coming back might try to be your best friend. But an AI built to help you solve problems in the real world will behave differently. That kind of AI might say, “Hey, we’ve been talking for a while—why not go try out some of the things we’ve discussed?” or “Sounds like it’s time to take a break and connect with someone you care about.”

    Some decisions made by the major AI companies seem encouraging. Sam Altman, OpenAI’s CEO, has said that adopting ads would be a last resort. “I’m not saying OpenAI would never consider ads, but I don’t like them in general, and I think that ads-plus-AI is sort of uniquely unsettling to me.” Instead, most AI developers like OpenAI and Anthropic have turned to user subscriptions, an incentive structure that doesn’t steer as hard toward addictiveness. OpenAI is also exploring AI-centric hardware as a business model—another experiment that seems more promising for user well-being.

    So far, we’ve been talking about the directions AI will take as companies develop their technologies for individual consumers, but there’s another angle worth considering: how AI gets adopted into the workplace. One of the big concerns is that AI will be used to replace people, not necessarily because it does the job better, but because it’s cheaper. That decision often comes down to incentives. Right now, businesses pay a lot in payroll taxes and benefits for every employee, but they get tax breaks when they invest in software and machines. So, from a purely financial standpoint, replacing people with technology can look like a smart move. In his book The Once and Future Worker, Oren Cass discusses this problem and suggests flipping that script—taxing capital more and labor less—so companies aren’t nudged toward cutting jobs just to save money. That change wouldn’t stop companies from using AI, but it would encourage them to deploy it in ways that complement, rather than replace, human workers.

    Currently, while AI companies operate without sustainable business models, they’re buoyed by investor funding. Investors are willing to bankroll companies with little or no revenue today because they see the potential for massive profits in the future. But that investor model creates pressure to grow rapidly and acquire as many users as possible, since scale is often a key metric of success in venture-backed tech. That drive for rapid growth can push companies to prioritize user acquisition over thoughtful product development, potentially at the expense of safety, ethics, or long-term consequences. 

    Given these realities, what can parents and educators do? First, they can be discerning customers. There are many AI tools available, and the choices they make matter. Rather than simply opting for what’s most entertaining or immediately useful, they can support companies whose business models and design choices reflect a concern for users’ well-being and societal impact.

    Second, they can be vocal. Journalists, educators, and parents all have platforms—whether formal or informal—to raise questions, share concerns, and express what they hope to see from AI companies. Public dialogue helps shape media narratives, which in turn shape both market forces and policy decisions.

    Third, they can advocate for smart, balanced regulation. As I noted above, AI shouldn’t be regulated as if it’s either all good or all bad. But reasonable guardrails can ensure that AI is developed and used in ways that serve the public good. Just as the customers and investors in a company’s value network influence its priorities, so too can policymakers play a constructive role as value network actors by creating smart policies that promote general welfare when market incentives fall short.

    In sum, a company’s value network—who its investors are, who pays for its products, and what they hire those products to do—determines what companies optimize for. And in AI, that choice might shape not just how the technology evolves, but how it impacts our lives, our relationships, and our society.

  • Psychology Course Encourages College Students to Make Friends

    Starting college can be an exciting time for students to learn new things, make friends and live away from home for the first time. But not every student takes advantage of the opportunity.

    Emmanuel College psychology professor Linda Lin said she’s seen students reluctant to engage with peers in public spaces, including on their own dorm floor, out of fear of being perceived as odd or intrusive.

    “At the beginning of the semester, I always offer students extra credit points if they come see me for a 10-minute meeting and I just check in with them,” Lin said. Typically, a significant share of those students will say they have yet to make friends and get connected on campus.

    “It’s become almost half or maybe a majority of the students are really struggling to find their people on campus and find their way,” Lin said.

    Nationally, college students express high levels of social anxiety. One study, by the College Student Wellness Advocacy Coalition and the Hi, How Are You Project, found that 65 percent of students said they feel stress often or all the time, and 57 percent reported feeling anxious, worried or overwhelmed frequently.

    Lin thinks this could be due in part to the pandemic’s role in hindering social skill development as well as changing social norms among adults in the U.S., who now prioritize relationships built online or via phone-enabled connections, rather than in shared physical spaces.

    In response, Lin designed a course on positive psychology and happiness to demonstrate the evidence-based practices that can improve student well-being and push them out of their comfort zones.

    How it works: The course covers topics in positive psychology and the research behind those principles. Content includes stress management, connection to nature, exercise and mental health, gratitude, spirituality, optimism, self-compassion, mindfulness, and generosity.

    The class is an upper-level psychology elective, so the majority of students enrolled are juniors or seniors majoring in psychology, though about 20 percent are nonmajors, Lin said.

    Throughout the semester, students receive assignments to practice various techniques to boost their own well-being, ranging from taking a nature walk to writing a letter expressing thankfulness or performing a random act of kindness.

    Lin’s most controversial assignment is asking students to talk to three people they don’t know over two or three days. “It can be a stranger you’re making small talk with, or someone that you see in your regular day that you’ve never introduced yourself to,” she said.

    Students have said they’d rather drop her class than do the assignment, Lin said. “The social anxiety is so high, they anticipate it being super awkward, super anxiety-provoking, that people are gonna think they’re weird.”

    But so far, none of her students has reported a bad experience; instead they’ve come back pleasantly surprised by the interactions. Some have even made lasting friends.

    The impact: The class has received overwhelmingly positive reviews from students who have taken it, Lin said, with some graduating seniors telling her it had a huge impact on them or that the course changed their life.

    “A lot of students, generally, by the end of the course, are shocked that these little things make them feel better,” Lin said. “A lot of them were saying, ‘I technically know I should be doing these things, but this course gave me an opportunity to actually do them.’”

    Some students shared her lecture recordings (PowerPoints with audio overlaid) and assignments with their families and friends, in the hopes that the content could benefit their health and well-being, as well.

    Lin also conducts pre- and post-assessments of student happiness and well-being throughout the term. She found that from the first class in September to the final one in December, students reported a 20 percent jump in their scores. And that’s despite seasonal blues and the stress of final exam season, Lin said.

    The practices helped all students boost their happiness and well-being, but the greatest gains were among students who were already struggling, especially those receiving clinical mental health support.

    “One student was like, ‘My therapist wants to talk to you—this made such a big difference in my life,’” Lin said.

    Lin is collecting data from the course for future research and has also taken her curriculum out of the classroom, training resident advisers and other campus community members on how to make friends.

    “I think everybody’s a little bit concerned about this, and I’m just trying to go out and take the science everywhere, because I think this should not be behind a paywall,” Lin said.

    Are you noticing and responding to a lack of peer engagement and community on your campus? Tell us more about it.

  • DOJ Says UCLA Violated Jewish Students’ Civil Rights

    The U.S. Department of Justice issued a notice to the University of California, Los Angeles, on Tuesday alleging that it violated civil rights law. The move came just hours after the university announced a $6.45 million settlement to end a lawsuit brought by Jewish students over allegations of antisemitism last year.  

    “The Department has concluded that UCLA’s response to the protest encampment on its campus in the spring of 2024 was deliberately indifferent to a hostile environment for Jewish and Israeli students in violation of the Equal Protection Clause and Title VI,” the notice read. It also said an investigation into the University of California system is ongoing.

    The message made no mention of the settlement; UCLA divided the funds between the plaintiffs and Jewish advocacy and community organizations. The settlement also said the university cannot exclude Jewish students or staff from educational facilities and opportunities “based on religious beliefs concerning the Jewish state of Israel.” (Jewish student plaintiffs argued they were barred by pro-Palestinian protesters from entering certain areas of campus.)

    According to the federal notice, UCLA now has until Aug. 5 to contact the DOJ to seek a voluntary resolution agreement “to ensure that the hostile environment is eliminated and reasonable steps are taken to prevent its recurrence.” DOJ officials said they’re prepared to file a complaint in federal district court by Sept. 2 “unless there is reasonable certainty that we can reach an agreement in this matter.”

    “Our investigation into the University of California system has found concerning evidence of systemic anti-Semitism at UCLA that demands severe accountability from the institution,” Attorney General Pamela Bondi said in a statement. “This disgusting breach of civil rights against students will not stand: DOJ will force UCLA to pay a heavy price for putting Jewish Americans at risk and continue our ongoing investigations into other campuses in the UC system.”

  • Preparing Grad Students to Defend Academic Freedom (opinion)

    Defending academic freedom is an all-hands-on-deck emergency. From the current administration’s scrutiny of (and executive orders related to) higher education, to state legislative overreach and on-campus bad actors, threats to academic freedom are myriad and dire.

    As leader of a program focused on free expression and academic freedom, I see faculty and campus leaders who are flummoxed about how to respond: Where to begin? What can be done to make a difference in defending academic freedom?

    I have an answer, at least if you’re graduate faculty, a dean or director of graduate studies, or a provost: Make a plan to prepare graduate students—tomorrow’s professors—to defend academic freedom.

    Graduate students often feel too pressed to focus on anything other than their coursework or dissertation and so are unlikely to study academic freedom on their own, even if they know where to find solid information. It is incumbent on faculty to put academic freedom in front of graduate students as a serious and approachable topic. If their professors and directors of graduate study do not teach them about academic freedom, they will be ill prepared to confront academic freedom issues when they arise, as they surely will, especially in today’s climate.

    An example: When I met with advanced graduate students at an R-1 university, one student recounted an experience as a junior team member reviewing submissions for a journal. He reported that another team member argued for rejecting a manuscript because its findings could be used to advance a public policy position favored by some politicians that this colleague opposed. The student was rightly troubled about political factors being weighed alongside methodology and scholarship but reported he didn’t have the knowledge or confidence to respond effectively. Bottom line: his graduate training had left him ill-prepared to understand and act on academic freedom principles.

    Here is a summer action plan for graduate faculty, deans and provosts to ensure we don’t leave the next generation of scholars uncertain about academic freedom principles and how they apply in teaching, scholarship and extracurricular settings.

    Add an academic freedom session to orientation. Orientation for matriculating graduate students is a can’t-miss chance to begin education about academic freedom.

    Patrick Kain, associate professor of philosophy at Purdue University, provides a primer on graduate students’ academic freedom rights and responsibilities during his department’s graduate student orientation. His session covers the First Amendment, state law and campus policies. He provides written guidance about what to do, especially in their roles as teaching assistants (“pay attention to the effects of your expression on others”); what not to do (“don’t compel speech”); and what they should expect (“students’ experiences and sensitivity to others’ expression will vary”).

    Reflecting on his experiences leading these orientation sessions, Kain said, “Graduate students, especially those joining us from quite different cultures and institutions, really appreciate a clear explanation of the ground rules of academic freedom and free expression on campus.” He added, “It puts them at ease to be able to imagine how they can pursue their own work with integrity in these trying times, and what they can expect from others when disagreements arise.”

    However, orientation cannot be a “one and done” for a topic as complex as academic freedom. Additional steps to take this summer include:

    Revisit the professional development seminar. Most graduate students take a professional development seminar before preliminary exams. When I took that seminar three decades ago, academic freedom wasn’t a topic—and my inquiries suggest academic freedom hasn’t been added to many professional development seminars since. This must change. In addition to sessions on writing a publishable article and giving a job talk, include sessions on the history and norms of academic freedom and free inquiry. Assign foundational academic freedom documents, such as the American Association of University Professors’ 1940 Statement on the Principles of Academic Freedom and Tenure and the 1967 Joint Statement on Rights and Freedoms of Students, alongside a text offering an overview of academic freedom principles, such as Henry Reichman’s Understanding Academic Freedom (Johns Hopkins Press, 2025).

    Schedule an academic freedom workshop. Graduate students at all stages—and your faculty colleagues, too!—can benefit from stand-alone workshops. Include tabletop exercises that allow students to appreciate nuances of academic freedom principles. For example, tabletop exercises let students test possible responses to a peer who is putting a thumb on the scale against publishing a manuscript submission on nonacademic grounds, to department colleagues who are exerting pressure on them to sign a joint statement with which they disagree or to administrators bowing inappropriately to donor wishes or political pressures. The reports of the Council of Independent Colleges’ Academic Leaders Task Force on Campus Free Expression include ready-for-use tabletop exercises.

    Bolster classroom training for teaching assistants. Professors with teaching assistants can provide an insider’s look into their process for designing a course and planning class meetings, with a focus on how they build trust and incorporate divergent viewpoints, and their approach to teaching potentially controversial topics. In weekly TA meetings, professors and TAs can debrief about what worked to foster robust discussion and what didn’t. Centers for teaching and learning can equip graduate students with strategies that build their confidence for leading discussions, including strategies to uphold free expression and inclusive values when a student speaks in ways that others think are objectionable or that violate inclusion norms. The University of Michigan’s Center for Research on Learning and Teaching offers programs tailored to graduate students and postdocs, including a teaching orientation program.

    Look for opportunities to provide mentorship. An academic career isn’t only about teaching and scholarship but also entails serving on department and university committees, providing—and being subject to—peer review, and planning conferences. Academic freedom questions come up with regularity during these activities. Graduate faculty serve as mentors and should be alert to opportunities to discuss these questions. One idea: Take a “ripped from the headlines” controversy about journal retractions, viral faculty social media posts or how universities are responding to Trump administration pressures and plan a brown-bag lunch discussion with graduate students.

    Take the next step in rethinking graduate student preparation. While the steps above can be taken this summer, with a longer planning horizon, it is possible to rethink graduate preparation for a changed higher education landscape. Morgan State University, a public HBCU in Maryland, offers Morgan’s Structured Teaching Assistant Program (MSTAP), an award-winning course series to prepare graduate students as teachers. Mark Garrison, who as dean of the School of Graduate Studies led the development of MSTAP, explained, “In our required coursework for teaching assistants, we are intensely focused on establishing ground rules for TAs” around how to guide “student engagement that is accepting and encouraging without the intrusion of the TA’s personal views.”

    Garrison added, “This makes free expression a component of instruction that must be cherished and nourished. We cannot assume that the novice instructor will come to this view naturally, and we do our best to embrace a reflective teaching model.”

    Academic freedom is under threat. As Mary Clark, provost and executive vice chancellor at the University of Denver, observed, “Graduate students are developing identities as scholars, learning what academic freedom means in their research and in the classroom—and how their scholarly identity intersects with their extracurricular speech as citizens and community members. It is critical that we support them in developing these understandings.” This summer is the time to plan to do exactly that.

    Jacqueline Pfeffer Merrill is senior director of the Civic Learning and Free Expression Projects at the Council of Independent Colleges.

    Source link

  • UCLA Settles Lawsuit With Jewish Students for $6.45M

    The University of California, Los Angeles, agreed to pay $6.45 million to settle a lawsuit brought by Jewish students, the Los Angeles Times reported. The agreement, which would be in effect for 15 years, now awaits approval from the judge overseeing the case.

    The lawsuit, brought by three Jewish students and a medical school professor in June 2024, alleged UCLA enabled pro-Palestinian activists to cut off Jewish students’ access to parts of campus, violating their civil rights.

    Violence broke out in and around an encampment established at UCLA in spring 2024 when pro-Israel counterprotesters attacked it with fireworks and other projectiles. Hours of chaos ensued between protesters and counterprotesters before campus police intervened. UCLA’s former chancellor Gene D. Block, named in the lawsuit alongside other UCLA officials, was among the higher ed leaders called before Congress for campus antisemitism hearings.

    As part of the settlement agreement, each plaintiff will receive $50,000. Another $320,000 will go toward a campus initiative to combat antisemitism. About $2.3 million will be donated to eight different Jewish community and advocacy groups, including Hillel at UCLA, the Academic Engagement Network, the Anti-Defamation League, the Jewish Federation Los Angeles Campus Impact Network and the Film Collaborative Inc., to produce a film related to the Holocaust.

    UCLA also agreed that it is “prohibited from knowingly allowing or facilitating the exclusion of Jewish students, faculty, and/or staff from ordinarily available portions of UCLA’s programs, activities, and/or campus areas,” which includes “exclusion … based on religious beliefs concerning the Jewish state of Israel.”

    Source link

  • Reaching (Not Just Teaching) Today’s Students: A Communication Cheatsheet – Faculty Focus

    Source link

  • Intl students not behind rent crisis: RBA – Campus Review

    A Reserve Bank of Australia (RBA) analysis has found the post-pandemic international student boom only marginally drove up the price of rent and inflation.

    Source link

  • Howard Students Crowdsource to Cover Unpaid Balances

    Howard University students have taken to social media to crowdsource funds after some found out they owe thousands of dollars to the institution following its transition to a new student financial platform, NBC News reported.

    The social media campaigns began after about 1,000 students received notice that the university put their accounts on hold because of unpaid balances. Some students received emails on June 4 saying that if the balances weren’t paid off by the end of the month, their bills would be sent to an external collections agency, according to The Root. Students in “pre-collection” have until the end of August to pay their bills. As long as a hold remains on their account, they can’t register for classes or student housing.

    Half of the cases have been resolved, according to a statement from Howard on Friday.

    “We are taking active steps to assist students experiencing challenges related to financial aid and account balances,” the statement read. “The University reaffirms its unwavering commitment to student success and to helping ensure that students are financially equipped to begin the academic year.”

    Howard officials also promised to offer virtual and in-person office hours, financial counseling, flexible payment plans, and, when possible, emergency support to affected students.

    On social media, students said they were blindsided by the news of how much they owed.

    “Myself included, many of us that have these balances on our account were not notified prior … which is why we’re struggling to pay them, because we had no idea,” said sophomore Makiah Goodman in one of multiple TikTok videos she made about the issue. She also said she discovered that a scholarship she earned couldn’t be applied to her debt. In another video, she noted that transferring out of Howard is “on the table” if she can’t pay.

    Alissa Jones, also a student, told NBC4 she was a few classes short of graduating when she found out she owed more than $57,000, despite only paying $15,000 per year for the last four years because of scholarship money.

    “Right now, it says I owe $57,540-something, like, I owe the whole thing,” Jones said. “If you have any type of hold, you cannot register for class, but with these, obsessive amounts of money that they’re saying we owe, it’s almost like, that’s not one semester’s worth of tuition, at all.”

    The breakdown in communication seems to have come as Howard transitioned from its old student financial platform, BisonWeb, to a new version, BisonHub. During the process, some student account updates were delayed between January and June of this year, according to Howard’s statement on Friday. (An earlier update from the university said between May and June.)

    Howard officials wrote in the statement that students were informed last October and November that their data would be transferred over to the new platform and that could come with “potential impacts.”

    Protests and Fundraisers

    A group of students has since launched a protest via an Instagram account called @whosehowardisit.

    The group came out with a set of demands, including an immediate in-person meeting with the Board of Trustees, more investment in financial aid and scholarships, and the resignations of some Howard administrators. They also called for student representatives to be added to hiring committees for various administrative positions going forward, particularly directors of student-facing departments. The group provided email templates for students, parents and other stakeholders to amplify their discontent.

    “For too long, students have raised concerns about communication failures, inaccessible leadership, and a lack of transparency around critical issues,” the group wrote in a “Get Involved Guide” shared on social media. “This movement is bigger than past due balances; it’s about how Howard University’s actions, or lack thereof, mirror the patterns of white supremacy, classism, and exclusion that oppress lower-income Black and brown students.”

    In their recent statement, Howard officials acknowledged students’ outspokenness about the issue.

    “While we are addressing the challenges related to the timing of the transition of students’ account data, we are also seeing an increase in the number of students who are publicly expressing frustration and concerns over rising financial pressures and the ability to continue their education,” they said, noting that Howard disproportionately serves low-income students.

    They added, “Recent federal cuts to research grants, education programs, and fellowships have compounded financial pressures on both students and faculty.”

    Students also shared a central hub for the GoFundMe campaigns via the @whosehowardisit Instagram account. Currently, about 70 students’ crowdsourcing campaigns are listed. (The site notes that the campaigns haven’t been “personally verified.”) Run by broadcast journalism student Ssanyu Lukoma, the site also features a GoFundMe submission form and a directory of possible scholarships and other financial resources.

    Some of the fundraising efforts have already paid off. Goodman’s GoFundMe campaign, for example, has so far raised more than $4,000 toward her $6,000 goal. Another campaign for Brandon Hawkins, a rising sophomore, hit $13,000, which is approaching his goal of $16,000. He said in a July 23 update that he’s now met his outstanding balance to Howard and any additional funds will go toward his tuition next year.

    “I hold a very personal and powerful mission: to be the first Black man in my family to graduate from college and create a new legacy for future generations,” Hawkins wrote on his GoFundMe page. “However, despite my academic achievements and unwavering passion, I face serious financial barriers that are threatening my ability to return to Howard and continue pursuing my degree.”

    Source link

  • Why Grad Students Can’t Afford to Ignore AI (opinion)

    I recently found myself staring at my computer screen, overwhelmed by the sheer pace of AI developments flooding my inbox. Contending with the flow of new tools, updated models and breakthrough announcements felt like trying to drink from a fire hose. As someone who coaches graduate students navigating their academic and professional journeys, I realized I was experiencing the same anxiety many of my students express: How do we keep up with something that’s evolving faster than we can learn?

    But here’s what I’ve come to understand through my own experimentation and reflection: The question isn’t whether we can keep up, but whether we can afford not to engage. As graduate students, you’re training to become the critical thinkers, researchers and leaders our world desperately needs. If you step back from advances in AI, you’re not just missing professional opportunities; you’re abdicating your responsibility to help shape how these powerful tools impact society.

    The Stakes Are Higher Than You Think

    The rapid advancement of artificial intelligence isn’t just a tech trend but a fundamental shift that will reshape every field, from humanities research to scientific discovery. As graduate students, you have a unique opportunity and responsibility. You’re positioned at the intersection of deep subject matter expertise and flexible thinking. You can approach AI tools with both the technical sophistication to use them effectively and the critical perspective to identify their limitations and potential harms.

    When I reflect on my own journey with AI tools, I’m reminded of my early days learning to navigate complex organizational systems. Just as I had to develop strategic thinking skills to thrive in bureaucratic environments, we now need to develop AI literacy to thrive in an AI-augmented world. The difference is the timeline: We don’t have years to adapt gradually. We have months, maybe weeks, before these tools become so embedded in professional workflows that not knowing how to use them thoughtfully becomes a significant disadvantage.

    My Personal AI Tool Kit: Tools Worth Exploring

    Rather than feeling paralyzed by the abundance of options, I’ve taken a systematic approach to exploring AI tools. I chose the tools in my current tool kit not because they’re perfect, but because they represent different ways AI can enhance rather than replace human thinking.

    • Large Language Models: Beyond ChatGPT

    Yes, ChatGPT was the breakthrough that captured everyone’s attention, but limiting yourself to one LLM is like using only one search engine. I regularly experiment with Claude for its nuanced reasoning capabilities, Gemini for its integration with Google’s ecosystem and DeepSeek for its open-source availability. Each has distinct strengths, and understanding these differences helps me choose the right tool for specific tasks.

    The key insight I’ve gained is that these aren’t just fancy search engines or writing assistants. They’re thinking partners that can help you explore ideas, challenge assumptions and approach problems from multiple angles, if you know how to prompt them effectively.

    • Executive Function Support: Goblin Tools

    One discovery that surprised me was Goblin Tools, an AI-powered suite of tools designed to support executive function. As someone who juggles multiple projects and deadlines and is navigating an invisible disability, I’ve found the task breakdown and time estimation features invaluable. For graduate students managing research, coursework and teaching responsibilities, tools like this can provide scaffolding for the cognitive load that often overwhelms even the most organized among us.

    • Research Acceleration: Elicit and Consensus

    Perhaps the most transformative tools in my workflow are Elicit and Consensus. These platforms don’t just help you find research papers, but also help you understand research landscapes, identify gaps in literature and synthesize findings across multiple studies.

    What excites me most about these tools is how they augment rather than replace critical thinking. They can surface connections you might miss and highlight contradictions in the literature, but you still need the domain expertise to evaluate the quality of sources and the analytical skills to synthesize findings meaningfully.

    • Real-Time Research: Perplexity

    Another tool that has become indispensable in my research workflow is Perplexity. What sets Perplexity apart is its ability to provide real-time, cited responses by searching the internet and academic sources simultaneously. I’ve found this particularly valuable for staying current with rapidly evolving research areas and for fact-checking information. When I’m exploring a new topic or need to verify recent developments in a field, Perplexity serves as an intelligent research assistant that not only finds relevant information but also helps me understand how different sources relate to each other. The key is using it as a starting point for deeper investigation, not as the final word on any topic.

    • Visual Communication: Beautiful.ai, Gamma and Napkin

    Presentation and visual communication tools represent another frontier where AI is making significant impact. Beautiful.ai and Gamma can transform rough ideas into polished presentations, while Napkin excels at creating diagrams and visual representations of complex concepts.

    I’ve found these tools particularly valuable not just for final presentations, but for thinking through ideas visually during the research process. Sometimes seeing your argument laid out in a diagram reveals logical gaps that weren’t apparent in text form.

    • Staying Informed: The Pivot 5 Newsletter

    With so much happening so quickly, staying informed without becoming overwhelmed is crucial. I subscribe to the Pivot 5 newsletter, which provides curated insights into AI developments without the breathless hype that characterizes much AI coverage. Finding reliable, thoughtful sources for AI news is as important as learning to use the tools themselves.

    Beyond the Chat Bots: Developing Critical AI Literacy

    Here’s where I want to challenge you to think more deeply. Most discussions about AI in academia focus on policies about chat bot use in assignments—important, but insufficient. The real opportunity lies in developing what I call critical AI literacy: understanding not just how to use these tools, but when to use them, how to evaluate their outputs and how to maintain your own analytical capabilities.

    This means approaching AI tools with the same rigor you’d apply to any research methodology. What are the assumptions built into these systems? What biases might they perpetuate? How do you verify AI-generated insights? These aren’t just philosophical questions; they’re practical skills that will differentiate thoughtful AI users from passive consumers.

    A Strategic Approach to AI Engagement

    Drawing from the strategic thinking framework I’ve advocated for in the past, here’s how I suggest you approach AI engagement:

    • Start with purpose: Before adopting any AI tool, clearly identify what problem you’re trying to solve. Are you looking to accelerate research, improve writing, manage complex projects or enhance presentations? Different tools serve different purposes.
    • Experiment systematically: Don’t try to learn everything at once. Choose one or two tools that align with your immediate needs and spend time understanding their capabilities and limitations before moving on to others.
    • Maintain critical distance: Use these tools as thinking partners, not thinking replacements. Always maintain the ability to evaluate and verify AI outputs against your own expertise and judgment.
    • Share and learn: Engage with peers about your experiences. What works? What doesn’t? What ethical considerations have you encountered? This collective learning is crucial for developing best practices.

    The Cost of Standing Still

    I want to be clear about what’s at stake. This isn’t about keeping up with the latest tech trends or optimizing productivity, even though those are benefits. It’s about ensuring that the most important conversations about AI’s role in society include the voices of critically trained, ethically minded scholars.

    If graduate students, future professors, researchers, policymakers and industry leaders retreat from AI engagement, we leave these powerful tools to be shaped entirely by technologists and venture capitalists. The nuanced understanding of human behavior, ethical frameworks and social systems that you’re developing in your graduate programs is exactly what’s needed to guide AI development responsibly.

    The pace of change isn’t slowing down. In fact, it’s accelerating. But that’s precisely why your engagement matters more, not less. The world needs people who can think critically about these tools, who understand both their potential and their perils, and who can help ensure they’re developed and deployed in ways that benefit rather than harm society.

    Moving Forward With Intention

    As you consider how to engage with AI tools, remember that this isn’t about becoming a tech expert overnight. It’s about maintaining the curiosity and critical thinking that brought you to graduate school in the first place. Start small, experiment thoughtfully and always keep your analytical mind engaged.

    The future we’re building with AI won’t be determined by the tools themselves, but by the people who choose to engage with them thoughtfully and critically. As graduate students, you have the opportunity—and, I’d argue, the responsibility—to be part of that conversation.

    The question isn’t whether AI will transform your field. It’s whether you’ll help shape that transformation or let it happen to you. The choice, as always, is yours to make.

    Dinuka Gunaratne (he/him) has worked across several postsecondary institutions in Canada and the U.S. and is a member of several organizational boards, including Co-operative Education and Work-Integrated Learning Canada, CERIC—Advancing Career Development in Canada, and the leadership team of the Administrators in Graduate and Professional Student Services knowledge community with NASPA: Student Affairs Administrators in Higher Education.

    Source link

  • AUD $50bn net gain from students with minimal rent price impact

    International students are not responsible for sky-high rental price hikes, according to the latest analysis produced by Australia’s central bank, the Reserve Bank of Australia.

    In its latest bulletin assessing the role international students play in Australia’s economy, it estimated an AUD$50bn net gain from students and underlined their value as employees, too.

    Spending by international students was also an important contributor to growth in consumer demand in Australia following the pandemic, it declared.

    “In periods of strong inflows of students, such as just after borders reopened after the pandemic, this likely had an important effect on aggregate demand in the economy.”

    And the report pointed out that international students constitute the second largest group of temporary visa holders with work rights in Australia after New Zealand citizens.

    “A greater share of international students work in accommodation and food, as well as retail, compared with the share of the total labour force,” the report’s authors noted.

    “Further, an increasing share of students are now working in health care, consistent with strong labour demand in this sector.”

    The report noted this contribution was important in helping businesses in these sectors facing labour shortages in the tight labour market that emerged post-pandemic.

    The timing of the report is useful: new ESOS legislation is under consideration, and the government is facing calls from the sector to stop stifling international student demand – the latest relating to the new visa application fee, which is killing demand from short-term students.

    When it comes to the political hot potato of international student populations squeezing out domestic renters or contributing to accommodation price surges, the RBA was dismissive of that thesis.

    “The rise in international student numbers is likely to have accounted for only a small share of the rise in rents since the onset of the pandemic.”
    – Reserve Bank of Australia

    Models of the housing market used by the RBA suggest that a 50,000 increase in population would raise private rents by around 0.5 per cent compared with a baseline projection. The marginal effect of an additional renter may be greater in periods where the rental market is tight and vacancy rates are low, such as occurred post-pandemic.

    “Nonetheless, the rise in international student numbers is likely to have accounted for only a small share of the rise in rents since the onset of the pandemic, with much of the rise in advertised rents occurring before borders were reopened.”

    One area where higher international student numbers have generated a supply response has been in purpose-built student accommodation, noted the report, with rapid growth in building approvals for such projects in recent years.

    The report also noted the government’s plan to expand the cap for institutions investing in PBSA.

    Another interesting fact shared was that international students make up around one-third of Australia’s permanent resident intake – around 30 per cent of international students went on to apply for temporary graduate visas in the five years to 2022, the report said, citing 2022 data.

    Less flow into the temporary labour market is expected now – “this is because the recent tightening in visa policy has targeted groups of students who were more likely to be seeking to work,” the RBA explained.

    “That is, those international students who do receive visas going forward are less likely to be focused on employment opportunities in Australia on average,” said the report, citing Andrew Norton.

    In sum, “rapid growth in the international student stock post-pandemic likely contributed to some of the upward pressure on inflation from 2022 to early 2023, especially as arriving students frontloaded their spending as they set up in Australia and took time to join the labour market. However, the increase in international students was just one of many other forces at play in this time that drove demand above supply in the economy, and hence higher inflation. For instance, supply-side factors were the biggest driver of the increase in inflation in 2022 and 2023 (RBA 2023; Beckers, Hambur and Williams 2023) while strong domestic demand arising from supportive fiscal and monetary policy also played an important role.”

    Source link