  • Don’t Underestimate Value of a Human Network (opinion)

    This week is Thanksgiving in the United States, a time when many of us come together with family and friends to express gratitude for the positive things in our lives. The holiday season can also be a challenging time for those who are far from family and grappling with the prevalent loneliness of our modern era.

    Perhaps worse than missing the company of others over the holidays is being with family who hold different views and beliefs from your own. The fact is, though, that when we come together with a large, diverse group of people at events, we are bound to find a variety of viewpoints and personalities in the room.

    People are complex and messy, and engaging with them is often a lot of work. Sometimes it seems easier to just not deal with them at all and “focus on ourselves” instead. Similarly, the vast amount of information available online often leads many graduate students and postdocs to think they can effectively engage in professional development, explore career options and navigate their next step on their own. Indeed, there are many amazing online tools and resources to help with a lot of this, but only by engaging other people in conversation can we fully come to understand how various practices, experiences and occupations apply to us as unique beings in the world. Generic advice is fine, but it can only be tailored through genuine dialogue with another person, even though some believe they can find that dialogue in a machine.

    Generative artificial intelligence (AI) technology has accelerated since the launch of ChatGPT in November 2022 and now many people lean on AI chatbots for advice and even companionship. The problem with this approach is that AI chatbots are, at least currently, quite sycophantic and don’t, by default, challenge a user’s worldview. Rather, they can reinforce one’s current beliefs and biases. Furthermore, since we as humans have a tendency to anthropomorphize things, we perceive the output of AI chatbots as “human” and think we are getting the type of “social” relationship and advice we need from a bot without all the friction of dealing with another human being in real life. So, while outsourcing your problems to a chatbot may feel easy, it cannot fully support you as you navigate your life and career. Furthermore, generative AI has made the job application, screening and interview process incredibly impersonal and ineffective. One recent piece in The Atlantic put it simply (if harshly): “The Job Market is Hell.”

    What is the solution to this sad state of affairs?

    I am here to remind readers of the importance of engaging with real, human people to help you navigate your professional development, job search and life. Despite the fear of being rejected, making small talk or hearing things that may challenge you, engaging with other people will help you learn about professional roles available to you, discover unexpected opportunities, build critical interpersonal skills and, in the process, understand yourself (and how you relate with others) better.

    For graduate students and postdocs today, it’s easy to feel isolated or spend too much time in your own head focusing on your perceived faults and deficiencies. You need to remember, though, that you are doing hard things, including leading research projects seeking to investigate questions no one else has reported on before. But as you journey through your academic career and into your next step professionally, I encourage you to embrace the fact that true strength and resilience lies in our connections—with colleagues, mentors, friends and the communities we build.

    Networks enrich your perspectives, foster resilience and can help you find not only jobs, but joy and fulfillment along the way. Take intentional steps to build and lean on your community during your time as an academic and beyond. Invest time, gratitude and openness in your relationships. Because when you navigate life’s challenges with others by your side, you don’t just survive—you thrive.

    Practical Tips for Building and Leveraging Networks

    For graduate students and postdocs, here are some action steps to foster meaningful networks to help you professionally and personally:

    Tip 1: Seek Diverse Connections

    Attend seminars, departmental events, professional conferences and interest groups—both within and outside your field.

    Join and engage in online forums, LinkedIn groups and professional organizations that interest you. Create a career advisory group.

    Tip 2: Practice Gratitude and Generosity

    Thank peers and mentors regularly—showing appreciation strengthens relationships, opens doors and creates goodwill.

    Offer help, such as reviewing your peers’ résumés, sharing job leads or simply listening. Reciprocity is foundational to strong networks.

    Tip 3: Be Vulnerable and Authentic

    Share struggles and setbacks. Vulnerability invites others to connect, offer advice and foster mutual support.

    Be honest about your goals; don’t feel pressured to follow predefined paths set by others or by societal norms.

    Tip 4: Leverage Formal Resources

    Enroll in career design workshops or online courses, such as Stanford University’s “Designing Your Career.”

    Utilize university career centers, alumni networks and faculty advisers for information and introductions.

    Tip 5: Make Reflection a Habit

    Set aside time weekly or monthly to review progress, map goals and consider input from your network.

    Use journaling or guided exercises to deepen self-insight and identify what you want from relationships and careers.

    Tip 6: Cultivate Eulogy Virtues

    Focus not just on professional “résumé virtues,” but also on “eulogy virtues”—kindness, honesty, courage and the quality of relationships formed.

    These provide lasting meaning and fuel deep, authentic connections that persist beyond job titles and paychecks.

    Strategies for Overcoming Isolation

    Graduate students and postdocs are at particular risk for isolation and burnout, given the demands of research and the often-solitary nature of scholarship. Community is a proven antidote. Consider forming small groups with fellow students and postdocs to share resources, celebrate milestones and troubleshoot professional challenges together. Regular meetings can foster motivation and accountability. These can range from monthly coffee chats to something more structured, such as regular writing or job search support groups. And, while online communities are not a perfect substitute for support, postdocs can leverage Future PI Slack and graduate students can use their own Slack community for help and advice. You can also lean on your networks for emotional support and practical help, especially during stressful periods or setbacks.

    Another practical way to build your network and connections is volunteer engagement. This could mean volunteering with a professional organization, on committees at your institution or in your local community. Working together with others on shared projects in this manner helps build connections without the challenges many have with engaging others at purely social events. In addition, volunteering can help you develop leadership, communication and management skills that can become excellent résumé material.

    Networking to Launch Your Career

    Through the process of engaging with more people through an expanded network you also open yourself up to serendipity and opportunities that could enhance your overall training and career. Career theorists call this “planned happenstance.” The idea is simple: By putting yourself in community with others—attending talks, joining professional groups, volunteering for committees—you increase the odds that unexpected opportunities will cross your path. You meet people who do work you hadn’t considered, learn about opportunities before they’re posted and hear about initiatives that need someone with your skills earlier than most.

    When I was a postdoc at Vanderbilt University, I volunteered for the National Postdoctoral Association (NPA), starting small by writing for their online newsletter (The POSTDOCket), and also became increasingly involved in the Vanderbilt Postdoctoral Association (VPA). These experiences were helpful as I transitioned to working in postdoctoral affairs as a higher education administrator after my postdoc. Writing for The POSTDOCket allowed me to interview administrators and leaders in postdoctoral affairs and, in the process, learn about working in the space. My leadership in the VPA showed I understood some of the needs of the postdoctoral community and could organize programming to support postdocs. I have become increasingly involved in the NPA over the past six years, culminating in serving as chair of our Board of Directors in 2025. This work has increased my national visibility and resulted in invitations to speak to postdocs at different institutions and the opportunity to serve on a National Academies Roundtable, and I believe it helped me land my current role at Virginia Tech.

    I share all this to reiterate that in uncertain job markets, it’s tempting to focus on polishing résumés or applying to ever more positions online. Those things can matter—but they’re not enough. Opportunities often come through both expanding your network and engaging with people and activities you care about. They can present themselves to you via your network long before they appear in writing, and they often can’t be fully anticipated when you initially engage with these “extracurricular activities.” A good first step to open yourself up to possibilities is to get involved in communities outside your direct school or work responsibilities. Doing so will improve your sense of purpose, help you build key transferable skills, increase your connections and aid in your transition to your next role.

    Your training and career should not be a solitary climb, but rather a collaborative, evolving process of growth and discovery. A strong community and network are critical to your long-term well-being and success. And, in a world where setbacks and uncertainty are inevitable, connection is the constant that turns possibility into progress.

    Chris Smith is Virginia Tech’s postdoctoral affairs program administrator. He serves on the National Postdoctoral Association’s Board of Directors and is a member of the Graduate Career Consortium—an organization providing a national voice for graduate-level career and professional development leaders.

  • UK university censors human rights research on abuses in China

    Last year, FIRE launched the Free Speech Dispatch, a regular series covering new and continuing censorship trends and challenges around the world. Our goal is to help readers better understand the global context of free expression. Want to make sure you don’t miss an update? Sign up for our newsletter.


    Yet another university erodes academic freedom to appease Beijing

    In August, I released Authoritarians in the Academy, my book about the relationship between higher education, authoritarian regimes, and the censorship that internationalization has introduced into colleges and universities. And this month, an investigation released by The Guardian provided a perfect example of how this influence and censorship play out, in this case in the UK. 

    Earlier this year, Sheffield Hallam University told professor Laura Murphy, whose work the university had previously touted, to abandon her research into Uyghurs and rights abuses in China. The ban ultimately lasted for eight months until the school reversed course and issued an apology in October after Murphy threatened legal action. The Guardian reports that “the instruction for Murphy to halt her research came six months after the university decided to abandon a planned report on the risk of Uyghur forced labour in the critical minerals supply chain.”

    There are multiple alleged reasons for the university’s decision to disavow research critical of the CCP, but they all boil down to fear of legal or financial retaliation from the same government at the center of academics’ investigations. Murphy suggested that Sheffield Hallam was “explicitly trading my academic freedom for access to the Chinese student market.” And this is a real challenge among university administrations today: fear that vindictive governments will punish noncompliant universities by cutting off their access to lucrative international student tuition. 

    Another likely reason was a warning from Sheffield Hallam’s insurance provider that it would no longer cover work produced by the university’s Helena Kennedy Centre for International Justice after a defamation suit from a company named in its research. The HKC has raised the ire of Chinese government officials before, leading to a block of Sheffield Hallam’s websites behind the Great Firewall. Regarding the ill will between CCP officials and the HKC, a university administrator wrote that “attempting to retain the business in China and publication of the [HKC] research are now untenable bedfellows” and complained of the negative effects on recruitment in the country, which looks to have suffered.

    Most disturbing was a visit Chinese state security officials conducted in 2024 to the university’s Beijing office, where they questioned employees about the HKC’s research and the “message to cease the research activity was made clear.” An administrator said that “immediately, relations improved” when the university informed officials the research into human rights abuses would be dropped. 

    The university’s apology and reversal may not spell the end of the story. A South Yorkshire Police spokesperson suggested that, because of potential engagement with security officials in China, Sheffield Hallam may face investigation under the National Security Act related to a provision on “assisting a foreign intelligence service.”

    NYC indie film festival falls victim to transnational repression

    One of the most common misconceptions about free expression today is that nations with better speech protections are immune to the censorship practiced in less free countries. Case in point: New Yorkers hoping to attend the IndieChina Film Festival, set to begin on Nov. 8, could not do so because of repression in China.

    Organizer Zhu Rikun said relentless pressure necessitated the cancellation of the event, with film directors in and outside China telling him en masse that they could not attend or requesting their films not be shown. Human Rights Watch also reports that Chinese artist Chiang Seeta warned that “nearly all participating directors in China faced intimidation” and even those abroad “reported that their relatives and friends in China were receiving threatening calls from police.”

    Zhu, whose parents and friends in China are reportedly facing harassment as well, thought it would “be better” after moving to the U.S. “It turns out I was wrong,” he said. 

    Worrying UN cybercrime treaty nets dozens of signatures, with a notable exception

    Late last month, 72 nations including France, Qatar, and China signed a treaty purportedly intended to fight “cybercrime” — but one that leaves the door open for authoritarian nations to enlist other nations, free and unfree, in their campaigns to punish political expression on the internet. As I explained last year when the proposal went to the General Assembly, among other problems, the treaty fails to sufficiently define a “serious” crime taking place on computer networks other than as one punishable by a prison sentence of four years or more.

    You might see the immediate problem here: Many nations, including some who ultimately signed on to the treaty, regularly punish online expression with long prison terms. A single TikTok video or an X post that offends or insults government officials, monarchs, or religious bodies can land people around the world in prison — sometimes for decades. 

    Despite earlier statements of support from a representative for the United States on the Ad Hoc Committee on Cybercrime, the U.S. ultimately did not sign the treaty, saying it “is unlikely to sign or ratify unless and until we see implementation of meaningful human rights and other legal protections by the convention’s signatories.”

    That’s not all. There’s plenty more news about speech, tech, and the internet:

    • New amendments to Kenya’s Computer Misuse and Cybercrimes Act are worrying activists in the country, including one that grants the National Computer and Cybercrimes Coordination Committee authority to block material that “promotes illegal activities” or “extreme religious and cultic practices.”
    • Influencers, beware: the Cyberspace Administration of China released new regulations requiring social media users publishing material on “sensitive” topics like law and medicine to prove their qualifications to do so. Platforms will also be required to assist in verifying those qualifications.
    • The much-maligned Online Safety Act continues to create new concerns for free expression in the UK. TechRadar reports that regulatory body Ofcom is “using an unnamed third-party tool to monitor VPN use,” one likely employing AI capabilities. VPN use is, to no surprise, spiking in the UK in response to mandated age checks under the online safety regulations.
    • Brazil is employing a new AI-powered online speech monitor to collect material from social media and blogs that can be used to prosecute hate speech offenders in the country. Hate speech convictions can carry serious punishment in Brazil, as with the comedian sentenced this year to over eight years in prison for offensive jokes.
    • The European Union Council’s “Chat Control” proposal to scan online communications and files for CSAM appears to be moving forward. The latest proposal removes the obligation for service providers to scan all material but encourages it to be done voluntarily. However, the text of the proposal allows for a “mitigation measure” requiring providers deemed high risk to take “all appropriate risk mitigation measures.”
    • Apple and Android removed gay dating apps from their app stores in China after “an order from the Cyberspace Administration of China.” A spokesperson for Apple said, “We follow the laws in the countries where we operate.”
    • India has somewhat narrowed the scope of its vast internet takedown machine, limiting the authority of those who can demand platforms block material to officials who reach a certain rank of power. Those ordering removals will now also be required to “clearly specify the legal basis and statutory provision invoked” and “the nature of the unlawful act.”
    • Chief Minister Siddaramaiah of the Indian state Karnataka is threatening a new law against misinformation that will punish those “giving false information to people, and disturbing communal harmony.”
    • Swiss man Emanuel Brünisholz will spend ten days in prison next month after choosing not to pay a fine of 600 Swiss francs stemming from his incitement-to-hatred conviction. Brünisholz’s offense was this 2022 Facebook comment: “If you dig up LGBTQI people after 200 years, you’ll only find men and women based on their skeletons. Everything else is a mental illness promoted through the curriculum.”
    • A Spanish court acquitted a Catholic priest of hate speech charges after a yearslong investigation into his online criticisms of Islam, including a 2016 article, “The Impossible Dialogue with Islam.”

    • Continuing its widespread censorship of what it deems “gay propaganda” or “extremist” material, Russian media regulator Roskomnadzor banned the world’s largest anime database last month. Roskomnadzor blamed the block on MyAnimeList’s content “containing information propagating non-traditional sexual relations and/or preference.”
    • Singapore plans to roll out a new online safety commission with authority to order platforms to block posts and ban users and to demand internet service providers censor material as well. Initially, it intends to address harms like stalking but will eventually also target “the incitement of enmity.”
    • South Sudan’s National Security Service released comedian Amath Jok after four days in detention for insulting President Salva Kiir, whom she called “a big thief wearing a hat,” on TikTok. But Jok isn’t out of the woods yet. Authorities have indefinitely banned her from using social media.

    South Korea seeks to punish expression targeting other nations

    In response to controversial protests against China, a Democratic Party of Korea lawmaker is pushing for legislation to punish those who “defame or insult” countries and their residents or ethnic groups. The bill would punish false information with fines and prison terms up to five years, and “insulting” speech with up to a year. 

    That effort garnered support this month when President Lee Jae Myung said that “hate speech targeting specific groups is being spread indiscriminately, and false and manipulated information is flooding” social media. He called it “criminal behavior” beyond the bounds of free expression.

    Media censorship from Israel to Kyrgyzstan to Tunisia 

    • The BBC has apologized to President Trump over “the manner” in which a clip of his speech on Jan. 6, 2021, was edited to give “the mistaken impression that President Trump had made a direct call for violent action,” but notes that its UK-aired “Trump: A Second Chance?” program was not defamatory. It remains unclear whether Trump will still follow through on his threat to file a suit against the British outlet, but in earlier comments he claimed to have an “obligation” to do so.
    • By a vote of 50 to 41, Israel’s Knesset passed the first of three steps in the approval of the Law to Prevent Harm to State Security by a Foreign Broadcasting Authority, which would give authorities permanent power to shut down and seize foreign media they deem “harmful” without needing judicial review or approval.
    • A BBC journalist and Vietnamese citizen who returned home to renew their passport has not been allowed to leave the country for months. The journalist was reportedly held by police for questioning about their journalism.
    • Thai activist Nutthanit Duangmusit was sentenced to two years for lèse majesté for her part in conducting a 2022 opinion poll to “gauge public opinion about whether they agree with the King being allowed to exercise his authority as he wishes.”
    • A Kyrgyz court declared two investigative media outlets “extremist,” banned them from publishing, and made distribution of their work illegal.
    • Investigative outlet Nawaat received a disturbing surprise from Tunisian authorities on Oct. 31: a notice slipped under their office door without even a knock, warning them to suspend all activities for a month. 

    Tanzanian police warn against words or images causing “distress”

    In response to protests over President Samia Suluhu Hassan’s reelection, Tanzanian authorities issued a disturbing warning to the country: text messages or online posts could have serious consequences. The mass text sent to Tanzanian residents warned, “Avoid sharing images or videos that cause distress or degrade someone’s dignity. Doing so is a criminal offense and, if found, strict legal action will be taken.”

    Hundreds have indeed been charged with treason, including one woman whose offense was recommending that protesters buy gas masks for protection at demonstrations.

    Masih Alinejad’s would-be killers sentenced to 25 years in prison 

    In 2022, journalist and women’s rights activist Masih Alinejad was the target of an Iran-coordinated assassination plot that culminated in a hit man arriving outside her New York home with an AK-47. Late last month, two men were sentenced for their involvement in the attempt. The men, Rafat Amirov and Polad Omarov, were handed 25 years each in a Manhattan federal court. Regarding the verdict, Alinejad said: “I love justice.”

    Ailing novelist granted pardon from Algerian president

    Some parting good news: Boualem Sansal, an 81-year-old French-Algerian novelist who is suffering from cancer, has been granted a presidential pardon after serving one year of a five-year sentence. Sansal was arrested late last year and convicted of undermining national unity and insulting public institutions. His humanitarian pardon from Algerian President Abdelmadjid Tebboune comes after months of advocacy by European leaders.

  • Schools Should Help Students Be Human

    One of the great ironies and great frustrations of my career teaching first-year college writing was having students enter our class armed with a whole host of writing strategies which they had been explicitly told they needed to know “for college,” and yet those strategies—primarily the following of prescriptive templates—were entirely unsuited to the experience students were going to have over the next 15 weeks of our course (and beyond).

    I explored and diagnosed these frustrations in Why They Can’t Write, and many other writing teachers in both high school and college told me they had seen, and been equally (or more) frustrated by, the same things. In the intervening years, there’s been some progress, but frankly, not enough, primarily because the structural factors that distorted how writing is taught precollege have not been addressed.

    As long as writing is primarily framed as workforce preparation to be tested through standardization and quantification, students will struggle when invited into a more nuanced conversation that requires them to mine their own thoughts and experiences of the world and put those thoughts and experiences in juxtaposition with the ideas of others. The good news, in my experience, is that once invited into this struggle, many students are enthusiastic to engage, at least once they genuinely believe that you are interested in the contours of their minds and their experiences.

    This divide between the driving ethos of schools and schooling, as often seen in high school, and the ideals of deep humanistic inquiry (ideally) animating postsecondary education courses is the subject of a highly recommended piece by Anna E. Clark, who has taught at both the high school and university levels, published as part of a recent special series at Public Books focused on higher education.

    Clark calls for a “higher ed and secondary ed alliance” based in the values we all at least claim to share: free inquiry, self-determination and an appreciation for lives that are more than the “skills” we’re supposed to bring to our employers.

    Something I can’t help but note is that the challenges college instructors are having getting students to steer clear of outsourcing their thinking to large language models would be significantly lessened if students had a greater familiarity with thinking during their secondary education years. Unfortunately, the system of indefinite future reward that has been reduced to pure transactions in exchange for grades and credentials has signaled that the outputs of the homework machine are satisfactory, so why not just give in?

    When I go to campuses and schools and have the opportunity to speak to students, I try to list all kinds of reasons why they shouldn’t just give in, reasons which, in the end, boil down to the fact that being a big dumb-dumb who doesn’t know anything and can’t do anything without the aid of a predictive text-generation machine is simply an unfulfilling and unpleasant way to go through life.

    In short, they will not be happy, even if they find ways to navigate their “work” with the aid of AI, because humans simply need more than this from our existences.

    I can now add another reason to my list: According to a raft of business insiders cited in a recent article at Inc., it is the liberal arts degree whose value is going to rise in this age of AI.

    In a world where machines can handle the technical knowledge, the only differentiator is being human.

    This is not news to those of us with those degrees, like my sister-in-law, who took her liberal arts degree from Denison University all the way to a general counsel job at a Fortune 300 company, or someone else with a far humbler résumé … me.

    As I wrote in 2013 in this very space, the key to my success as an adult who has had to repeatedly adapt to a changing world is my liberal arts degrees, degrees that armed me with foundational and enduring skills that have served me quite well.

    But, of course, it is about more than these skills. My pursuit of these degrees also allowed me to consider what a good life should be. That knowledge has put me in a position where—knock wood—I wake up just about every morning looking forward to what I have to do that day.

    This is true even as the things I most care about—education, reading/writing, uh … democracy—appear to be inexorably crumbling around me. Perhaps this is because my knowledge of the value of humanistic study as something more than a route to a good job makes me more willing to fight for its continuation.

    Sometimes when I encounter some hand-wringing about the inevitability of AI and the uncertainty of the future, I want to remind the fretful that we actually have a very sound idea of what we should be emphasizing, the same stuff we always should have been emphasizing—teaching, learning, living, being human.

    We have clear notions of what this looks like. The main question now is if we have the collective will to move toward that future, or if we will give in to something much darker, much less satisfying and much less human.

  • The difficult human work behind responsible AI use in college operations

    COLUMBUS, OHIO — Artificial intelligence-based products and software for college admissions and operations are proliferating in the higher education world. 

    How to choose from among them? Well, leaders can start by identifying a problem that is actually in need of an AI solution. 

    That is one of the core pieces of advice from a panel on deploying AI technology responsibly in college administration at the National Association for College Admission Counseling’s conference last week.

    Jasmine Solomon, senior associate director of systems operations at New York University, described a “flooded marketplace” of AI products advertised for a range of higher ed functions, from tutoring systems to retention analytics to admissions chatbots. 

    “Define what your AI use case is, and then find the purpose-built tool for that,” Solomon said. “If you’re using a general AI model or AI tool for an unintended purpose, your result is going to be poor.” 

    Asking why before you buy

    It’s also worth considering whether AI is the right tool. 

    “How does AI solve this problem better? Because maybe your team or the tools that you already have can solve this problem,” Solomon said. “Maybe you don’t need an AI tool for this.”

    Experts on the panel pointed out that administrators also need to think about who will use the tool, the potential privacy pitfalls of it, and its actual quality. 

    As Solomon put it, “Those built-in AI features — are they real? Are they on a future-release schedule, or is it here now? And if it’s here now, is it ready for prime time or is it ‘here now, and we’re beta testing.’” 

    Other considerations in deploying AI include those related to ethics, compliance and employee contracts.

    Institutions need to be mindful of workflows, staff roles, data storage, privacy and AI stipulations in collective bargaining contracts, said Becky Mulholland, director of first-year admission and operations at the University of Rhode Island.

    “For those who are considering this, please, please, please make sure you’re familiar with those aspects,” Mulholland said. “We’ve seen this not go well in some other spaces.”

    On top of all that is the environmental impact of AI. One estimate found that AI-based search engines can use as much as 30 times more energy than traditional search. The technology also uses vast amounts of water to cool data centers.

    Panelists had few definitive answers for resolving AI’s environmental problems at the institutional level. 

    “There’s going to be a space for science to find some better solutions,” Mulholland said. “We’re not there right now.” 

    Solomon pointed to the pervasiveness of AI tools already embedded in much of our digital technology and argued untrained use could worsen the environmental impact. 

    “If they’re prompting [AI] 10, 20 times just to get the answer they want, they’ve used far more energy than if they understood prompt engineering,” Solomon said. 

    Transparency is also important. At NYU, Solomon said the university was careful to ensure prospective students knew they were talking with AI when interacting with its chatbot — so much so that they named the tool “NYUAdmissionsBot” to make its virtual nature as explicit as possible. 

    “We wanted to inform them every step of the way that you were talking to AI when you were using this chatbot,” Solomon said. 

    ‘You need time to test it’

    After all the big questions are asked and answered, and an AI solution chosen, institutions still have the not-so-small task of rolling the technology out in a way that is effective in both the short and long term. 

    The rollout of NYU’s chatbot in spring 2024 took “many, many months,” according to Solomon. “If a vendor tells you, ‘We will be up in a week,’ multiply that by like a factor of 10. You need time to test it.” The extra time can ensure a feature is actually ready when it’s unveiled for use. 

    The upside to all that time and effort for something like an admissions chatbot, Solomon noted, is that the AI feature can be available around-the-clock to answer inquiries, and it can quickly address the most commonly asked questions that would normally be flooding the inboxes of admissions staff. 

    But even after a successful initial rollout of an AI tool or feature, operations staff aren’t done. 

    Solomon described a continuous cycle of developing key metrics of success, running controlled experiments with an AI product and carefully examining data from AI use, including by having a human looking over the shoulder of the robots. In NYU’s case, this included looking at responses the chatbot gave to inquiries from prospective students.
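    The "human looking over the shoulder" step can be sketched as a simple review queue. This is a hypothetical illustration, not NYU's actual pipeline; the records, field names and sampling rate below are invented:

    ```python
    import random

    # Hypothetical transcript records from an admissions chatbot.
    transcripts = [
        {"id": 1, "question": "What is the application deadline?",
         "answer": "January 5.", "escalated": False},
        {"id": 2, "question": "Can I defer admission?",
         "answer": "I'm not sure.", "escalated": True},
    ]

    def review_sample(records, rate=0.2, seed=0):
        """Queue a fraction of transcripts for a human reviewer,
        always including conversations the bot escalated or failed."""
        must_review = [r for r in records if r["escalated"]]
        rest = [r for r in records if not r["escalated"]]
        k = max(1, int(len(rest) * rate))          # review at least one
        rng = random.Random(seed)                  # reproducible sample
        return must_review + rng.sample(rest, min(k, len(rest)))

    queue = review_sample(transcripts)
    ```

    The point of the sketch is the design choice: automated sampling decides *which* responses a person reads, but a person still reads them.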

    Source link

  • Future-Proof Students’ (and Our) Careers by Building Uniquely Human Capacities – Faculty Focus

    Future-Proof Students’ (and Our) Careers by Building Uniquely Human Capacities – Faculty Focus

    Source link

  • Super-universities and human sized art schools

    Super-universities and human sized art schools

    The creation of a new super university in South East England, through the merger of Kent and Greenwich, signals both a turning point and a warning.

    Advocates see consolidation as the promise of scale and resilience.

    Critics fear homogenisation, loss of identity, and narrowing of choice.

    Both could be right.

    What matters most is not the merger itself but the logic that underpins it. In the absence of a shared national mission for higher education, mergers are now framed as solutions: a form of market rationalisation presented as vision.

    The vacuum where mission should be

    Since the 2012 funding reforms, higher education has been treated less as civic infrastructure and more as a competitive market. Public investment was replaced by loans. Students were told to think like investors. Degrees became receipts.

    Into the gap left by an absence of national purpose rushed hyper-regulation: metrics, thresholds, and questions of fiscal viability. Within this narrowed frame, mergers appear logical. Bigger looks cheaper. Consolidation looks like progress. But without a shared mission, the deeper questions go unanswered.

    The long contraction

    For much of the last century, almost every town in Britain had its own art school: civic in origin, modest in scale, and rooted in place. In the 1960s there were over 150 across England. Over time, that dispersed civic network was redrawn. Some schools were absorbed into polytechnics, some federated into new structures, many disappeared.

    From this history, four models emerged: the consolidated metropolitan brand, uniting multiple colleges under one identity; the regional federation spread across towns and cities; the specialist regional provider rooted in place; and the art school absorbed into a larger university. All four persist, but history shows how quickly the civic and regional variants were erased in the pursuit of scale. That remains the risk.

    The limits of consolidation

    Super universities are most often justified through promises of efficiency and resilience. The patterns of merger and acquisition are familiar, exercised through cuts, closures, and the stripping back of provision. Contraction is presented as progress.

    And what follows: a merger into an “Ultra Super University”, a “Mega University”? The logic of consolidation always points in that direction. Fewer institutions. The illusion that size solves structural problems.

    But what if the future of universities is regional, hybrid and networked? Do mergers enable this? Or do they reduce it, by erasing local presence in the pursuit of efficiency?

    The risk is not only that provision shrinks, but that our regional and civic anchors are lost. A university’s resilience lies not in the absence of difference but in its presence: in the tolerance of variety, the recognition of locality, and the capacity to sustain attachment.

    Federation of art schools

    UCA grew from a federation of art schools, distributed rather than centralised, holding to a civic model of place. This has been hard to sustain in today’s free market. However, our University has become a place for those who find belonging in community, for outliers and outsiders at home in the intimacy of a civic setting, rather than the intensity of the metropolis. Our resilience shows how creative specialist schools can generate strength from vulnerability. Our story also foreshadows the systemic pressures now confronting universities everywhere.

    The Kent–Greenwich merger now brings new possibilities for Medway, positioned between Greenwich and Kent and home to a university campus for them both. If approached with care, it could restore creative presence to a place long on the periphery.

    Our civic project persists at Canterbury School of Art, Architecture and Design. Our founder, Sidney Cooper, a local painter, established Canterbury’s School of Art in 1868 as a gift to the city. It has survived every reform since. In the 1960s it moved into a modernist building, future-facing yet rooted in the Garden of England.

    That identity carried it through polytechnic consolidation, university expansion, and marketisation. It remains its strength now: an art school for the city, of the city, and in the city. Creativity is lived as much as it is taught.

    A human-sized proposition

    For us at UCA Canterbury, the alternative is clear. Ours is a human-sized proposition: intimate, civic, distinctive. A place where students are known by name, where teaching is close, and where creativity is inseparable from civic life.

    We intend that our graduates remain in creative professions for life, not because of economies of scale but because of the depth of their formation. Small institutions enable what scale cannot: intimacy, belonging, and the tolerance of difference. They cultivate attachment to place, the character of community, and the fragile conditions in which nurture and trust can grow. These are not marginal gains. They are the essence of education itself. Vulnerability, when named and advocated for, becomes strength.

    This is the measure against which any super university must be judged: not whether it scales, but whether it sustains the human scale within it. The crisis in higher education is not only financial but cultural. It is about whether universities can still act as places of meaning, attachment, and public need.

    Our founder, Sidney Cooper, understood in 1868 that education was not about scale but about purpose. That mission still speaks. In the shadow of consolidation and the spectre of Artificial Intelligence, what must endure is the human scale of learning and belonging.

    To sustain it is a choice we must keep making.

    Source link

  • 100 Ways the Trump Administration Has Undermined the Environment, Human Rights, World and Domestic Peace, Labor, and Knowledge

    100 Ways the Trump Administration Has Undermined the Environment, Human Rights, World and Domestic Peace, Labor, and Knowledge

    The Trump administration, since returning to power in 2025, has escalated attacks on the foundations of democracy, the environment, world peace, human rights, and intellectual inquiry. While the administration has marketed itself as “America First,” its policies have more often meant profits for the ultra-wealthy, repression for the working majority, and escalating dangers for the planet.

    Below is a running list of 100 of the most dangerous actions and policies—a record of how quickly a government can dismantle hard-won protections for people, peace, and the planet.


    I. Attacks on the Environment

    1. Withdrawing from the Paris Climate Agreement—again.

    2. Dismantling the EPA’s authority to regulate greenhouse gases.

    3. Opening federal lands and national parks to oil, gas, and mining leases.

    4. Gutting protections for endangered species.

    5. Allowing coal companies to dump mining waste in rivers and streams.

    6. Rolling back vehicle fuel efficiency standards.

    7. Subsidizing fossil fuel companies while defunding renewable energy programs.

    8. Suppressing climate science at federal agencies.

    9. Greenlighting pipelines that threaten Indigenous lands and water supplies.

    10. Promoting offshore drilling in fragile ecosystems.

    11. Weakening Clean Water Act enforcement.

    12. Dismantling environmental justice programs that protect poor communities.

    13. Politicizing NOAA and censoring weather/climate warnings.

    14. Undermining international climate cooperation at the UN.

    15. Allowing pesticides banned in Europe to return to U.S. farms.


    II. Undermining World Peace and Global Stability

    1. Threatening military action against Iran, Venezuela, and North Korea.

    2. Expanding the nuclear arsenal instead of pursuing arms control.

    3. Cutting funding for diplomacy and the State Department.

    4. Withdrawing from the World Health Organization (WHO).

    5. Weakening NATO alliances with inflammatory rhetoric.

    6. Escalating drone strikes and loosening rules of engagement.

    7. Providing cover for authoritarian leaders worldwide.

    8. Walking away from peace negotiations in the Middle East.

    9. Blocking humanitarian aid to Gaza, Yemen, and other war-torn areas.

    10. Expanding weapons sales to Saudi Arabia despite human rights abuses.

    11. Using tariffs and sanctions as blunt instruments against allies.

    12. Politicizing intelligence briefings to justify military adventurism.

    13. Abandoning refugee protections and asylum agreements.

    14. Treating climate refugees as security threats.

    15. Reducing U.S. participation in the United Nations.


    III. Attacks on Human Rights and the Rule of Law

    1. Expanding family separation policies at the border.

    2. Targeting asylum seekers for indefinite detention.

    3. Militarizing immigration enforcement with National Guard troops.

    4. Attacking reproductive rights and defunding women’s health programs.

    5. Rolling back LGBTQ+ protections in schools and workplaces.

    6. Reinstating bans on transgender service members in the military.

    7. Undermining voting rights through purges and voter ID laws.

    8. Packing the courts with extremist judges hostile to civil rights.

    9. Weaponizing the Justice Department against political opponents.

    10. Expanding surveillance powers with little oversight.

    11. Encouraging police crackdowns on protests.

    12. Expanding use of federal troops in U.S. cities.

    13. Weakening consent decrees against abusive police departments.

    14. Refusing to investigate hate crimes tied to far-right violence.

    15. Deporting long-term immigrants with no criminal record.


    IV. Attacks on Domestic Peace and Tranquility

    1. Encouraging militias and extremist groups with dog whistles.

    2. Using inflammatory rhetoric that stokes racial and religious hatred.

    3. Equating journalists with “enemies of the people.”

    4. Cutting funds for community-based violence prevention.

    5. Politicizing natural disaster relief.

    6. Treating peaceful protests as national security threats.

    7. Expanding federal use of facial recognition surveillance.

    8. Undermining local control with federal overreach.

    9. Stigmatizing entire religious and ethnic groups.

    10. Promoting conspiracy theories from the presidential podium.

    11. Encouraging violent crackdowns on labor strikes.

    12. Undermining pandemic preparedness and response.

    13. Allowing corporations to sidestep workplace safety rules.

    14. Shutting down diversity and inclusion training across agencies.

    15. Promoting vigilante violence through online platforms.


    V. Attacks on Labor Rights and the Working Class

    1. Weakening the Department of Labor’s enforcement of wage theft.

    2. Blocking attempts to raise the federal minimum wage.

    3. Undermining collective bargaining rights for federal workers.

    4. Supporting right-to-work laws across states.

    5. Allowing employers to misclassify gig workers as “independent contractors.”

    6. Blocking new OSHA safety standards.

    7. Expanding exemptions for overtime pay.

    8. Weakening rules on child labor in agriculture.

    9. Cutting unemployment benefits during economic downturns.

    10. Favoring union-busting corporations in federal contracts.

    11. Rolling back protections for striking workers.

    12. Encouraging outsourcing of jobs overseas.

    13. Weakening enforcement of anti-discrimination laws in workplaces.

    14. Cutting funding for worker retraining programs.

    15. Promoting unpaid internships as a “pathway” to jobs.


    VI. Attacks on Intellectualism and Knowledge

    1. Defunding the Department of Education in favor of privatization.

    2. Attacking public universities as “woke indoctrination centers.”

    3. Promoting for-profit colleges with predatory practices.

    4. Restricting student loan forgiveness programs.

    5. Undermining Title IX protections for sexual harassment.

    6. Defunding libraries and public broadcasting.

    7. Politicizing scientific research grants.

    8. Firing federal scientists who contradict administration narratives.

    9. Suppressing research on gun violence.

    10. Censoring federal climate and environmental data.

    11. Promoting creationism and Christian nationalism in schools.

    12. Expanding surveillance of student activists.

    13. Encouraging book bans in schools and libraries.

    14. Undermining accreditation standards for higher education.

    15. Attacking historians who challenge nationalist myths.

    16. Cutting humanities funding in favor of military research.

    17. Encouraging political litmus tests for professors.

    18. Treating journalists as combatants in a “culture war.”

    19. Promoting AI-driven “robocolleges” with no faculty oversight.

    20. Gutting federal student aid programs.

    21. Allowing corporate donors to dictate university policy.

    22. Discouraging international students from studying in the U.S.

    23. Criminalizing whistleblowers who reveal government misconduct.

    24. Promoting conspiracy theories over peer-reviewed science.

    25. Normalizing ignorance as a political strategy.        

    Source link

  • Human connection still drives school attendance

    Human connection still drives school attendance

    Key points:

    At ISTE this summer, I lost count of how many times I heard “AI” as the answer to every educational challenge imaginable. Student engagement? AI-powered personalization! Teacher burnout? AI lesson planning! Parent communication? AI-generated newsletters! Chronic absenteeism? AI predictive models! But after moderating a panel on improving the high school experience, which focused squarely on human-centered approaches, one district administrator approached us with gratitude: “Thank you for NOT saying AI is the solution.”

    That moment crystallized something important that’s getting lost in our rush toward technological fixes: While we’re automating attendance tracking and building predictive models, we’re missing the fundamental truth that showing up to school is a human decision driven by authentic relationships.

    The real problem: Students going through the motions

    The scope of student disengagement is staggering. Challenge Success, affiliated with Stanford’s Graduate School of Education, analyzed data from over 270,000 high school students across 13 years and found that only 13 percent are fully engaged in their learning. Meanwhile, 45 percent are what researchers call “doing school,” going through the motions behaviorally but finding little joy or meaning in their education.

    This isn’t a post-pandemic problem–it’s been consistent for over a decade. And it directly connects to attendance issues. The California Safe and Supportive Schools initiative has identified school connectedness as fundamental to attendance. When high schoolers have even one strong connection with a teacher or staff member who understands their life beyond academics, attendance improves dramatically.

    The districts that are addressing this are using data to enable more meaningful adult connections, not just adding more tech. One California district saw 32 percent of at-risk students improve attendance after implementing targeted, relationship-based outreach. The key isn’t automated messages, but using data to help educators identify disengaged students early and reach out with genuine support.

    This isn’t to discount the impact of technology. AI tools can make project-based learning incredibly meaningful and exciting, exactly the kind of authentic engagement that might tempt chronically absent high schoolers to return. But AI works best when it amplifies personal bonds rather than seeking to replace them.

    Mapping student connections

    Instead of starting with AI, start with relationship mapping. Harvard’s Making Caring Common project emphasizes that “there may be nothing more important in a child’s life than a positive and trusting relationship with a caring adult.” Rather than leave these connections to chance, relationship mapping helps districts systematically identify which students lack that crucial adult bond at school.

    The process is straightforward: Staff identify students who don’t have positive relationships with any school adults, then volunteers commit to building stronger connections with those students throughout the year. This combines the best of both worlds: Technology provides the insights about who needs support, and authentic relationships provide the motivation to show up.
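    As a rough illustration of the process just described (not any district's actual system), the mapping step amounts to a set difference followed by an assignment. All names and data here are made up:

    ```python
    from itertools import cycle

    # Hypothetical data: which students each adult reports a strong
    # positive connection with.
    connections = {
        "Ms. Rivera": {"Ana", "Ben"},
        "Coach Lee":  {"Ben", "Dev"},
    }
    all_students = {"Ana", "Ben", "Cai", "Dev", "Efe"}
    volunteers = ["Mr. Ortiz", "Dr. Um"]

    # Step 1: find students with no positive adult relationship at school.
    connected = set().union(*connections.values())
    unmatched = sorted(all_students - connected)

    # Step 2: pair each unmatched student with a volunteer for the year.
    pairings = dict(zip(unmatched, cycle(volunteers)))
    ```

    The technology here is trivial by design; its only job is to surface who lacks a connection so that an adult, not a system, can build one.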

    True school-family partnerships to combat chronic absenteeism need structures that prioritize student consent and agency, provide scaffolding for underrepresented students, and feature a wide range of experiences. It requires seeing students as whole people with complex lives, not just data points in an attendance algorithm.

    The choice ahead

    As we head into another school year, we face a choice. We can continue chasing the shiny startups, building ever more sophisticated systems to track and predict student disengagement. Or we can remember that attendance is ultimately about whether a young person feels connected to something meaningful at school.

    The most effective districts aren’t choosing between high-tech and high-touch–they’re using technology to enable more meaningful personal connections. They’re using AI to identify students who need support, then deploying caring adults to provide it. They’re automating the logistics so teachers can focus on relationships.

    That ISTE administrator was right to be grateful for a non-AI solution. Because while artificial intelligence can optimize many things, it can’t replace the fundamental human need to belong, to feel seen, and to believe that showing up matters.

    The solution to chronic absenteeism is in our relationships, not our servers. It’s time we started measuring and investing in both.

    Source link

  • Co-curricular space is where students can find human experience outside the AI bubble

    Co-curricular space is where students can find human experience outside the AI bubble

    In his criminally underread 1978 book The Grasshopper, the philosopher Bernard Suits takes seriously the science-fiction commonplace that, once robots are doing everything for us, humans will have to find something else to do.

    His response is that we’d play, living lives of leisure, like Aesop’s grasshopper, and engaging in activities with a lusory attitude: living playfully, engaging in activities not because we have to, but because we want to. Fully-automated luxury play! As much as I could easily play videogames all day, work isn’t going anywhere any time soon. But the rise of AI in education has prompted me to revisit this topic.

    Universities are, quite rightly, thinking very carefully about what their staff and students do with AI, emphasising the ways in which it can enhance, and perhaps even replace, aspects of our work. But there are separate, parallel questions: what can’t AI do for us, and what shouldn’t it do? And what are we going to do with all the time it saves us?

    Doing and being seen to have done

    Human lives are full of experiences, and there’s a danger with the rise of AI that we weaken our connection with the actual doing of things. AI might help us to plan a holiday itinerary, book a hotel or draft a jealousy-inducing social media post (or even deepfake pics from a holiday that didn’t happen), but it can’t go on holiday for us. And similarly, in learning environments, whilst it can enhance learning, overreliance on AI runs the risk of hollowing out the experiential core of learning and leaving students not having actually done anything.

    A real challenge for educators is to know how to get students to understand the value of experience in a world that incentivises taking shortcuts. I lead Rise at Manchester Met: a co-curricular programme that is designed to draw together all the things that students do that aren’t their degree, and our team works hard to help students to understand that they are more than their degree subject.

    The traditional catch-all term for this is “extra-curricular” – it’s the things that students do in addition to the curricula they are following. But in practice “co-curricular” is a more accurate term. “Co” indicates that activity happens alongside and with the curriculum. There is a crossover in the experiences that students are having. Picture the curriculum and co-curricular activities as two streams that are sometimes totally divergent, sometimes parallel, and often overlapping in productive ways.

    Identity shapes participation

    Students don’t stop being students when they engage in co-curricular activities, but similarly they don’t stop being a community organiser, or a hockey player, or a freelance arts journalist, when they’re in the classroom. My doctoral thesis argued that half of the “game” of higher education is students understanding how they can bring their own identities to transform their participation, and “position-switch” between roles. The co-curricular is at its most powerful when these distinct identities and experiences begin to transform and enhance each other.

    Moving from “extra” to “co” also challenges the primacy of the core curriculum as the foundation of student experience, and acknowledges that, for many of our students, “student” might not be their primary identity. We must accept that, for some students, their co-curricular activity might be more engaging, more relevant and more career-focused than their core degree programme. For others, the stuff they are doing outside their degree programme might be necessary and unavoidable, and will often pre-date their involvement at university; paid work and caring responsibilities tend to take precedence over lectures, and there may be ways to make this count too.

    However, when you type “co-curricular” into your search engine of choice, you won’t really see university websites. It’s a term that, at present, seems to be owned by the upper end of British private boarding schools. In a sense this stands to reason: pupils essentially live in these schools during term time, and activities take place as part of their wider life at school. Here “co-curricular” is an expectation, and it provides the social and cultural capital-building for which British private schools are famous.

    This conceptual dominance raises an issue of social justice, though. There is a sense that all of the “extra” stuff, at both schools and universities, is the domain of students who are privileged enough to take part, and who have the time and resources to make it happen. Working outside the curriculum is too often seen as a privilege for the privileged, and effectively becomes self-fulfilling as students with the free time to volunteer reap the developmental benefits of volunteering their time. Other students are already on the back foot when it comes to claiming their share of experience.

    Embarrassment of riches

    Rise was set up to challenge this narrative, by giving students time and resource to develop their social capital in flexible ways, and to recognise developmental activities that might not have traditionally been included under the extra-curricular umbrella. There’s a broader conversation to be had in the sector, not about how we encourage already busy students to do more, but about how we encourage students to recognise their learning beyond the curriculum.

    In Manchester and beyond, the skills pendulum seems to be swinging once more away from digital skills and towards “soft skills” – again, reflecting AI’s dominance of education conversations. Co-curricular space has a valuable contribution to make to developing empathy, critical thinking and interacting with other human beings. It is, ultimately, about sharing experiences, and the more we can expand this, the more everyone will benefit.

    Students will have experiences outside of their degree programmes whether we design for it or not, but a renewed emphasis on co-curricular activity would allow them (and us) to understand that formal education settings don’t have a monopoly on learning and development. We worry so much about students being “time-poor”; what happens if we understand this as “experience-rich” instead, and recognise their learning accordingly?

    In an AI-dominated dystopia, the co-curricular might be where we find the last vestiges of human experience in higher education. Being more optimistic, in a Grasshopper-influenced utopia, we’d all have the time to luxuriate in human experience. Co-curricular space provides insight into what this might look like, and gives students ways to develop away from the curriculum that might speak to future possibilities.

    Interested in thinking more about co-curricular experience? At Manchester Met we’re pulling together a cross-sector group of HE professionals working in co-curricular space, and we’d love your input.

    Source link

  • AI and Art Collide in This Engineering Course That Puts Human Creativity First – The 74

    AI and Art Collide in This Engineering Course That Puts Human Creativity First – The 74

    I see many students viewing artificial intelligence as humanlike simply because it can write essays, do complex math or answer questions. AI can mimic human behavior but lacks meaningful engagement with the world.

    This disconnect inspired my course “Art and Generative AI,” which was shaped by the ideas of 20th-century German philosopher Martin Heidegger. His work highlights how we are deeply connected and present in the world. We find meaning through action, care and relationships. Human creativity and mastery come from this intuitive connection with the world. Modern AI, by contrast, simulates intelligence by processing symbols and patterns without understanding or care.

    In this course, we reject the illusion that machines fully master everything and put student expression first. In doing so, we value uncertainty, mistakes and imperfection as essential to the creative process.

    This vision expands beyond the classroom. In the 2025-26 academic year, the course will include a new community-based learning collaboration with Atlanta’s art communities. Local artists will co-teach with me to integrate artistic practice and AI.

    The course builds on my 2018 class, Art and Geometry, which I co-taught with local artists. The course explored Picasso’s cubism, which depicted reality as fractured from multiple perspectives; it also looked at Einstein’s relativity, the idea that time and space are not absolute and distinct but part of the same fabric.

    What does the course explore?

    We begin with exploring the first mathematical model of a neuron, the perceptron. Then, we study the Hopfield network, which mimics how our brain can remember a song from just listening to a few notes by filling in the rest. Next, we look at Hinton’s Boltzmann Machine, a generative model that can also imagine and create new, similar songs. Finally, we study today’s deep neural networks and transformers, AI models that mimic how the brain learns to recognize images, speech or text. Transformers are especially well suited for understanding sentences and conversations, and they power technologies such as ChatGPT.
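    The perceptron that opens the course is simple enough to sketch in a few lines. This is the generic textbook version, not the course's materials; the training data and parameters are illustrative:

    ```python
    # Classic perceptron: a weighted sum, a threshold, and an update
    # rule applied only when the model misclassifies an example.

    def perceptron_train(samples, labels, epochs=20, lr=0.1):
        n = len(samples[0])
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, y in zip(samples, labels):      # y is +1 or -1
                act = sum(wi * xi for wi, xi in zip(w, x)) + b
                pred = 1 if act >= 0 else -1
                if pred != y:                      # update on mistakes only
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    def perceptron_predict(w, b, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

    # Learn the logical AND function, which is linearly separable.
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [-1, -1, -1, 1]
    w, b = perceptron_train(X, y)
    ```

    Everything later in the syllabus, from Hopfield networks to transformers, elaborates on this same idea of learned weighted connections.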

    In addition to AI, we integrate artistic practice into the coursework. This approach broadens students’ perspectives on science and engineering through the lens of an artist. The first offering of the course in spring 2025 was co-taught with Mark Leibert, an artist and professor of the practice at Georgia Tech. His expertise is in art, AI and digital technologies. He taught students fundamentals of various artistic media, including charcoal drawing and oil painting. Students used these principles to create art using AI ethically and creatively. They critically examined the source of training data and ensured that their work respects authorship and originality.

    Students also learn to record brain activity using electroencephalography (EEG) headsets. Through AI models, they then learn to transform neural signals into music, images and storytelling. This work inspired performances where dancers improvised in response to AI-generated music.
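    The article does not describe the course’s actual signal-to-music pipeline, but a simple sonification strategy can be sketched as follows: take a brain-wave-like signal, measure its amplitude in short windows, and map each window to a note in a musical scale. Everything here (the synthetic signal, the pentatonic scale, the window size) is an illustrative assumption.

    ```python
    import math

    # Hypothetical sketch: map a synthetic "brain wave" to musical pitches.
    scale = [60, 62, 64, 67, 69]  # MIDI note numbers, C major pentatonic

    # A decaying 10 Hz oscillation sampled at 256 Hz stands in for an EEG trace.
    signal = [math.sin(2 * math.pi * 10 * t / 256) * math.exp(-t / 256)
              for t in range(1024)]

    notes = []
    for i in range(0, len(signal), 64):      # quarter-second windows
        window = signal[i:i + 64]
        amp = max(abs(s) for s in window)    # peak amplitude in [0, 1]
        idx = min(int(amp * len(scale)), len(scale) - 1)
        notes.append(scale[idx])             # louder window -> higher note

    print(notes)  # pitches fall as the wave decays
    ```

    A real pipeline would replace the synthetic wave with recorded EEG data and likely use a learned model rather than a fixed amplitude-to-pitch rule, but the core idea of mapping signal features to musical parameters is the same.
    
    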

    The Improv AI performance at Georgia Institute of Technology on April 15, 2025. Dancers improvised to music generated by AI from brain waves and sonified black hole data.

    Why is this course relevant now?

    AI entered our lives so rapidly that many people don’t fully grasp how it works, why it works, when it fails or what its mission is.

    In creating this course, our aim is to empower students by filling that gap. Whether they are new to AI or not, the goal is to make its inner algorithms clear, approachable and honest. We focus on what these tools actually do and how they can go wrong.

    We place students and their creativity first. We reject the illusion of a perfect machine; instead, we deliberately provoke the AI algorithm to fail and hallucinate, that is, to generate inaccurate or nonsensical responses. To do so, we use a small dataset, reduce the model size or limit training. It’s in these flawed states of AI that students step in as conscious co-creators. The students are the missing algorithm that takes back control of the creative process. Their creations do not obey the AI but reimagine it through the human hand. The artwork is rescued from automation.
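    The effect of starving a model of data can be shown with a deliberately tiny example. This is not the course’s actual setup, just a toy: a character-level bigram model trained on a single short sentence. With so little data, the model produces strings that look word-like but mean nothing, which is the kind of flawed output students are invited to work with.

    ```python
    import random

    random.seed(0)

    # A deliberately tiny corpus: the model has almost nothing to learn from.
    corpus = "the cat sat on the mat"

    # Bigram table: for each character, which characters ever follow it.
    table = {}
    for a, b in zip(corpus, corpus[1:]):
        table.setdefault(a, []).append(b)

    def generate(start="t", length=20):
        """Sample a string one character at a time from the bigram table."""
        out = [start]
        for _ in range(length - 1):
            out.append(random.choice(table.get(out[-1], [" "])))
        return "".join(out)

    print(generate())  # plausible-looking but meaningless text
    ```

    Shrinking the dataset, the model size or the training budget of a large generative model pushes it into the same regime, only with richer-looking failures.
    
    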

    What’s a critical lesson from the course?

    Students learn to recognize AI’s limitations and harness its failures to reclaim creative authorship. The artwork isn’t simply generated by AI; it’s reimagined by students.

    Students learn that chatbot queries have an environmental cost because large AI models consume a lot of power. They avoid unnecessary iterations when designing prompts or using AI, which helps reduce carbon emissions.

    The Improv AI performance on April 15, 2025, featured dancer Bekah Crosby responding to AI-generated music from brain waves.

    The course prepares students to think like artists. Through abstraction and imagination, they gain the confidence to tackle the engineering challenges of the 21st century. These include protecting the environment, building resilient cities and improving health.

    Students also realize that while AI has vast engineering and scientific applications, ethical implementation is crucial. Understanding the type and quality of training data that AI uses is essential. Without it, AI systems risk producing biased or flawed predictions.

    Uncommon Courses is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
