Tag: magazine

  • Turkish police arrested magazine staff over Muhammad cartoon, but it doesn’t actually depict the prophet

    Last year, FIRE launched the Free Speech Dispatch, a regular series covering new and continuing censorship trends and challenges around the world. Our goal is to help readers better understand the global context of free expression. Want to make sure you don’t miss an update? Sign up for our newsletter.


    Five arrests over cartoon “publicly demeaning religious values”

    Cartoons depicting Muhammad are a common feature in censorship news, but the latest developments out of Turkey are a little unusual in that the magazine involved is adamant that the cartoon under fire…does not actually depict the prophet. 

    On June 30, Turkish police arrested four employees of satirical magazine LeMan on charges of “publicly demeaning religious values,” with one cartoonist also charged with “insulting the president.” They raided the magazine’s office as well and, two weeks later, arrested LeMan’s editor at Istanbul’s airport upon his return from France. The arrests followed an attack on the LeMan office, with a mob breaking open windows and doors.

    The origin of the dispute? A June 26 LeMan edition with an anti-war cartoon depicting two winged men — one depicted as Muslim and introducing himself as Muhammad and the other as Jewish and calling himself Moses — shaking hands as they ascend over a burning city with bombs raining down. The Muhammad character, the magazine said, “is fictionalised as a Muslim killed in Israel’s bombardments” and is named so because it’s the “most commonly given and populous name in the world.”

    The magazine remains adamant its staff is being arrested on the basis of a willful misunderstanding, but for now Turkish officials — including President Erdogan, who called it a “vile provocation” that must be “held accountable before the law” — are intent on prosecution and have seized copies of the edition.

    There’s more free speech news out of Turkey. A new law granted the country’s Presidency of Religious Affairs the authority to ban Quran translations it deems “do not correspond to the basic characteristics of Islam,” including online and audio versions. Meanwhile, a Turkish court blocked some content produced by xAI’s Grok for insulting Erdogan and religious values.

    And Spotify has threatened to leave the Turkish market in part over a censorship dispute with the deputy minister of culture and tourism, who has accused the site of hosting “content that targets our religious and national values and insults the beliefs of our society.” That content apparently includes playlists like “The songs Emine Erdogan listens to while cleaning the palace,” which mocks Erdogan’s wife’s allegedly lavish spending. 

    UK’s free speech controversies online and off — and in American visa policy

    The UK’s free speech issues are nothing new, but this time the U.S. is part of the story, too. UK prosecutors had already announced an investigation into Belfast rap trio Kneecap earlier this year — which, as of last week, has been dropped — but now duo Bob Vylan is on the list. 

    Bob Vylan caught global attention last month with a controversial Glastonbury set that included a “Death, death to the IDF” chant led by the band. Avon and Somerset Police confirmed they were reviewing footage to determine “whether any offences may have been committed that would require a criminal investigation.” Prime Minister Keir Starmer also objected to the “appalling hate speech” and demanded answers from the BBC about its broadcast of the set. Shadow Home Secretary Chris Philp said the BBC “appears to have also broken the law.”

    Then the Trump administration joined in. Deputy Secretary of State Christopher Landau announced shortly after the incident that the U.S. had revoked the visas of Bob Vylan’s members ahead of the band’s upcoming tour. “Foreigners who glorify violence and hatred are not welcome visitors to our country,” he wrote.

    Speech controversies also bloomed outside Glastonbury. UK police have now arrested dozens of demonstrators for attending events opposing the ban on Palestine Action, an activist group restricted under British anti-terrorism legislation for damaging military planes in a protest. Simply “expressing support” for the banned group is a crime. 

    The Wall Street Journal covered the UK’s (and Europe’s) “far and wide” crackdown on speech in a July 7 piece that also discussed the recent targeting of activist Peter Tatchell, arrested by police in London for a “racially and religiously aggravated breach of the peace.” Tatchell’s offense was holding a sign “that criticized Israel for its Gaza campaign as well as Hamas for kidnapping, torturing and executing a 22-year-old.”

    Also, in more unsurprising news, the UK’s troubling Online Safety Act is making its mark on the internet as social media platforms begin the process of age verification for UK-based users. Bluesky users will be required to verify through Kids Web Services or face content blocks and app limitations. Reddit users must verify too, or lose access to categories of material including “content that promotes or romanticizes depression, hopelessness and despair” and “content that promotes violence.”

    And, finally, is the UK getting a government-imposed swear jar? A district council in Kent is considering a £100 fine for swearing in public. That definitely won’t backfire. 

    Fake news, social media for teens, and more in the latest tech and speech developments 

    • Last week, Russian legislators passed rules issuing fines for people who “deliberately searched for knowingly extremist materials,” with heightened fines for those using a VPN to access them. That’s not just censorship of what you say, but also of what you simply try to see.
    • The European Court of Human Rights ruled in Google’s favor in its dispute with Russia over penalties the government issued against the company over its decision not to remove some political content and to suspend a channel tied to sanctions. Russia, it found, “exerted considerable pressure on Google LLC to censor content on YouTube, thereby interfering with its role as a provider of a platform for the free exchange of ideas and information.”
    • The Indian state of Karnataka is considering legislation that would punish fake news, misinformation, and other verboten forms of speech with fines and prison terms up to seven years.
    • India’s Allahabad High Court refused bail to a man who had posted “heavily edited and objectionable” videos of Prime Minister Modi relating to the country’s recent conflict with Pakistan. “Freedom of speech and expression does not stretch to permit a person posting videos and other posts disrespecting the Prime Minister of India,” the court wrote.
    • Brazil’s Supreme Court ruled 8-3 that social media companies will be held liable for failure to monitor and remove “content involving hate speech, racism, and incitement to violence.”
    • German police conducted a search of more than 65 properties in a crackdown on online hate speech, seeking out offenders allegedly engaged in “inciting hatred, insulting politicians and using symbols of terrorist groups or organizations that are considered to be unconstitutional.”
    • Dozens of online gay erotica writers, mostly young women, have been arrested in recent months in China for “producing and distributing obscene material.”
    • The Pakistan Telecommunication Authority has now blocked over 100,000 URLs across the internet for “blasphemous content.”
    • An Australian Administrative Review Tribunal ruling reversed a March order by the country’s eSafety Commissioner requiring X to take down a post from Canadian activist Chris Elston or face a $782,500 fine. Elston had called Teddy Cook, a trans man appointed to a World Health Organization panel, a “woman” who “belong[s] in psychiatric wards.”
    • New guidelines issued by the European Commission press for EU nations’ adoption of tools to verify internet users’ age to protect them against harmful content. The verification methods should be “accurate, reliable, robust, non-intrusive and non-discriminatory” — quite a Herculean feat to expect.
    • China is introducing a new digital ID system transferring the possession of users’ identifying information away from internet companies and into government hands. The process, voluntary at this time, will require users to submit personal information, including a facial scan.

    Former Panamanian president alleges U.S. visa revocation for his political speech 

    Martín Torrijos, a former president of Panama, says the U.S. canceled his visa over his opposition to political agreements made between the two countries. Torrijos suggested his signature on the “National Unity and Defense of Sovereignty” statement, which criticized “expansionist and hegemonic intentions” by the United States, also contributed to the revocation. 

    “I want to emphasize that this is not just about me, neither personally nor in my capacity as former president of the Republic,” Torrijos said. “It is a warning to all Panamanians: that criticism of the actions of the Government of Panama regarding its relations with the United States will not be tolerated.”

    Free press news, from Azerbaijan to Arad 

    • Zimbabwe Independent editor Faith Zaba penned a satirical column about the country’s role in the Southern African Development Community — and was then arrested by police and charged with “undermining the authority of the president.”
    • Yair Maayan, mayor of Israeli city Arad, announced he intended to ban the sale of Haaretz over the newspaper’s investigation into the IDF.
    • Tel Aviv police arrested journalist Israel Frey on suspicion of incitement to terrorism for his response to the death of five IDF soldiers. “The world is a better place this morning, without five young men who partook in one of the most brutal crimes against humanity,” he posted on social media.
    • The Baku Court of Serious Crimes sentenced seven staffers at Azerbaijani investigative outlet Abzas Media to prison terms ranging from seven to more than nine years on various tax and fraud charges. Press freedom advocates say the charges are in retaliation for the outlet’s reporting on presidential corruption.
    • A German court overturned the ban on Alternative for Germany-linked magazine Compact, which Interior Minister Nancy Faeser had called “a central mouthpiece of the right-wing extremist scene.” The court found that the measure was not justified.
    • The Democratic Republic of the Congo’s military arrested journalist Serge Sindani after he shared a photo showing military planes at Bangoka International Airport.
    • At least two journalists were injured during recent protests in Kenya, where the country’s Communications Authority demanded “all television and radio stations to stop any live coverage of the demonstrations” or risk “regulatory action.”
    • Police in Nepal are ignoring a court order and attempting to hunt down and arrest journalist Dil Bhushan Pathak for his reporting alleging political corruption.  

    Changes on the horizon in higher education abroad

    New wide-ranging guidance from the UK’s Office for Students includes the recommendation that universities amend or terminate international partnerships and agreements if necessary to protect the speech rights of their community. This is welcome advice given global higher education’s failure to acknowledge and account for the challenges internationalization has posed to expressive rights, a problem I discuss in my forthcoming book Authoritarians in the Academy, out Aug. 19 and available for pre-order now.

    And, like in the United States, universities in Australia are facing pressure over allegations of campus antisemitism. The nation’s Special Envoy’s Plan to Combat Antisemitism advocates various measures, including adoption of the International Holocaust Remembrance Alliance’s definition and its examples. Universities that “facilitate, enable or fail to act against antisemitism” may face defunding. (FIRE has repeatedly expressed concerns about these applications of the IHRA definition in the U.S. and the likelihood it will censor or chill protected political speech.) The report also advises that non-citizens, which would include international students, “involved in antisemitism should face visa cancellation and removal from Australia.”

  • The Epic, Must-Read Coverage in New York Magazine (Derek Newton)

    Issue 364

    Subscribe below to join 4,663 (+6) other smart people who get “The Cheat Sheet.” New Issues every Tuesday and Thursday.

    The Cheat Sheet is free. Although, patronage through paid subscriptions is what makes this newsletter possible. Individual subscriptions start at $8 a month ($80 annual), and institutional or corporate subscriptions are $250 a year. You can also support The Cheat Sheet by giving through Patreon.

    New York Magazine Goes All-In, And It’s Glorious

    Venerable New York Magazine ran an epic piece (paywall) on cheating and cheating with AI recently. It’s a thing of beauty. I could have written it. I should have. But honestly, I could not have done much better.

    The headline is brutal and blunt:

    Everyone Is Cheating Their Way Through College

    To which I say — no kidding.

    The piece wanders around, in a good way. But I’m going to try to put things in a more collected order and share only the best and most important parts. If I can. Whether I succeed or not, I highly encourage you to go over and read it.

    Lee and Cheating Everything

    The story starts with Chungin “Roy” Lee, the former student at Columbia who was kicked out for selling cheating hacks and then started a company to sell cheating hacks. His story is pretty well known at this point, but if you want to review it, we touched on it in Issue 354.

    What I learned in this story is that, at Columbia, Lee:

    by his own admission, proceeded to use generative artificial intelligence to cheat on nearly every assignment. As a computer-science major, he depended on AI for his introductory programming classes: “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” By his rough math, AI wrote 80 percent of every essay he turned in.

    And:

    “Most assignments in college are not relevant,” [Lee] told me. “They’re hackable by AI, and I just had no interest in doing them.” While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort.

    The article says Lee’s admissions essay for Columbia was AI-generated too.

    So, for all the people who were up in arms that Columbia would sanction a student for building a cheating app, maybe there’s more to it than just that. Maybe Lee built a cheating app because he’s a cheater. And, as such, has no place in an environment based on learning. That said, it’s embarrassing that Columbia did not notice a student in such open mockery of their mission. Seriously, embarrassing.

    Continuing from the story:

    Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing. “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” he said.

    Also embarrassing for Columbia. But seriously, Lee has no idea what he is talking about. Consider this:

    Lee explained to me that by showing the world AI could be used to cheat during a remote job interview, he had pushed the tech industry to evolve the same way AI was forcing higher education to evolve. “Every technological innovation has caused humanity to sit back and think about what work is actually useful,” he said. “There might have been people complaining about machinery replacing blacksmiths in, like, the 1600s or 1800s, but now it’s just accepted that it’s useless to learn how to blacksmith.”

    I already regret writing this — but maybe if Lee had done a little more reading, done any writing at all, he could make a stronger argument. His argument here is that of a precocious eighth grader.

    OpenAI/ChatGPT and Students

    Anyway, here are sections and quotes from the article about students using ChatGPT to cheat. I hope you have a strong stomach.

    As a brief aside, having written about this topic for years now, I cannot tell you how hard it is to get students to talk about this. What follows is the highest quality journalism. I am impressed and jealous.

    From the story:

    “College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.

    More:

    Sarah, a freshman at Wilfrid Laurier University in Ontario, said she first used ChatGPT to cheat during the spring semester of her final year of high school.

    And:

    After getting acquainted with the chatbot, Sarah used it for all her classes: Indigenous studies, law, English, and a “hippie farming class” called Green Industries. “My grades were amazing,” she said. “It changed my life.” Sarah continued to use AI when she started college this past fall. Why wouldn’t she? Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”

    This really is where we are. These students are not outliers.

    Worse, being as clear here as I know how to be — 95% of colleges do not care. At least not enough to do anything about it. They are, in my view, perfectly comfortable with their students faking it, laughing their way through the process, because fixing it is hard. It’s easier to look cool and “embrace” AI than to acknowledge the obvious and existential truth.

    But let’s keep going:

    now, as one student put it, “the ceiling has been blown off.” Who could resist a tool that makes every assignment easier with seemingly no consequences?

    Please mentally underline the “no consequences” part. These are not bad people, the students using ChatGPT and other AI products to cheat. They are making an obvious choice — easy and no penalty versus actual, serious work. So long as this continues to be the equation, cheating will be as common as breathing. Only idiots and masochists will resist.

    Had enough? No? Here:

    Wendy, a freshman finance major at one of the city’s top universities, told me that she is against using AI. Or, she clarified, “I’m against copy-and-pasting. I’m against cheating and plagiarism. All of that. It’s against the student handbook.” Then she described, step-by-step, how on a recent Friday at 8 a.m., she called up an AI platform to help her write a four-to-five-page essay due two hours later.

    Of course. When you ask students if they condone cheating, most say no. Most also say they do not cheat. Then, when you ask about what they do specifically, it’s textbook cheating. As I remember reading in Cheating in College, when you ask students to explain this disconnect, they often say, “Well, when I did it, it was not cheating.” Wendy is a good example.

    In any case, this next section is long, and I regret sharing all of it. I really want people to read the article. But this, like so much of it, is worth reading. Even if you read it here.

    More on Wendy:

    Whenever Wendy uses AI to write an essay (which is to say, whenever she writes an essay), she follows three steps. Step one: “I say, ‘I’m a first-year college student. I’m taking this English class.’” Otherwise, Wendy said, “it will give you a very advanced, very complicated writing style, and you don’t want that.” Step two: Wendy provides some background on the class she’s taking before copy-and-pasting her professor’s instructions into the chatbot. Step three: “Then I ask, ‘According to the prompt, can you please provide me an outline or an organization to give me a structure so that I can follow and write my essay?’ It then gives me an outline, introduction, topic sentences, paragraph one, paragraph two, paragraph three.” Sometimes, Wendy asks for a bullet list of ideas to support or refute a given argument: “I have difficulty with organization, and this makes it really easy for me to follow.”

    Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. “I really like writing,” she said, sounding strangely nostalgic for her high-school English class — the last time she wrote an essay unassisted. “Honestly,” she continued, “I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be? ” But she’d rather get good grades. “An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.”

    I asked Wendy if I could read the paper she turned in, and when I opened the document, I was surprised to see the topic: critical pedagogy, the philosophy of education pioneered by Paulo Freire. The philosophy examines the influence of social and political forces on learning and classroom dynamics. Her opening line: “To what extent is schooling hindering students’ cognitive ability to think critically?” Later, I asked Wendy if she recognized the irony in using AI to write not just a paper on critical pedagogy but one that argues learning is what “makes us truly human.” She wasn’t sure what to make of the question. “I use AI a lot. Like, every day,” she said. “And I do believe it could take away that critical-thinking part. But it’s just — now that we rely on it, we can’t really imagine living without it.”

    Unfortunately, we’ve read this before. Many times. Use of generative AI to outsource the effort of learning is rampant.

    Want more? There’s also Daniel, a computer science student at the University of Florida:

    AI has made Daniel more curious; he likes that whenever he has a question, he can quickly access a thorough answer. But when he uses AI for homework, he often wonders, If I took the time to learn that, instead of just finding it out, would I have learned a lot more? At school, he asks ChatGPT to make sure his essays are polished and grammatically correct, to write the first few paragraphs of his essays when he’s short on time, to handle the grunt work in his coding classes, to cut basically all cuttable corners. Sometimes, he knows his use of AI is a clear violation of student conduct, but most of the time it feels like he’s in a gray area. “I don’t think anyone calls seeing a tutor cheating, right? But what happens when a tutor starts writing lines of your paper for you?” he said.

    When a tutor starts writing your paper for you and you turn that paper in for credit, that’s cheating. This is not complicated. People who sell cheating services and the people who buy them want to make it seem complicated. It’s not.

    And the Teachers

    Like the coverage of students, the article’s work with teachers is top-rate. And what they have to say is not one inch less important. For example:

    Brian Patrick Green, a tech-ethics scholar at Santa Clara University, immediately stopped assigning essays after he tried ChatGPT for the first time. Less than three months later, teaching a course called Ethics and Artificial Intelligence, he figured a low-stakes reading reflection would be safe — surely no one would dare use ChatGPT to write something personal. But one of his students turned in a reflection with robotic language and awkward phrasing that Green knew was AI-generated. A philosophy professor across the country at the University of Arkansas at Little Rock caught students in her Ethics and Technology class using AI to respond to the prompt “Briefly introduce yourself and say what you’re hoping to get out of this class.”

    Students are cheating — using AI to outsource their expected learning labor — in a class called Ethics and Artificial Intelligence. And in an Ethics and Technology class. At what point does reality’s absurdity outpace our ability to even understand it?

    Also, as I’ve been barking about for some time now, low-stakes assignments are probably more likely to be cheated on than high-stakes ones (see Issue 64). I don’t understand why professional educators don’t get this.

    But returning to the topic:

    After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,”

    To read about Jollimore’s outstanding essay, see Issue 346.

    And, of course, there’s more. Like the large section above, I regret copying so much of it, but it’s essential reading:

    Many teachers now seem to be in a state of despair. In the fall, Sam Williams was a teaching assistant for a writing-intensive class on music and social change at the University of Iowa that, officially, didn’t allow students to use AI at all. Williams enjoyed reading and grading the class’s first assignment: a personal essay that asked the students to write about their own music tastes. Then, on the second assignment, an essay on the New Orleans jazz era (1890 to 1920), many of his students’ writing styles changed drastically. Worse were the ridiculous factual errors. Multiple essays contained entire paragraphs on Elvis Presley (born in 1935). “I literally told my class, ‘Hey, don’t use AI. But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out,’” Williams said.

    Williams knew most of the students in this general-education class were not destined to be writers, but he thought the work of getting from a blank page to a few semi-coherent pages was, above all else, a lesson in effort. In that sense, most of his students utterly failed. “They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays. And I get it, because I hated writing essays when I was in school,” Williams said. “But now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.”

    By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. “I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.”

    The “true attempt at a paper” policy ruined Williams’s grading scale. If he gave a solid paper that was obviously written with AI a B, what should he give a paper written by someone who actually wrote their own paper but submitted, in his words, “a barely literate essay”? The confusion was enough to sour Williams on education as a whole. By the end of the semester, he was so disillusioned that he decided to drop out of graduate school altogether. “We’re in a new generation, a new time, and I just don’t think that’s what I want to do,” he said.

    To be clear, the school is ignoring the obvious use of AI by students to avoid the work of learning — in violation of stated policies — and awarding grades, credit, and degrees anyway. Nearly universally, we are meeting lack of effort with lack of effort.

    More from Jollimore:

    He worries about the long-term consequences of passively allowing 18-year-olds to decide whether to actively engage with their assignments.

    I worry about that too. I really want to use the past tense there — worried about. I think the age of active worry about this is over. Students are deciding what work they think is relevant or important — which I’d wager is next to none of it — and using AI to shrug off everything else. And again, the collective response of educators seems to be — who cares? Or, in some cases, to quit.

    More on professors:

    Some professors have resorted to deploying so-called Trojan horses, sticking strange phrases, in small white text, in between the paragraphs of an essay prompt. (The idea is that this would theoretically prompt ChatGPT to insert a non sequitur into the essay.) Students at Santa Clara recently found the word broccoli hidden in a professor’s assignment. Last fall, a professor at the University of Oklahoma sneaked the phrases “mention Finland” and “mention Dua Lipa” in his. A student discovered his trap and warned her classmates about it on TikTok. “It does work sometimes,” said Jollimore, the Cal State Chico professor. “I’ve used ‘How would Aristotle answer this?’ when we hadn’t read Aristotle. But I’ve also used absurd ones and they didn’t notice that there was this crazy thing in their paper, meaning these are people who not only didn’t write the paper but also didn’t read their own paper before submitting it.”

    You can catch students using ChatGPT, if you want to. There are ways to do it, ways to limit it. And I wish the reporter had asked these teachers what happened to the students who were discovered. But I am sure I know the answer.

    I guess also, I apologize. Some educators are engaged in the fight to protect and preserve the value of learning things. I feel that it’s far too few and that, more often than not, they are alone in this. It’s depressing.

    Odds and Ends

    In addition to its excellent narrative about how bad things actually are in a GPT-corrupted education system, the article has a few other bits worth sharing.

    This is pretty great:

    Before OpenAI released ChatGPT in November 2022, cheating had already reached a sort of zenith. At the time, many college students had finished high school remotely, largely unsupervised, and with access to tools like Chegg and Course Hero. These companies advertised themselves as vast online libraries of textbooks and course materials but, in reality, were cheating multi-tools. For $15.95 a month, Chegg promised answers to homework questions in as little as 30 minutes, 24/7, from the 150,000 experts with advanced degrees it employed, mostly in India. When ChatGPT launched, students were primed for a tool that was faster, more capable.

    Mentioning Chegg and Course Hero by name is strong work. Cheating multi-tools is precisely what they are.

    I thought this was interesting too:

    Students talk about professors who are rumored to have certain thresholds (25 percent, say) above which an essay might be flagged as an honor-code violation. But I couldn’t find a single professor — at large state schools or small private schools, elite or otherwise — who admitted to enforcing such a policy. Most seemed resigned to the belief that AI detectors don’t work. It’s true that different AI detectors have vastly different success rates, and there is a lot of conflicting data. While some claim to have less than a one percent false-positive rate, studies have shown they trigger more false positives for essays written by neurodivergent students and students who speak English as a second language.

    I have a few things to say about this.

    Students talk to one another. Remember a few paragraphs up where a student found the Trojan horse and posted it on social media? When teachers make efforts to stop cheating, to try catching disallowed use of AI, word gets around. Some students will try harder to get away with it. Others won’t try to cheat, figuring the risk isn’t worth it. Simply trying to stop it, in other words, will stop at least some of it.

    I think it’s true that most teachers believe AI detectors don’t work. It’s not just teachers. Entire schools believe this. It’s an epic failure of messaging, an astonishing triumph of the misinformed. The truth, as reported above, is that detectors vary. Some are great. Some are junk. But the good ones work. Most people still don’t believe it.

    And I’ll point out once again that the “studies have shown” thing is complete nonsense. As far as I have seen, exactly two studies have shown this, and both are deeply flawed. The one most often cited has made-up citations and research that is highly suspicious, which I pointed out in 2023 (see Issue 216). Frankly, I’ve not seen any good evidence to support this idea. As journalism goes, that’s a big miss in this story. It’s little wonder teachers think AI detectors don’t work.
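To be fair to the skeptics, even a detector with a genuinely low false-positive rate produces wrongful flags at scale. A back-of-the-envelope sketch (the class size here is my own illustrative number, not from the article):

```python
def expected_false_flags(num_essays, false_positive_rate):
    """Expected number of human-written essays wrongly flagged as AI.

    Assumes essays are flagged independently at the stated rate.
    """
    return num_essays * false_positive_rate

# Even at a claimed 1 percent false-positive rate, a department grading
# 500 human-written essays should expect about 5 wrongful flags.
```

That arithmetic is why a detector score should open a conversation, not close a case; it is not, by itself, evidence that detection is useless.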

    On the subject of junk AI detectors, there’s also this:

    I fed Wendy’s essay through a free AI detector, ZeroGPT, and it came back as 11.74 percent AI-generated, which seemed low given that AI, at the very least, had generated her central arguments. I then fed a chunk of text from the Book of Genesis into ZeroGPT and it came back as 93.33 percent AI-generated.

    This is a failure to understand how AI detection works. But ZeroGPT also simply does not work. Again, it’s no wonder that teachers think AI detection does not work.

    Continuing:

    It’s not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students’ essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.

    I don’t have nearly the bandwidth to get into this. But — sure. I have no doubt.

    Finally, I am not sure if I missed this at the time, but this is important too:

    In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments. In its first year of existence, ChatGPT’s total monthly visits steadily increased month-over-month until June, when schools let out for the summer. (That wasn’t an anomaly: Traffic dipped again over the summer in 2024.) Professors and teaching assistants increasingly found themselves staring at essays filled with clunky, robotic phrasing that, though grammatically flawless, didn’t sound quite like a college student — or even a human. Two and a half years later, students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education.

    As I have said before, OpenAI is not your friend (see Issue 308). It’s a cheating engine. It can be used well, and ethically. But so can steroids. So could OxyContin. It’s possible to be handed the answers to every test you’ll ever take and not use them. But it is delusional to think any significant number of people don’t.

    All wrapped up, this is a show-stopper of an article and I am very happy for the visibility it brings. I wish I could feel that it will make a difference.


  • Your alumni magazine is a source of marketing gold

    In a time of skyrocketing paper and postage costs, alumni magazines are paradoxically enjoying a renaissance. After cutting back—or cutting down—print issues during the pandemic, many institutions are now pushing for expanded page counts, more copies, better photography, multimedia extras and more institutional support.

    Why?

    Because audiences appreciate the thought-provoking content and the tangible, premium reminder of the enduring connection with their alma mater. In a 2024 CASE readership survey, 68 percent of TCU Magazine’s readers reported spending 30 minutes or more with every issue. Almost half reported that the magazine was a go-to source for continuing education.

    Journalists are pouring their passion and experience into institutional magazines because higher education offers glimmers of hope in an increasingly dark world. These magazines highlight purpose-driven students who will tackle the problems of the future and brilliant faculty whose research is providing innovative solutions to the planet’s most pressing challenges.

    Our readership analytics at TCU Magazine have long shown a strong audience appetite for well-researched and carefully written and edited feature stories about forward momentum and its relationship to education. Since 2015, our overall page views have experienced an astounding 1,300 percent growth. That number sounds outlandish, but I can assure you it is accurate.

    Our alumni, parents, donors and internal stakeholders are and always have been the primary audiences. But they aren’t the only people who want to know about the students, faculty, staff and initiatives that thrive on our campus. TCU Magazine’s stories are crafted to be relevant far beyond our campus community and long after the initial date of publication.

    In 2021, when all the rules were being rewritten, we proposed a partnership with our colleagues in marketing. We suggested a trial run of using existing magazine stories as peer marketing material, promoting those features to internet users who live in the proximity of the country’s top 150 colleges and universities. The goal was for other professionals in higher education to learn about TCU beyond our exceptional student experience and athletic success.

    TCU’s marketing director agreed that long-form content could run alongside more traditional digital marketing materials. Why not? Serving stories about improving teacher retirement plans; developing free, open-source digital mapping tools; or better understanding mutations in the BRCA gene benefits us and all manner of readers.

    Audiences learn something new and interesting about how research is shaping the future, and we achieve our goal of enhancing TCU’s academic reputation.

    Win-win.

    Together, we built a partnership with a digital marketing agency based in Fort Worth. With their expert guidance, we got a crash course in the differences between Google Display Network and SEM keywords, Demand Gen ad placements, bidding strategies, and the wisdom of narrowing ad placements in social media feeds.

    We launched our first joint academic content campaign in April 2021 with a modest investment. The results were promising: In two months, we got the TCU initials in front of more than six million people around the country and enticed 87,000 of those people to click on the ad and come to the website to read the story.

    Best of all, these were what we refer to as quality clicks, because the average reader spent almost two minutes on one of our stories, far above the internet’s long-form content average of less than 40 seconds. That small trial convinced our divisional leaders that magazine material could be marketing gold.

    We didn’t need to reinvent the wheel or invest in outside development of marketing-specific content because we had a treasure trove already flowing from a steady creative stream inside our office.

    We expanded the efforts in 2022, sharing new stories with 10.5 million pairs of eyes and bringing 116,000 more people to our site to learn about TCU research. That year, we got an email from Puerto Rico about French professor Benjamin Ireland’s research reuniting families torn apart during forced internment during World War II. “I am not sure why Facebook ‘promoted’ your article to me this morning,” the effusive author shared, “but something made me click to read more.”

    We’ve continued to grow these campaigns. Though our mission at the magazine is and always will be to serve the TCU community first, we now factor in whether a proposed story might have a broader impact or might help us tell a more expansive tale about how the type of ethical leadership that flourishes here makes the world a better place.

    My opinion is that these campaigns have worked because they’re a perfect merger of marketing and communication. We’re doing what magazine writers and editors have always done—telling authentic stories about real people doing purpose-driven work.

    What’s not to like?

    Caroline Collier is director of editorial services at Texas Christian University and editor of TCU Magazine.
