Tag: Events

  • Europe Must Do More to Protect Data Under Trump

    Europe “needs to do more” to protect scientific data threatened by the Trump administration, the president of the European Research Council has said.

    Speaking at the Metascience 2025 conference in London, Maria Leptin said such data is in a “very precarious” position. Since Donald Trump began his second term as U.S. president, researchers have raced to archive or preserve access to U.S.-hosted data sets and other resources at risk of being taken down as the administration targets research areas including public health, climate and fields considered to be related to diversity.

    “We’ve heard the situation from the U.S. where some data are disappearing, where databases are being stopped, and this is really a wake-up call that we as a community need to do more about this and Europe needs to do more about it,” Leptin said.

    The ERC president highlighted the Global Biodata Coalition, which aims to “safeguard the world’s open life science, biological and biomedical reference data in perpetuity,” noting that the European Commission recently published a call to support the initiative.

    “Medical research critically depends on the maintenance and the availability of core data resources, and that is currently at risk. Some of these resources may disappear,” she said. “I really encourage all policymakers and funders to join the coalition.”

    “Right now is the worst time to not have access to data in view of the power of AI and the advances in computing, large language models, et cetera,” Leptin told the conference, noting that the Trump administration is not the only threat to accessible data. “The value of the data that are held across Europe is unfortunately massively reduced because of fragmentation, siloing, and uneven access.”

    A recent ERC workshop involving researchers, policymakers, industry representatives and start-ups raised some “shocking” concerns about health data, she added. “Even in the same town where researchers wanted to access the huge numbers of data that the hospitals in that town had, it was impossible because the hospitals couldn’t even share data with each other, because they used totally different data formats.”

    Boosting access to data will require “a huge effort,” Leptin acknowledged. “We of course need technical, legal and financial frameworks that make this possible and practical, [as well as] interoperable formats and common standards.”

    While not a data infrastructure in itself, the ERC “has a role to play” in improving accessibility, she said. “What we try to do is to set expectations around good data practices.”

    “We do need European-level solutions,” Leptin stressed. “The scientific questions we face, whether in climate or health or technology or [other fields], don’t stop at national borders—in fact, they are global.”

  • Texas Law School Deans Fight to Keep ABA Accreditation

    A group of Texas law school deans is urging the state Supreme Court to uphold American Bar Association accreditation standards for public law schools. The state’s highest court announced in April that it was considering dropping the ABA requirement for licensure, opening a public comment period on the matter that closed July 1.

    “We strongly support continued reliance on ABA accreditation for Texas law schools and licensure eligibility,” the deans of eight of the state’s 10 ABA-accredited law schools wrote in a letter to the Texas Supreme Court. “ABA accreditation provides a nationally recognized framework for quality assurance and transparency; portability of licensure through recognition of ABA accreditation by all 50 states, which is critical for graduates’ career flexibility; consumer protections and public accountability through disclosure standards; and a baseline of educational quality that correlates with higher bar passage rates and better employment outcomes.”

    Though the Texas justices did not say why they were reviewing ABA accreditation, the law deans’ letter noted that the body has already suspended its DEI standards—a move it announced in February and then extended in May through Aug. 31, 2026. That means “the language of the Standard can be revised in accordance with federal constitutional law and Texas state law that bar certain diversity, equity and inclusion practices at state universities,” the deans wrote.

    Of the state’s ABA-accredited law schools’ deans, only Robert Chesney of the University of Texas and Robert Ahdieh of Texas A&M didn’t sign the letter, Reuters reported.

    In his own nine-page letter to the state Supreme Court, Chesney urged the justices to look at “alternative” pathways for ensuring law school standards “to help pave the way for innovative, lower-cost approaches to legal education.”

    Ahdieh told Reuters that whatever the court decides about ABA accreditation, it’s “critical” that law degrees earned in Texas remain portable.

  • Feds Target Harvard’s Accreditation, Foreign Student Records

    In the latest volley in the Trump administration’s war with Harvard University, federal agencies told Harvard’s accreditor the university is violating antidiscrimination laws, while Immigration and Customs Enforcement will subpoena Harvard’s “records, communications, and other documents relevant to the enforcement of immigration laws since January 1, 2020.”

    The Departments of Education, Health and Human Services, and Homeland Security announced these moves Wednesday in news releases replete with condemnations from cabinet officials. The pressure comes as Harvard still refuses to bow to all of the Trump administration’s demands from April, which include banning admission of international students “hostile to the American values and institutions inscribed in the U.S. Constitution and Declaration of Independence, including students supportive of terrorism or anti-Semitism.” In May, DHS tried to stop Harvard from enrolling international students by stripping it of its Student and Exchange Visitor Program certification, but a judge has blocked that move.

    Education Secretary Linda McMahon said in a Wednesday statement, “By allowing antisemitic harassment and discrimination to persist unchecked on its campus, Harvard University has failed in its obligation to students, educators, and American taxpayers. The Department of Education expects the New England Commission of Higher Education to enforce its policies and practices.” (Only the accreditor can find a college in violation of its policies.)

    Trump officials said last week that Harvard is violating Title VI of the Civil Rights Act of 1964, which prohibits discrimination based on shared ancestry, including antisemitism. They notified the accrediting agency of the HHS Office for Civil Rights’ finding that Harvard is displaying “deliberate indifference” to discrimination against Jewish and Israeli students.

    HHS’s Notice of Violation said multiple sources “present a grim reality of on-campus discrimination that is pervasive, persistent, and effectively unpunished.” Wednesday’s release from HHS said the investigation grew from a review of Harvard Medical School “based on reports of antisemitic incidents during its 2024 commencement ceremony,” into a review of the whole institution from Oct. 7, 2023, through the present.

    HHS Secretary Robert F. Kennedy Jr. said that “when an institution—no matter how prestigious—abandons its mission and fails to protect its students, it forfeits the legitimacy that accreditation is designed to uphold. HHS and the Department of Education will actively hold Harvard accountable through sustained oversight until it restores public trust and ensures a campus free of discrimination.”

    The Trump administration also notified Columbia University’s accreditor after it concluded Columbia committed a similar violation of federal civil rights law. The accreditor, the Middle States Commission on Higher Education, then told Columbia that its accreditation could be in jeopardy.

    DHS’s subpoena announcement is the latest move in its targeting of Harvard over its international students, who comprise more than a quarter of its enrollment.

    DHS Assistant Secretary Tricia McLaughlin said in a release, “We tried to do things the easy way with Harvard. Now, through their refusal to cooperate, we have to do things the hard way. Harvard, like other universities, has allowed foreign students to abuse their visa privileges and advocate for violence and terrorism on campus.”

    DHS didn’t provide Inside Higher Ed information on what specific records ICE is subpoenaing. It said in its release that “this comes after the university repeatedly refused past non-coercive requests to hand over the required information for its Student Visitor and Exchange Program [sic] certification.”

    The release said DHS Secretary Kristi Noem “demanded Harvard provide information about the criminality and misconduct of foreign students on its campus” back in April. The release further said that other universities “should take note of Harvard’s actions, and the repercussions, when considering whether or not to comply with similar requests.”

    Harvard pushed back in statements of its own Wednesday. It called the DHS subpoenas “unwarranted” but said it “will continue to cooperate with lawful requests and obligations.”

    “The administration’s ongoing retaliatory actions come as Harvard continues to defend itself and its students, faculty, and staff against harmful government overreach aimed at dictating whom private universities can admit and hire, and what they can teach,” one Harvard statement said. “Harvard remains unwavering in its efforts to protect its community and its core principles against unfounded retribution by the federal government.”

    If Harvard were to lose its accreditation, it would be cut off from federal student aid. In another statement, Harvard officials said the university is complying with the New England Commission of Higher Education’s standards, “maintaining its accreditation uninterrupted since its initial review in 1929.”

    Neither the Trump administration nor Larry Schall, president of NECHE, provided the letter the administration wrote to the commission. Schall told Inside Higher Ed the commission will request a response from Harvard within 30 days and that, plus the results of the federal investigation, will be presented to the commission at its next regularly scheduled meeting, currently set for September.

    “We have processes we follow,” Schall said. “We follow them whether it’s Harvard or some other institution … Our processes are consistent and actually directed by federal regulation.”

  • Keep in Mind That AI Is Multimodal Now

    Remember in late 2022 when ChatGPT arrived on the international scene and you communicated with AI through a simple chat bot interface? It was remarkable that you could type in relatively short prompts and it would instantly type back directly to you—a machine with communication capability!

    For most of us, this remains the most common daily mode of accessing and using AI, and many of us use AI only as a replacement for Google Search. In fact, Google’s AI Overviews, announced last year for a significant portion of users and search queries, are now a standard feature. As of mid-June 2025, these overviews are a common sight at the top of search results pages; the old list of links appears only below them, after the overview has invited you to follow up with a deeper dive. Yet the whole world of communication is now open for most of the frontier AI models, and with the new communication modes comes a whole world of possibilities.

    To more fully utilize the remarkable range of AI’s capabilities today, we need to become comfortable with the many input and output modes that are available. From audio, voice, images and stunning video to long, formally formatted documents, spreadsheets, computer code, databases and more, the potential to input and output material goes well beyond what most of us realize. That is not to mention the emerging potential of embodied AI, which includes all of these capabilities in a humanoid form, as discussed in this column two weeks ago.

    So, what can AI do with images and videos? Of course, you can upload still photographs and instruct AI to edit them, adding or deleting objects within the image. Many apps do this exceptionally well. This does raise questions about deepfakes: images shared as if they were real when they have actually been altered by AI in an attempt to mislead the public. Most such images do carry a watermark indicating that the image was generated or altered by AI. However, there are watermark removers that will wash away those well-intended alerts.

    One example of using the image capability of AI is in the app PictureThis, which describes itself as a “botanist in your pocket.” As one would expect, you can upload a picture from your smartphone and it will identify the plant. It will also provide a diagnosis of any conditions or diseases that it can determine through the image, offer care suggestions such as optimal lighting and watering, point out toxicity to humans and pets, and provide tips on how to help your plant thrive. In education, we can utilize AI to provide these kinds of services to learners who simply take a snapshot of their work.

    We can build upon the PictureThis example to create a kind of “professor in your pocket” that offers enhanced responses to images that might include, for example, an attempt to solve a mathematical problem, develop a chemistry formula or create an outline for an essay. The student may simply take a smartphone photo or screenshot of their work and share it with the app, which will respond with what may be right and wrong in the work as well as suggest further research and context that will be helpful.

    Many of us are in positions where we need to construct spreadsheets, PowerPoint presentations and more formal reports with cover pages, tables of contents, citations and references. AI stands ready to convert data, text and free-form writing into perfectly formatted final products. Use the upload icon that is commonly located near the prompt window in ChatGPT, Gemini, Claude or other leading models to upload your material for analysis or formatting. Gemini, a Google product, has direct connections with Google apps.

    Many of these features are available on the free tier of the products. Most major AI companies offer a subscription tier for around $20 per month that provides limited access to higher levels of their products. In addition, there are business, enterprise, cloud and API tiers that serve organizations and developers. As a senior fellow conducting research, I maintain a couple of subscriptions that let me move seamlessly through my work process: from ideation to content creation, from content creation to enhancing the research with creative concepts and, finally, to developing a formal final report.

    Using the pro versions gives access to deep research tools in most cases. This mode provides far more “thinking” by the AI tool, which can conduct more extensive web-based research, generate novel ideas and pursue alternative approaches, with extensive documentation, analysis and graphical output in the form of tables, spreadsheets and charts. Using a combination of these approaches, one can assemble a thoughtful deep dive into a current or emerging topic.

    AI can also provide effective “brainstorming” that integrates deep insights into the topics being explored. One currently free tool is Stanford University’s Storm, a research prototype that supports interactive research and creative analyses. Storm assists with article creation and development and offers an intriguing roundtable conversation that enables several virtual and human participants to join in the brainstorming from distant locations.

    This has tremendous potential for sparking interactive debates and discussions among learners that can include AI-generated participants. I encourage faculty to consider using this tool as a developmental activity for learners to probe deeply into topics in your discipline as well as to provide experience in collaborative virtual discussions that presage experiences they may encounter when they enter or advance in the workforce.

    In general, we are underutilizing not only the analytical and composition capabilities of AI but also the wealth of multimodal capabilities of these tools. Depending upon your needs, these tools offer both input and output capabilities across audio, video, images, spreadsheets, code, graphics and multimedia combinations. The key to developing skill with these tools most effectively is to incorporate their time-saving and illustrative capabilities into your daily work.

    So, if you are writing a paper and have some data to include, try out an AI app to generate a spreadsheet and choose the best chart to clarify and emphasize trends. If you need a modest app to perform a repetitive function for yourself or for others, for example, computing the mean, mode and standard deviation of a data set, you can describe the desired inputs and outputs to AI and prompt it to create the code for you (see the sketch below). Perhaps you want to create a short video clip as a simulation of how a new process might work; AI can do that from a description of the scene that you provide. If you want to create a logo for a prospective project, initiative or other activity, AI will give you a variety of custom-created logos. In all cases, you can ask for revisions and alterations. Think of AI as a dedicated assistant with multimedia skills who is eager to help you with these tasks. If you are not sure how to get started, of course, just ask AI.
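    To make the statistics example concrete, here is a minimal sketch, in Python, of the kind of script such a prompt might yield. The function name and the sample data are illustrative assumptions, not output from any particular AI tool.

    ```python
    # A small tool of the sort you might ask AI to write: report the
    # mean, mode and standard deviation of a list of numbers.
    from statistics import mean, mode, stdev

    def describe(values):
        """Return basic descriptive statistics for a list of numbers."""
        return {
            "mean": mean(values),    # arithmetic average
            "mode": mode(values),    # most frequent value
            "stdev": stdev(values),  # sample standard deviation
        }

    if __name__ == "__main__":
        scores = [88, 92, 75, 92, 67, 84, 92, 78]  # hypothetical quiz scores
        for name, value in describe(scores).items():
            print(f"{name}: {value:.2f}")
    ```

    Whatever code the AI returns, it is worth reading a short version like this before relying on it; spotting a mistake in a dozen lines is far easier than auditing a black box.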

  • The Hidden Curriculum of Student Conduct Proceedings

    For first-generation students, the hidden curriculum—the unstated norms, policies and expectations students need to know in higher education—can be a barrier to participating in high-impact practices, leaving them in the dark about how to thrive in college.

    But new research aims to identify the lesser-known policies that disadvantage first-generation students and to make them more accessible. During a panel presentation at NASPA’s Student Success in Higher Education conference in June, Kristin Ridge, associate dean of students and community standards at the University of Rhode Island, discussed her doctoral research on first-generation students and how they interact with the student handbook and conduct spaces on campus.

    What’s the need: First-generation students make up 54 percent of all undergraduates in the U.S., or about 8.2 million students. But only one in four first-generation students graduates with a college degree, compared to nearly 60 percent of continuing-generation students.

    First-generation students are often diverse in their racial and ethnic backgrounds and come with a variety of strengths, which scholar Tara Yosso describes in her community cultural wealth model. But in some areas, including higher ed’s bureaucratic processes, first-gen students can lack family support and guidance to navigate certain situations, Ridge said. Her personal experience as a first-generation learner and a conduct officer pushed her to research the issue.

    “It really came to a head when I was dealing with two students who had a similar circumstance, and I felt like one had a better grasp of what was going on than the other one, and that was something that didn’t sit right with me,” Ridge said. “I felt like the behavior should be what I am addressing and what the students are learning from, not their previous family of origin or lived experience.”

    Conduct systems are complicated because they require a fluency to navigate the bureaucracy, Ridge said. Student handbooks are often written like legal documents, but the goal of disciplinary proceedings is for students to learn from their behavior. “If a student doesn’t understand the process or the process isn’t accessible to them, there are very real consequences that can interrupt their educational journey,” she added.

    Some states require conduct sanctions to be placed on a student’s transcript or noted in a dean’s report for transfer applications. These sanctions can result in debt, stranded credits or underemployment if students are unable to transfer or earn a degree.

    “Sometimes [continuing-generation] students who have parents or supporters can better understand what the implications of a sanction would be,” Ridge says. “Students who don’t have that extra informed support to lean on may unwittingly end up with a sanction that has more long-term impact than they realize.”

    First-generation students may also experience survivor’s or breakaway guilt for having made it to college, which can result in them being less likely to turn to their families for help if they break the student code of conduct or fear they will be expelled for their actions, Ridge said.

    Therefore, colleges and universities should seek to create environments that ensure all students are aware of conduct procedures, the content of the student handbook and how to receive support and advocacy from both the institution and their communities, Ridge said.

    Creating solutions: Some key questions conduct staff members can ask themselves, Ridge said, include:

    • Is the handbook easy to access, or is it hidden behind a login or pass code? If students or their family members or supporters have to navigate additional steps to read the student handbook, it limits transparency and opportunities for support.
    • Is content available in plain English or as an FAQ page? While institutions must outline some expectations in specific language for legal reasons, ensuring all students understand the processes increases transparency. “I like to say I want [students] to learn from the process, not feel like the process happened to them,” Ridge said.
    • Is the handbook available in other languages? Depending on the student population, offering the handbook in additional languages can address equity concerns about which families can support their students. Hispanic-serving institutions, for example, should offer the handbook in Spanish, Ridge said.
    • Who is advocating for students’ rights in conduct conversations? Some institutions offer students a conduct adviser, which Ridge says should be an opt-in rather than opt-out policy.
    • Is conduct addressed early in the student experience? Conduct is not a fun office; “no one’s going to put us on a parade float,” Ridge joked. That’s why it’s vital to ensure that students receive relevant information when they transition into the institution, such as during orientation. “My goal is for them to feel that they are holding accountability for their choices, that they understand and learn from the sanctions or the consequences, but I don’t want them to be stressed about the process,” Ridge said. Partnering with campus offices, such as TRIO or Disability Services, can also ensure all students are aware of conduct staff and the office is seen less as punitive.

  • Essay on Faculty Engagement and Web Accessibility (opinion)

    Inaccessible PDFs are a stubborn problem. How can we marshal the energy within our institutions to make digital course materials more accessible—one PDF, one class, one instructor at a time?

    Like many public higher education institutions, William & Mary is working to come into compliance with the Web Content Accessibility Guidelines by April 2026. These guidelines aim to ensure digital content is accessible to people who rely on screen readers and other assistive technology, which requires that content be machine-readable.

    Amid a flurry of other broad institutional efforts to comply with the federal deadline, my colleague—coordinator of instruction for libraries Liz Bellamy—and I agreed to lead a series of workshops designed to help instructors improve the accessibility of their digital course materials. We’ve learned a lot along the way that we hope can be instructive to other institutions engaged in this important work.

    What We’ve Tried

    Our first big hurdle wasn’t technical—it was cultural, structural and organizational. At the same time that various groups across campus were addressing digital accessibility, William & Mary had just moved our learning management system from Blackboard Learn to Blackboard Ultra, we were beginning the rollout of new campuswide enterprise software for several major institutional areas, the institution had achieved R-1 status and everyone had so many questions about generative AI. Put plainly, instructors were overwhelmed, and inaccessible PDFs were only one of many priorities vying for their attention.

    To tackle the issue, a group of institutional leaders launched the “Strive for 85” campaign, encouraging instructors to raise their scores in Blackboard Ally, which provides automated feedback to instructors on the accessibility of their course materials, to 85 percent or higher. The idea was simple—make most course content accessible, starting with the most common problem: PDFs that are not machine-readable.
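    To make “machine-readable” concrete: a PDF clears that bar when its pages carry an actual text layer rather than only scanned page images. Below is a minimal sketch, assuming the open-source pypdf library, of how one might flag files that need OCR or remediation. It illustrates the underlying idea only; it is not part of Blackboard Ally or our campus tooling, and the file names are hypothetical.

    ```python
    # Flag PDFs that have no extractable text layer (likely scanned images).
    from pypdf import PdfReader

    def is_machine_readable(path):
        """Return True if any page of the PDF yields extractable text."""
        reader = PdfReader(path)
        return any((page.extract_text() or "").strip() for page in reader.pages)

    if __name__ == "__main__":
        for pdf in ["syllabus.pdf", "scanned_reading.pdf"]:  # hypothetical files
            status = "OK" if is_machine_readable(pdf) else "needs OCR/remediation"
            print(f"{pdf}: {status}")
    ```

    Tools like Ally automate this kind of check, along with many subtler ones, and score course content accordingly.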

    We kicked things off at our August 2024 “Ready, Set, Teach!” event, offering workshops and consultations. Instructors learned how to find and use their Ally reports, scan and convert PDFs, and apply practical strategies to improve digital content accessibility. In the year that followed, we tried everything we could think of to keep the momentum going and move the needle on our institutional Ally score above the baseline. Despite our best efforts, some approaches fell flat:

    • Let’s try online workshops! Low engagement.
    • What about in-person sessions? Low attendance.
    • But what if we feed them lunch? Low attendance, now with a fridge full of leftovers.
    • OK, what if we reach out to department chairs and ask to speak in their department meetings? It turns out department meeting agendas are already pretty full; response rates were … low (n = 1).

    The truth is, instructors are busy. Accessibility often feels like one more thing on an already full plate. So far, our greatest success stories have come from one-on-one conversations and by identifying departmental champions—instructors who will model and advocate for accessible practices with discipline-specific solutions. (Consider the linguistics professor seeking an accurate 3-D model of the larynx collaborating with a health sciences colleague, who provided access to an interactive model from an online medical textbook—enhancing accessibility for students learning about speech production.)

    But these approaches require time and people power we don’t always have. Despite the challenges we’ve faced with scaling our efforts, when success happens, it can feel a little magical, like the time at the end of one of our highly attended workshops (n = 2) when a previously skeptical instructor reflected, “So, it sounds like accessibility is about more than students with disabilities. This can also help my other students.”

    What We’ve Learned

    Two ingredients seem essential:

    1. Activation energy: Instructors need a compelling reason to act, but they also need a small step to get started; otherwise, the work can feel overwhelming.

    Sometimes this comes in the form of an individual student disclosing their need for accessible content. But often, college students (especially first year or first generation) don’t disclose disabilities or feel empowered to advocate for themselves. For some instructors, seeing their score in Ally is enough of a motivation—they’re high achievers, and they don’t want a “low grade” on anything linked to their name. More often, though, we’ve seen instructors engage in this work because a colleague or department chair tells them they need to. Leveraging positive peer pressure, coupled with quick practical solutions to improve accessibility, seems to be an effective approach.

    2. Point-of-need support: Help must be timely, relevant and easy to access.

    When instructors feel overwhelmed by the mountain of accessibility recommendations in their Ally reports, they are often hesitant to even get started. We’ve found that personal conversations about student engagement and course content or design often provide an opening to talk about accessibility. And once the door is open, instructors are often very receptive to hearing about a few small changes they can make to improve the accessibility of their course content.

    Where Things Stand

    Now for the reality check. So far, our institutional Ally score has been fairly stagnant; we haven’t reached the 85 percent goal we set for ourselves. And even for seasoned educational developers, it can be discouraging to see so little change after so much effort. But new tools offer hope. Ally recently announced planned updates to allow professors to remediate previously inaccessible PDFs directly in Blackboard without having to navigate to another platform. If reliable, this could make remediation more manageable, providing a solution at the point of need and lowering the activation energy required to solve the problem.

    We’re also considering:

    • Running focus groups to better understand what motivates instructors to engage in this work.
    • Exploring pop-up notifications that offer accessibility tips and reminders when instructors log in to Blackboard, both to raise awareness and to make the most of point-of-need support.
    • Defining “reasonable measures” for compliance, especially for disciplines with unique content needs (e.g., organic chemistry, modern languages and linguistics).

    Leading With Empathy

    One unintended consequence we’ve seen: Some instructors are choosing to stop uploading digital content altogether. Faced with the complexity of digital accessibility requirements, they’re opting out rather than adapting. Although this could help our institutional compliance score, it’s often a net loss for students and for learning, so we want to find a path forward that doesn’t force instructors to make this kind of choice.

    Accessibility is about equity, but it’s also about empathy. As we move toward 2026, we need to support—not scare—instructors into compliance. Every step we make toward increased accessibility helps our students. Every instructor champion working with their peers to find context-specific solutions helps further our institutional goals. Progress over perfection might be the only sustainable path forward.

  • The Quick Convo All Writing Teams Should Have (opinion)

    Scenario 1: You’re part of a cross-disciplinary group of faculty members working on the new general education requirement. By the end of the semester, your group has to produce a report for your institution’s administration. As you start to generate content, one member’s primary contributions focus on editing for style and mechanics, while the other members are focused on coming to an agreement on the content and recommendations.

    Scenario 2: When you’re at the stage of drafting content for a grant, one member of a writing team uses strikethrough to delete a large chunk of text, with no annotation or explanation for the decision. The writing stops as individual participants angrily back channel.

    Scenario 3: A team of colleagues decides to draft a vision statement for their unit on campus. They come to the process assuming that everyone has a shared idea about the vision and mission of their department. But when they each contribute a section to the draft, it becomes clear that they are not, in fact, on the same page about how they imagine the future of their unit’s work.

    In the best-case scenario, we choose the people we write with: people whom we trust, who we know will pull their weight and who might even be fun to work with. However, many writing situations are thrust upon us rather than carefully selected. We have to complete a report, write an important email, articulate a new policy, compose and submit a grant proposal, author a shared memo, etc., with a bunch of folks we would likely not have chosen on our own.

    Further, teams of employees tasked with writing are rarely selected because of their ability to write well with others, and many don’t have the language to talk through their preferred composing practices. Across professional writing and within higher education, the inability to work collaboratively on a writing product is the cause of endless strife and inefficiency. How can we learn how to collaborate with people we don’t choose to write with?

    Instead of just jumping into the writing task, we argue for a quick conversation about writing before any team authorship even starts. If time is limited, this conversation doesn’t necessarily need to be more than 15 minutes (though devoting 30 minutes might be more effective) depending on the size of the writing team, but it will save you time—and, likely, frustration—in the long run.

    Drawing from knowledge in our discipline—writing studies—we offer the following strategies for a guided conversation before starting any joint writing project. The quick convo should serve to surface assumptions about each member’s beliefs about writing, articulate the project’s goal and genre, align expectations, and plan the logistics.

    Shouldn’t We Just Use AI for This Kind of Writing?

    As generative AI tools increasingly become integrated into the writing process, or even supplant parts of it, why should people write at all? Especially, why should we write together when people can be so troublesome?

    Because writing is thinking. Certainly, the final writing product matters—a lot—but the reason getting to the product can be so hard is that writing requires critical thinking around project alignment. Asking AI to do the writing skips the hard planning, thinking and drafting work that will make the action/project/product that the writing addresses more successful.

    Further, we do more than just complete a product/document when we write (either alone or together)—we surface shared assumptions, we come together through conversation and we build relationships. A final written product that has a real audience and purpose can be a powerful way to build community, and not just in the sense that it might make writers feel good. An engaged community is important, not just for faculty and staff happiness, but for productivity, for effective project completion and for long-term institutional stability.

    Set the Relational Vibe

    To get the conversation started, talk to each other: Do real introductions in which participants talk about how they write and what works for them. Talk to yourself: Do a personal gut check, acknowledging any feelings/biases about group members, and commit to being aware of how these personal relationships/feelings might influence how you perceive and accept their contributions. Ideas about authorship, ownership and credit, including emotional investments in one’s own words, are all factors in how people approach writing with others.

    Articulate the Project Purpose and Genre

    Get on the same page about what the writing should do (purpose) and what form it should take (genre). Often the initial purpose of a writing project is that you’ve been assigned to a task—students may find it funny that so much faculty and staff writing at the university is essentially homework! Just like our students, we have to go beyond the bare minimum of meeting a requirement to find out why that writing product matters, what it responds to and what we want it to accomplish. To help the group come to agreement about form and writing conventions, find some effective examples of the type of project you’re trying to write and talk through what you like about each one.

    Align Your Approach

    Work to establish a sense of shared authorship—a “we” approach to the work. This is not easy, but it’s important to the success of the product and for the sake of your sanity. Confront style differences and try to come to agreement about not making changes to each other’s writing that don’t necessarily improve the content. There’s always that one person who wants to add “nevertheless” for every transition or write “next” instead of “then”—make peace with not being too picky. Or, agree to let AI come in at the end and talk about the proofreading recommendations from the nonperson writer.

    This raises another question: With people increasingly integrating ChatGPT and its ilk into their processes (and Word/Google documents offering AI-assisted authorship tools), how comfortable is each member of the writing team with integrating AI-generated text into a final product?

    Plan the Logistics

    Where will collaboration occur? In person or online? Synchronously or asynchronously? In a Google doc, on Zoom, in the office, in a coffee shop? Technologies and timing both influence process, and writers might have different ideas about how and when to write (ideas that might vary based on the tools your team is going to use).

    When will collaboration occur? Set deadlines and agree to stick with them. Be transparent about expectations from and for each member.

    How will collaboration occur? In smaller groups/pairs, all together, or completely individually? How will issues be discussed and resolved?

    Finally, Some Recommendations on What Not to Do

    Don’t:

    • Just divvy up the jobs and call it a day. This will often result in a disconnected, confusing and lower-quality final product.
    • Take on everything because you’re the only one who can do it. This is almost never true and is a missed opportunity to build capacity among colleagues. Developing new skills is an investment.
    • Overextend yourself and then resent your colleagues. This is a surefire path to burnout.
    • Sit back and let other folks take over. Don’t be that person.

  • AI, Irreality and the Liberal Educational Project (opinion)

    I work at Marquette University. As a Roman Catholic, Jesuit university, we’re called to be an academic community that, as Pope John Paul II wrote, “scrutinize[s] reality with the methods proper to each academic discipline.” That’s a tall order, and I remain in the academy, for all its problems, because I find that job description to be the best one on offer, particularly as we have the honor of practicing this scrutinizing along with ever-renewing groups of students.

    This bedrock assumption of what a university is continues to give me hope for the liberal educational project, despite the ongoing neoliberalization of higher education and some administrators’ and educators’ willingness either to look the other way or to uncritically celebrate the explosion of generative software (commonly referred to as “generative artificial intelligence”) over the last two years.

    In the time since my last essay in Inside Higher Ed, and as Marquette’s director of academic integrity, I’ve had plenty of time to think about this and to observe praxis. In contrast to the earlier essay, which was more philosophical, let’s get more practical here about how access to generative software is impacting higher education and our students and what we might do differently.

    At the academic integrity office, we recently had a case in which a student “found an academic article” by prompting ChatGPT to find one for them. The chat bot obeyed, as mechanisms do, and generated a couple pages of text with a title. This was not from any actual example of academic writing but instead was a statistically probable string of text having no basis in the real world of knowledge and experience. The student made a short summary of that text and submitted it. They were, in the end, not found in violation of Marquette’s honor code, since what they submitted was not plagiarized. It was a complex situation to analyze and interpret, done by thoughtful people who care about the integrity of our academic community: The system works.

    In some ways, though, such activity is more concerning than plagiarism, for, at least when students plagiarize, they tend to know the ways they are contravening social and professional codes of conduct—the formalizations of our principles of working together honestly. In this case, the student didn’t see the difference between a peer-reviewed essay published by an academic journal and a string of probabilistically generated text in a chat bot’s dialogue box. To not see the difference between these two things—or to not care about that difference—is more disconcerting and concerning to me than straightforward breaches of an honor code, however harmful and sad such breaches are.

    I already hear folks saying: “That’s why we need AI literacy!” We do need to educate our students (and our colleagues) on what generative software is and is not. But that’s not enough. Because one also needs to want to understand and, as is central to the Ignatian Pedagogical Paradigm that we draw upon at Marquette, one must understand in context.

    Another case this spring term involved a student whom I had spent several months last fall teaching in a writing course that took “critical AI” as its subject matter. Yet this spring term the student still used a chat bot to “find a quote in a YouTube video” for an assignment and then commented briefly on that quote. The problem was that the quote used in the assignment does not appear in the selected video. It was a simulacrum of a quote; it was a string of probabilistically generated text, which is all generative software can produce. It did not accurately reflect reality, and the student did not cite the chat bot they’d copied and pasted from, so they were found in violation of the honor code.

    Another student last term in the Critical AI class prompted Microsoft Copilot to give them quotations from an essay, which it mechanically and probabilistically did. They proceeded to base their three-page argument on these quotations, none of which said anything like what the author in question actually said (not even the same topic); their argument was based in irreality. We cannot scrutinize reality together if we cannot see reality. And many of our students (and colleagues) are, at least at times, not seeing reality right now. They’re seeing probabilistic text as “good enough” as, or conflated with, reality.

    Let me point more precisely to the problem I’m trying to put my finger on. The student who had a chat bot “find” a quote from a video sent an email to me, which I take to be completely in earnest and much of which I appreciated. They ended the email by letting me know that they still think that “AI” is a really powerful and helpful tool, especially as it “continues to improve.” The cognitive dissonance between the situation and the student’s assertion took me aback.

    Again: the problem with the “We just need AI literacy” argument. People tend not to learn what they do not want to learn. If our students (and people generally) do not particularly want to do work, and they have been conditioned by the use of computing and their society’s habits to see computing as an intrinsic good, “AI” must be a powerful and helpful tool. It must be able to do all the things that all the rich and powerful people say it does. It must not need discipline or critical acumen to employ, because it will “supercharge” your productivity or give you “10x efficiency” (whatever that actually means). And if that’s the case, all these educators telling you not to offload your cognition must be behind the curve, or reactionaries. At the moment, we can teach at least some people all about “AI literacy” and it will not matter, because such knowledge refuses to jibe with the mythology concerning digital technology so pervasive in our society right now.

    If we still believe in the value of humanistic, liberal education, we cannot be quiet about these larger social systems and problems that shape our pupils, our selves and our institutions. We cannot be quiet about these limits of vision and questioning. Because not only do universities exist for the scrutinizing of reality with the various methods of the disciplines as noted at the outset of this essay, but liberal education also assumes a view of the human person that does not see education as instrumental but as formative.

    The long tradition of liberal education, for all its complicity in social stratification down the centuries, assumes that our highest calling is not to make money, to live in comfort, to be entertained. (All three are all right in their place, though we must be aware of how our moneymaking, comfort and entertainment derive from the exploitation of the most vulnerable humans and the other creatures with whom we share the earth, and how they impact our own spiritual health.)

    We are called to growth and wisdom, to caring for the common good of the societies in which we live—which at this juncture certainly involves caring for our common home, the Earth, and the other creatures living with us on it. As Antiqua et nova, the note released from the Vatican’s Dicastery for Culture and Education earlier this year (cited commendingly by secular ed-tech critics like Audrey Watters) reiterates, education plays its role in this by contributing “to the person’s holistic formation in its various aspects (intellectual, cultural, spiritual, etc.) … in keeping with the nature and dignity of the human person.”

    These objectives of education are not being served by students using generative software to satisfy their instructors’ prompts. And no amount of “literacy” is going to ameliorate the situation on its own. People have to want to change, or to see through the neoliberal, machine-obsessed myth, for literacy to matter.

    I do believe that the students I’ve referred to are generally striving for the good as they know how. On a practical level, I am confident they’ll go on to lead modestly successful lives as our society defines that term with regard to material well-being. I assume their motivation is not to cause harm or dupe their instructors; they’re taking part in “hustle” culture, “doing school” and possibly overwhelmed by all their commitments. Even if all this is indeed the case, liberal education calls us to more, and it’s the role of instructors and administrators to invite our students into that larger vision again and again.

    If we refuse to give up on humanistic, liberal education, then what do we do? The answer is becoming clearer by the day, with plenty of folks all over the internet weighing in, though it is one many of us do not really want to hear. Because at least one major part of the answer is that we need to make an education genuinely oriented toward our students. A human-scale education, not an industrial-scale education (let’s recall over and over that computers are industrial technology). The grand irony of the generative software moment for education in neoliberal, late-capitalist society is that it is revealing so many of the limits we’ve been putting on education in the first place.

    If we can’t “AI literacy” our educational problems away, we have to change our pedagogy. We have to change the ways we interact with our students inside the classroom and out: to cultivate personal relationships with them whenever possible, to model the intellectual life as something that is indeed lived out with the whole person in a many-partied dialogue stretching over millennia, decidedly not as the mere ability to move information around. This is not a time for dismay or defeat but an incitement to do the experimenting, questioning, joyful intellectual work many of us have likely wanted to do all along but have not had a reason to go off script for.

    This probably means getting creative. Part of getting creative in our day probably means de-computing (as Dan McQuillan at the University of London labels it). To de-compute is to ask ourselves—given our ambient maximalist computing habits of the last couple decades—what is of value in this situation? What is important here? And then: Does a computer add value to this that it is not detracting from in some other way? Computers may help educators collect assignments neatly and read them clearly, but if that convenience is outweighed by constantly having to wonder if a student has simply copied and pasted or patch-written text with generative software, is the value of the convenience worth the problems?

    Likewise, getting creative in our day probably means looking at the forms of our assessments. If the highly structured student essay makes it easier for instructors to assess because of its regularity and predictability, yet that very regularity and predictability make it a form that chat bots can produce fairly readily, well: 1) the value for assessing may not be worth the problems of teeing up chat bot–ifiable assignments and 2) maybe that wasn’t the best form for inviting genuinely insightful and exciting intellectual engagement with our disciplines’ materials in the first place.

    I’ve experimented with research journals rather than papers, with oral exams as structured conversations, with essays that focus intently on one detail of a text and do not need introductions and conclusions and that privilege the student’s own voice, and other in-person, handmade, leaving-the-classroom kinds of assessments over the last academic year. Not everything succeeded the way I wanted, but it was a lively, interactive year. A convivial year. A year in which mostly I did not have to worry about whether students were automating their educations.

    We have a chance as educators to rethink everything in light of what we want for our societies and for our students; let’s not miss it because it’s hard to redesign assignments and courses. (And it is hard.) Let’s experiment, for our own sakes and for our students’ sakes. Let’s experiment for the sakes of our institutions that, though they are often scoffed at in our popular discourse, I hope we believe in as vibrant communities in which we have the immense privilege of scrutinizing reality together.

    Jacob Riyeff is a teaching associate professor and director of academic integrity at Marquette University.

  • N.C. Gov. Vetoes Bills Targeting ‘DEI,’ ‘Divisive Concepts’

    North Carolina’s Democratic governor has vetoed two bills the Republican-led General Assembly passed targeting what lawmakers dubbed “diversity, equity and inclusion”; “discriminatory practices”; and “divisive concepts” in public higher education.

    Senate Bill 558 would have banned institutions from having offices “promoting discriminatory practices or divisive concepts” or focused on DEI. The bill defined “discriminatory practices” as “treating an individual differently [based on their protected federal law classification] solely to advantage or disadvantage that individual as compared to other individuals or groups.”

    SB 558’s list of restricted divisive concepts mirrored the lists that Republicans have inserted into laws in other states, including the idea that “a meritocracy is inherently racist or sexist” or that “the rule of law does not exist.” The legislation would have prohibited colleges and universities from endorsing these concepts.

    The bill would have also banned institutions from establishing processes “for reporting or investigating offensive or unwanted speech that is protected by the First Amendment, including satire or speech labeled as microaggression.”

    In his veto message Thursday, Gov. Josh Stein wrote, “Diversity is our strength. We should not whitewash history, police dorm room conversations, or ban books. Rather than fearing differing viewpoints and cracking down on free speech, we should ensure our students learn from diverse perspectives and form their own opinions.”

    Stein also vetoed House Bill 171, which would have broadly banned DEI from state government. It defined DEI in multiple ways, including the promotion of “differential treatment of or providing special benefits to individuals on the basis of race, sex, color, ethnicity, nationality, country of origin, or sexual orientation.”

    “House Bill 171 is riddled with vague definitions yet imposes extreme penalties for unknowable violations,” Stein wrote in his HB 171 veto message. NC Newsline reported that lawmakers might still override the vetoes.

  • On the Sensibility of Cognitive Outsourcing (opinion)

    I am deeply worried about my vacuuming skills. I’ve always enjoyed vacuuming, especially with the vacuum cleaner I use. It has a clear dustbin, and there’s something cathartic about running it over the carpet in the upstairs hallway and seeing all the dust and debris it collects. I’m worried, however, because I keep outsourcing my downstairs vacuuming to the robot vacuum cleaner my wife and I bought a while back. With three kids and three dogs in the house, our family room sees a lot of foot traffic, and I save a lot of time by letting the robot clean up. What am I losing by relying on my robot vacuum to keep my house clean?

    Not much, of course, and I’m not actually worried about losing my vacuuming skills. Vacuuming the family room isn’t a task that means much to me, and I’m happy to let the robot handle it. Doing so frees up my time for other tasks, preferably bird-watching out the kitchen window, but more often doing the dishes, a chore for which I don’t have a robot to help me. It’s entirely reasonable for me to offload a task I don’t care much about to the machines when the machines are right there waiting to do the work for me.

    That was my response to a new high-profile study from an MIT Media Lab team led by Nataliya Kosmyna. Their preprint, “Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task,” details their experiment. The team enlisted 54 adult participants to write short essays using SAT prompts over multiple sessions. A third of the participants were given access to ChatGPT to help with their essay writing; a third had access to any website they could reach through a Google search but were prohibited from using ChatGPT or other large language models; and a third had no outside aids (the “brain-only” group). The researchers not only scored the quality of the participants’ essays but also used electroencephalography (EEG) to record participants’ brain activity during these writing tasks.

    The MIT team found that “brain connectivity systematically scaled down with the amount of external support.” While the brain-only group “exhibited the strongest, widest‑ranging [neural] networks,” AI assistance in the experiment “elicited the weakest overall coupling.” Moreover, the ChatGPT users were increasingly less engaged in the writing process over the multiple sessions, often just copying and pasting from the AI chat bot by the end of the experiment. They also had a harder time quoting anything from the essay they had just submitted compared to the brain-only group.

    This study has inspired some dramatic headlines: “ChatGPT May Be Eroding Critical Thinking Skills” and “Study: Using AI Could Cost You Brainpower” and “Your Reliance on ChatGPT Might Be Really Bad for Your Brain.” Savvy news readers will key into the qualifiers in those headlines (“may,” “could,” “might”) instead of the scarier words, and the authors of the study have made an effort to prevent journalists and commentators from overplaying their results. From the study’s FAQ: “Is it safe to say that LLMs are, in essence, making us ‘dumber’? No!” As is usually the case in the AI-and-learning discourse, we need to slow our roll and look beyond the hyperbole to see what this new study does and doesn’t actually say.

    I should state now for the record that I am not a neuroscientist. I can’t weigh in with any authority on the EEG analysis in this study, although others with expertise in this area have done so and have expressed concerns about the authors’ interpretation of EEG data. I do, however, know a thing or two about teaching and learning in higher education, having spent my career at university centers for teaching and learning helping faculty and other instructors across the disciplines explore and adopt evidence-based teaching practices. And it’s the teaching-and-learning context in the MIT study that caught my eye.

    Consider the task that participants in this study, all students or staff at Boston-area universities, were given. They were presented with three SAT essay prompts and asked to select one. They were then given 20 minutes to write an essay in response to their chosen prompt, while wearing an EEG helmet of some kind. Each subject participated in a session like this three times over the course of a few months. Should we be surprised that the participants who had access to ChatGPT increasingly outsourced their writing to the AI chat bot? And that, in doing so, they were less and less engaged in the writing process?

    I think the takeaway from this study is that if you give adults an entirely inauthentic task and access to ChatGPT, they’ll let the robot do the work and save their energy for something else. It’s a reasonable and perhaps cognitively efficient thing to do. Just like I let my robot vacuum cleaner tidy up my family room while I do the dishes or look for an eastern wood pewee in my backyard.

    Sure, writing an SAT essay is a cognitively complex task, and it is perhaps an important skill for a certain cohort of high school students. But what this study shows is what generative AI has been showing higher ed since ChatGPT launched in 2022: When we ask students to do things that are neither interesting nor relevant to their personal or professional lives, they look for shortcuts.

    John Warner, an Inside Higher Ed contributor and author of More Than Words: How to Think About Writing in the Age of AI (Basic Books), wrote about this notion in his very first post about ChatGPT in December 2022. He noted concerns that ChatGPT would lead to the end of high school English, and then asked, “What does it say about what we ask students to do in school that we assume they will do whatever they can to avoid it?”

    What’s surprising to me about the new MIT study is that we are more than two years into the ChatGPT era and we’re still trying to assess the impact of generative AI on learning by studying how people respond to boring essay assignments. Why not explore how students use AI during more authentic learning tasks? Like law students drafting contracts and client memos or composition students designing multimodal projects or communications students attempting impossible persuasive tasks? We know that more authentic assignments motivate deeper engagement and learning, so why not turn students loose on those assignments and then see what impact AI use might have?

    There’s another, more subtle issue with the discourse around generative AI in learning that we can see in this study. In the “Limitations and Future Work” section of the preprint, the authors write, “We did not divide our essay writing task into subtasks like idea generation, writing, and so on.” Writing an essay is a more complicated cognitive process than vacuuming my family room, but critiques of the use of AI in writing are often focused on outsourcing the entire writing process to a chat bot. That seems to be what the participants did in this study, and it is perhaps a natural use of AI when given an uninteresting task.

    However, when a task is interesting and relevant, we’re not likely to hand it off entirely to ChatGPT. Savvy AI users might get a little AI help with parts of the task, like generating examples or imagining different audiences or tightening our prose. AI can’t do all the things that a trained human editor can, but, as writing instructor (and human editor) Heidi Nobles has argued, AI can be a useful substitute when a human editor isn’t readily available. It’s a stretch to say that my robot vacuum cleaner and I collaborate to keep the house tidy, but it’s reasonable to think that someone invested in a complex activity like writing might use generative AI as what Ethan Mollick calls a “co-intelligence.”

    If we’re going to better understand generative AI’s impact on learning, something that will be critical for higher education to do to keep its teaching mission relevant, we have to look at the best uses of AI and the best kinds of learning activities. That research is happening, thankfully, but we shouldn’t expect simple answers. After all, learning is more complicated than vacuuming.

    Derek Bruff is associate director of the Center for Teaching Excellence at the University of Virginia.
