Category: Featured

  • Making SEISA official | Wonkhe

    Developing a new official statistic is a process that can span several years.

    Work on SEISA began in 2020, and this blog outlines the journey to official statistics designation and some key findings that have emerged along the way. Let’s first recap why HESA needed a new deprivation index.

    The rationale behind pursuing this project stemmed from an Office for Statistics Regulation (OSR) report which noted that post-16 education statistics lacked a UK-wide deprivation metric. Under the Code of Practice for Statistics, HESA are required to innovate and fill identified statistical gaps that align with our area of specialism.

    Fast forward almost six years and the UK Statistics Authority have reiterated the importance of UK-wide comparable statistics in their response to the 2024 Lievesley Review.

    Breaking down barriers

    While higher education policy may be devolved, all nations have ambitions to ensure there is equal opportunity for all. Policymakers and the higher education sector agree that universities have a pivotal role in breaking down barriers to opportunity and that relevant data is needed to meet this mission. Having UK-wide comparable statistics relating to deprivation based on SEISA can provide the empirical evidence required to understand where progress is being made and for this to be used across the four nations to share best practice.

    In developing SEISA, we referred to OSR guidance to produce research that examines the full value of a new statistic before it is classed as an ‘official statistic in development’. We published a series of working papers in 2021 and 2022, with the latter including comparisons to the Indices of Deprivation (the main area-based measure utilised among policymakers at present). We also illustrated why area-based measures remain useful in activities designed to promote equal opportunity.

    Our research indicated that the final indexes derived from the Indices of Deprivation in each nation were effective at catching deprived localities in large urban areas, such as London and Glasgow, but that SEISA added value by picking up deprivation in towns and cities outside of these major conurbations. This included places located within former mining, manufacturing and industrial communities across the UK, like Doncaster or the Black Country in the West Midlands, as well as Rhondda and Caerphilly in Wales. The examples below come from our interactive maps for SEISA using Census 2011 data.

    An area of Doncaster that lies within decile 4 of the English Index of Multiple Deprivation (2019)

    An area of Caerphilly that lies within decile 5 of the Welsh Index of Multiple Deprivation (2019)

    We also observed that SEISA tended to capture a greater proportion of rural areas in the bottom quintile when compared with the equivalent quintile of the Index of Multiple Deprivation in each nation.

    Furthermore, in Scotland, the bottom quintile of the Scottish Index of Multiple Deprivation does not contain any locations in the Scottish islands, whereas the lowest quintile of SEISA covers all council areas in the country. These points are highlighted by the examples below from rural Shropshire and the Shetland Islands, which also show the benefit that SEISA offers by being based on smaller areas (in terms of population size) than those used to form the Indices of Deprivation. That is, drawing upon a smaller geographic domain enables pockets of deprivation to be identified that are otherwise surrounded by less deprived neighbourhoods.

    A rural area of Shropshire that is placed in decile 5 of the English Index of Multiple Deprivation (2019)

    An area of the Shetland Islands that is within decile 7 of the Scottish Index of Multiple Deprivation (2020)

    Becoming an official statistic

    Alongside illustrating value, our initial research had to consider data quality and whether our measure correlated with deprivation as expected. Previous literature has highlighted how the likelihood of experiencing deprivation increases for households that are:

    • On a low income
    • Living in social housing
    • Headed by a lone parent
    • In poor health

    Examining how SEISA was associated with these variables gave us the assurance that it was ready to become an ‘official statistic in development’. As we noted when we announced our intention for the measure to be assigned this badge for up to two years, a key factor we needed to establish during this time period was the consistency in the findings (and hence methodological approach) when Census 2021-22 data became available in Autumn 2024.

    Recreating SEISA using the latest Census records across all nations, we found there was a high level of stability in the results between the 2011 and 2021-22 Census collections. For instance, our summary page shows the steadiness in the associations between SEISA and income, housing, family composition and health, with an example of this provided below.

    The association between SEISA and family composition in Census 2011 and 2021-22

    Over the past twelve months, we’ve been gratified to see applications of SEISA in the higher education sector and beyond. We’ve had feedback on how practitioners are using SEISA to support their widening participation activities in higher education and interest from councils working on equality of opportunity in early years education. The measure is now available via the Local Insight database used by local government and charities to source data for their work.

    It’s evident therefore that SEISA has the potential to help break down barriers to opportunity across the UK and is already being deployed by data users to support their activities. The demonstrable value of SEISA and its consistency following the update to Census 2021-22 data mean that we can now remove the ‘in development’ badge and label SEISA as an official statistic.

    View the data for SEISA based on the Census 2021-22 collection, alongside a more detailed insight into why SEISA is now an official statistic, on the HESA website.

    Please feel free to submit any feedback you have on SEISA to [email protected].

    Read HESA’s latest research releases and if you would like to be kept updated on future publications, you can sign-up to our mailing list.

  • Politics and international relations has grown over the last decade – but unevenly

    The world seems an uncertain place to live in as we begin 2025: growing levels of conflict and instability across the globe, democratic institutions under pressure, and civic infrastructure being tested by the raging unpredictability of the natural world. Has there ever been a more appropriate time for people, young or old, to study politics? Has there ever been a time when we have been more in need of the expertise of political scientists, theorists, and scholars of international relations to help us make sense of this complex and changing world?

    It feels timely, therefore, that we at the British Academy are publishing a report on the provision of politics and international relations in the UK. This report is the latest in a series of state of the discipline reports from our SHAPE Observatory. It aims to take the temperature of the discipline by examining the size and shape of the sector and observing key trends over the past decade or so.

    Going for growth

    One of the key themes that emerged from our report was expansion. Compared to 2011–12, there has been a 20 per cent increase in first degree students and a 41 per cent increase in postgraduate taught students taking politics and international relations. The number of academic staff has also increased by 52 per cent since 2012–13.

    With this expansion has come diversification, both among students and staff. There are now more female students studying this traditionally male-dominated subject and the proportion of first degree students from minoritised ethnic backgrounds has increased by eight percentage points since 2011–12. Over the same period, the number of international students from outside the EU has more than doubled. The workforce is also becoming more international, with notable increases in staff from outside Europe and North America.

    All of this is positive, as it shows there is still strong demand for the discipline in the UK and that both students and scholars want to come here from around the world to work and study. In interviews we conducted with academic staff, there was a strong emphasis on the positive effects of this diversification. It was argued that the learning and research environment is enriched by bringing a range of perspectives and backgrounds onto campus.

    Uneven development

    But when you scratch beneath the surface of the aggregate numbers, another picture starts to emerge. When we looked at student numbers by institution, it became clear that changes have been highly uneven across the sector since 2011–12. A stark difference was observable, for example, between the average change in student numbers at Russell Group institutions and the rest of the sector:

    Group           Number of institutions   Mean change in student numbers
    Russell Group   23                       320.2
    Pre-92 other    39                       -24.7
    Post-92         51                       -16.8

    Mean change (FPE) in first degree student numbers, 2011–12 to 2021–22

    So, if this is a story of expansion, it is really a story of a select few institutions that have expanded remarkably, while the rest of the sector has seen its share of politics and international relations students dwindle over the past few years.

    This pattern will be familiar to some at the institutional level, particularly in England and Wales, where caps on student numbers have been removed. Yet the overall institutional picture can mask ups and downs in recruitment within the same university, along with any restructuring of departments and course portfolios. Isolating changes in student numbers for a single disciplinary area is therefore very revealing.

    Growing pains

    So what are the implications of these changes? More students are engaging with the discipline, and in England and Wales more are able to attend their first-choice destination. Those working within departments at research-intensive universities may argue that the expansion of their department has preserved a degree of pluralism in research activity and practice. The UK has a proud history of political theory, for example, and this sub-field continues to carve out a notable space in the disciplinary landscape – something not mirrored in other leading research nations.

    However, the divergence in recruitment has clearly had a destabilising impact on some politics departments. The redistribution of students across the UK has real-world consequences, leading in some instances to internal restructuring and even departmental closures. Amid gloomy forecasts for the sector, mounting financial pressures, and announcements of course closures in all manner of disciplines, the risk of an uneven balance of course provision has come into sharp focus.

    Mind the gap(s)

    It is in this context that the British Academy recently launched a new map showing changing SHAPE provision in UK higher education over a decade. The picture for politics and international relations is broadly positive, with good coverage across the country at least at the regional level. However, when you exclude students with prior qualifications above the average tariff for the discipline, there is a notable absence of people studying single honours degrees in politics and international relations in the central belt of Scotland.

    The question of access to the discipline is an important one that deserves more detailed exploration at a local level. Many of the institutions that have seen a drop in their student intake are the same universities that would argue they are most adept at reaching local communities where access to higher education is lowest. Moreover, they would likely contend that they are best placed to support these students to succeed at university.

    In an era where more of the learning experience is being digitised and moved online, and where the numbers of commuter students are increasing, perhaps the concentration of politics and international relations students at fewer universities is less of an issue. Institutions are being asked to do more with less, and from a technocratic perspective, this can create economies of scale. Whether this is in the long-term interest of students is questionable. Moreover, ever-concentrating provision does seem antithetical to the notion of addressing regional inequalities, and it runs counter to the government’s ambitions to boost local R&D.

    A question of sustainability

    The question that emerges is not whether this is a problem, but whether it is sustainable.

    There is a great deal of discussion about how current disruption in higher education will spill over into the research base. When we interviewed those working in the field, the diversity of the politics and international relations sector was identified as a key strength, and as one of the elements that contributes to its enviable reputation around the globe. Once a department is gone, it is very hard to reestablish.

    In these volatile times, facing global challenges, politics and international relations has so much to offer both students and wider society. Let’s hope the discipline continues to thrive here in the UK for many years to come.

  • Student Booted from PhD Program Over AI Use (Derek Newton/The Cheat Sheet)

    This one is going to take a hot minute to dissect. Minnesota Public Radio (MPR) has the story.

    The plot contours are easy. A PhD student at the University of Minnesota was accused of using AI on a required pre-dissertation exam and removed from the program. He denies that allegation and has sued the school — and one of his professors — for due process violations and defamation respectively.

    Starting the case.

    The coverage reports that:

    all four faculty graders of his exam expressed “significant concerns” that it was not written in his voice. They noted answers that seemed irrelevant or involved subjects not covered in coursework. Two instructors then generated their own responses in ChatGPT to compare against his and submitted those as evidence against Yang. At the resulting disciplinary hearing, Yang says those professors also shared results from AI detection software. 

    Personally, when I see that four members of the faculty unanimously doubted the authenticity of his work, I am out. I trust teachers.

    I know what a serious thing it is to accuse someone of cheating; I know teachers do not take such things lightly. When four go on the record to say so, I’m convinced. Barring some personal grievance or prejudice, which could happen, it’s hard for me to believe that all four subject-matter experts were just wrong here. Also, if there was bias or petty politics at play, it probably would have shown up before the student’s third year, not just before starting his dissertation.

    Moreover, at least as far as the coverage is concerned, the student does not allege bias or program politics. His complaint is based on due process and inaccuracy of the underlying accusation.

    Let me also say quickly that asking ChatGPT for answers you plan to compare to suspicious work may be interesting, but it’s far from convincing — in my opinion. ChatGPT makes stuff up. I’m not saying that answer comparison is a waste, I just would not build a case on it. Here, the university didn’t. It may have added to the case, but it was not the case. Adding also that the similarities between the faculty-created answers and the student’s — both are included in the article — are more compelling than I expected.

    Then you add detection software, which the article later shares showed high likelihood of AI text, and the case is pretty tight. Four professors, similar answers, AI detection flags — feels like a heavy case.

    Denied it.

    The article continues that Yang, the student:

    denies using AI for this exam and says the professors have a flawed approach to determining whether AI was used. He said methods used to detect AI are known to be unreliable and biased, particularly against people whose first language isn’t English. Yang grew up speaking Southern Min, a Chinese dialect. 

    Although it’s not specified, it is likely that Yang is referring to the research from Stanford that has been — or at least ought to be — entirely discredited (see Issue 216 and Issue 251). For the love of research integrity, the paper has invented citations — sources that go to papers or news coverage that are not at all related to what the paper says they are.

    Does anyone actually read those things?

    Back to Minnesota, Yang says that as a result of the findings against him and being removed from the program, he lost his American study visa. Yang called it “a death penalty.”

    With friends like these.

    Also interesting is that, according to the coverage:

    His academic advisor Bryan Dowd spoke in Yang’s defense at the November hearing, telling panelists that expulsion, effectively a deportation, was “an odd punishment for something that is as difficult to establish as a correspondence between ChatGPT and a student’s answer.” 

    That would be a fair point except that the next paragraph is:

    Dowd is a professor in health policy and management with over 40 years of teaching at the U of M. He told MPR News he lets students in his courses use generative AI because, in his opinion, it’s impossible to prevent or detect AI use. Dowd himself has never used ChatGPT, but he relies on Microsoft Word’s auto-correction and search engines like Google Scholar and finds those comparable. 

    That’s ridiculous. I’m sorry, it is. The dude who lets students use AI because he thinks AI is “impossible to prevent or detect,” the guy who has never used ChatGPT himself, and who thinks that Google Scholar and auto-correction are “comparable” to AI — that’s the person speaking up for the guy who says he did not use AI. Wow.

    That guy says:

    “I think he’s quite an excellent student. He’s certainly, I think, one of the best-read students I’ve ever encountered”

    Time out. Is it not at least possible that professor Dowd thinks student Yang is an excellent student because Yang was using AI all along, and our professor doesn’t care to ascertain the difference? Also, mind you, as far as we can learn from this news story, Dowd does not even say Yang is innocent. He says the punishment is “odd,” that the case is hard to establish, and that Yang was a good student who did not need to use AI. Although, again, I’m not sure how good professor Dowd would know.

    As further evidence of Yang’s scholastic ability, Dowd also points out that Yang has a paper under consideration at a top academic journal.

    You know what I am going to say.

    To me, that entire Dowd diversion is mostly funny.

    More evidence.

    Back on track, we get even more detail, such as that the exam in question was:

    an eight-hour preliminary exam that Yang took online. Instructions he shared show the exam was open-book, meaning test takers could use notes, papers and textbooks, but AI was explicitly prohibited. 

    Exam graders argued the AI use was obvious enough. Yang disagrees. 

    Weeks after the exam, associate professor Ezra Golberstein submitted a complaint to the U of M saying the four faculty reviewers agreed that Yang’s exam was not in his voice and recommending he be dismissed from the program. Yang had been in at least one class with all of them, so they compared his responses against two other writing samples. 

    So, the exam expressly banned AI. And we learn that, as part of the determination of the professors, they compared his exam answers with past writing.

    I say all the time, there is no substitute for knowing your students. If the initial four faculty who flagged Yang’s work had him in classes and compared suspicious work to past work, what more can we want? It does not get much better than that.

    Then there’s even more evidence:

    Yang also objects to professors using AI detection software to make their case at the November hearing.  

    He shared the U of M’s presentation showing findings from running his writing through GPTZero, which purports to determine the percentage of writing done by AI. The software was highly confident a human wrote Yang’s writing sample from two years ago. It was uncertain about his exam responses from August, assigning 89 percent probability of AI having generated his answer to one question and 19 percent probability for another. 

    “Imagine the AI detector can claim that their accuracy rate is 99%. What does it mean?” asked Yang, who argued that the error rate could unfairly tarnish a student who didn’t use AI to do the work.  

    First, GPTZero is junk. It’s reliably among the worst available detection systems. Even so, 89% is a high number. And most importantly, the case against Yang is not built on AI detection software alone, as no case should ever be. It’s confirmation, not conviction. Also, Yang, who the paper says already has one PhD, knows exactly what an accuracy rate of 99% means. Be serious.
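    Since Yang’s rhetorical question is really about base rates, a quick Bayes’ rule calculation shows what a headline accuracy figure does and does not imply. This is an illustrative sketch only: the 99% figure comes from Yang’s own hypothetical, and the prevalence values are assumptions for illustration, not facts from the case.

```python
# Illustrative sketch (not from the case): what a "99% accurate" AI detector
# implies about a flagged submission, given how common AI use actually is.

def prob_ai_given_flag(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Bayes' rule: P(used AI | detector flagged the work)."""
    true_pos = sensitivity * prevalence           # AI users correctly flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # honest writers wrongly flagged
    return true_pos / (true_pos + false_pos)

# Read "99% accuracy" generously as both 99% sensitivity and 99% specificity,
# then vary the assumed base rate of AI use in the tested population.
for prevalence in (0.5, 0.1, 0.01):
    p = prob_ai_given_flag(0.99, 0.99, prevalence)
    print(f"AI-use base rate {prevalence:.0%}: P(AI | flag) = {p:.1%}")
```

    Read generously, a “99% accurate” flag is near-conclusive when half the tested population used AI, but close to a coin flip when only 1 in 100 did — which is exactly why detector output should be confirmation, never the whole case.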

    A pattern.

    Then we get this, buried in the news coverage:

    Yang suggests the U of M may have had an unjust motive to kick him out. When prompted, he shared documentation of at least three other instances of accusations raised by others against him that did not result in disciplinary action but that he thinks may have factored in his expulsion.  

    He does not include this concern in his lawsuits. These allegations are also not explicitly listed as factors in the complaint against him, nor letters explaining the decision to expel Yang or rejecting his appeal. But one incident was mentioned at his hearing: in October 2023, Yang had been suspected of using AI on a homework assignment for a graduate-level course. 

    In a written statement shared with panelists, associate professor Susan Mason said Yang had turned in an assignment where he wrote “re write it, make it more casual, like a foreign student write but no ai.”  She recorded the Zoom meeting where she said Yang denied using AI and told her he uses ChatGPT to check his English.

    She asked if he had a problem with people believing his writing was too formal and said he responded that he meant his answer was too long and he wanted ChatGPT to shorten it. “I did not find this explanation convincing,” she wrote. 

    I’m sorry — what now?

    Yang says he was accused of using AI in academic work in “at least three other instances.” For which he was, of course, not disciplined. In one of those cases, Yang literally turned in a paper with this:

    “re write it, make it more casual, like a foreign student write but no ai.” 

    He said he used ChatGPT to check his English and asked ChatGPT to shorten his writing. But he did not use AI. How does that work?

    For that one where he left in the prompts to ChatGPT:

    the Office of Community Standards sent Yang a letter warning that the case was dropped but it may be taken into consideration on any future violations. 

    Yang was warned, in writing.

    If you’re still here, we have four professors who agree that Yang’s exam likely used AI, in violation of exam rules. All four had Yang in classes previously and compared his exam work to past hand-written work. His exam answers had similarities with ChatGPT output. An AI detector said, in at least one place, his exam was 89% likely to be generated with AI. Yang was accused of using AI in academic work at least three other times, by a fifth professor, including one case in which it appears he may have left in his instructions to the AI bot.

    On the other hand, he did say he did not do it.

    Findings, review.

    Further:

    But the range of evidence was sufficient for the U of M. In the final ruling, the panel — comprised of several professors and graduate students from other departments — said they trusted the professors’ ability to identify AI-generated papers.

    Several professors and students agreed with the accusations. Yang appealed and the school upheld the decision. Yang was gone. The appeal officer wrote:

    “PhD research is, by definition, exploring new ideas and often involves development of new methods. There are many opportunities for an individual to falsify data and/or analysis of data. Consequently, the academy has no tolerance for academic dishonesty in PhD programs or among faculty. A finding of dishonesty not only casts doubt on the veracity of everything that the individual has done or will do in the future, it also causes the broader community to distrust the discipline as a whole.” 

    Slow clap.

    And slow clap for the University of Minnesota. The process is hard. Doing the review, examining the evidence, making an accusation — they are all hard. Sticking by it is hard too.

    Seriously, integrity is not a statement. It is action. Integrity is making the hard choice.

    MPR, spare me.

    Minnesota Public Radio is a credible news organization. Which makes it difficult to understand why they chose — as so many news outlets do — to not interview one single expert on academic integrity for a story about academic integrity. It’s downright baffling.

    Worse, MPR, for no specific reason whatsoever, decides to take prolonged shots at AI detection systems such as:

    Computer science researchers say detection software can have significant margins of error in finding instances of AI-generated text. OpenAI, the company behind ChatGPT, shut down its own detection tool last year citing a “low rate of accuracy.” Reports suggest AI detectors have misclassified work by non-native English writers, neurodivergent students and people who use tools like Grammarly or Microsoft Editor to improve their writing. 

    “As an educator, one has to also think about the anxiety that students might develop,” said Manjeet Rege, a University of St. Thomas professor who has studied machine learning for more than two decades. 

    We covered the OpenAI deception — and it was deception — in Issue 241, and in other issues. We covered the non-native English thing. And the neurodivergent thing. And the Grammarly thing. All of which MPR wraps up in the passive and deflecting “reports suggest.” No analysis. No skepticism.

    That’s just bad journalism.

    And, of course — anxiety. Rege, who please note has studied machine learning and not academic integrity, is predictable, but not credible here. He says, for example:

    it’s important to find the balance between academic integrity and embracing AI innovation. But rather than relying on AI detection software, he advocates for evaluating students by designing assignments hard for AI to complete — like personal reflections, project-based learnings, oral presentations — or integrating AI into the instructions. 

    Absolute joke.

    I am not sorry — if you use the word “balance” in conjunction with the word “integrity,” you should not be teaching. Especially if what you’re weighing against lying and fraud is the value of embracing innovation. And if you needed further evidence for his absurdity, we get the “personal reflections and project-based learnings” buffoonery (see Issue 323). But, again, the error here is MPR quoting a professor of machine learning about course design and integrity.

    MPR also quotes a student who says:

    she and many other students live in fear of AI detection software.  

    “AI and its lack of dependability for detection of itself could be the difference between a degree and going home,” she said. 

    Nope. Please, please tell me I don’t need to go through all the reasons that’s absurd. Find me one single case in which an AI detector alone sent a student home. One.

    Two final bits.

    The MPR story shares:

    In the 2023-24 school year, the University of Minnesota found 188 students responsible of scholastic dishonesty because of AI use, reflecting about half of all confirmed cases of dishonesty on the Twin Cities campus. 

    Just noteworthy. Also, it is interesting that 188 were “responsible.” Considering how rare it is to be caught, and for formal processes to be initiated and upheld, 188 feels like a real number. Again, good for U of M.

    The MPR article wraps up that Yang:

    found his life in disarray. He said he would lose access to datasets essential for his dissertation and other projects he was working on with his U of M account, and was forced to leave research responsibilities to others at short notice. He fears how this will impact his academic career

    Stating the obvious, like the University of Minnesota, I could not bring myself to trust Yang’s data. And I do actually hope that being kicked out of a university for cheating would impact his academic career.

    And finally:

    “Probably I should think to do something, selling potatoes on the streets or something else,” he said. 

    Dude has a PhD in economics from Utah State University. Selling potatoes on the streets. Come on.

  • Student-Athlete Unionization Efforts Withdrawn Prior to Second Trump Administration

    by CUPA-HR | January 21, 2025

    Two efforts to extend collective bargaining rights to college athletes have been withdrawn in recent weeks in anticipation of the Trump administration taking control of the National Labor Relations Board (NLRB).

    On December 31, 2024, the Dartmouth men’s basketball team withdrew their petition to unionize. Members of the team overwhelmingly voted in March 2024 to join the Service Employees International Union (SEIU). The vote came one month after an NLRB regional director ruled that the players were employees of the college and were thus eligible to unionize.

    Additionally, on January 10, 2025, the National College Players Association (NCPA) withdrew its case against the University of Southern California, the Pac-12 Conference and the NCAA. In the original complaint, the NCPA claimed the three respondents violated the National Labor Relations Act (NLRA) by misclassifying the student-athletes as non-employees. It also argued that all three were joint employers of the student-athletes.

    Both of these efforts were pursued after NLRB General Counsel Jennifer Abruzzo issued a memorandum arguing that student-athletes are employees under the NLRA and are therefore afforded all statutory protections as prescribed under the law. The incoming administration will likely rescind the memorandum, halting or at least hindering unionization efforts among student-athletes.

    The decision to withdraw both petitions is likely meant to avoid an unfavorable outcome and precedent from a soon-to-be Republican-controlled NLRB. The SEIU explained in a statement following their withdrawal request that they sought “to preserve the precedent set by this exceptional group of young people on the men’s varsity basketball team.”

    CUPA-HR will keep members apprised of any updates related to student-athlete employment classification and unionization.



  • FIRE to University of Texas at Dallas: Stop censoring the student press

    The University of Texas at Dallas has a troubling history of trying to silence students. Now those students are fighting back.

    Today, the editors of The Retrograde published their first print edition, marking a triumphant return for journalism on campus in the face of administrative efforts to quash student press.

    Headlines above the fold of the first issue of The Retrograde, a new independent student newspaper at UT Dallas.

    Why call the newspaper The Retrograde? Because it’s replacing the former student newspaper, The Mercury, which ran into trouble when it covered the pro-Palestinian encampments on campus and shed light on UT Dallas’s use of state troopers (the same force that broke up UT Austin’s encampment just one week prior) and other efforts to quash even peaceful protest. As student journalists reported, their relationship with the administration subsequently deteriorated. University officials demoted the newspaper’s advisor and even removed copies of the paper from newsstands. At the center of this interference were Lydia Lum, director of student media, and Jenni Huffenberger, senior director of marketing and student media, whose actions exemplified the university’s resistance to editorial freedom.

    The conflict between the paper and the administration came to a head when Lum called for a meeting of the Student Media Oversight Board, a university body which has the power to remove student leaders, accusing The Mercury’s editor-in-chief, Gregorio Olivares Gutierrez, of violating student media bylaws by having another form of employment, exceeding printing costs, and “bypassing advisor involvement.” Yet rather than follow those same bylaws, which offer detailed instructions for removing a student editor, Lum told board members from other student media outlets not to attend the meeting. A short-handed board then voted to oust Gutierrez. Adding insult to injury, Huffenberger unilaterally denied Gutierrez’s appeal, again ignoring the bylaws, which require the full board to consider any termination appeals.

    The student journalists of The Retrograde have shown incredible spirit. With your help, we can ensure their efforts — and the rights of all student journalists — are respected.

    In response, The Mercury’s staff went on strike, demanding Gutierrez’s reinstatement. To help in that effort, FIRE and the Student Press Law Center joined forces to pen a Nov. 12, 2024, letter calling for UT Dallas to honor the rights of the student journalists. We also asked the university to pay the students for the time they worked prior to the strike.

    UT Dallas refused to listen. Instead of embracing freedom of the press, the administration doubled down on censorship, ignoring both the students’ and our calls for justice.

    FIRE's advertisement in the first issue of the Retrograde student newspaper at UT Dallas. The headline reads: "FIRE Supports Student Journalism"

    FIRE took out a full page ad in support of The Retrograde at UT Dallas.

    In our letter, we argued that the university’s firing of Gutierrez was in retaliation for The Mercury’s unflattering coverage of the way administrators had handled the encampments. This is not even the first time UT Dallas has chosen censorship as the “best solution”; look no further than late 2023, when it removed the “Spirit Rocks” students used to express themselves. Unfortunately, the university ignored both the students’ exhortations and FIRE’s demands, leaving UT Dallas without its newspaper.

    But FIRE’s Student Press Freedom Initiative is here to make sure censorship never gets the last word.

    Students established The Retrograde, a fully independent newspaper. Without university resources, they have had to crowdfund and source their own equipment, working spaces, a new website, and everything else necessary to provide quality student-led journalism to the UT Dallas community. They succeeded, and FIRE is proud to support their efforts, placing a full-page ad in this week’s inaugural issue of The Retrograde.

    The fight for press freedom at UT Dallas is far from over — but we need your help to make a difference.

    Demand accountability from UT Dallas.

    Source link

  • Building and Sustaining an AI-informed Institution

    Building and Sustaining an AI-informed Institution

    Title: Navigating Artificial Intelligence in Postsecondary Education: Building Capacity for the Road Ahead

    Source: Office of Educational Technology, U.S. Department of Education

    In response to the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, the Department of Education’s new brief, Navigating Artificial Intelligence in Postsecondary Education, provides recommendations for leaders at higher education institutions. The brief is divided into two main parts: one offering policy recommendations and one reviewing literature and research.

    The report outlines five recommendations:

    Develop clear policies for the use of AI in postsecondary settings. AI’s potential uses are vast, from admissions and enrollment to other decision-making processes. It is important, though, to ensure that AI does not reinforce bias. Stakeholders should consider the potential utility of an AI Bill of Rights or the National Institute of Standards and Technology’s AI Risk Management Framework in shaping policies for their campuses. They should also consider affirmative consent and disclosure policies as they relate to AI, as well as codifying the characteristics that make AI trustworthy.

    Generate infrastructure that supports the use of AI in pedagogy, student support, and data tracking. Incentivizing cross-department collaboration and faculty involvement in the development of AI tools is key. It is also important to integrate social and behavioral science research into evaluation of AI.

    Continually assess AI tools. This includes testing for equity and accounting for any bias. AI tools should go through a continuous feedback loop, and institutions need strategies to ensure an appropriate balance of human oversight. Evaluations should also be comprehensive and draw on diverse stakeholders.

    Collaborate with partners for the development and testing of AI across different educational uses. Leaders are tasked with finding and building relationships with partners. These partnerships should aim to ensure best practices and promote equitable AI.

    Grow and develop programs alongside the job market’s increased demand for AI. Leaders must consider how to keep pace with evolving demand for AI skills, as well as how to integrate AI across all disciplines.

    Click here for the full report.

    —Kara Seidel


    If you have any questions or comments about this blog post, please contact us.

    Source link

  • New Might Not Always Mean Improved: The Benefits and Drawbacks of the New FAFSA

    New Might Not Always Mean Improved: The Benefits and Drawbacks of the New FAFSA

    Title: Chutes and Ladders: Falling Behind and Getting Ahead with the Simplified FAFSA

    Authors: Jonathan S. Lewis and Alyssa Stefanese Yates

    Source: uAspire

    Prior to the 2024-25 academic year, the Free Application for Federal Student Aid (FAFSA) underwent significant changes, mandated by Congress through the FAFSA Simplification Act. A recent uAspire survey of 274 students, parents, counselors, and financial aid administrators found the changes to the FAFSA entailed a number of “chutes,” or drawbacks, and several “ladders” that allowed for a more streamlined financial aid filing process.

    Key survey findings regarding the simplified FAFSA’s benefits and challenges include:

    Benefits:

    • Students appreciated the reduced number of questions within the new FAFSA form.
    • The Office of Federal Student Aid improved help text and resources, which aided those accessing the form in answering common questions.
    • Some populations had easier experiences with FAFSA, such as those with relatively straightforward finances, and individuals without overwhelming extenuating circumstances.
    • Those with previous FAFSA experience noted a generally easier experience with the changes.
    • Students with access to high school or college counselors often found greater success with the changes, demonstrating the importance of accessibility to help during the process.

    Challenges:

    • More than half of those surveyed reported experiencing technical problems.
    • The delayed FAFSA timeline heightened stress among students and counselors.
    • Insufficient communication and customer service left approximately 4 million calls to the Department of Education’s call center between Jan. 1 and May 31 unanswered.
    • The issues above often compounded, forming intersecting challenges for students and counselors.
    • Individuals without a Social Security number and English language learners felt the challenges with FAFSA more severely, struggling in particular with technical problems and communication barriers, demonstrating that some populations had more difficult experiences than others.

    The authors conclude by recommending several additional changes to the FAFSA. To minimize the compounding negative effects of the chutes, financial aid processing should be completed faster; technical glitches should be fixed; and communication, wording, and form accessibility should be improved. The authors also recommend fortifying the ladders by finding additional opportunities to reduce the time spent and frustration felt by those filing.

    Overall, the survey highlights how the FAFSA changes produced diverse and polarizing effects. Many students found the process to be simple, securing their financial aid with just a few clicks; other students felt extreme stress caused by technical roadblocks and delays, which left them uncertain about how to pay for school.

    To read the full report from uAspire, click here. For additional information and to read about the Jan. 16 webinar with the authors of the report, click here.

    —Julia Napier


    If you have any questions or comments about this blog post, please contact us.

    Source link

  • Maximize Your Winter Break: College Class Benefits

    Maximize Your Winter Break: College Class Benefits



    Source link

  • A short college course for students’ life, academic skills

    A short college course for students’ life, academic skills

    While many students experience growing pains in the transition from high school to college, today’s learners face an extra challenge emerging from the COVID-19 pandemic. Many students experienced learning loss in K-12 as a result of distance learning, which has stunted their readiness to engage fully in academia.

    Three faculty members in the communications division at DePaul University noticed a disconnect in their own classrooms as they sought to connect with students. They decided to create their own intervention to address learners’ lack of communication and self-efficacy skills.

    Since 2022, DePaul has offered a two-credit communication course that assists students midway through the term and encourages reflection and goal setting for future success. Over the past four terms, faculty members have seen demonstrated change in students’ self-perceptions and commitment to engage in long-term success strategies.

    The background: Upon returning to in-person instruction after the pandemic, associate professor Jay Baglia noticed students still behaved as though their classes were one-directional Zoom calls, staring blankly or demonstrating learned helplessness from a lack of deadlines and loose attendance policies.

    “We were seeing a greater proportion of students who were not prepared for the college experience,” says Elissa Foster, professor and faculty fellow of the DePaul Humanities Center.

    Previous research showed that strategies to increase students’ collaboration and participation in class positively impacted engagement, helping students take a more active role in their learning and classroom environment.

    The faculty members decided to create their own workshop to equip students with practical tools they can use in their academics and their lives beyond.

    How it works: Offered for the first time in fall 2022, the Communication Fundamentals for College Success course is a two-credit, five-week course that meets for two 90-minute sessions a week, for a total of 10 meetings. The class is housed in the College of Communication but available to all undergraduate students.

    The course is co-taught and was developed by Foster and Kendra Knight, associate professor in the College of Communication and an assessment consultant for the Center for Teaching and Learning. Guest speakers from advising and the Office of Health Promotion and Wellness provide additional perspective.

    (from left to right) Jay Baglia, Elissa Foster and Kendra Knight developed a short-form course to support students’ capabilities in higher education and give them tools for future success.

    Aubreonna Chamberlain/DePaul University

    Course content includes skills and behaviors taught in the context of communication for success: asking for help, using university resources, engaging in class with peers and professors, and learning academic software. It also touches on more general behaviors like personal awareness, mindfulness, coping practices, a growth mindset, goal setting and project management.

    The demographics of students enrolled in the course vary; some are transfer students looking for support as they navigate a university for the first time. Others are A students who want an extra course in their schedule. Still others are juniors or seniors hoping to gain longer-term life skills to apply to their internships or their professional lives and to find work-life balance.

    Throughout the course, students turn in regular reflection exercises for assessment, and the final assignment asks them to identify three tools they will take with them beyond the course.

    What’s different: One of the challenges in launching the course was distinguishing its goals from DePaul’s Chicago Quarter, which is the first-year precollege experience. Baglia compares the college experience to taking an international vacation: While you might have a guidebook and plan well for the experience beforehand, once you’re in country, you face challenges you didn’t anticipate and may feel overwhelmed.

    Orientation is the guidebook students receive before going abroad, and the Communication Fundamentals for College Success course is their tour guide along the way.

    “I think across the country, universities and college professors are recognizing that scaffolding is really the way to go, particularly with first-generation college students,” Baglia says. “They don’t always have the language or the tools or the support or the conversations at home that prepare them for the strangeness of living on their own [and navigating higher education].”

    A unique facet of the course is that it’s offered between weeks three and seven of the term, starting immediately after the add-drop period concludes and continuing until midterms. This delayed-start structure means the students enrolled in the course are often looking for additional credits to keep their full-time enrollment status, sometimes after dropping a different course.

    The late-start structure also requires patience and trust, because most students register for the course later, after the regular registration period. Baglia will teach it again in spring 2025 and, as of Jan. 10, had only two students registered.

    “It has not been easy convincing the administrators in our college to give it some time … Students have to register for this class [later],” Baglia says.

    The results: Foster, Knight and Baglia used a small grant to study the effects of the intervention and found that a majority of students identified time management and developing a growth mindset as the tools they want to keep working on, with just under half citing self-care and 40 percent writing about classroom engagement.

    In their essays, students talked about mapping out their deadlines for the term or using a digital calendar to stay on top of their schedules. Students also said they were more likely to view challenges as opportunities for growth and to see their own capabilities as still developing, rather than fixed or insufficient.

    The intervention has already spurred similar innovation within the university, with the College of Science and Health offering a similar life skills development course.

    Course organizers don’t have plans to scale the course at present, but they are considering ways to collect more data from participants after they finish the course and compare that to the more general university population.


    This article has been updated to clarify the course is housed in the College of Communication.

    Source link

  • Three questions for UVA’s Anne Trumbore

    Three questions for UVA’s Anne Trumbore

    The Teacher in the Machine: A Human History of Education Technology (Princeton University Press) will be published this May. I was lucky enough to receive an advance copy. It is too early to interview the author, the University of Virginia’s Anne Trumbore, about the book, as you will not be able to get your hands on it for a few months. I can’t help myself, though.

    Like Anne, I am also a practitioner-scholar, working in and writing about the intersection of technology, learning and higher education change. While The Teacher in the Machine covers much of the same ground as my first co-authored book, Learning Innovation and the Future of Higher Education (JHUP, 2020), I still learned a great deal from reading Anne’s book.

    As the publication of The Teacher in the Machine approaches, I’ll share a full (highly positive) review. Until then, to help build anticipation about the book’s launch and also get to know its author better, I thought the best place to start is a Q&A.

    Q: Tell us about your current role at Darden (UVA) and the education and career path that you have followed.

    A: I’m currently the chief digital learning officer, where I lead a team that designs, develops and delivers education that enables career mobility for learners at all ages and stages. I arrived at this stage through a pretty circuitous path that included time as a journalist and obituary writer, a copywriter for motion picture advertising, a writing teacher at SFSU and Stanford, and then a lateral hop into ed tech. My education path was somewhat more straightforward: straight to undergrad from high school. But my graduate degrees were driven by career aspirations and occurred decades apart. (I resemble a lot of the learners we are helping now in that regard.)

    Oddly enough, my “unmarketable” undergrad degree in semiotics and my graduate work in writing and teaching writing got me hired full-time at Stanford, working on an adaptive grammar program that provided asynchronous personalized instruction and creating curriculum for and teaching at Stanford Online High School. That led to a role on the early team at Coursera, with a focus on working with university professors using (and developing) online peer review, which morphed into a role on the founding team at NovoEd, developing designs for social and project-based learning at scale. Then I pivoted back to higher ed with a role at Wharton, where I established Wharton Online.

    The questions I was trying to answer there, most of which revolved around maximizing the effectiveness of, and revenue for, online education in business topics, led me to UVA. Its Darden School of Business had just received a transformational gift to establish the Sands Institute for Lifelong Learning, which is where I saw the puck going at the intersection of higher education and technology. I earned an education doctorate at Penn GSE during my time at Wharton because the questions I began asking about what we were doing and why were not easily answered within the confines of the business school.

    Q: In The Teacher in the Machine, you tell the story of the birth and evolution of massive open online courses within the context of the history of educational technology. What are the lessons from the history of ed tech that we in higher education should absorb as we make decisions about the future of online education and AI for teaching and learning?

    A: The main takeaway is that innovation in ed tech is particularly reliant upon ignorance of its history, for two main reasons: Innovation drives adoption (no one wants to invest in an “old” idea), and the idea of using technology to make education both more efficient and more democratic consolidates power in the hands of the disrupters, who are almost always businessmen and scientists educated at the most elite universities in the world.

    I believe that once universities understand the history of ed tech and its intertwined beginnings with artificial intelligence, they can be more clear-eyed about their business partnerships with ed-tech companies and their purchasing decisions, which are usually not driven by evidence-backed research. We also have the opportunity to be more thoughtful about our motives in distributing education “to the masses” and ask ourselves who this strategy benefits and why it is attractive to venture capital.

    Finally—and this is a point you and a few others have made extremely well—it’s incumbent upon higher ed institutions to be informed about the innovation narrative that gets circulated, which enriches the same set of people and institutions over and over again. I have to believe that if we have a greater understanding of the history and the motives of the major players in ed tech, we can also ask better questions of our ed-tech providers and partners so that we can create educational experiences that provide more returns to learners than ed-tech investors.

    Q: You are not only a student of higher education and digital learning, you are also a practitioner. How did your role throughout your career as a participant in the creation and development of MOOCs and other online learning initiatives impact how you write about that history in The Teacher in the Machine?

    A: The closest metaphor I can think of is that it felt like putting together a 2,000-piece puzzle of a photograph I was in: I knew what it would look like, but I had to break down and examine all the pieces and then reassemble. The questions I asked of the events were less about what happened and more about why did it happen that particular way? What were the conditions that produced our actions? Living the history also provided opportunities to fill in the gaps that some more traditional records leave out.

    I’m thinking especially of the daily minor decisions that were made under pressure and drove the history in unplanned directions, as well as the personalities of the main players. Experiencing these elements of the story and being able to report firsthand is one of the benefits of being in the circus ring instead of in the seats. Another is that you can directly see the audience, which provides a different lens than a more traditional history. Hopefully, the narrative benefited from the inside-out point of view.

    Source link