Tag: Problems

  • New Government, familiar problems – By Chris Husbands


    The higher education sector had high hopes of a new government last July. Early messaging from ministers suggested those hopes were justified. The Guardian quoted Peter Kyle, the Science Secretary, declaring an ‘end to the war on universities’. Speaking to the Commons in September 2024, the Education Secretary Bridget Phillipson said that ‘the last Government…use[d] our world-leading sector as a political football, talking down institutions and watching on as the situation became…desperate. I [want to]…return universities to being the engines of growth and opportunity’. In November, she announced a rise – albeit for just one year in the first instance – in the undergraduate tuition fee, with the prospect of alleviating pressure on higher education budgets.

    Ten months on, those hopes look tarnished as financial, political and policy challenges mount. The scale of the higher education funding challenge is deepening, it seems, by the week. The OfS has reported that four in ten universities expect to be in deficit this year. Restructuring programmes are underway in scores of universities, with some institutions on their second, third or even fourth round of savings. The post-study graduate visa, an important lifeline for international student recruitment, appears to be under threat.

    There are eerie echoes of headlines and comments under the last government. The Daily Telegraph declared that a ‘record number of universities [are] in deficit’. The Times claimed that universities that appeared to report relatively poor progression to graduate-level jobs were to be ‘named and shamed’. Following the success of Reform UK in local elections, some backbench Labour MPs have been sharply critical of universities: ‘I would close half our universities and turn them into vocational colleges’, wrote the Liverpool MP Dan Carden (BA, London School of Economics, since you ask), whilst Jonathan Hinder, MP for Pendle (MA Oxford) declared himself ‘happy to be bold and say I don’t think we should have anywhere near as many universities and university places’. Philip Augar, who reviewed post-18 education and funding for Theresa May’s Government, wrote in the Financial Times that the ‘English higher education market is broken’ as a result of a ‘failed free market experiment’. It seems terribly familiar: a sector in financial crisis, losing political traction and friends.

    Policy direction appears to be unclear. The English higher education sector is still largely shaped by the coalition government’s policy decisions between 2010 and 2015. Its key design principles include: uncapped student demand, since number controls were abolished in 2013; assumed cross-subsidies across and between activity streams, allowing for institutional flexibility; access to private capital markets, since HEFCE capital funding was removed in 2011; diverse missions but largely homogenous delivery models, based around traditional terms and full-time, three-year undergraduate provision; and jealously protected institutional autonomy. Familiar though these principles are in higher education policy, some are in truth relatively recent, and they are creating tensions between what the nation wants from its university system, what universities can offer and what the government and others are willing to pay for.

    Moreover, the sector we have in 2025 is not the sector which the 2017 Higher Education and Research Act (HERA) envisaged: HERA was expected to significantly re-shape the sector. The government’s impact assessment of HERA suggested that there would be in the order of 800 HE providers by the mid-2020s. This did not happen, though the impact of private capital, often channelled through established institutions and now rapidly growing for-profit providers, should not be underestimated as a longer-term transformative force in the sector.

    We are expecting both a three-year comprehensive spending review and a post-16 White Paper in a couple of months’ time. In my 2024 HEPI paper, ‘Four Futures’, I sketched out possible scenarios for a sector facing intense challenges. The near-frozen undergraduate fee was reducing the unit of resource for undergraduate teaching as costs rose. Undergraduate demand seemed to be softening amongst (especially) disadvantaged eighteen-year-olds. International student demand was volatile and subject to political changes in visa regulations. The structural deficit on research funding was deepening. ‘Four Futures’ outlined four scenarios, summarised in Table 1.

    Of course, we all want a mixture of cost control, thriving universities, regional growth and research excellence, but it is difficult to have all of them. Governments and universities set priorities based on limited resources, so there are choices to be made and trade-offs to be confronted for both policymakers and institutional leaders. 

    Government needs to make decisions about universities in the context of competing and changing policy imperatives. It needs to balance restoring government finances, allocating resources to other needy sectors, securing economic growth, and, more obviously important than a year ago, protecting sovereign intellectual property assets and growing defence-related R&D. The Secretary of State’s letter to Vice-Chancellors in the autumn identified growth, engagement with place, teaching excellence, widening participation and securing efficiencies, but did not unpick the tensions between them. Unpicking those tensions depends on articulating a stronger vision for higher education, given the Government’s priorities and resources and the economic challenges facing institutions, and it is a task for the forthcoming White Paper.

    But there are urgent choices too for institutions, and in many universities those need to be made quickly. Institutional and sector efficiencies are vital, and a key theme of the UUK Carrington Review, but they need to be considered in the light of sustainable operating models for both academic delivery and professional services. Institutions need a clearly articulated value proposition, communicated strongly and effectively and capable of driving the operating model. In the past, too many universities have tried to do too many things – and with resources scarce, the choices cannot be ducked. That means the choices facing government and those facing individual institutions are linked. If a core strength of the English system lies in its diversity and its distributed excellence, individual institutions need to think about their place in, and responsibilities to, the wider HE system. For a sector characterised by intense competition, that is a profound cultural shift, notwithstanding the economic and legal challenges of collaboration.

    The higher education sector we have now is not the sector we have always had, and it will not be the sector we always have. How the sector collectively, and institutions individually, confront these choices is a test for policymakers and institutional leaders alike.


  • Solving the Right Problems


    I write this post to e-Literate readers, Empirical Educator Project (EEP) participants, and 1EdTech members. You should know each other. But you don’t. We should all be working on solving problems together. But we aren’t.

    Not yet, anyway. Now that EEP is part of 1EdTech, I’m writing to ask you to come together at our Learning Impact conference in Indianapolis, the first week in June, to take on this work together.

    1EdTech has the potential to enable a massive learning impact because we have proven that we can change the way the entire EdTech ecosystem works together. (I recently posted a dialogue with Anthropic Claude about this topic.) I highlight the word “potential” because, as a community-driven organization, we only take on the challenges that the community decides to take on together. And the 1EdTech community has not had many e-Literate readers and EEP participants who can help us identify the most impactful challenges we could take on together.

    On the morning of Monday, June 2nd, we’ll have an EEP mini-conference. For those of you who have been to EEP before, the general idea will be familiar but the emphasis will be different. EEP didn’t have a strong engine to drive change. 1EdTech does. So the EEP mini-conference will be a series of talks in which the speakers propose ideas about what 1EdTech should be working on, based on their potential learning impact. If you want to come just for the day, you can register for the mini-conference for $350 and participate in the opening events as well. But I invite you to register for the full conference. If you scan the agenda, you’ll see sessions throughout the conference that will interest e-Literate readers and EEP participants.

    EEP will become Learning Impact Labs

    We’re building something bigger. Nesting EEP inside Learning Impact is just a start. Our larger goal is to create an umbrella of educational impact-focused proposals for work that 1EdTech can take on now and a series of exploratory projects for us to understand work that we may want to take on soon. You may recall my AI Learning Design Assistant (ALDA) project, for example. That experiment now lives inside 1EdTech. As a community, we will be working to become more proactive, anticipating needs and opportunities that are directly driven by our collective understanding of what works, what is needed, and what is coming. We will have ideas. But we need yours.

    Come. Join us. If you’ve been a fellow traveler with me but haven’t seen a place for you at 1EdTech, I want you to know we have a seat with your name on it. If you’re a 1EdTech member who has colleagues more focused on the education (or the EdTech product design) side, let them know they can have a voice in 1EdTech.

    Let us, finally, raise the barn together.

    Come.


  • How to incorporate real-world connections into any subject area



    In my classroom, I frequently hear students ask: “How is this relevant to the real world?” or “Why should I care? I will never use this.” This highlights the need for educators to emphasize real-world applications across all subjects.

    As an educator, I consistently strive to illustrate the practical applications of geography beyond the classroom walls. By incorporating real-world experiences and addressing problems, I aim to engage students and encourage them to devise solutions to these challenges. For instance, when discussing natural resources in geography, I pose a thought-provoking question: “What is something you cannot live without?” As students investigate everyday items, I emphasize that most of these products originate from nature at some point, prompting a discussion on the “true cost” of these goods.

    Throughout the unit, I invite a guest speaker who shares insights about their job duties and provides information related to environmental issues. This interaction helps students connect the dots, understanding that the products they use have origins in distant places, such as the Amazon rainforest. Despite it being thousands of miles away, I challenge students to consider why they should care.

    As students engage in a simulation of the rainforest, they begin to comprehend the alarming reality of its destruction, driven by the increasing demand for precious resources such as medicines, fruits, and beef. By the conclusion of the unit, students participate in a debate, using their research skills to argue for or against deforestation and exploring its implications for resources and products in relation to their daily lives. This approach not only enhances their understanding of geography but also creates a real-world connection that fosters a sense of responsibility toward the environment.

    Creating a foundation to build upon

    Engaging in academic discussions and navigating through academic content is essential for fostering a critical thinking mentality among students. However, it is often observed that this learning does not progress to deeper levels of thought. Establishing a solid foundation is crucial before advancing toward more meaningful and complex ideas.

    For instance, in our geography unit on urban sprawl, we start by understanding the various components related to urban sprawl. As we delve into the topic, I emphasize the importance of connecting our lessons to the local community. I pose the question: How can we identify an issue within the town of Lexington and address it while ensuring we do not contribute to urban sprawl?  Without a comprehensive foundation, students struggle to elevate their thinking to more sophisticated levels. Therefore, it is imperative to build this groundwork to enable students to engage in higher-order thinking effectively.

    Interdisciplinary approaches

    Incorporating an interdisciplinary approach can significantly enrich the learning process for students. When students recognize the connections between different subjects, they gain a deeper appreciation for the relevance of their education. According to Moser et al. (2019), “Integrative teaching benefits middle-level learners as it potentially increases student engagement, motivation, and achievement. It provides learners with the opportunity to synthesize knowledge by exploring topics and ideas through multiple lenses.” This method emphasizes the importance of making meaningful connections that deepen students’ comprehension. As they engage with the content from different perspectives, students will apply their learning in real-world contexts.

    For instance, principles from science can be linked to literature they are studying in English class. Similarly, concepts from physics can be applied to understand advancements in medical studies. By fostering these connections, students are encouraged to think critically and appreciate the interrelated nature of knowledge.

    Incorporating technology within classrooms

    In today’s digital world, where technology is readily accessible, it is crucial for classroom learning to align with current technological trends and innovations. Educators who do not incorporate technology into their teaching practices are missing an opportunity to enhance student learning experiences. In my class, I have students use Google Earth to explore the area we previously outlined. Each student selects a specific region to concentrate on during their analysis. This process involves identifying areas that require improvement and discussing how improving them can benefit the community. Additionally, we examine how these changes can help limit urban sprawl and reduce traffic congestion.

    We have moved beyond the era of relying solely on paper copies and worksheets; the focus now is on adapting to change and providing the best opportunities for students to express themselves and expand their knowledge. As Levin & Wadmany (2006) observe, “some teachers find that technology encourages greater student-centeredness, greater openness toward multiple perspectives on problems, and greater willingness to experiment in their teaching.” This highlights the necessity for teachers to evolve into facilitators of learning, acting as guides who support students in taking ownership of their learning.

    Strategies for implementation

    1. Start with the “why”: Teachers should critically consider the significance of their instructional approaches: Why is this method or content essential for students’ learning? Having a clear vision of the desired learning outcomes enables educators to plan effectively and choose appropriate instructional strategies. This intentionality is crucial.

    2. Use authentic materials: Incorporating meaningful texts that involve real-world concepts can significantly enhance student engagement. For instance, in a social studies class, discussing renewable energy can lead to academic discussions or projects in which students research local initiatives in their community.

    3. Promote critical thinking: Encourage students to engage in critical thinking by asking open-ended questions, creating opportunities for debates to challenge their ideas, and urging them to articulate and defend their viewpoints.

    4. Encourage collaboration: Students excel in collaborative learning environments, such as group projects and peer reviews, where they can engage with their classmates. These activities allow them to learn from each other and consider different perspectives.

    5. Provide ongoing feedback: Constructive feedback is essential for helping students identify their strengths and areas for improvement. Through planned check-ins, teachers can tailor their instruction to ensure they are meeting the academic needs of individual students.

    References

    Levin, T., & Wadmany, R. (2006). Teachers’ Beliefs and Practices in Technology-based Classrooms: A Developmental View. Journal of Research on Technology in Education, 39(2), 157–181. https://doi.org/10.1080/15391523.2006.10782478

    Moser, K. M., Ivy, J., & Hopper, P. F. (2019). Rethinking content teaching at the middle level: An interdisciplinary approach. Middle School Journal, 50(2), 17–27. https://doi.org/10.1080/00940771.2019.1576579



  • Wave of state-level AI bills raise First Amendment problems


    AI is enhancing our ability to communicate, much like the printing press and the internet did in the past. And lawmakers nationwide are rushing to regulate its use, introducing hundreds of bills in states across the country. Unfortunately, many AI bills we’ve reviewed would violate the First Amendment — just as FIRE warned against last month. It’s worth repeating that First Amendment doctrine does not reset itself after each technological advance. It protects speech created or modified with artificial intelligence software just as it does speech created without it.

    On the flip side, AI’s involvement doesn’t change the illegality of acts already forbidden by existing law. There are some narrow, well-defined categories of speech not protected by the First Amendment — such as fraud, defamation, and speech integral to criminal conduct — that states can and do already restrict. In that sense, the use of AI is already regulated, and policymakers should first look to enforcement of those existing laws to address their concerns with AI. Further restrictions on speech are both unnecessary and likely to face serious First Amendment problems, which I detail below.

    Constitutional background: Watermarking and other compelled disclosure of AI use

    We’re seeing a lot of AI legislation that would require a speaker to disclose their use of AI to generate or modify text, images, audio, or video. Generally, this includes requiring watermarks on images created with AI, mandating disclaimers in audio and video generated with AI, and forcing developers to add metadata to images created with their software. 
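    To make the metadata idea concrete, here is a minimal sketch of what an embedded provenance disclosure might look like, assuming PNG text chunks as the carrier and using Python’s Pillow library. The field names are invented for illustration; none of the bills discussed here prescribes a particular key or format.

    ```python
    # Minimal sketch of an embedded AI-use disclosure, assuming PNG text
    # chunks as the carrier. The field names below are hypothetical; no
    # bill discussed here prescribes a specific key or format.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def stamp_ai_disclosure(src_path: str, dst_path: str) -> None:
        """Re-save an image with text chunks disclosing AI involvement."""
        img = Image.open(src_path)
        meta = PngInfo()
        meta.add_text("ai-generated", "true")            # hypothetical field
        meta.add_text("generator", "example-model-v1")   # hypothetical field
        img.save(dst_path, pnginfo=meta)

    def read_disclosure(path: str) -> dict:
        """Return any text chunks found in a PNG (empty dict if none)."""
        img = Image.open(path)
        return dict(getattr(img, "text", {}))
    ```

    Note that simply re-saving the file without the pnginfo argument drops the chunks entirely, which previews a point made below: metadata-style disclosures are trivial for bad actors to strip, while burdening everyone else.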

    Many of these bills violate the First Amendment by compelling speech. Government-compelled speech—whether that speech is an opinion, a fact, or even just metadata—is generally anathema to the First Amendment. That’s for good reason: compelled speech undermines everyone’s right to conscience and fundamental autonomy to control their own expression.

    To illustrate: Last year, in X Corp. v. Bonta, the U.S. Court of Appeals for the Ninth Circuit  reviewed a California law that required social media companies to post and report information about their content moderation practices. FIRE filed an amicus curiae — “friend of the court” — brief in that case, arguing the posting and reporting requirements unconstitutionally compel social media companies to speak about topics on which they’d like to remain silent. The Ninth Circuit agreed, holding the law was likely unconstitutional. While acknowledging the state had an interest in providing transparency, the court reaffirmed that “even ‘undeniably admirable goals’ ‘must yield’ when they ‘collide with the . . . Constitution.’”

    There are (limited) exceptions to the principle that the state cannot compel speech. In some narrow circumstances, the government may compel the disclosure of information. For example, for speech that proposes a commercial transaction, the government may require disclosure of uncontroversial, purely factual information to prevent consumer deception. (For example, under this principle, the D.C. Circuit allowed federal regulators to require disclosure of country-of-origin information about meat products.) 

    But none of those recognized exceptions would permit the government to mandate blanket disclosure of AI-generated or modified speech. States seeking to require such disclosures will face heightened scrutiny beyond what is required for commercial speech.

    AI disclosure and watermarking bills

    This year, we’re also seeing lawmakers introduce many bills that require certain disclosures whenever speakers use AI to create or modify content, regardless of the nature of the content. These bills include Washington’s HB 1170, Massachusetts’s HD 1861, New York’s SB 934, and Texas’s SB 668.

    At a minimum, the First Amendment requires these kinds of regulations to be tailored to address a particular state interest. But these bills are not aimed at any specific problem at all, much less tailored to one; instead, they require nearly all AI-generated media to bear a digital disclaimer.

    For example, FIRE recently testified against Washington’s HB 1170, which requires covered providers of AI to include in any AI-generated images, videos, or audio a latent disclosure detectable by an AI detection tool that the bill also requires developers to offer.

    Of course, developers and users can choose to disclose their use of AI voluntarily. But bills like HB 1170 force disclosure in constitutionally suspect ways because they aren’t aimed at furthering any particular governmental interest and they burden a wide range of speech.


    In fact, if the government’s goal is addressing fraud or other unlawful deception, there are ways these disclosures could make things worse. First, the disclosure requirement will taint the speech of non-malicious AI users by fostering the false impression that their speech is deceptive, even if it isn’t. Second, bad actors can and will find ways around the disclosure mandate — including using AI tools in other states or countries, or just creating photorealistic content through other means. False content produced by bad actors will then have a much greater imprimatur of legitimacy than it would in a world without the disclosures required by this bill, because people will assume that content lacking the mandated disclosure was not created with AI.

    Constitutional background: Categorical ‘deepfake’ regulations

    A handful of bills introduced this year seek to categorically ban “deepfakes.” In other words, these bills would make it unlawful to create or share AI-generated content depicting someone saying or doing something that the person did not in reality say or do.

    Categorical exceptions to the First Amendment exist, but these exceptions are few, narrow, and carefully defined. Take, for example, false or misleading speech. There is no general First Amendment exception for misinformation or disinformation or other false speech. Such an exception would be easily abused to suppress dissent and criticism.

    There are, however, narrow exceptions for deceptive speech that constitutes fraud, defamation, or appropriation. In the case of fraud, the government can impose liability on speakers who knowingly make factual misrepresentations to obtain money or some other material benefit. For defamation, the government can impose liability for false, derogatory speech made with the requisite intent to harm another’s reputation. For appropriation, the government can impose liability for using another person’s name or likeness without permission, for commercial purposes.


    Like an email message or social media post, AI-generated content can fall under one of these categories of unprotected speech, but the Supreme Court has never recognized a categorical exception for creating photorealistic images or video of another person. Context always matters.

    Although some people will use AI tools to produce unlawful or unprotected speech, the Court has never permitted the government to institute a broad technological ban that would stifle protected speech on the grounds that the technology has a potential for misuse. Instead, the government must tailor its regulation to the problem it’s trying to solve — and even then, the regulation will still fail judicial scrutiny if it burdens too much protected speech.

    AI-generated content has a wide array of potential applications, spanning from political commentary and parody to art, entertainment, education, and outreach. Users have deployed AI technology to create political commentary, like the viral deepfake of Mark Zuckerberg discussing his control over user data — and for parody, as seen in the Donald Trump pizza commercial and the TikTok account dedicated to satirizing Tom Cruise. In the realm of art and entertainment, the Dalí Museum used deepfake technology to bring the artist back to life, and the TV series “The Mandalorian” recreated a young Luke Skywalker. Deepfakes have even been used for education and outreach, with a deepfake of David Beckham raising awareness about malaria.

    These examples should not be taken to suggest that AI is always a positive force for shaping public discourse. It’s not. But not only will categorical bans on deepfakes restrict protected expression such as the examples above, they’ll face — and are highly unlikely to survive — the strictest judicial scrutiny under the First Amendment.

    Categorical deepfake prohibition bills

    Bills with categorical deepfake prohibitions include North Dakota’s HB 1320 and Kentucky’s HB 21.

    North Dakota’s HB 1320, a failed bill that FIRE opposed, is a clear example of what would have been an unconstitutional categorical ban on deepfakes. The bill would have made it a misdemeanor to “intentionally produce, possess, distribute, promote, advertise, sell, exhibit, broadcast, or transmit” a deepfake without the consent of the person depicted. It defined a deepfake as any digitally-altered or AI-created “video or audio recording, motion picture film, electronic image, or photograph” that deceptively depicts something that did not occur in reality and includes the digitally-altered or AI-created voice or image of a person.

    The bill was overly broad and would have criminalized vast amounts of protected speech. It was so broad that it would have been like making it illegal to paint a realistic image of a busy public park without the consent of everyone depicted. Why make it illegal for that same painter to bring their realistic painting to life with AI technology?


    HB 1320 would have prohibited the creation and distribution of deepfakes regardless of whether they cause actual harm. But, as noted, there isn’t a categorical exception to the First Amendment for false speech, and deceptive speech that causes specific, targeted harm to individuals is already punishable under narrowly defined First Amendment exceptions. If, for example, someone creates and distributes to other people a deepfake showing someone doing something they didn’t in reality do, thus effectively serving as a false statement of fact, the depicted individual could sue for defamation if they suffered reputational harm. But this doesn’t require a new law.

    Even if HB 1320 were limited to defamatory speech, enacting new, technology-specific laws where existing, generally applicable laws already suffice risks sowing confusion that will ultimately chill protected speech. Such technology-specific laws are also easily rendered obsolete and ineffective by rapidly advancing technology.

    HB 1320’s overreach clashed with clear First Amendment protections. Fortunately, the bill failed to pass.

    Constitutional background: Election-related AI regulations

    Another large bucket of bills that we’re seeing would criminalize or create civil liability for the use of AI-generated content in election-related communications, without regard to whether the content is actually defamatory.

    Like categorical bans on AI, regulations of political speech have serious difficulty passing constitutional muster. Political speech receives strong First Amendment protection and the Supreme Court has recognized it as essential for our system of government: “Discussion of public issues and debate on the qualifications of candidates are integral to the operation of the system of government established by our Constitution.”


    As noted above, the First Amendment protects a great deal of false speech, so these regulations will be subject to strict scrutiny when challenged in court. This means the government must prove the law is necessary to serve a compelling state interest and is narrowly tailored to achieving that interest. Narrow tailoring in strict scrutiny requires that the state meet its interest using the least speech-restrictive means.

    This high bar protects the American people from poorly tailored regulations of political speech that chill vital forms of political discourse, including satire and parody. Vigorously protecting free expression ensures robust democratic debate, which can counter deceptive speech more effectively than any legislation.

    Under strict scrutiny, prohibitions or restrictions on AI-modified or generated media relating to elections will face an uphill battle. No elections in the United States have been decided, or even materially impacted, by any AI-generated media, so the threat — and the government’s interest in addressing it — remains hypothetical. Even if that connection were established, many of the current bills are not narrowly tailored; they would burden all kinds of AI-generated political speech that poses no threat to elections. Meanwhile, laws against defamation already provide an alternative means for candidates to address deliberate lies that harm them through reputational damage.

    Already, a court has blocked one of these laws on First Amendment grounds. In a First Amendment challenge from a satirist who uses AI to generate parodies of political figures, a federal court recently applied strict scrutiny and blocked a California statute aimed at “deepfakes” that regulated “materially deceptive” election-related content.

    Election-related AI bills

    Unfortunately, many states have jumped on the bandwagon to regulate AI-generated media relating to elections. In December, I wrote about two bills in Texas — HB 556 and HB 228 — that would criminalize AI-generated content related to elections. Other bills now include Alaska’s SB 2, Arkansas’s HB 1041, Illinois’s SB 150, Maryland’s HB 525, Massachusetts’s HD 3373, Mississippi’s SB 2642, Missouri’s HB 673, Montana’s SB 25, Nebraska’s LB 615, New York’s A 235, South Carolina’s H 3517, Vermont’s S 23, and Virginia’s SB 775.

    For example, S 23, a Vermont bill, bans a person from seeking to “publish, communicate, or otherwise distribute a synthetic media message that the person knows or should have known is a deceptive and fraudulent synthetic media of a candidate on the ballot.” According to the bill, synthetic media means content that creates “a realistic but false representation” of a candidate created or manipulated with “the use of digital technology, including artificial intelligence.”

    Under this bill (and many others like it), if someone merely reposted a viral AI-generated meme of a presidential candidate that portrayed that candidate “saying or doing something that did not occur,” the candidate could sue the reposter to block them from sharing it further, and the reposter could face a substantial fine should the state pursue the case further. This would greatly burden private citizens’ political speech, and would burden candidates’ speech by giving political opponents a weapon to wield against each other during campaign season. 

    Because no reliable technology exists to detect whether media has been produced by AI, candidates can easily weaponize these laws to challenge all campaign-related media that they simply do not like. To cast a serious chill over electoral discourse, a motivated candidate need only file a bevy of lawsuits or complaints that raise the cost of speaking out to an unaffordable level.

    Instead of voter outreach, political campaigning would turn into lawfare.

    Concluding Thoughts

    That’s a quick round-up of the AI-related legislation I’m seeing at the moment and how it impacts speech. We’ll keep you posted!




  • Perplexing Problems in ACPA Student Technology Infographic – MistakenGoal.com


    I’ve whined about bad infographics and I try to avoid complaining about their continuing proliferation.  But I can’t bite my tongue about this ACPA infographic purporting to show information about technology usage by undergraduate students.  It’s bad not just because it’s misrepresenting information but because it’s doing so in the specific context of making a call for quality research and leadership in higher education.

    There are some serious problems with the layout and structure of the infographic but let’s focus on the larger issues of data quality and (mis)representation.  I’ve labeled the three major sections of this infographic in the image to the right and I’ll use those numbers below to discuss each section.

    Before I dive into the specific sections, however, I have to ask: Why aren’t the sources cited on the infographic? They’re listed on the ACPA president’s blog post (and perhaps other places) but it’s perplexing that the authors of this document didn’t think it important to credit their sources in their image.

    Section 1: Student use of technology in social interactions and on mobile devices

    The primary problem with this section is that it uses this Noel-Levitz report as its sole source of information and generalizes way beyond the bounds of that source. The report is based on a phone survey of “2,018 college-bound high school juniors and seniors (p. 2)”, but that limitation is completely lost in this infographic. If this infographic is supposed to be about all U.S. undergraduate students, it’s inappropriate to generalize from a survey of high school students and misleading to project their behaviors and desires directly onto undergraduate students. For example, just over half (51.1%) of all undergraduate students are 21 years old or younger (source), so it’s problematic to assume that the nearly half of college students who are over 21 exhibit the same behaviors and desires as high school students.

    I can’t help but also note just how bad the visual display of information is in the “social interactions” part of this infographic.  The three proportionally-sized rectangles placed immediately next to one another make the entire thing appear to be one horizontal stacked bar when in fact they are three independent values unrelated to one another. This is very misleading!

    Section 2: Cyberbullying

    It’s laudable to include information about a specific use of technology that is harmful to many students, but like the first section, this one inappropriately and irresponsibly generalizes from a small survey to a large population. In this instance, 276 responses to a survey of students at one university are being presented as representative of all students. Further, the one journal article cited as the source for these data doesn’t provide much information about the survey used to gather them, so we don’t even have many reassurances about the quality of these 276 responses. And although response rate isn’t the only indicator of data quality we should use to evaluate survey data, this particular survey had only a 1.6% response rate, which is quite worrying and makes me wonder whether the data are even representative of the students at that one university.
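    To see why that figure is so alarming, a quick back-of-the-envelope calculation helps (assuming the 1.6% figure is the usual responses-divided-by-invitations ratio): 276 / 0.016 ≈ 17,250, so roughly seventeen thousand students were apparently invited just to yield those 276 responses.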

    Section 3: Information-seeking

    The third section of this infographic is well-labeled and uses a high quality source. I’m not sure how useful it is to present information about high school students in AP classes if we’re interested in the broader undergraduate population, but at least the infographic correctly labels the data so we can make that judgement ourselves. In fact, the impeccable source and labels used in this section make the problems in the other two sections even more perplexing.


    This is all very frustrating given the context of the image in the ACPA president’s blog post that explicitly calls for ACPA to “advance the application of digital technology in student affairs scholarship and practice and to further enhance ACPA’s digital stamp and its role as a leader in higher education in the information age.” Given that context, I don’t know what to make of the problems with this infographic. Is this just a sloppy image hurriedly put together by one or two people who made some embarrassing errors in judgement? Or does this reveal some larger problems with how some student affairs professionals locate, apply, and reference research?*

    * I bet that one problem is that many U.S. college and university administrators, including those in student affairs, automatically think of “college student” as meaning “young undergraduate student at a 4-year non-profit college or university.” It’s completely natural that we all tend to focus on the students on our own campuses, but when discussing the larger context – such as when working on a task force in an international professional organization that includes members from all sectors of higher education – those assumptions need to at least be made clear if not completely set aside. In other words, it’s somewhat understandable if the authors of this image only work with younger students at 4-year institutions, because then some of their generalizations make some sense. They’re still inappropriate and indefensible generalizations, but they’re at least understandable.
