  • Policy Proposals Lack Clarity About How to Evaluate Graduates’ Additional Degrees
    Title: Accounting for Additional Credentials in Postsecondary Earnings Data

    Authors: Jason Delisle, Jason Cohn, and Bryan Cook

    Source: The Urban Institute

    As policymakers across both parties consider how to evaluate postsecondary outcomes and earnings data, the authors of a new brief from the Urban Institute pose a major question: How should students who earn multiple credentials be included in data collection for the college that awarded their first degree?

    For example, should the earnings of a master’s degree recipient be included in the data for the institution where they earned their bachelor’s degree? Additionally, students who finish an associate degree at a community college are likely to earn higher wages when they complete a bachelor’s degree at another institution. Thus, multiple perspectives need to be considered to help both policymakers and institutions understand, interpret, and treat additional degrees earned.

    Additional key findings include:

    Earnings Data and Accountability Policies

    Many legislative proposals would expand the use of earnings data to impose further accountability measures and federal aid restrictions. For example, the House Republicans’ College Cost Reduction Act, proposed in 2024, would put institutions at risk of losing funding if they have low student loan repayment rates. The brief’s authors note that the bill does not indicate whether students who earn additional credentials should be included in the cohort of the institution where they completed their first credential.

    The recently implemented gainful employment rule from the Biden administration is explicit in its inclusion of those who earn additional credentials. Under the rule, students who earn an additional degree are included in the calculations for both the program that awarded their most recent degree and the program that awarded their first credential.

    How Much Do Additional Credentials Affect Earnings Data?

    Determining how much additional credentials affect wages and earnings for different programs is difficult. The first earnings measurement—the first year after students leave school—is usually too early to include additional income information from a second credential.

    Although the full data picture is lacking, a contrast between first- and fifth-year earnings suggests that the number of students earning additional degrees may be very high for some programs. For example, students who earn associate degrees in liberal arts and general studies often see some of their quickest earnings increases during these first five years. A potential explanation is that many of these students go on to complete a bachelor’s degree program at a four-year institution.

    Policy Implications: How Should Earnings Data Approach Subsequent Credentials?

    In general, it seems that many policymakers have not focused on this complicated question of students who earn additional degrees. However, policy and data professionals may benefit from excluding students who earn additional credentials to more closely measure programs’ return on investment. This can be especially helpful when examining the costs of bachelor’s programs and their subsequent earnings benchmarks, by excluding additional earnings premiums generated from master’s programs.

    Additionally, excluding students who earn additional credentials may be particularly valuable to students in making consumer and financial aid decisions if the payoff from a degree is extremely different depending on whether students pursue an additional credential.

    However, some programs are intended to prepare students for an additional degree, and excluding data for students who earn another degree would mean excluding most graduates and paint a misleading picture.

    To read the full report from the Urban Institute, click here.

    —Austin Freeman


  • FIRE statement on legislative proposals to regulate artificial intelligence
    As the 2025 legislative calendar begins, FIRE is preparing for lawmakers at both the state and federal levels to introduce a deluge of bills targeting artificial intelligence. 

    The First Amendment applies to artificial intelligence just as it does to other expressive technologies. Like the printing press, the camera, and the internet, AI can be used as an expressive tool — a technological advance that helps us communicate with one another and generate knowledge. As FIRE Executive Vice President Nico Perrino argued in The Los Angeles Times last month: “The Constitution shouldn’t be rewritten for every new communications technology.” 

    We again remind legislators that existing laws — cabined by the narrow, well-defined exceptions to the First Amendment’s broad protection — already address the vast majority of harms legislatures may seek to counter in the coming year. Laws prohibiting fraud, forgery, discrimination, and defamation, for example, apply regardless of how the unlawful activity is ultimately carried out. Liability for unlawful acts properly falls on the perpetrator of those acts, not the informational or communicative tools they use. 

    Some legislative initiatives seeking to govern the use of AI raise familiar First Amendment problems. For example, regulatory proposals that would require “watermarks” on artwork created by AI or mandate disclaimers on content generated by AI violate the First Amendment by compelling speech. FIRE has argued against these kinds of efforts to regulate the use of AI, and we will continue to do so — just as we have fought against government attempts to compel speech in school, on campus, or online.

    Rather than compelling disclaimers or imposing content-based restrictions on AI-generated expression, legislators should remember the law already protects against defamation, fraud, and other illegal conduct. 

    Lawmakers have also sought to regulate or even criminalize the use of AI-generated content in election-related communications. But courts have been wary of legislative attempts to control AI’s output when political speech is implicated. Following a First Amendment challenge from a satirist who uses AI to generate parodies of political figures, for example, a federal district court recently enjoined a California statute aimed at “deepfakes” that regulated “materially deceptive” election-related content. 

    Content-based restrictions like California’s law require strict judicial scrutiny, no matter how the expression is created. As the federal court noted, the constitutional protections “safeguarding the people’s right to criticize government and government officials apply even in the new technological age when media may be digitally altered.” So while lawmakers might harbor “a well-founded fear of a digitally manipulated media landscape,” the court explained, “this fear does not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment.” 


    Other legislative proposals threaten the First Amendment by imposing burdens directly on the developers of AI models. In the coming months, for example, Texas lawmakers will consider the Texas Responsible Artificial Intelligence Governance Act, or TRAIGA, a sweeping bill that would impose liability on developers, distributors, and deployers of AI systems that may introduce a risk of “algorithmic discrimination,” including by private actors. The bill vests broad regulatory authority in a newly created state “Artificial Intelligence Council” and imposes steep compliance costs. TRAIGA compels developers to publish regular risk reports, a requirement that will raise First Amendment concerns when applied to an AI model’s expressive output or the use of AI as a tool to facilitate protected expression. Last year, a federal court held a similar reporting requirement imposed on social media platforms was likely unconstitutional.

    TRAIGA’s provisions incentivize AI developers to handicap their models to avoid any possibility of offering recommendations that some might deem discriminatory or simply offensive — even if doing so curtails the models’ usefulness or capabilities. Addressing unlawful discrimination is an important legislative aim, and lawmakers are obligated to ensure we all benefit from the equal protection of the law. At the same time, our decades of work defending student and faculty rights has left FIRE all too familiar with the chilling effect on speech that results from expansive or arbitrary interpretations of anti-discrimination law on campus. We will oppose poorly crafted legislative efforts that would functionally build the same chill into artificial intelligence systems.

    The sprawling reach of legislative proposals like TRAIGA runs headlong into the expressive rights of the people building and using AI models. Rather than compelling disclaimers or imposing content-based restrictions on AI-generated expression, legislators should remember the law already protects against defamation, fraud, and other illegal conduct. And rather than preemptively saddling developers with broad liability for an AI model’s possible output, lawmakers must instead examine the recourse existing laws already provide victims of discrimination against those who would use AI — or any other communicative tool — to unlawful ends.

    FIRE will have more to say on the First Amendment threats presented by legislative proposals regarding AI in the weeks and months to come.