Tag: Scores

  • Liaison Unveils New Intelligent Names Degree Intent Scores, Enhancing Predictive Power and Reach 

    Liaison, a leader in education technology and data-driven solutions, is excited to announce the release of its 2025 Intelligent Names Degree Intent Scores. These advanced scores represent a transformative leap in identifying adult learners nationwide with the highest potential for pursuing a degree. 

    The 2025 Degree Intent Scores are powered by cutting-edge data science, advanced modeling techniques, and insights from a national survey conducted in late 2024. Combined with Liaison’s extensive consumer database of over 260 million Americans, this enhanced model offers unparalleled precision and reach into the adult learner market.

    Recent testing using a national dataset of graduate program applicants showed a 20% improvement in predicting applicant activity within the highest intent band when comparing the new intent scores to the original. Similarly, an analysis of a national dataset of bachelor’s degree seekers found that Liaison’s Bachelor’s Degree Intent model accurately identified 91% of degree seekers under the age of 25 in the top two quintiles. These findings underscore the model’s remarkable accuracy, effectiveness, and value for higher education institutions. 
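
    For readers who want to sanity-check a claim like this against their own data, a capture-rate calculation of the kind described above is straightforward. Below is a minimal sketch, assuming a pandas DataFrame with a model-score column and a known degree-seeker flag; the function and column names are illustrative stand-ins, not Liaison's actual model or schema.

    ```python
    import pandas as pd

    def top_quintile_capture(df: pd.DataFrame,
                             score_col: str = "intent_score",
                             target_col: str = "is_degree_seeker",
                             n_top: int = 2) -> float:
        """Share of known degree seekers whose score lands in the top n quintiles.

        Column names are hypothetical stand-ins, not Liaison's actual schema.
        """
        out = df.copy()
        # Bin scores into five equal-sized bands: 1 = lowest, 5 = highest.
        out["quintile"] = pd.qcut(out[score_col], q=5, labels=[1, 2, 3, 4, 5])
        seekers = out[out[target_col]]
        # The 91% figure reported above corresponds to a return value of ~0.91.
        return (seekers["quintile"].astype(int) > 5 - n_top).mean()
    ```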

    “The 2025 Degree Intent Scores mark a major milestone in our mission to connect educational institutions with adult learners who are ready to take the next step in their academic journeys,” said Dr. Mark Voortman, Chief Data Scientist at Liaison. “By leveraging large-scale data and state-of-the-art modeling techniques, we’ve significantly enhanced our ability to help institutions identify adult learners most likely to pursue degree opportunities in the near future.” 

    The updated scoring model empowers colleges, universities, and other education providers with deeper, data-driven insights to refine recruitment strategies, enhance student engagement, and achieve enrollment goals more effectively. 

    Learn more about Intelligent Names here.

    Source link

  • Preliminary results show essentially flat state test scores

    New Mexico students made tiny gains in literacy and dipped slightly in math proficiency last school year, according to preliminary results released Sept. 19 by the Public Education Department.

    Notably, the results look slightly more favorable than those in data also released Sept. 19 by the Legislative Education Study Committee, which showed 2023-24 scores flat. PED attributed the discrepancy to an error it discovered in the data earlier this week.

    Regardless, the results show that a large majority of the state’s public education students continue to fall short of grade-level proficiency. This suggests that New Mexico will remain close to the bottom nationally in student achievement.

    The PED results do not include detailed demographic or grade-level breakdowns. Those results will be released on Oct. 4. 

    According to the PED, 39 percent of K-8 and 11th-grade students scored proficient or better on state literacy assessments, compared to 38 percent in 2023, and up from 34 percent in 2022. The LESC results showed that the literacy proficiency rate was flat at 38 percent.

    In math, PED data show 23 percent of students in grades 3-8 and 11 proficient. That’s down one percentage point from 2023 and two points from 2022. LESC numbers showed 22 percent proficient in 2024.

    In science, tested in grades 5, 8 and 11, scores were up three percentage points, from 34 percent proficient last year to 37 percent in 2024.

    Source link

  • Changes in AP Scores, 2022 to 2024

    Used to be, with a little work, you could download very detailed data on AP results from the College Board website: For every state, and for every course, you could see performance by ethnicity.  And, if you wanted to dig really deep, you could break out details by private and public schools, and by grade level.  I used to publish the data every couple of years.

    Those days are gone.  The transparency the College Board touts as a value seems to have its limits, and I understand this to some extent: Racists loved to twist the data using single-factor analysis, and that’s not good for a company that is trying to make business inroads with under-represented communities while cloaking its pursuit of revenue as an altruistic push toward access.

    They still publish data, but as I wrote about in my last post, it’s far less detailed: what is easily accessible is fairly sterile, and what is more detailed seems to be structured in a way that suggests the company doesn’t want you digging into it.

    But prompted by a series of tweets from Marco Learning, drawing on research by its founder John Moscatiello, I set about scraping the data off the website, as on this page for 2024, this page for 2023, and this page for 2022. After first making a mistake because of the way the data are formatted and laid out, I’ve done manual checks and double-checks, especially on the exams where the results look way out of whack with what you would expect.
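
    For the curious, the mechanics of that scraping are simple in principle. Here is a minimal sketch, assuming the score distributions sit in ordinary HTML tables; the URLs are placeholders rather than the actual College Board pages, and the real tables may need exactly the kind of manual realignment described above.

    ```python
    import pandas as pd

    # Placeholder URLs -- substitute the actual College Board pages linked above.
    PAGES = {
        2022: "https://example.com/ap-score-distributions-2022",
        2023: "https://example.com/ap-score-distributions-2023",
        2024: "https://example.com/ap-score-distributions-2024",
    }

    frames = []
    for year, url in PAGES.items():
        # read_html returns a DataFrame for every <table> element on the page.
        tables = pd.read_html(url)
        df = tables[0]  # assumes the first table holds the exam-by-score data
        df["year"] = year
        frames.append(df)

    ap = pd.concat(frames, ignore_index=True)
    # Spot-check exams whose distributions look out of whack before trusting them.
    print(ap.head())
    ```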

    Marco Learning’s take was that the changes were intentional on the part of the College Board, and that they would continue on other exams in the future.

    They also pointed out that this would save students a lot of money in college tuition, and of course, that’s true; if the tests were correctly designed, and students did better, that would be good news.  But the question is really: Should they be getting credit for these results?  Do the changes in performance mean that students are more qualified, or that the tests are easier?  And in some subjects, does giving credit for some courses actually set students up for failure in subsequent classes?

    This is problematic because College Board has spent a lot of money lobbying state legislatures to pass laws requiring public universities to grant credit for AP exams (usually a 3 or above).  The assumption on college campuses is that, despite some mistrust of the College Board and its methods, the company has good psychometricians who ensure test design meets rigorous standards, so that a score of 4, for instance, means the same thing today as it did five years ago.

    But the incentive to enforce that rigor is gone, since states have effectively endorsed the outcomes of these exams as valid and worthy.  College Board can now shift to growing market penetration, as it does when it encourages school districts to push AP, even to students who might not be prepared for AP classes.

    And, of course, as always seems to be the case, there is some measure of hypocrisy in College Board’s current statements compared to things it has said in the past. Remember the book “Measuring Success,” written in large part by College Board staff members and fans, which railed against grade inflation using data that suggested otherwise? (College Board disavows any formal connection to the book, but its communications staff members were thanked in the foreword.)

    Paul Tough, in his book “The Years That Matter Most,” pointed out that College Board’s own conclusions contradict the evidence it published.

    The data are below, in three views. Before you leap to conclusions, note that there are a lot of things that might explain why scores on some exams are swinging so wildly in a year, but College Board’s refusal to publish this data in an easily accessible, machine-readable format makes that insight really hard to get at (and the company won’t do it itself, as it never responds publicly to criticism like this).

    At a bare minimum, when College Board exam results show wild swings like this (especially if they are intentional), I think the company owes it to every university that accepts scores, and to every state legislature it has lobbied to approve the tests, to actively notify them of the changes.

    View one (using the tabs across the top) shows thermometer charts: Choose any class using the drop-down box.  You’ll find big changes in some of the classes, and some that seem perfectly tuned over time.

    View two shows the same data in a format some might find easier.

    View three shows all exams that have three years of data (thus excluding African American Studies and Pre-Calc) for a wider view of the program.

    Source link

  • Changes in SAT Scores after Test-optional

    One of the intended consequences of test-optional admission policies at some institutions prior to the COVID-19 pandemic was to raise test scores reported to US News and World Report.  It’s rare that you would see a proponent of test-optional admission like me admit that, but to deny it would be foolish.

    Because I worked at DePaul, which was an early adopter of the approach (at least among large universities), I fielded a lot of calls from colleagues who were considering it, some of whom were explicit in their reasons for doing so.  One person I spoke to came right out at the start of the call: She was only calling, she said, because her provost wanted to know how much they could raise scores if they went test-optional.

    If I sensed or heard that motivation, I advised people against it.  In those days, the vast majority of students took standardized admission tests like the SAT or ACT, so the percentage of students applying without tests was still relatively small; the needle would not move appreciably by going to test-optional admission.

    On the other hand, of course, I knew the pressure admissions offices were under from trustees, presidents, provosts, and faculty, and as Campbell’s Law and its many variants tell us, what gets measured gets produced.  DePaul, a private university with a public mission, was using a test-optional approach to ensure that the students who were a part of our mission would not be left behind as applications grew.  (I often say how lucky I was to work at a place where, in 17 years, I was never once asked how to increase test scores or selectivity, but I heard frequently about the Pell percentage in the class.)

    There were lots of other ways to manipulate admissions statistics to the same effect, I’d tell the callers.

    Motivations, of course, were different in the summer of 2020, when it had become clear that test-optional admission was a necessary utilitarian decision that also carried with it good reputational benefits: Even if you were doing it to survive, you could at least look like you were being altruistic.  And, of course, you could learn something in the process.

    So what’s happened? About what you would expect.  At the overwhelming majority of colleges, the mean SAT ERW+M score has risen between fall 2019 and fall 2022.  I used 2019 as the base because the data reported to IPEDS are for enrolling students, and the 2020 term was affected by COVID.

    It’s dangerous, of course, to try to figure out exactly why they went up, other than the expected sampling bias.  It could be that the reputation that drives such things was already increasing.  It could be that the college took a lot more or a lot fewer chances in admission (either is possible).  It could be location and migration: out on the West Coast, people seem to care a lot less about tests than they do in the Eastern Time Zone, and students who cross state lines to attend college tend to be wealthier, and wealthier students tend to score higher on tests.

    Or it could be all of those things, and others.  We’ll never really know.  But it’s still fun to look at.  So here we go, with just one view this time.

    On the left are mean SAT scores for 2019 and 2022, calculated from the reported 25th and 75th percentiles of the two sections; numbers are rounded.  Gray bars show the 2019 figure, and purple bars the 2022 score.  On the right are the changes, and the chart is sorted on that value in descending order.
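
    As a concrete illustration of that calculation, here is a small sketch using IPEDS-style section percentiles. The midpoint of the 25th and 75th percentiles stands in for a true section mean, which IPEDS does not report, and rounding to the nearest 10 is my assumption about the display, not a documented choice.

    ```python
    def approx_mean_sat(erw_25, erw_75, math_25, math_75):
        """Approximate a mean SAT (ERW + Math) from IPEDS-style percentiles.

        The midpoint of each section's 25th/75th percentiles is a rough proxy
        for the section mean; summing the two gives the composite estimate.
        """
        erw_mid = (erw_25 + erw_75) / 2
        math_mid = (math_25 + math_75) / 2
        return int(round(erw_mid + math_mid, -1))  # nearest 10 (assumed)

    # Hypothetical institution: run once on 2019 percentiles, once on 2022's,
    # and subtract to get the change shown on the right side of the view.
    print(approx_mean_sat(560, 660, 570, 680))  # -> 1240
    ```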

    There are four filters to get the view you want: At top left you can use the control to limit the region; at top right you can look at public and/or private four-year universities.  You can also use the sliders to look at colleges by limiting the 2022 selectivity or class size.

    Again, this is interesting, but not necessarily instructive.  See if you can guess what your favorite college looks like before and after the pandemic.  Have fun.

    (Note: Some institutions that went test-optional stopped reporting test scores as a result, and they are not included here.)

    Source link

  • A Guide To The GRE: What It Is & What Your Scores Tell You

    Source link