Tag: ages

  • Games for Change Opens 2026 Student Challenge to Game Creators and Innovators Ages 10–25

    The annual global game design challenge awards $20,000 in grand prizes for creative and impactful games that advance the UN Sustainable Development Goals

    NEW YORK, NY — [NOV 10, 2025] — Games for Change (G4C), the leading nonprofit that empowers game creators and innovators to drive real-world change, today announced the kickoff of the 2025–2026 Games for Change Student Challenge, a global game design program inviting learners ages 10–25 to tackle pressing world issues aligned with the United Nations Sustainable Development Goals through creativity, play, and purposeful design.

    Now in its eleventh year, the Student Challenge has reached more than 70,000 students and almost 2,000 educators and faculty across 600 cities in 91 countries, inspiring the creation of over 6,600 original student-designed games that connect learning to action. From November 2025 to April 2026, participants will design and submit games for consideration in regional and global competitions, with Game Jams taking place worldwide throughout the season.

    “The G4C Student Challenge continues to show that when young people design games about real-world issues, they see themselves not just as players, but as problem solvers and changemakers,” said Arana Shapiro, Chief Operations and Programs Officer at Games for Change. “Through game design, students learn to think critically, collaborate, and build solutions with purpose. In a world shaped by AI and constant change, durable skills like problem solving, critical thinking, and game design will allow all learners to thrive in their communities and worldwide.”

    This year, students will explore three new themes developed with world-class partners to inspire civic imagination and problem-solving.

    Two grand-prize winners will receive a total of $20,000 in scholarships, generously provided by Take-Two Interactive and Endless. Winners and finalists will be celebrated at the Student Challenge Awards on May 28, 2026, in recognition of exceptional creativity, social impact, and innovation in student game design.

    “With 3.4 billion players worldwide, the video games industry has an unprecedented ability to reach and inspire audiences across cultures and our next generation of leaders,” said Lisa Pak, Head of Operations at Playing for the Planet. “We’re excited about our collaboration with Games for Change, empowering students to use their creativity to spotlight the threats to reefs, rainforests, and our climate. Together, we’re transforming play into a powerful tool for awareness, education, and action.”

    “More than 319 million people face severe hunger around the world today,” said Jessamyn Sarmiento, Chief Marketing Officer at World Food Program USA. “Through the ‘Outgrow Hunger’ theme, we’re giving the next generation a way to explore the root causes of food insecurity and imagine solutions through research, game design, and play. This collaboration helps students connect their creativity to one of the most urgent challenges of our time—ending hunger for good.”

    Additionally, G4C is expanding its educator support with the launch of the G4C Learn website, the world’s largest online resource library featuring lesson plans, tutorials, and toolkits to guide students, teachers, and faculty on topics like game design, game-based learning, esports, career pathways, and more. In partnership with Global Game Jam, educators worldwide can receive funding, training, and support to host Student Challenge Game Jams in their classrooms and communities.

    “Games turn learning into challenges students actually want to take on,” said Luna Ramirez, CTE teacher at Thomas A. Edison CTE High School in New York City. “When students design games to tackle pressing global problems affecting their communities, they become curious about the world around them, experimenting and bringing ideas to life. The best learning happens when students take risks, fail forward, and collaborate, and that’s exactly what the Games for Change Student Challenge empowers.”

    Educators, parents, and learners ages 10–25 can now register for the 2026 Games for Change Student Challenge and access free tools and resources at learn.gamesforchange.org.

    This year’s Student Challenge is made possible through the generous support of key partners, including Endless, General Motors, Verizon, Motorola Solutions Foundation, Take-Two Interactive, World Food Program USA, Playing for the Planet, Unity, and Global Game Jam.

    About Games for Change

    Since 2004, Games for Change (G4C) has empowered game creators and innovators to drive real-world change through games and immersive media, helping people learn, improve their communities, and make the world a better place. G4C partners with technology and gaming companies, nonprofits, foundations, and government agencies to run world-class events, public arcades, design challenges, and youth programs. G4C supports a global community of developers using games to tackle real-world challenges, from humanitarian conflicts to climate change and education. For more information, visit: https://www.gamesforchange.org/.

    Media contact(s):

    Alyssa Miller

    Games for Change

    [email protected]

    973-615-1292

    Susanna Pollack

    [email protected]


  • Across All Ages & Demographics, Test Results Show Americans Are Getting Dumber – The 74

    There’s no way to sugarcoat it: Americans have been getting dumber.

    Across a wide range of national and international tests, grade levels and subject areas, American achievement scores peaked about a decade ago and have been falling ever since. 

    Will the new NAEP scores coming out this week show a halt to those trends? We shall see. But even if those scores indicate a slight rebound off the COVID-era lows, policymakers should seek to understand what caused the previous decade’s decline. 

    There’s a lot of blame to go around, from cellphones and social media to federal accountability policies. But before getting into theories and potential solutions, let’s start with the data.

    Until about a decade ago, student achievement scores were rising. Researchers at Education Next found those gains were broadly shared across racial and economic lines, and achievement gaps were closing. But then something happened, and scores started to fall. Worse, they fell faster for lower-performing students, and achievement gaps started to grow.

    This pattern shows up on test after test. Last year, we looked at eighth grade math scores and found growing achievement gaps in 49 of 50 states, the District of Columbia and 17 out of 20 large cities with sufficient data.

    But it’s not just math, and it’s not just NAEP. The American Enterprise Institute’s Nat Malkus has documented the same trend in reading, history and civics. Tests like NWEA’s MAP Growth and Curriculum Associates’ i-Ready are showing it too. And, as Malkus found in a piece released late last year, this is a uniquely American problem. The U.S. now leads the world in achievement gap growth.

    What’s going on? How can students here get back on track? Malkus addresses these questions in a new report out last week and makes the point that any honest reckoning with the causes and consequences of these trends must account for the timing, scope and magnitude of the changes.

    Theory #1: It’s accountability

    As I argued last year, my top explanation has been the erosion of federal accountability policies. In 2011 and 2012, the Obama administration began issuing waivers to release states from the most onerous requirements of the No Child Left Behind Act. Congress made those policies permanent in the 2015 Every Student Succeeds Act. That timing fits, and it makes sense that easing up on accountability, especially for low-performing students, led to achievement declines among those same kids.

    However, there’s one problem with this explanation: American adults appear to be suffering from similar achievement declines. In results that came out late last year, the average scores of Americans ages 16 to 65 fell in both literacy and numeracy on the globally administered Program for the International Assessment of Adult Competencies (PIAAC).

    And even among American adults, achievement gaps are growing. The exam’s results are broken down into six performance levels. On the numeracy portion, for example, the share of Americans scoring at the two highest levels rose two points, from 10% to 12%, while the percentage of those at the bottom two levels rose from 29% to 34%. In literacy, the percentage of Americans scoring at the top two levels fell from 14% to 13%, while the lowest two levels rose from 19% to 28%. 

    These results caused Peggy Carr, the commissioner of the National Center for Education Statistics, to comment, “There’s a dwindling middle in the United States in terms of skills.” Carr could have made the same comment about K-12 education — except that these results can’t be explained by school-related causes.
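    The “dwindling middle” falls straight out of those reported shares: the six PIAAC levels partition all test-takers, so the middle is whatever remains after the top two and bottom two. A minimal Python check, using only the percentages quoted above:

        # Shares (%) of US adults at the top-two and bottom-two PIAAC levels,
        # taken from the figures quoted above; the middle is the remainder.
        results = {
            "numeracy": {"top": (10, 12), "bottom": (29, 34)},
            "literacy": {"top": (14, 13), "bottom": (19, 28)},
        }

        for domain, s in results.items():
            middle_before = 100 - s["top"][0] - s["bottom"][0]
            middle_after = 100 - s["top"][1] - s["bottom"][1]
            print(f"{domain}: middle {middle_before}% -> {middle_after}%")

        # numeracy: middle 61% -> 54%
        # literacy: middle 67% -> 59%

    In both domains the middle shrank by seven to eight percentage points, which is exactly the hollowing-out Carr describes.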

    Theory #2: It’s the phones

    The rise of smartphones and social media, and the decline in reading for pleasure, could be contributing to these achievement declines. Psychologist Jean Twenge pinpointed 2012 as the first year when more than half of Americans owned a smartphone, which is about when achievement scores started to decline. This theory also does a better job of explaining why Americans of all ages are scoring lower on achievement tests.

    But there are some holes in this explanation. For one, why are some of the biggest declines seen in the youngest kids? Are that many 9-year-olds on Facebook or Instagram? Second, why are the lowest performers suffering the largest declines in achievement? Attention deficits induced by phones and screens should affect all students in similar ways, and yet the pattern shows the lowest performers are suffering disproportionately large drops.

    But most fundamentally, why is this mostly a U.S. trend? Smartphones and social media are global phenomena, and yet scores in Australia, England, Italy, Japan and Sweden have all risen over the last decade. A couple of other countries, like Finland and Denmark, have seen some small declines, but no one else has seen declines like we’ve had here in the States.

    Other theories: Immigration, school spending or the Common Core

    Other theories floating around have at least some kernels of truth. Immigration trends could explain some portion of the declines, although it’s not clear why those would be affecting scores only now. The Fordham Institute’s Mike Petrilli has partly blamed America’s “lost decade” on economic factors, but school spending has rebounded sharply in recent years without similar gains in achievement. Others, including historian Diane Ravitch and the Pioneer Institute’s Theodor Rebarber, blame the shift to the Common Core state standards, which was happening about the same time. But non-Common Core states suffered similar declines, and scores have also dropped in non-Common Core subjects.

    Note that COVID is not part of my list. It certainly exacerbated achievement declines and reset norms within schools, but achievement scores were already falling well before it hit America’s shores.

    Rather than pointing to a single culprit, the answer could be a combination of these factors. It could be that the rise in technology is diminishing Americans’ attention spans and stealing their focus from books and other long-form written content. Meanwhile, schools have been de-emphasizing basic skills, easing up on behavioral expectations and making it easier to pass courses. At the same time, policymakers in too many parts of the country have stopped holding schools accountable for the performance of all students.

    That’s a potent mix of factors that could explain these particular problems. It would be helpful to have more research to pinpoint problems and solutions, but if this diagnosis is correct, it means students, teachers, parents and policymakers all have a role to play in getting achievement scores back on track. 



  • The data dark ages | Wonkhe

    Is there something going wrong with large surveys?

    We asked a bunch of people but they didn’t answer. That’s been the story of the Labour Force Survey (LFS) and the Annual Population Survey (APS) – two venerable fixtures in the Office for National Statistics (ONS) arsenal of data collections.

    Both have just lost their accreditation as official statistics. A statement from the Office for Statistics Regulation highlights just how much of the data we use to understand the world around us is at risk as a result: statistics about employment are affected by the LFS concerns, whereas APS covers everything from regional labour markets, to household income, to basic facts about the population of the UK by nationality. These are huge, fundamental sources of information on the way people work and live.

    The LFS response rate has historically been around 50 per cent, but it had fallen to 40 per cent by 2020 and is now below 20 per cent. The APS is an additional sample using the LFS approach – current advice suggests that response rates have deteriorated to the extent that it is no longer safe to use APS data at local authority level (the resolution it was designed to be used at).
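    To see why a falling response rate undermines local-authority-level estimates, it helps to look at what happens to precision as the achieved sample shrinks. A rough sketch, assuming a simple random sample and a hypothetical issued sample of 2,000 households per local authority (real LFS/APS designs are clustered and weighted, so true uncertainty is larger):

        import math

        def margin_of_error(n, p=0.5, z=1.96):
            """Approximate 95% margin of error, in percentage points,
            for an estimated proportion p from n respondents."""
            return z * math.sqrt(p * (1 - p) / n) * 100

        issued = 2000  # hypothetical issued sample for one local authority

        for response_rate in (0.50, 0.40, 0.20):
            n = int(issued * response_rate)
            print(f"{response_rate:.0%} response -> n={n}, "
                  f"+/-{margin_of_error(n):.1f} pp")

        # 50% response -> n=1000, +/-3.1 pp
        # 40% response -> n=800, +/-3.5 pp
        # 20% response -> n=400, +/-4.9 pp

    And shrinking samples are only half the problem: if the people who stop responding differ systematically from those who continue, extra sample size cannot fix the resulting bias.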

    What’s going on?

    With so much of our understanding of social policy issues coming through survey data, problems like these feel almost existential in scope. Online survey tools have made it easier to design and conduct surveys – and often build in the kind of good survey development practice that used to be the domain of specialists. Theoretically, it should be easier to run good quality surveys than ever before – certainly we see more of them (we even run them ourselves).

    Is it simply a matter of survey fatigue? Or are people less likely to (less willing to?) give information to researchers for reasons of trust?

    In our world of higher education, we have recently seen the Graduate Outcomes response rate drop below 50 per cent for the first time, casting doubt on its suitability as a regulatory measure. The survey still has accredited official statistics status, and there has been important work done on understanding the impact of non-response bias – but it is a concerning trend. The National Student Survey (NSS) is an outlier here – it had a 72 per cent response rate last time round (so you can be fairly confident in validity right down to course level), but it enjoys an unusually good level of survey population awareness, despite the removal of a requirement for providers to promote the survey to students. And of course, many of the more egregious issues with HESA Student have been founded on student characteristics – the kind of thing gathered during enrolment or entry surveys.
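    Non-response bias is the nub of the Graduate Outcomes worry: when the outcome being measured also influences who responds, the headline figure drifts away from the truth no matter how many responses come in. A toy illustration in Python (the employment rate and response propensities below are invented for the example):

        # Suppose 80% of graduates are employed, but employed graduates
        # are more likely to answer the survey than unemployed ones.
        true_employment_rate = 0.80
        respond_if_employed = 0.55    # hypothetical response propensities
        respond_if_unemployed = 0.35

        employed_responders = true_employment_rate * respond_if_employed
        unemployed_responders = (1 - true_employment_rate) * respond_if_unemployed

        observed = employed_responders / (employed_responders + unemployed_responders)
        print(f"true: {true_employment_rate:.1%}, observed: {observed:.1%}")
        # true: 80.0%, observed: 86.3%

    The gap between 80 per cent and 86.3 per cent is pure selection effect – which is why the work on understanding who doesn’t respond matters as much as chasing the response rate itself.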

    A survey of the literature

    There is a literature on survey response rates in published research. A meta-analysis by Wu et al (Computers in Human Behavior, 2022) found that, at this point, the average online survey response rate was 44.1 per cent – finding benefits for using (as NSS does) a clearly defined and refined population, pre-contacting participants, and using reminders. A smaller study by Daikeler et al (Journal of Survey Statistics and Methodology, 2020) found that, in general, online surveys yield lower response rates (on average, 12 percentage points lower) than other approaches.

    Interestingly, Holtom et al (Human Relations, 2022) show an increase in response rates over time across a sample of 1,014 published studies, and do not find a statistically significant difference linked to survey modes.

    ONS itself works with the ESRC-funded Survey Futures project, which:

    aims to deliver a step change in survey research to ensure that it will remain possible in the UK to carry out high quality social surveys of the kinds required by the public and academic sectors to monitor and understand society, and to provide an evidence base for policy

    It feels like timely stuff. The nine strands of work in the first phase included work on mode effects and on addressing non-response.

    Fixing surveys

    ONS have been taking steps to repair the LFS – implementing some of the recontacting and reminder approaches that have been successfully used and documented in the academic literature. There’s a renewed focus on households that include young people, and a return to the larger sample sizes we saw during the pandemic (when the whole survey had to be conducted remotely). Reweighting has led to a bunch of tweaks to the way samples are chosen and non-responses are accounted for.
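    For readers who haven’t met reweighting before: the standard fix is to up-weight respondents from groups that under-respond, so the achieved sample matches known population totals. A minimal post-stratification sketch (the age bands, population shares, and respondent counts are all invented for illustration):

        import pandas as pd

        # Toy achieved sample: young people under-respond relative to the population.
        respondents = pd.DataFrame({
            "age_band": ["16-24"] * 10 + ["25-49"] * 50 + ["50+"] * 40,
        })

        # Known population shares, e.g. from census or administrative sources.
        population_share = {"16-24": 0.15, "25-49": 0.45, "50+": 0.40}

        sample_share = respondents["age_band"].value_counts(normalize=True)

        # Post-stratification weight: population share / sample share per group.
        respondents["weight"] = respondents["age_band"].map(
            lambda band: population_share[band] / sample_share[band]
        )

        print(respondents.groupby("age_band")["weight"].first())
        # 16-24 -> 1.5 (up-weighted), 25-49 -> 0.9, 50+ -> 1.0

    The catch is that weights only correct for differences the weighting variables capture – if non-respondents differ in ways that age (or region, or tenure) does not explain, the bias survives.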

    Longer term, the Transformed Labour Force Survey (TLFS) is already being trialled, though the initial March 2024 plans for full introduction have been revised to allow for further testing – important given a bias towards older age groups among responses, and an increased level of partial responses. Yes, there’s a lessons learned review. The old LFS and the new, online-first TLFS will be running together at least until early 2025 – with a knock-on impact on APS.

    But it is worth bearing in mind that, even given the changes made to drive up responses, trial TLFS response rates have been hovering just below 40 per cent. This is a return to 2020 levels, addressing some of the recent damage, but a long way from the historic norm.

    Survey fatigue

    More usually the term “survey fatigue” is used to describe the impact of additional questions on completion rate – respondents tire during long surveys (as Jeong et al observe in the Journal of Development Economics) and deliberately choose not to answer questions to hasten the end of the survey.

    But it is possible to consider the idea of a civilisational survey fatigue. Arguably, large parts of the online economy are propped up by the collection and reuse of personal data, which can then be used to target advertisements and reminders. Increasingly, you now have to pay to opt out of targeted ads on websites – assuming you can view the website at all without paying. After a period of abeyance, concerns around data privacy are beginning to re-emerge. Forms of social media that rely on a constant drive to share personal information are unexpectedly beginning to struggle – for younger generations, participatory social media is more likely to be a group chat or Discord server, while formerly participatory services like YouTube and TikTok have become platforms for media consumption.

    In the world of public opinion research, the struggle with response rates has partially been met by a switch from randomised phone or in-person polling to pre-vetted online panels. This (as with the rise of focus groups) has generated a new cadre of “professional respondents” – with huge implications for the validity of polling even when weighting is applied.

    Governments and industry are moving towards administrative data – the most recognisable example in higher education being the LEO dataset of graduate salaries. But this brings problems of its own – LEO tells us how much income graduates pay tax on from their main job, but deals poorly with the portfolio careers that are the expectation of many graduates. LEO never cut it as a policymaking tool precisely because of how broad-brush it is.

    In a world where everything is data driven, what happens when the quality of data drops? If we were ever making good, data-driven decisions, a problem with the raw material suggests a problem with the end product. There are methodological and statistical workarounds, but the trend appears to be shifting away from people being happy to give out personal information without compensation. User interaction data – the traces we create as we interact with everything from ecommerce to online learning – are for now unaffected, but are necessarily limited in scope and explanatory value.

    We’ve lived through a generation where data seemed unlimited. What tools do we need to survive a data dark age?
