Charles Darwin University (CDU) has lost a research contract valued at about $200,000 as a result of US President Donald Trump’s “America First” agenda.

Can you tell fact from fiction online? In a digital world, few questions are more important or more challenging.
For years, some commentators have called for K-12 teachers to take on fake news, media literacy, or online misinformation by doubling down on critical thinking. This push for schools to do a better job preparing young people to differentiate between low- and high-quality information often focuses on social studies classes.
As an education researcher and former high school history teacher, I know that there’s both good and bad news about combating misinformation in the classroom. History class can cultivate critical thinking – but only if teachers and schools understand what critical thinking really means.
First, the bad news.
When people demand that schools teach critical thinking, it’s not always clear what they mean. Some might consider critical thinking a trait or capacity that teachers can encourage, like creativity or grit. They could believe that critical thinking is a mindset: a habit of being curious, skeptical and reflective. Or they might be referring to specific skills – for instance, that students should learn a set of steps to take to assess information online.
Unfortunately, cognitive science research has shown that critical thinking is not an abstract quality or practice that can be developed on its own. Cognitive scientists see critical thinking as a specific kind of reasoning that involves problem-solving and making sound judgments. It can be learned, but it relies on specific content knowledge and does not necessarily transfer between fields.
Early studies on chess players and physicists in the 1970s and ’80s helped show how the kind of flexible and reflective cognition often called critical thinking is really a product of expertise. Chess masters, for instance, do not start out with innate talent. In most cases, they gain expertise by hours of thoughtfully playing the game. This deliberate practice helps them recognize patterns and think in novel ways about chess. Chess masters’ critical thinking is a product of learning, not a precursor.
Because critical thinking develops in specific contexts, it does not necessarily transfer to other types of problem-solving. For example, chess advocates might hope the game improves players’ intelligence, and studies do suggest learning chess may help elementary students with the kind of pattern recognition they need for early math lessons. However, research has found that being a great chess player does not make people better at other kinds of complex critical thinking.
Since context is key to critical thinking, learning to analyze information about current events likely requires knowledge about politics and history, as well as practice at scrutinizing sources. Fortunately, that is what social studies classes are for.
Social studies researchers often describe this kind of critical thinking as “historical thinking”: a way to evaluate evidence about the past and assess its reliability. My own research has shown that high school students can make relatively quick progress on some of the surface features of historical thinking, such as learning to check a text’s date and author. But the deep questioning involved in true historical thinking is much harder to learn.
Social studies classrooms can also build what researchers call “civic online reasoning.” Fact-checking is complex work. It is not enough to tell young people that they should be wary online, or to trust sites that end in “.org” instead of “.com.” Rather than learning general principles about online media, civic online reasoning teaches students specific skills for evaluating information about politics and social issues.
Still, learning to think like a historian does not necessarily prepare someone to be a skeptical news consumer. Indeed, a recent study found that professional historians performed worse than professional fact-checkers at identifying online misinformation. The misinformation tasks the historians struggled with focused on issues such as bullying or the minimum wage – areas where they possessed little expertise.
That’s where background knowledge comes in – and the good news is that social studies can build it. All literacy relies on what readers already know. For people wading through political information and news, knowledge about history and civics is like a key in the ignition for their analytical skills.
Readers without much historical knowledge may miss clues that something isn’t right – signs that they need to scrutinize the source more closely. Political misinformation often weaponizes historical falsehoods, such as the debunked and recalled Christian nationalist book claiming that Thomas Jefferson did not believe in a separation of church and state, or claims that the nadir of African American life came during Reconstruction, not slavery. Those claims are extreme, but politicians and policymakers repeat them.
For someone who knows basic facts about American history, those claims won’t sit right. Background knowledge will trigger their skepticism and kick critical thinking into gear.
For this reason, the best approach to media literacy will come through teaching that fosters concrete skills alongside historical knowledge. In short, the new knowledge crisis points to the importance of the traditional social studies classroom.
But it’s a tenuous moment for history education. The Bush- and Obama-era emphasis on math and English testing resulted in decreased instructional time in history classes, particularly in elementary and middle schools. In one 2005 study, 27% of schools reported reducing social studies time in favor of subjects on state exams.
Now, history teachers are feeling heat from politically motivated culture wars over education that target teaching about racism and LGBTQ+ issues and that ban books from libraries and classrooms. Two-thirds of instructors say that they’ve limited classroom discussions about social and political topics.
Attempts to limit students’ knowledge about the past imperil their chances of being able to think critically about new information. These attacks are not just assaults on the history of the country; they are attempts to control its future.
This article is republished from The Conversation under a Creative Commons license. Read the original article.

Media coverage has emphasised that the financial problems facing universities continue to worsen. While research is a cornerstone and strength of the sector, it is often regarded as a cost, which brings scrutiny as part of institutional savings targets. Despite calls to acknowledge the value of research, the focus understandably remains on research costs.
The volume and cost of unfunded research – or, more accurately, internally funded research – is a question universities must address. Institutions are reflecting on and revising internal research allowances as part of their efforts to achieve a more sustainable financial position, as the cross-subsidy from international student fees is no longer as viable as it once was.
The question of funded research, however, is a different matter. For quite some time, there have been questions about what constitutes the full economic cost (FEC) and how these costs are recovered when projects are funded. Both issues have once again come to the forefront in the current climate, especially as institutions are failing to recover the eligible costs of funded projects.
As part of the Innovation & Research Caucus, an investment funded by UKRI, we have been investigating why the recovery of UKRI-funded research is often below the stated rates. To put it simply, if the official recovery rate is 80 per cent FEC, why is 80 per cent not being recovered on UKRI-funded projects?
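To make the arithmetic concrete, here is a purely illustrative sketch – the figures and the simple ratio below are assumptions for the sake of exposition, not taken from the report or from TRAC returns:

$$\text{effective recovery rate} = \frac{\text{eligible costs actually recovered}}{\text{full economic cost (FEC)}}$$

On these hypothetical numbers, a project with an FEC of £500,000 funded at 80 per cent FEC should return £400,000 to the institution; if staffing changes, delays and unclaimed eligible costs mean only £350,000 is ultimately recovered, the effective rate is 70 per cent rather than the headline 80 per cent.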
We conducted a series of interviews with chief financial officers, pro vice chancellors for research, and directors of research services across mission groups, the Transparent Approach to Costing (TRAC) group, and various geographic regions. They identified several key reasons why universities are not recovering the funding to which they are entitled.
Before exploring the causes of under-recovery on UKRI-funded projects, the project aimed to establish the extent to which TRAC data was curated and utilised. Notably, the study found that the data compiled for TRAC does not otherwise exist within research organisations and would not be collected in this form were it not for the TRAC reporting requirement.
While scrutinising TRAC data was less of a priority when the financial situation was more stable, in many institutions, it is now of interest to the top table and serves as the basis for modelling, projections, and scenario planning. That said, such analysis did not always recognise TRAC’s limitations in terms of how it was compiled and, therefore, its comparability.
In many of the research organisations consulted, the responsibilities for TRAC, project costing, and project delivery are distinct. Given the growing significance of TRAC data in influencing resource allocation and strategic decision-making, it is essential for research organisations to adopt a more integrated approach to compiling and utilising TRAC data to achieve improved outcomes.
A wide range of factors explains why the cost recovered at the end of a funding grant is less than anticipated at the point of submission and award. Almost all respondents highlighted three factors as significant in low cost recovery:
Beyond these top three, the report highlights the implications of the often “hidden” costs associated with supporting and administering UKRI grants, the perennial issues of match funding, and the often inevitable delays in starting and delivering projects – all of which add to the cost and increase the prospect of under-recovery.
In addition, an array of other contributing factors was raised. These included the impact of exchange rates, eligibility criteria, the capital intensity of projects, cost recovery for partners, recruitment challenges, lack of contingency, and no-cost extensions. While no single factor was pinpointed as decisive, their interplay and cumulative effect were considered to result in under-recovery.
Universities bear the cost of under-recovery, but funders and universities can take several actions to improve recovery – some of which are low- or no-cost, could be implemented in the short term, and would make a real difference.
Funders, such as UKRI, should provide clearer guidance for research organisations on how to cost facilities and equipment, as well as how to include these costs in research bids. Similarly, applicants and reviewers should receive clearer guidance regarding realistic expectations of PIs in leading projects, emphasising that value should be prioritised over cost. Another area that warrants clearer guidance is match funding, specifically for institutions regarding expectations and for reviewers on how match funding should be assessed. We are pleased to see that UKRI is already taking steps to address these points in its funding policies.
In the medium term, research funders could also review their approaches to indexation, which could help mitigate the impact of inflation in driving under-recovery, although this is, of course, not without cost. Another area worth exploring by both research organisations and funders is the provision of shared infrastructures and assets, both within and across institutions – again, a longer-term project.
We are already seeing institutions taking steps to manage and mitigate under-recovery, and there is scope to extend good practice. Perhaps the main challenge to improving cost recovery is better managing the link between project budgets – based on proposal costs – and project delivery costs. Ensuring a joined-up approach from project costing to reporting is important, but more important is developing a deeper understanding across these areas.
A final point is the need to ensure that academics vying for funding really understand the new realities of cost and recovery. This has not always been the case, and arguably still is not the case. These skills – from clarifying the importance of realistic staff costs to accurately costing the use of facilities to effectively managing project budgets – will help close the cost recovery gap.
The current project has focused on under-recovery in project delivery. The next step is to understand the real cost to research organisations of UKRI grant funding.
This means understanding the cost of developing, preparing and submitting a UKRI grant application – whether successful or not. It means understanding the costs associated with administering and reporting on a UKRI grant during and beyond the life of a project (think ResearchFish!).
For more information, please get in touch – or watch this space for further findings.
The Innovation & Research Caucus report, Understanding low levels of FEC cost recovery on UKRI grants, will be published on the UKRI site later today.

On the whole, research funding is not configured to be sensitive to place.
Research funding does good things in the regions, but that is different from being configured to do so. For example, universities in the North East performed strongly in the REF and, as a consequence, received an uplift in QR funding. This will allow them to invest in their research capacity, bring agglomeration benefits to the North East, and go some small way to rebalancing the UK’s research ecosystem away from London.
REF isn’t designed to do that. It has absolutely no interest in where research takes place, only that the research that takes place is excellent. The UK isn’t a very big place and it has a large number of universities. If you fund enough things in enough places, you will eventually help support regional clusters of excellence.
There are, of course, some specific place-based funds, but being regionally focussed does not mean they are also redistributive. The Higher Education Innovation Fund (HEIF) is focussed on regional capacity, but it is £260m of a total annual Research England funding distribution of £2.8bn. HEIF is calculated from providers’ knowledge exchange work with businesses, public and third sector organisations, and the wider public. A large portion of the data is gathered through the HE-BCI Survey.
The result is that there is place-based funding, but inevitably institutions with larger research capacities receive larger amounts of it. Of the providers that received the maximum HEIF funding in 2024/25, five were within the golden triangle, one was in the West Midlands, one was in the East Midlands, two were in Yorkshire and the Humber, one was in the North West, and one was in the South East but not the golden triangle. It is regional but it is not redistributive.
RAND Europe has released a process evaluation of wave two of the Strength in Places Fund (SIPF). As RAND Europe describe, the fund is:
The Strength in Places Fund (SIPF) is a £312.5 million competitive funding scheme that takes a place-based approach to research and innovation (R&I) funding. SIPF is a UK Research and Innovation (UKRI) strategic fund managed by the SIPF delivery team based at Innovate UK and Research England. The aim of the Fund is to help areas of the UK build on existing strengths in R&I to deliver benefits for their local economy
This fund has been more successful in achieving a more regionally distributed spread of funding. For example, the fund has delivered £47m to Wales compared to only £18m in South East England. Although quality was a key factor, and there are some challenges to how aligned projects are to wider regional priorities, it seems that a focus on a balanced portfolio made a difference. As RAND Europe note
[…]steps were taken to ensure a balanced portfolio in terms of geographical spread and sectors; however, quality was the primary factor influencing panel recommendations (INTXX). Panel members considered the projects that had been funded in Wave 1 and the bids submitted in Wave 2, and were keen on ensuring no one region was overrepresented. One interviewee mentioned that geographical variation of awards contributed to the credibility of a place-based funding system[…].
The Regional Innovation Fund, which aimed to support local innovation capacity, was allocated with a specific modifier to account for where there had historically been less research investment. SIPF has been a different approach to solving the same conundrum of how best to support research potential in every region of the UK.
It’s within this context that it is interesting to turn to UKRI’s most recent analysis of the geographical distribution of its funding in 2022/23 and 2023/24. There are two key messages. The first is that:
All regions and nations received an increase in UKRI investment between the financial years 2021 to 2022 and 2023 to 2024. The greatest absolute increases in investment were seen in the North West, West Midlands and East Midlands. The greatest proportional increases were seen in Northern Ireland, the East Midlands and North West.
And the second is that
The percentage of UKRI funding invested outside London, the South East and East of England, collectively known as the ‘Greater South East’, rose to 50% in 2023 to 2024. This is up from 49% in the 2022 to 2023 financial year and 47% in the 2021 to 2022 financial year. This represents a cumulative additional £1.4 billion invested outside the Greater South East since the 2021 to 2022 financial year.
In the most literal sense, the split in funding between the Greater South East and the rest of the country could not be more finely balanced. In flat cash terms, the rest of the UK has overtaken the Greater South East for the first time, while investment per capita in the Greater South East still outstrips the rest of the country by a significant amount.
The reason for this shift is greater investment in the North West, West Midlands, and East Midlands, which cumulatively saw an increase of £550m in funding over the past three years. The regions with the highest absolute levels of funding saw some of the smallest proportional increases in investment.
The evaluations and UKRI’s dataset present an interesting picture. There is nothing unusual about the way funding is distributed, as it follows where the largest numbers of researchers and providers, and the most economic activity, are located. It would be an entirely arbitrary mechanism that penalised the South East for having research strengths.
At the same time, with constrained resources, there are plenty of latent assets outside the golden triangle that will not get funding. The UK is unusually reliant on its capital as an economic contributor, and research funding follows this. The only way to rebalance this is to make deliberate efforts, as with SIPF, to lean toward a more balanced portfolio of funding.
This isn’t a plea to completely rip up the rule book, and a plea for more money in an era of fiscal constraint will not be listened to, but it does bring into sharp relief a choice. Either research policy is about bolstering the UK’s economic centre or it is about strengthening the potential of research where it receives less funding. There simply is not enough money to do both.

Our most recent research into the working lives of faculty gave us some interesting takeaways about higher education’s relationship with AI. While every faculty member’s thoughts about AI differ and no two experiences are the same, the general trend we’ve seen is that faculty have moved from fear to acceptance. Many faculty were initially concerned about AI’s arrival on campus, a concern amplified by a perceived rise in AI-enabled cheating and plagiarism among students. Despite that, most have come to accept that AI is here to stay, and some have developed working strategies to ensure that they and their students know the boundaries of AI usage in the classroom.
Early-adopting educators aren’t just navigating around AI. They have embraced and integrated it into their working lives. Some have learned to use AI tools to save time and make their working lives easier. In fact, over half of instructors reported that they wanted to use AI for administrative tasks and 10% were already doing so. (Find the highlights here.) As more faculty are seeing the potential in AI, that number has likely risen. So, in what ways are faculty already using AI to lighten the load of professional life? Here are three use-cases we learned about from education professionals:
“Give me a list of 10 German pop songs that contain irregular verbs.”
“Summarize the five most contentious legal battles happening in U.S. media law today.”
“Create a set of flashcards that review the diagnostic procedure and standard treatment protocol for asthma.”
The possibilities (and the prompts!) are endless. AI is well placed to assist with idea generation, conversation starters and lesson materials for educators on any topic. It’s worth noting that AI tends to prove most helpful as a starting point for teaching and learning fodder, rather than as a source of fully baked responses and ideas. Those who expect the latter may be disappointed, as the quality of AI results can vary widely depending on the topic. Educators can and should, of course, always be the final reviewers and arbiters of the accuracy of anything shared in class.
Faculty have told us that they spend a hefty proportion (around 28%) of their time on course preparation. Differentiating instruction for the various learning styles and levels in any given class constitutes a big part of that prep work. A particular lesson may land well with a struggling student, but might feel monotonous for an advanced student who has already mastered the material. To that end, some faculty are using AI to readily differentiate lesson plans. For example, an English literature instructor might enter a prompt like, “I need two versions of a lesson plan about ‘The Canterbury Tales;’ one for fluent English speakers and one for emergent English speakers.” This simple step can save faculty hours of manual lesson plan differentiation.
An instructor in Kansas shared with Cengage their plans to let AI help in this area: “I plan to use AI to evaluate students’ knowledge levels and learning abilities and create personalized training content. For example, AI will assess all the students at the beginning of the semester and divide them into ‘math-strong’ and ‘math-weak’ groups based on their mathematical aptitude, and then automatically assign math-related materials, readings and lecture notes to help the ‘math-weak’ students.”
When used in this way, AI can be a powerful tool that gives students of all backgrounds an equal edge in understanding and retaining difficult information.
Reviewing the work of dozens or hundreds of students and finding common threads and weak spots is tedious work, and seems an obvious area for a little algorithmic assistance.
Again, faculty should remain in control of the feedback they provide to students. After all, students fully expect faculty members to review and critique their work authentically. However, using AI to more deeply understand areas where a student’s logic may be consistently flawed, or types of work on which they repeatedly make mistakes, can be a game-changer, both for educators and students.
An instructor in Iowa told Cengage, “I don’t want to automate my feedback completely, but having AI suggest areas of exigence in students’ work, or supply me with feedback options based on my own past feedback, could be useful.”
Some faculty may even choose to have students ask AI for feedback themselves as part of a critical thinking or review exercise. Ethan and Lilach Mollick of the Wharton School of the University of Pennsylvania share in a Harvard Business Publishing Education article, “Though AI-generated feedback cannot replicate the grounded knowledge that teachers have about their students, it can be given quickly and at scale and it can help students consider their work from an outside perspective. Students can then evaluate the feedback, decide what they want to incorporate, and continue to iterate on their drafts.”
AI is not a “fix-all” for the administrative side of higher education. However, many faculty members are gaining an advantage and getting some time back by using it as something of a virtual assistant.
In a future piece, we’ll share three more ways in which faculty are using AI to make their working lives easier. In the meantime, you can fully explore our research here:

Impact in the arts is fundamentally different from impact in other fields. It is built on relationships, trust, and long-term engagement with communities, businesses, and cultural institutions.
Unlike traditional research models, where success is often measured through large-scale returns or policy influence, impact in the creative industries is deeply personal, embedded in real-world collaborations, and evolves over time.
For specialist arts institutions, impact is not just about knowledge transfer – it’s about experimental knowledge exchange. It emerges from years of conversations, interdisciplinary convergence, and shared ambitions. This process is not transactional; it is about growing networks, fostering trust, and developing meaningful partnerships that bridge creative research with industry and society.
The AHRC Impact Acceleration Account (IAA) has provided a vital framework for this work, but to fully unlock the potential of arts-led innovation, it needs to be bigger, bolder, and more flexible. The arts sector thrives on adaptability, yet traditional funding structures often fail to reflect the reality of how embedded impact happens – rarely immediate or linear.
At the University for the Creative Arts (UCA), we have explored a new model of knowledge exchange—one that moves beyond transactional partnerships to create impact at the convergence of arts, business, culture, and technology.
At UCA, IAA impact has grown not through top-down frameworks, but through years of relationship-building with creative businesses, independent artists, cultural organisations, and museums. These partnerships are built on trust, long-term engagement, and shared creative exploration, rather than short-term funding cycles.
Creative industries evolve through conversation, experimentation, and shared risk-taking. Artists, designers, filmmakers, and cultural institutions need time to test ideas, adapt, and develop new ways of working that blend creative practice with commercial and social impact.
This approach has led to collaborations that demonstrate how arts impact happens in real-time, to name a few:
These projects are creative interventions that bring together research, industry, and social change. We don’t just measure impact; we create it through action.
The AHRC IAA has provided an important platform for arts-led impact, but if we are serious about supporting creative industries as a driver of economic, cultural, and social transformation, we must rethink how impact is funded and measured. Traditional funding models often overlook the long-term, embedded collaborations that define arts impact.
To make impact funding more effective, we need to:
In academic teaching and training, knowledge exchange must be reconsidered beyond the REF framework. Rather than focusing solely on individual research outputs, assessment frameworks should value collective impact, long-term partnerships, and iterative creative inquiry. Funding models should support infrastructure that enables researchers to develop skills in knowledge exchange, ensuring it is a fundamental pillar of academic and professional growth.
By embedding knowledge exchange principles into creative education, we can cultivate a new generation of researchers who are not only scholars but also creative change makers, equipped to collaborate with industry, drive cultural innovation, and shape the future of the creative economy.
UCA’s approach demonstrates how arts institutions are developing a new model of impact—one rooted in collaboration, creativity, and social change. However, for this model to thrive, impact funding must evolve to recognise and support the unique ways in which creative research generates real change.
To keep pace with the evolving needs of cultural, creative, and technology industries, research funding must acknowledge that impact in the arts is about stories, communities, and the human connections that drive transformation. It’s time to expand our vision of what impact means – and to build a funding model that reflects the true value of the arts in shaping business, culture, and society.

Collaborative Classroom, a leading nonprofit publisher of K–12 instructional materials, announces the publication of the new fifth edition of SIPPS, a systematic decoding program. This research-based program accelerates mastery of vital foundational reading skills for both new and striving readers.
Twenty-Five Years of Transforming Literacy Outcomes
“As educators, we know the ability to read proficiently is one of the strongest predictors of academic and life success,” said Kelly Stuart, President and CEO of Collaborative Classroom. “Third-party studies have proven the power of SIPPS. This program has a 25-year track record of transforming literacy outcomes for students of all ages, whether they are kindergarteners learning to read or high schoolers struggling with persistent gaps in their foundational skills.
“By accelerating students’ mastery of foundational skills and empowering teachers with the tools and learning to deliver effective, evidence-aligned instruction, SIPPS makes a lasting impact.”
What Makes SIPPS Effective?
Aligned with the science of reading, SIPPS provides explicit, systematic instruction in phonological awareness, spelling-sound correspondences, and high-frequency words.
Through differentiated small-group instruction tailored to students’ specific needs, SIPPS ensures every student receives the necessary targeted support—making the most of every instructional minute—to achieve grade-level reading success.
“SIPPS is uniquely effective because it accelerates foundational skills through its mastery-based and small-group targeted instructional design,” said Linda Diamond, author of the Teaching Reading Sourcebook. “Grounded in the research on explicit instruction, SIPPS provides ample practice, active engagement, and frequent response opportunities, all validated as essential for initial learning and retention of learning.”
Personalized, AI-Powered Teacher Support
Educators using SIPPS Fifth Edition have access to a brand-new feature: immediate, personalized responses to their implementation questions with CC AI Assistant, a generative AI-powered chatbot.
Exclusively trained on Collaborative Classroom’s intellectual content and proprietary program data, CC AI Assistant provides accurate, reliable information for educators.
Other Key Features of SIPPS, Fifth Edition
Accelerating Reading Success for Students of All Ages
In small-group settings, students actively engage in routines that reinforce phonics and decoding strategies, practice with aligned texts, and receive immediate feedback—all of which contribute to measurable gains.
“With SIPPS, students get the tools needed to read, write, and understand text that’s tailored to their specific abilities,” said Desiree Torres, ENL teacher and 6th Grade Team Lead at Dr. Richard Izquierdo Health and Science Charter School in New York. “The boost to their self-esteem when we conference about their exam results is priceless. Each and every student improves with the SIPPS program.”