In a time when institutions are being asked to do more with less, reimagining how teams solve problems is critical. That’s where design thinking comes in.
This workbook introduces a proven framework for creative problem-solving that centers empathy, collaboration, and experimentation. Whether you’re launching a new program, reworking a process, or building cross-functional alignment, design thinking can help your institution move faster and smarter.
What’s Inside?
A breakdown of each phase of the design thinking process
Guided activities to structure collaborative work sessions
Prompts to help teams challenge assumptions and generate solutions
Space to capture insights and action steps in real time
Tips for applying design thinking to institutional challenges
It’s built for higher ed professionals looking to drive innovation without overcomplicating the process.
Complete the form on the right to download your free copy and start unlocking smarter solutions, faster.
Our mission is to enable impact in higher education. We help our partners achieve more, deliver superior experiences, and drive impact across the entire student lifecycle by leveraging and aligning data, technology, and talent.
PITTSBURGH — Saisri Akondi had already started a company in her native India when she came to Carnegie Mellon University to get a master’s degree in biomedical engineering, business and design.
Before she graduated, she had co-founded another: D.Sole. Akondi, who is 28, used the skills she'd learned to create a high-tech insole that can help detect foot complications from diabetes, which lead to 6.8 million amputations a year.
D.Sole is among technology companies in Pittsburgh that collectively employ a quarter of the local workforce at wages much higher than those in the city’s traditional steel and other metals industries. That’s according to the business development nonprofit the Pittsburgh Technology Council, which says these companies pay out an annual $27.5 billion in salaries alone.
A “significant portion” of Pittsburgh’s transformation into a tech hub has been driven by international students like Akondi, said Sean Luther, head of InnovatePGH, a coalition of civic groups and government agencies promoting innovation businesses.
The Pittsburgh Innovation District along Forbes Avenue in Pittsburgh’s Oakland section, near the campuses of the University of Pittsburgh and Carnegie Mellon University. Credit: Nancy Andrews for The Hechinger Report
“Next Happens Here,” reads the sign above the entrance to the co-working space where Luther works and technology companies are incubated, in an area near Carnegie Mellon and the University of Pittsburgh dubbed the Pittsburgh Innovation District. The neighborhood is filled with people of various ethnicities speaking a variety of languages over lunch and coffee.
What might happen next to the international students and graduates who have helped fuel this tech economy has become an anxiety-inducing subject of those conversations, as the second presidential administration of Donald Trump brings visa crackdowns, funding cuts and other attacks on higher education — including here, in a state that voted for Trump.
Inside the bubble of the universities and the tech sector, “there’s so much support you get,” Akondi observed, in a gleaming conference room at Carnegie Mellon. “But there still is a part of the population that asks, ‘What are you doing here?’ ”
Much of the ongoing conversation about international students has focused on undergraduates and their importance to university revenues and enrollment. Many of these students — especially in graduate schools — fill a less visible role in the economy, however. They conduct research that can lead to commercial applications, have skills employers need and start a surprising number of their own companies in the United States.
Sean Luther, head of InnovatePGH, at one of the organization’s co-working spaces. One reason tech companies have come to Pittsburgh “is because of those non-native-born workers,” Luther says. Credit: Nancy Andrews for The Hechinger Report
“The high-tech engineering and computer science activities that are central to regional economic development today are hugely dependent on these students,” said Mark Muro, a senior fellow at the Brookings Institution who studies technology and innovation. “If you go into a lab, it will be full of non-American people doing the crucial research work that leads to intellectual property, technology partnerships and startups.”
Some 143 U.S. companies valued at $1 billion or more were started by people who came to the country as international students, according to the National Foundation for American Policy, a nonprofit that conducts research on immigration and trade. These companies have an average of 860 employees each and include SpaceX, founded by Elon Musk, who was born in South Africa and graduated from the University of Pennsylvania.
Whether or not they invent new products or found businesses of their own, international graduates are “a vital source” of workers for U.S.-based tech companies, the National Science Foundation reported last year in an annual survey on the state of American science and engineering.
Dave Mawhinney, founding executive director of the Swartz Center for Entrepreneurship at Carnegie Mellon University, with Saisri Akondi, an international graduate and co-founder of the startup D.Sole. “There still is a part of the population that asks, ‘What are you doing here?’ ” says Akondi. Credit: Nancy Andrews for The Hechinger Report
It’s supply and demand, said Dave Mawhinney, a professor of entrepreneurship at Carnegie Mellon and founding executive director of its Swartz Center for Entrepreneurship, which helps many of that school’s students do research that can lead to products and startups. “And the demand for people with those skills exceeds the supply.”
That’s in part because comparatively few Americans are going into fields including science, technology, engineering and math. Even before the pandemic disrupted their educations, only 20 percent of college-bound American high school students were prepared for college-level courses in these subjects. U.S. students scored lower in math than their counterparts in 21 of the 37 participating nations of the Organization for Economic Cooperation and Development on an international assessment test in 2022, the most recent year for which the outcomes are available.
One result is that international students make up more than a third of master’s and doctoral degree recipients in science and engineering at American universities. Two-thirds of U.S. university graduate students in AI and AI-related fields, and more than half of workers in those fields, are foreign born, according to Georgetown University’s Center for Security and Emerging Technology.
“A real point of strength, and a reason our robotics companies especially have been able to grow their head counts, is because of those non-native-born workers,” said Luther, in Pittsburgh. “Those companies are here specifically because of that talent.”
International students are more than just contributors to this city’s success in tech. “They have been drivers” of it, Mawhinney said, in his workspace overlooking the studio where the iconic children’s television program “Mister Rogers’ Neighborhood” was taped.
Jake Mohin, director of solution engineering at a company that uses AI to predict how chemicals will synthesize, uses a co-working space at InnovatePGH in Pittsburgh’s Innovation District. Credit: Nancy Andrews for The Hechinger Report
“Every year, 3,000 of the smartest people in the world come here, and a large proportion of those are international,” he said of Carnegie Mellon’s graduate students. “Some of them go into the research laboratories and work on new ideas, and some come having ideas already. You have fantastic students who are here to help you build your company or to be entrepreneurs themselves.”
Boosters of the city’s tech-driven turnaround say what’s been happening in Pittsburgh is largely unappreciated elsewhere. It followed the effective collapse of the steel industry in the 1980s, when unemployment hit 18 percent.
In 2006, Google opened a small office at Carnegie Mellon to take advantage of the faculty and student expertise in computer science and other fields there and at neighboring higher education institutions; the company later moved to a nearby former Nabisco factory and expanded its Pittsburgh workforce to 800 employees. Apple, software and AI giant SAP and other tech firms followed.
“It was the talent that brought them here, and so much of that talent is international,” said Audrey Russo, CEO of the Pittsburgh Technology Council.
Sixty-one percent of the master’s and doctoral students at Carnegie Mellon come from abroad, according to the university. So do 23 percent of those at Pitt, an analysis of federal data shows.
The city has become a world center for self-driving car technology. Uber opened an advanced research center here. The autonomous vehicle company Motional — a joint venture between Hyundai and the auto parts supplier Aptiv — moved in. So did the Ford- and Volkswagen-backed Argo AI, which eventually dissolved, but whose founders went on to create the Pittsburgh-based self-driving truck developer Stack AV. The Ford subsidiary Latitude AI and the autonomous flight company Near Earth Autonomy also are headquartered in Pittsburgh.
Among other tech firms with homes here: Duolingo, which has 830 employees and is worth an estimated $22 billion. It was co-founded by a professor at Carnegie Mellon and a graduate of the university who both came to the United States as international students, from Guatemala and Switzerland, respectively.
InnovatePGH tracks 654 startups that are smaller than those big conglomerates but together employ an estimated 25,000 workers. Unemployment in Pittsburgh (3.5 percent in April) is below the national average (3.9 percent). Now Pitt and others are developing Hazelwood Green, which includes a former steel mill that closed in 1999, into a new district housing life sciences, robotics and other technology companies.
In a series of webinars about starting businesses, offered jointly to students at Pitt and Carnegie Mellon, the most popular installment is about how to found a startup on a student visa, said Rhonda Schuldt, director of Pitt’s Big Idea Center, in a storefront on Forbes Avenue in the Innovation District.
One of the co-working spaces operated by InnovatePGH in the Pittsburgh Innovation District. Credit: Nancy Andrews for The Hechinger Report
Some international undergraduates continue into graduate school or take jobs with companies that sponsor them so they can keep working on their ideas, Schuldt said.
“They want to stay in Pittsburgh and build businesses here,” she said.
There are clear worries that this momentum could come to a halt if the supply of international students continues a slowdown that began even before the new Trump term, thanks to visa processing delays and competition from other countries.
The number of international graduate students dropped in the fall by 2 percent, before the presidential election, according to the Institute of International Education. Further declines are expected following the government’s pause on student visa interviews, publicity surrounding visa revocations and arrests, and cuts to federal research funding.
Rhonda Schuldt, director of the Big Idea Center at the University of Pittsburgh. International students “want to stay in Pittsburgh and build businesses here,” Schuldt says. Credit: Nancy Andrews for The Hechinger Report
It’s too early to know what will happen this fall. But D.Sole co-founder Saisri Akondi has heard from friends who planned to come to the United States that they can’t get visas. “Most of these students wanted to start companies,” she said.
“I would be lying if I said nothing has changed,” said Akondi, who has been accepted into a master’s degree program in business administration at the Stanford University Graduate School of Business under her existing student visa, though she said her company will stay in Pittsburgh. “The fear has increased.”
This could affect whether tech companies continue to come to Pittsburgh, said Russo, at least unless and until more Americans are better prepared for and recruited into tech-related graduate programs. Universities have not yet begun that work, since the unanticipated threat to their international students erupted only in March, and it would likely take years.
Audrey Russo, CEO of the Pittsburgh Technology Council. If the number of international students declines, “Who’s going to do the research? Who’s going to be in these teams?” she asks. Credit: Nancy Andrews for The Hechinger Report
“Who’s going to do the research? Who’s going to be in these teams?” asked Russo. “We’re hurting ourselves deeply.”
The impact could transcend the research and development ecosystem. “I think we’ll see almost immediate ramifications in Pittsburgh in terms of higher-skilled, higher-wage companies hiring here,” said Sean Luther, at InnovatePGH. “And that affects the grocery shops, the barbershops, the real estate.”
There are other, more nuanced impacts.
Mike Madden, left, vice president of InnovatePGH and director of the Pittsburgh Innovation District, talks with University of Pittsburgh graduate student Jayden Serenari in one of InnovatePGH’s co-working spaces. Credit: Nancy Andrews for The Hechinger Report
“Whether we like it or not, it’s a global world. It’s a global economy. The problems that these students want to solve are global problems,” Schuldt said. “And one of the things that is really important in solving the world’s problems is to have a robust mix of countries, of cultures — that opportunity to learn how others see the world. That is one of the most valuable things students tell us they get here.”
Pittsburgh is a prime example of a place whose economy is vulnerable to a decline in the number of international students, said Brookings’ Muro. But it’s not unique.
“These scholars become entrepreneurs. They’re adding to the U.S. economy new ideas and new companies,” he said. Without them, “the economy would be smaller. Research wouldn’t get done. Journal articles wouldn’t be written. Patents wouldn’t be filed. Fewer startups would occur.”
The United States, said Muro, “has cleaned up by being the absolute central place for this. The system has been incredibly beneficial to the United States. The hottest technologies are inordinately reliant on these excellent minds from around the world. And their being here is critical to American leadership.”
Contact writer Jon Marcus at 212-678-7556, [email protected] or jpm.82 on Signal.
Researchers at the University of Kansas have produced a set of guidelines to help educators from preschool through higher education responsibly implement artificial intelligence in a way that empowers teachers, parents, students and communities alike.
Earlier this year, President Donald Trump issued an executive order instructing schools to incorporate AI into their operations. The framework is intended to help all schools and educational facilities do so in a manner that fits their unique communities and missions.
“We see this framework as a foundation,” said James Basham, director of CIDDL and professor of special education at KU. “As schools consider forming an AI task force, for example, they’ll likely have questions on how to do that, or how to conduct an audit and risk analysis. The framework can help guide them through that, and we’ll continue to build on this.”
The framework features four primary recommendations.
Establish a stable, human-centered foundation.
Implement future-focused strategic planning for AI integration.
Ensure AI educational opportunities for every student.
Conduct ongoing evaluation, professional learning and community development.
First, the framework urges schools to keep humans at the forefront of AI plans, prioritizing educator judgment, student relationships and family input on AI-enabled processes and not relying on automation for decisions that affect people. Transparency is also key, and schools should communicate how AI tools work, how decisions are made and ensure compliance with student protection laws such as the Individuals with Disabilities Education Act and Family Education Rights and Privacy Act, the report authors write.
The document also outlines recommendations for how educational facilities can implement the technology. Chief among them is establishing an AI integration task force that includes educators, administrators, families, legal advisers and specialists in instructional technology and special education. The document also shares tips on conducting an audit and risk analysis before adoption, considering how tools can affect student placement and identification, and watching for possible algorithmic error patterns. Because the technologies are trained on human data, they run the risk of repeating the same mistakes and biases humans have made, Basham said.
That idea is also reflected in the framework’s third recommendation. The document encourages educators to commit to learner-centered AI implementation that considers all students, from those in gifted programs to students with cognitive disabilities. AI tools should be prohibited from making final decisions on IEP eligibility, disciplinary actions and student progress, and mechanisms should be put in place that let students, teachers and parents give feedback on their AI educational experiences, the authors wrote.
Finally, the framework urges ongoing evaluation, professional learning and community development. As the technology evolves, schools should regularly re-evaluate it for unintended consequences and gather feedback from those who use it. Training, both at implementation and in ongoing installments, will be necessary to address overuse or misuse, clarify who is responsible for monitoring AI use, and ensure both the school and community stay informed about the technology.
The framework was written by Basham; Trey Vasquez, co-principal investigator at CIDDL, operating officer at KU’s Achievement & Assessment Institute and professor of special education at KU; and Angelica Fulchini Scruggs, research associate and operations director for CIDDL.
“The priority at CIDDL is to share transparent resources for educators on topics that are trending and in a way that is easy to digest,” Fulchini Scruggs said. “We want people to join the community and help them know where to start. We also know this will evolve and change, and we want to help educators stay up to date with those changes to use AI responsibly in their schools.”
Mike Krings, the University of Kansas
Mike Krings is a Public Affairs Officer with the KU News Service at the University of Kansas.
Academic researchers are worried that the government’s plans to stop investing in the development of messenger RNA vaccines, a technology university scientists first used to help develop the COVID-19 vaccines, will undermine the United States’ standing as a global leader in biomedical research and development.
As promising as mRNA technology may be for treating a range of maladies, including numerous types of cancer and autoimmune diseases, its role in developing the COVID vaccine has thrust it into a political crossfire, fueled by the Trump administration’s smoldering criticisms of the Biden administration’s handling of the pandemic.
Last week, Robert F. Kennedy Jr., the secretary of the Department of Health and Human Services, who frequently cites misinformation about vaccines and other public health issues, announced that the department is winding down mRNA vaccine research under the Biomedical Advanced Research and Development Authority and canceling $500 million worth of contracts and grants with numerous biotech companies and Emory University in Atlanta.
“We reviewed the science, listened to the experts, and acted,” Kennedy, a lawyer by training, said in a statement, claiming that “the data show these vaccines fail to protect effectively against upper respiratory infections like COVID and flu. We’re shifting that funding toward safer, broader vaccine platforms that remain effective even as viruses mutate.”
Jeff Coller, director of the RNA Innovation Center at Johns Hopkins University, whose own graduate student helped develop Moderna’s COVID vaccine, said that “mRNA technology is incredibly misunderstood by the public and many of our politicians.”
Despite that, “the science has always been consistently clear about the powerful medical benefits of the mRNA platform,” he said. “It’s saved millions of lives, is incredibly safe, has huge potential and will revolutionize medicine in the next 100 years. Yet, we’re ceding American leadership in this technology.”
The half-a-billion-dollar cut comes at the same time that the Trump administration has withdrawn support for federally funded scientific research that doesn’t align with its ideological views, including projects focused on vaccine hesitancy, LGBTQ+ health and climate change.
According to a report from STAT News, the 181-page document Kennedy cited as his evidence that mRNA vaccines aren’t safe or effective references disputed studies written by other skeptics of COVID mitigation protocols, including stay-at-home orders and vaccines.
In an op-ed, National Institutes of Health director Jay Bhattacharya acknowledged that mRNA is a “promising technology” that “may yet deliver breakthroughs in treating diseases such as cancer,” but said that “as a vaccine intended for broad public use, especially during a public health emergency, the platform has failed a crucial test: earning public trust.”
“Unfortunately, the Biden administration did not manage public trust in the coronavirus vaccines, largely because it chose a strategy of mandates rather than a risk-based approach and did not properly acknowledge Americans’ growing concerns regarding safety and effectiveness,” he wrote.
‘Political Shot Across the Bow’
The vast majority of scientists agree that the mRNA-based COVID vaccine—which was created in record time as a result of President Donald Trump’s Operation Warp Speed, launched in 2020—is generally safe and effective.
“I’m concerned about [the cut] weakening our country and putting us at a disadvantage,” said an mRNA researcher who asked to remain anonymous out of fear of retaliation. “The promise of mRNA is almost limitless, and I’d like to see those advances being made in this country. But currently it seems those advances are more likely to come from Europe and Asia. I’m also worried about the impact this could have on our economy—this is a growing field of industry.”
Coller, of Johns Hopkins, said Kennedy’s decision to withdraw funding for mRNA vaccine research has more than financial implications.
“It was a political shot across the bow of the entire research community, both in industry and academia,” Coller said. “What it says is that the government doesn’t want to support this technology and is going to make sure it doesn’t happen. If you’re an academic thinking about starting a new program in mRNA medicines, don’t waste your time.”
And now it will be even easier for political whims to drive the government’s scientific research priorities. Last week, Trump issued an executive order that will put political appointees—rather than subject-matter experts—in charge of federal grant-making decisions.
Heather Pierce, senior director for science policy and regulatory counsel at the Association of American Medical Colleges, said that while Kennedy’s decision won’t end all of the nation’s mRNA research, “the indication that a certain technology or scientific area won’t be pursued regardless of the progress made so far is worrisome as a concept.”
That’s in part because “when we unilaterally close the door on a specific type of research or technology, we don’t know what would have come from that,” she said. “It’s not to say that every research project using every technology and scientific tool will necessarily lead to a cure or breakthrough, but the initial funding of these projects shows that there was promise that made it worth exploring.”
Both Kennedy and Bhattacharya have said the government will continue to support research on other uses of mRNA technology unrelated to infectious disease vaccines. But experts say separating those research areas isn’t so simple.
“They’re all interconnected,” said Florian Krammer, a professor of vaccinology at the Icahn School of Medicine at Mount Sinai. “If you take away funding in the infectious disease space and innovation doesn’t happen there, it’s also not happening in other spaces where mRNA technology is used.”
That will create a “huge problem for researchers,” he added, “because a lot of fields are using this technology, and if it’s not moving forward, it closes doors.”
What happens when over 100 passionate educators converge in Chicago to celebrate two decades of educational innovation? A few weeks ago, I had the thrilling opportunity to immerse myself in the 20th anniversary of the Discovery Educator Network (the DEN), a week-long journey that reignited my passion for transforming classrooms.
From sunrise to past sunset, my days at Loyola University were a whirlwind of learning, laughter, and relentless exploration. Living the dorm life, forging new connections, and rekindling old friendships, we collectively dove deep into the future of learning, creating experiences that went far beyond the typical professional development.
As an inaugural member of the DEN, the professional learning community supported by Discovery Education, I was incredibly excited to return 20 years after its founding to guide a small group of educators through the bountiful innovations of the DEN Summer Institute (DENSI). Think scavenger hunts, enlightening workshops, and collaborative creations–every moment was packed with cutting-edge ideas and practical strategies for weaving technology seamlessly into our teaching, ensuring our students are truly future-ready.
During my time at DENSI, I learned a lot of new tips and tricks that I will pass on to the educators I collaborate with. From AI’s potential to the various new ways to work together online, participants in this unique event learned a number of ways to weave digital citizenship into edtech innovation. I’ve narrowed them down to five core concepts, each a powerful step toward building future-ready classrooms and fostering truly responsible digital citizens.
Use of artificial intelligence
Technology integration: When modeling responsible AI use, key technology tools could include generative platforms like Gemini, NotebookLM, Magic School AI, and Brisk, acting as ‘thought partners’ for brainstorming, summarizing, and drafting. Integration also covers AI grammar/spell-checkers, data visualization tools, and feedback tools for refining writing, presenting information, and self-assessment, enhancing digital content interaction and production.
Learning & application: Teaching students to ethically use AI is key. This involves modeling critical evaluation of AI content for bias and inaccuracies. For instance, providing students with an AI summary of a historical event to fact-check with credible sources. Students learn to apply AI as a thought partner, boosting creativity and collaboration, not replacing their own thinking. Fact-checking and integrating their unique voices are essential. An English class could use AI to brainstorm plot ideas, but students develop characters and write the narrative. Application includes using AI for writing refinement and data exploration, fostering understanding of AI’s academic capabilities and limitations.
Connection to digital citizenship: This example predominantly connects to digital citizenship. Teaching responsible AI use promotes intellectual honesty and information literacy. Students can grasp ethical considerations like plagiarism and proper attribution. The “red, yellow, green” stoplight method provides a framework for AI use, teaching students when to use AI as a collaborator, editor, or thought partner–or not at all. This approach cultivates critical thinking and empowers students to navigate the digital landscape with integrity, preparing them as responsible digital citizens who understand AI’s implications.
Digital communication
Technology integration: Creating digital communication norms should focus on clarity, with visuals like infographics, screenshots, and video clips. Canva is a key tool for building a visual “Digital Communication Agreement” that defines expectations for online interaction. Include student voice by using pictures and graphics to illustrate behaviors, and potentially by using collaborative presentation or polling tools to involve students in norm-setting.
Learning & application: Establishing clear online interaction norms is the focus of digital communication. Applying clear principles teaches the importance of visuals and setting communication goals. Creating a visual “Digital Communication Agreement” with Canva is a practical application where students define respectful online language and netiquette. An elementary class might design a virtual classroom rules poster, showing chat emojis and explaining “think before you post.” Using screenshots and “SMART goals” for online discussions reinforces learning, teaching constructive feedback and respectful debate. In a middle school science discussion board, the teacher could model a respectful response like “I understand your point, but I’m wondering if…” This helps students apply effective digital communication principles.
Connection to digital citizenship: This example fosters respectful communication, empathy, and understanding of online social norms. By creating and adhering to a “Digital Communication Agreement,” students develop responsibility for online interactions. Emphasizing respectful language and netiquette cultivates empathy and awareness of their words’ impact. This prepares them as considerate digital citizens, contributing positively to inclusive online communities.
Content curation
Technology integration: For understanding digital footprints, one primary tool is Google Drive, used as a digital folder to curate students’ content. The “Tech Toolbox” concept implies interaction with the various digital platforms where an online presence exists. Curating content across many tools shows students how the traces they leave on a range of technologies form their collective digital footprint.
Learning & application: This centers on educating students about their online presence’s permanence and nature. Teaching them to curate digital content in a structured way, like using a Google Drive folder, is key. A student could create a “Digital Portfolio” in Google Drive with online projects, proud social media posts, and reflections on their public identity. By collecting and reviewing online artifacts, students visualize their current “digital footprint.” The classroom “listening tour” encourages critical self-reflection, prompting students to think about why they share online and how to be intentional about their online identity. This might involve students reviewing anonymized social media profiles, discussing the impression given to future employers.
Connection to digital citizenship: This example cultivates awareness of online permanence, privacy, responsible self-presentation, and reputation management. Understanding lasting digital traces empowers students to make informed decisions. The reflection process encourages the consideration of their footprint’s impact, fostering ownership and accountability for online behavior. This helps them become mindful, capable digital citizens.
Promoting media literacy
Technology integration: One way to promote media literacy is by using “Paperslides” for engaging content creation, leveraging cameras and simple video recording. This concept gained popularity at the beginning of the DEN through Dr. Lodge McCammon. His popular 1-Take Paperslide Video strategy, “hit record, present your material, then hit stop, and your product is done,” is a style of video creation that anyone can start using tomorrow. Integration uses real-life examples (likely digital media) to share a variety of topics for any audience. Additionally, applying “Pay Full Attention” in a digital context implies online viewing platforms and communication tools for modeling digital eye contact and verbal cues.
Learning & application: Integrating critical media consumption with engaging content creation is the focus. Students learn to leverage “Paperslides” or another video creation method to explain topics or present research, moving beyond passive consumption. For a history project, students could create “Paperslides” explaining World War II causes, sourcing information and depicting events. Learning involves using real-life examples to discern credible online sources, understanding misinformation and bias. A lesson might show a satirical news article, guiding students to verify sources and claims through their storyboard portion. Applying “Pay Full Attention” teaches active, critical viewing, minimizing distractions. During a class viewing of an educational video, students could pause to discuss presenter credentials or unsupported claims, mimicking active listening. This fosters practical media literacy in creating and consuming digital content.
Connection to digital citizenship: This example enhances media literacy, critical online information evaluation, and understanding persuasive techniques. Learning to create and critically consume content makes students informed, responsible digital participants. They identify and question sources, essential for navigating a digital information-saturated world. This empowers them as discerning digital citizens, contributing thoughtfully to online content.
Collaborative problem-solving
Technology integration: For practicing digital empathy and support, key tools are collaborative online documents like Google Docs and Google Slides. Integration extends to online discussion forums (Google Classroom, Flip) for empathetic dialogue, and project management tools (Trello, Asana) for transparent organization.
Learning & application: This focuses on developing effective collaborative skills and empathetic communication in digital spaces. Students learn to work together on shared documents, applying a “Co-Teacher or Model Lessons” approach where they “co-teach” each other new tools or concepts. In a group science experiment, students might use a shared Google Doc to plan methodology, with one “co-teaching” data table insertion from Google Sheets. They practice constructive feedback and model active listening in digital settings, using chat for clarification or emojis for feelings. The “red, yellow, green” policy provides a clear framework for online group work, teaching when to seek help, proceed cautiously, or move forward confidently. For a research project, “red” means needing a group huddle, “yellow” is proceeding with caution, and “green” is ready for review.
Connection to digital citizenship: This example is central to digital citizenship, developing empathy, respectful collaboration, and responsible problem-solving in digital environments. Structured online group work teaches how to navigate disagreements and offers supportive feedback. Emphasis on active listening and empathetic responses helps internalize civility, preparing students as considerate digital citizens contributing positively to online communities.
These examples offer a powerful roadmap for cultivating essential digital citizenship skills and preparing all learners to be future-ready. The collective impact of thoughtfully utilizing these or similar approaches, or even grab-and-go resources from programs such as Discovery Education’s Digital Citizenship Initiative, can provide the foundation for a strong academic and empathetic school year, empowering educators and students alike to navigate the digital world with confidence, integrity, and a deep understanding of their role as responsible digital citizens.
In addition, this event reminded me of the power of professional learning communities. Every educator needs and deserves a supportive community that will share ideas, push their thinking, and support their professional development. One of my long-standing communities is the Discovery Educator Network (which is currently accepting applications for membership).
Dr. Stephanie J. Madlinger, Discovery Educator Network (DEN)
Dr. Stephanie J. Madlinger, Ed.D., is an experienced educator and inaugural member of the Discovery Educator Network (DEN). With a deep passion for integrating technology into learning environments, she focuses on innovative teaching strategies and the responsible use of technological tools to enhance student engagement and promote digital citizenship. Her work emphasizes preparing students to be future-ready learners who can navigate the digital world with confidence and integrity. As a seasoned educator, professor, and staff developer, Stephanie is dedicated to fostering innovative learning environments in both traditional and online settings. Dr. Madlinger has facilitated and taught thousands of learners in traditional, online, and hybrid formats. She actively attends & presents at educational events and is the proud parent of four adult children, including three educators.
Dr. Emmanuel Lalande
Historically Black Colleges and Universities (HBCUs) have always stood on the frontlines of educational equity, carving pathways to excellence for generations of Black students against overwhelming odds. Today, as higher education faces a shift driven by technology, declining enrollment, and resource disparities, a new opportunity emerges: the power of Artificial Intelligence (AI) to reshape, reimagine, and reinforce the mission of HBCUs.
From admissions automation and predictive analytics to personalized learning and AI-powered tutoring, artificial intelligence is no longer theoretical; it is operational. At large institutions, AI-driven chatbots and enrollment algorithms have already improved student engagement and reduced summer melt. Meanwhile, HBCUs, particularly smaller and underfunded ones, risk being left behind.
The imperative for HBCUs to act now is not about chasing trends; it is about survival, relevance, and reclaiming leadership in shaping the future of Black education.
AI as a Force Aligned with the HBCU Mission
Artificial intelligence, when developed and implemented with intention and ethics, can be one of the most powerful tools for educational justice. HBCUs already do more with less. They enroll 10% of Black students in higher education and produce nearly 20% of all Black graduates. These institutions are responsible for over 25% of Black graduates in STEM fields, and they produce a significant share of Black teachers, judges, engineers, and public servants.
The power of AI can amplify this legacy.
Predictive analytics can flag at-risk students based on attendance, financial aid gaps, and academic performance, helping retention teams intervene before a student drops out.
AI chatbots can provide round-the-clock support to students navigating complex enrollment, financial aid, or housing questions.
AI tutors and adaptive platforms can meet students where they are, especially for those in developmental math, science, or writing courses.
Smart scheduling and resource optimization tools can help HBCUs streamline operations, offering courses more efficiently and improving completion rates.
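The predictive flagging described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the field names, thresholds, and weights are invented assumptions, and a real early-warning system would use a trained statistical model on institutional data rather than fixed rules.

```python
# Hypothetical sketch of rule-based at-risk flagging.
# All field names, thresholds, and weights are illustrative assumptions.

def risk_score(student):
    """Combine simple indicators into a rough 0-1 risk score."""
    score = 0.0
    if student["attendance_rate"] < 0.75:   # frequent absences
        score += 0.4
    if student["aid_gap_usd"] > 2000:       # unmet financial need
        score += 0.3
    if student["gpa"] < 2.0:                # academic performance
        score += 0.3
    return score

def flag_at_risk(students, threshold=0.5):
    """Return names of students whose score crosses the intervention threshold."""
    return [s["name"] for s in students if risk_score(s) >= threshold]

students = [
    {"name": "A", "attendance_rate": 0.92, "aid_gap_usd": 0,    "gpa": 3.4},
    {"name": "B", "attendance_rate": 0.60, "aid_gap_usd": 3500, "gpa": 2.8},
    {"name": "C", "attendance_rate": 0.80, "aid_gap_usd": 2500, "gpa": 1.9},
]

print(flag_at_risk(students))  # → ['B', 'C']
```

The point of the sketch is the workflow, not the rules: combine several signals, rank, and hand the flagged list to retention staff so a human intervenes before a student drops out.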
For small HBCUs with limited staff, outdated technology, and tuition-driven models, AI can serve as a strategic equalizer. But accessing these tools requires intentional partnerships, resources, and cultural buy-in.
The Philanthropic Moment: A Unique Opportunity
The recent announcement from the Bill & Melinda Gates Foundation that it plans to spend its entire $200 billion endowment by 2045 presents a monumental opportunity. The foundation has declared a sharpened focus on “unlocking opportunity” through education, including major investments in AI-powered innovations in K-12 and higher education, particularly in mathematics and student learning platforms.
One such investment is in Magma Math, an AI-driven platform that helps teachers deliver personalized math instruction. The foundation is also actively funding research and development around how AI can close opportunity gaps in postsecondary education and increase economic mobility. Their call for “AI for Equity” aligns with the HBCU mission like no other.
Now is the time for HBCUs to boldly approach philanthropic organizations like the Gates Foundation as strategic partners capable of leading equity-driven AI implementation.
Other foundations should follow suit. Lumina Foundation, Carnegie Corporation, Kresge Foundation, and Strada Education Network have all expressed interest in digital learning and postsecondary success. A targeted, collaborative initiative to equip HBCUs with AI infrastructure, training, and research capacity could be transformative.
Tech Industry Engagement: From Tokenism to True Partnership
The tech industry has begun investing in HBCUs, but more is needed.
OpenAI recently partnered with North Carolina Central University (NCCU) to support AI literacy through its Institute for Artificial Intelligence and Emerging Research. The vision includes scaling support to other HBCUs.
Intel has committed $750,000 to Morgan State University to advance research in AI, data science, and cybersecurity.
Amazon launched the Educator Enablement Program, supporting faculty at HBCUs in learning and teaching AI-related curricula.
Apple and Google have supported HBCU initiatives around coding, machine learning, and entrepreneurship, though these efforts are often episodic or branding-focused. What’s needed now is sustained, institutional investment.
Huston-Tillotson University hosted an inaugural HBCU AI Conference and Training Summit back in April, bringing together AI researchers, students, educators, and industry leaders from across the country. This gathering focused on building inclusive pathways in artificial intelligence, offering interactive workshops, recruiter engagement, and a platform for collaboration among HBCUs, community colleges, and major tech firms.
We call on Microsoft, Salesforce, Nvidia, Coursera, Anthropic, and other major EdTech firms to go beyond surface partnerships. HBCUs are fertile ground for workforce development, AI research, and inclusive tech talent pipelines. Tech companies should invest in labs, curriculum development, student fellowships, and cloud infrastructure, especially at HBCUs without R1 status or multi-million-dollar endowments.
A Framework for Action Across HBCUs
To operationalize AI within the HBCU context, a few strategic steps can guide implementation:
1. AI Capacity Building Across Faculty and Staff
Workshops, certification programs, and summer institutes can train faculty to integrate AI into pedagogy, advising, and operations. Staff training can ensure AI tools support, not replace, relational student support.
2. Student Engagement Through Research and Internships
HBCUs can establish AI learning hubs where students gain real-world experience developing or auditing algorithms, especially those designed for educational equity.
3. AI Governance
Every HBCU adopting AI must also build frameworks for data privacy, transparency, and bias prevention. As institutions historically rooted in justice, HBCUs can lead the national conversation on ethical AI.
4. Regional and Consortial Collaboration
HBCUs can pool resources to co-purchase AI tools, share grant writers, and build regional research centers. Joint proposals to federal agencies and tech firms will yield greater impact.
5. AI in Strategic Planning and Accreditation
Institutions should embed AI as a theme in Quality Enhancement Plans (QEPs), Title III initiatives, and enrollment management strategies. AI should not be a novelty; it should be a core driver of sustainability and innovation.
Reclaiming the Future
HBCUs were built to meet an unmet need in American education. They responded to exclusion with excellence. They turned marginalization into momentum. Today, they can do it again, this time with algorithms, neural networks, and digital dashboards.
But this moment calls for bold leadership. We must go beyond curiosity and into strategy. We must demand resources, form coalitions, and prepare our institutions not just to use AI, but to shape it.
Let HBCUs define what culturally competent, mission-driven artificial intelligence looks like in real life, not in theory.
And to the Gates Foundation, Intel, OpenAI, Amazon, and all who believe in the transformative power of education: invest in HBCUs. Not as charity, but as the smartest, most impactful decision you can make for the future of American innovation.
Because when HBCUs lead, communities rise. And with AI in our hands, the next level of excellence is well within reach.
Dr. Emmanuel Lalande currently serves as Vice President for Enrollment and Student Success and Special Assistant to the President at Voorhees University.
By Professor Alejandro Armellini, Dean of Education and Digital Innovation at the University of Portsmouth.
Universities want to be at the cutting edge of knowledge creation, but many are grappling with a paradox: how to harness the potential of AI while minimising its pitfalls. Done well, generative AI can help institutions run more efficiently, enhance teaching quality and support students in new and exciting ways. Done poorly, it can generate misinformation, introduce bias and make students (and staff) over-reliant on technology they do not fully understand. The challenge is not whether to use AI but how to make it work for human-driven, high-quality education.
Across the sector, institutions are already putting AI to work in ways that go far beyond administrative efficiencies. At many universities, AI-driven analytics are helping identify students at risk of disengagement before they drop out. By analysing attendance, engagement and performance data, tutors can intervene earlier, offering personalised support before problems escalate. Others have deployed AI-powered feedback systems that provide students with instant formative feedback on their writing. The impact? Students who actually improve before their assignments are due, rather than after they’ve been graded.
Concerns about the accuracy, transparency and provenance of AI tools have been well documented. Many of them operate as ‘black boxes’, making it difficult to verify outputs or attribute sources. These challenges run counter to academic norms of evidence, citation and rigour. AI tools continue to occupy a liminal space: they promise and deliver a lot, but are not yet fully trusted. AI can get things spectacularly wrong. AI-powered recruitment tools have been found to be biased against women and minority candidates, reinforcing rather than challenging existing inequalities. AI-driven assessment tools have been criticised for amplifying bias, grading students unfairly or making errors that, when left unchallenged, can have serious consequences for academic progression.
With new applications emerging almost daily, it’s becoming harder to assess their quality, reliability and appropriateness for academic use. Some institutions rush headlong into AI adoption without considering long-term implications, while others hesitate, paralysed by the sheer number of options, risks and potential costs. Indeed, a major barrier to AI adoption at all levels in higher education is fear: fear of the unknown, fear of losing control, fear of job displacement, fear of fostering metacognitive laziness. AI challenges long-held beliefs about authorship, expertise and what constitutes meaningful engagement with learning. Its use can blur the boundaries between legitimate assistance and academic misconduct. Students express concerns about being evaluated by algorithms rather than humans. These fears are not unfounded, but they must be met with institutional transparency, clear communication, ethical guidelines and a commitment to keeping AI as an enabler, not a replacement, for human judgment and interaction. Universities are learning too.
No discussion on AI in universities would be complete without addressing the notion of ‘future-proofing’. The very idea that we can somehow freeze a moving target is, at best, naive and, at worst, an exercise in expensive futility. Universities drafting AI policies today will likely find them obsolete before the ink has dried. Many have explicitly reversed earlier AI policies. That said, having an AI policy is not without merit: it signals an institutional commitment to ethical AI use, academic integrity and responsible governance. The trick is to focus on agile, principle-based approaches that can adapt as AI continues to develop. Over-regulation risks stifling innovation, while under-regulation may lead to confusion or misuse. A good AI policy should be less about prediction and more about preparation: equipping staff and students with the skills and capabilities to navigate an AI-rich world, while creating a culture that embraces change. Large-scale curriculum and pedagogic redesign is inevitable.
Where does all this leave us? Universities must approach AI with a mix of enthusiasm and caution, ensuring that innovation does not come at the expense of academic integrity or quality. Investing in AI fluency (not just ‘literacy’) for staff and students is essential, as is institutional clarity on responsible AI use. Universities should focus on how AI can support (not replace) the fundamental principles of good teaching and learning. They must remain committed to the simple but powerful principle of teaching well, consistently well: every student, every session, every time.
AI is a tool – powerful, perhaps partly flawed, but full of potential. It is the pocket calculator of the 1970s. How universities wield it will determine whether it leads to genuine transformation or a series of expensive (and reputationally risky) missteps. The challenge, then, is to stay in control, keep the focus on successful learning experiences in their multiple manifestations, and never let AI run the show alone. After all, no algorithm has yet mastered the art of handling a seminar full of students who haven’t done the reading.
The UK needs a plan for growth and innovation – an industrial strategy is a way of picking winners in terms of sector investments and prioritisation.
Today’s iteration (the fourth in recent times, with Theresa May’s government providing the previous one) chooses eight high-potential sectors to prioritise funding and skills interventions, with the overall intention of encouraging private investment over the long term.
Picking winners for the long term
The choices are the important bit – as the strategy itself notes
Past UK industrial strategies have not lasted because they have either refused to make choices or have failed to back their choices up by reallocating resources and driving genuine behaviour change in both government and industry.
And there are clear commonalities between previous choices and the new ones. Successive governments have prioritised “clean growth”, data and technology, and health – based both on the potential for growth and the impact that investment could have.
What is different this time is the time scales on which the government is thinking – much of the spending discussed today is locked in to the next five years of departmental spending via the spending review, and of course we have those infamous 10-year research and development plans in some areas: the Aerospace Technology Institute (linked to Cranfield), the National Quantum Computing Centre (at the Harwell STFC campus), the Laboratory of Molecular Biology (MRC supported at Cambridge), and the new DRIVE35 automotive programme are the first to be announced.
Government funded innovation programmes will prioritise the IS-8, within a wider goal to focus all of research and development funding on long term economic growth. This explicitly does not freeze out curiosity delivered research – but it is clear that there will be a focus on the other end of the innovation pipeline.
At a macro level UKRI will be pivoting financial support towards the IS-8 sectors, getting new objectives around innovation, commercialisation, and scale-up. If you are thinking that this sounds very Innovate UK, you would be right; the Catapult Network will also get tweaks to refocus it.
The £500m Local Innovation Partnerships Fund is intended to generate a further £1bn of additional investment and £700m of value to local economies, and there are wider plans to get academia and industry working together: a massive expansion in supercomputer resources (the AI research resource, inevitably) and a new Missions Accelerator programme supported by £500m of funding. And there’s the Sovereign AI Unit within government (that’s another £500m of industry investments) in “frontier AI”. On direct university allocation we get the welcome news that the Higher Education Innovation Fund (HEIF) is here to stay.
There’s an impressively hefty chunk of plans for getting the most out of public sector data, specifically the way in which government (“administrative”) data can be used by research and industry. Nerds like me will have access to a wider range of data under a wider range of licenses, and the government will also get better at valuing data in order to maximise returns for the bits it does sell. There will be ARDN-like approaches available to more businesses to access public data in a safe and controlled way (if parliamentary time allows, legislation will be brought forward), plus money for data sharing infrastructure (£12m) and the national data library (£100m).
By sector
The sector plans themselves have a slant towards technology adoption (yes even the creative sector – “createch” is absolutely a thing). But there’s plenty of examples throughout of specific funding to support university-based research, innovation, and bringing discoveries to market – alongside (as you’ll see from Michael’s piece) plenty on skills.
Clearly the focus varies between sectors. For example, there will be a specific UKRI professional and business services innovation programme; while digital and technologies work is more widely focused on the entirety of the UK’s research architecture: there we get promises of “significant” investment via multiple UKRI and ARIA programmes alongside a £240m focus on advanced communication technologies (ACT). The more research-focused sectors also get the ten-year infrastructure-style investments like the £1bn on AI research resources.
Somewhat surprisingly, clean energy is not one of the big research funding winners – there’s just £20m over 7 years for the sustainable industrial futures programme (compare the £1bn energy programme in the last spending review). Sustainability is also a mission, so the sector gets a share of the missions accelerator programme (£500m), but for such a research-intensive field that doesn’t feel like a lot.
The creative industries, on the other hand, get £100m via UKRI over the spending review period – there’s a specific creative industries research and development plan coming later this year, alongside (£500m) creative clusters, and further work on measuring the output of the sector. And “createch” (the increasingly technical underpinnings of the creative industries) is a priority too.
It’s also worth mentioning advanced manufacturing as a sector where business and industry are major funders. Here the government is committing “up to £4.3bn” for the sector, with £2.8bn of this going to research and development. Key priorities include work on SME technology adoption, and advanced automotive technologies – the focus is very much on commercialisation, and there is recognition that private finance needs to be a big part of this.
Choice cuts?
The IS-8 are broadly drawn – it is difficult to think of an academic research sector that doesn’t get a slice. But there will be a shaking out of sub-specialisms, and the fact that one of the big spenders (health and medicine) is currently lacking detail doesn’t help us understand how the profile of research within that area will shift during the spending review period.
Industrial policy has always been a means of picking winners – focusing necessarily limited investment on the places it will drive benefits. The nearly flat settlement for UKRI in the spending review was encouraging, but it is starting to feel like new announcements like these need to be seen both as net benefits (for the lucky sectors) and funding cuts (for the others).
Universities love to talk about innovation. Pedagogical innovation is framed as a necessity in an era of rapid change, yet those expected to enact it – academics – are caught in an identity crisis.
In our research on post-pandemic pedagogical innovation, we found that the decision to engage with or resist innovation is not just about workload, resources, or institutional strategy. It’s about identity – who academics see themselves as, how they are valued within their institutions, and what risks they perceive in stepping beyond the status quo.
Academics are asked to be both risk-taking pedagogical entrepreneurs and compliant employees within increasingly bureaucratic, metric-driven institutions. This paradox creates what we call the moral wiggle room of innovation – a space where educators justify disengagement, not necessarily because they oppose change, but because their institutional environment does not meaningfully reward it.
The paradox of pedagogical innovation
During the pandemic, universities celebrated those who embraced new digital tools, hybrid learning, and flexible teaching formats. “Necessity breeds innovation” became the dominant narrative. Yet, as the crisis has subsided, many of these same institutions have reverted to rigid processes, managerial oversight, and bureaucratic hurdles, making innovation feel like an uphill battle.
On paper, universities support innovation. Education strategies abound with commitments to “transformative learning experiences” and “sector-leading digital education.” However, in practice, academics face competing pressures – expectations to drive innovation while being weighed down by institutional inertia.
The challenge is not just about introducing innovation but sustaining it in ways that foster long-term change. While institutions may advocate for pedagogical innovation, the reality for many educators is a system that does not provide the necessary time, support, or recognition to make such innovation a viable, sustained effort.
The result? Many feel disillusioned. As one academic in our research put it:
I definitely think there’s a drive to be more innovative, but it feels like a marketized approach. It’s not tangible – I can’t say, ‘Oh, they’re really supporting me to be more innovative.’ There’s no clear pathway, no structured process.
Academic at a post-92 university
For some, engaging in pedagogical innovation is a source of professional fulfilment. For others, it is a career gamble. Whether academics choose to innovate or resist depends largely on how their identity aligns with institutional structures, career incentives, and personal values.
Three identity tensions shaping pedagogical innovation
Regulated versus self-directed identity
Institutions shape identity through expectations: teaching excellence frameworks, fellowship accreditations, and workload models dictate what “counts” in an academic career. Yet, many educators see their professional identity as self-driven – rooted in disciplinary expertise and a commitment to students. When institutional definitions of innovation clash with personal motivations, resistance emerges.
As one participant put it:
When you’re (personally) at the forefront of classroom innovation…you’re constantly looking outwards for ideas. Within the institution, there isn’t really anyone I can go to and say, ‘What are you doing differently?’ It’s more about stumbling upon people rather than having a proactive approach to being innovative. I think there’s a drive for PI, but it feels like a marketised approach.
Academic at a post-92 university
For some, innovation is an extension of their identity as educators; for others, it is a compliance exercise – an expectation imposed from above rather than a meaningful pursuit.
This tension is explored in Wonkhe’s discussion of institutional silos, which highlights how universities often create structures that inadvertently restrict collaboration and cross-disciplinary innovation, making it harder for educators to engage with meaningful change.
Risk versus reward in academic careers
Engaging in pedagogical innovation takes time and effort. For those on teaching and scholarship contracts, it is often an expectation. For research and scholarship colleagues, it is rarely a career priority.
Despite strategic commitments to pedagogical innovation, career incentives in many institutions still favour traditional research outputs over pedagogical experimentation. The opportunity cost is real – why invest in something that holds little weight in promotions or workload models?
As one academic reflected:
I prioritise what has immediate impact. Another teaching award isn’t a priority. Another publication directly benefits my CV.
Senior leader at a Russell Group university
Until pedagogical innovation is properly recognised in career progression, it will remain a secondary priority for many. As explored on Wonkhe here, the question is not just whether innovation happens but whether institutions create environments that allow it to spread. Without clear incentives, pedagogical innovation remains the domain of the few rather than an embedded part of academic practice.
Autonomy versus bureaucracy
Academics value autonomy. It is one of the biggest predictors of job satisfaction in higher education. Yet pedagogical innovation is often entangled in institutional bureaucracy (perceived or real) through slow approval processes, administrative hurdles, and performance monitoring.
The pandemic showed that universities can be agile. But many educators now feel that flexibility has been replaced by managerialism, stifling creativity.
I’ve had people in my office almost crying at the amount of paperwork just to get an innovation through. People get the message: don’t bother.
Senior leader at a Russell Group university
To counteract this, as one educator put it:
It’s better to ask forgiveness afterwards than ask permission beforehand.
Senior leader at a Russell Group university
This kind of strategic rule-bending highlights the frustration many educators feel – a desire to innovate constrained by institutional red tape.
Mark Andrews, in a Wonkhe article here, argues that institutions need to focus on making education work rather than simply implementing digital tools for their own sake. The same logic applies to pedagogical innovation – if the focus is solely on regulation, innovation will always struggle to take root.
Beyond the rhetoric: what needs to change
If universities want sustained innovation, they must address these identity tensions. Pedagogical innovation needs to be rewarded in promotions, supported through streamlined processes, and recognised as legitimate academic work – not an optional extra.
The post-pandemic university is at a crossroads. Will pedagogical innovation be institutionalised in meaningful ways, or will it remain a talking point rather than a transformation? Academics are already navigating an identity crisis – caught between structural constraints, career incentives, and their own motivations. Universities must decide whether to ease that tension or allow it to widen.
A series of key government announcements over the coming weeks will set the direction of travel for research and innovation for years to come. Next week’s spending review will set the financial parameters for the remainder of this Parliament – and we shouldn’t expect this outcome to maintain the status quo, given this is the first zero-based review under a Labour government for 17 years.
Accompanying this will be the industrial strategy white paper, which is likely to have a focus on driving innovation and increasing the diffusion and adoption of technologies across the economy – in which the UK’s universities will need to be key delivery partners. We can also expect more detail on the proposals in the immigration white paper, with implications for international student and staff flows to the UK.
The outcome for higher education and research remains hard to call, but the government has sent early signals that it recognises the value of investment in R&D as crucial to transforming the UK’s economy. In a volatile fiscal environment, DSIT’s R&D budget saw a real-terms increase of 8.5 per cent for 2025–26 with protection for “core research” activity within this.
Looking ahead to the spending review, the Institute for Fiscal Studies has pointed out that the fiscal envelope set by the Chancellor for capital spending – which is how R&D is classified – at the spring statement is significantly frontloaded. There is scope for increases in the early years of the spending review period and then real-terms declines from 2027–28. With such significant constraints on the public finances, it’s more essential than ever that the UK’s R&D funding system maximises efficiency and impact, making the best possible use of available resources.
International comparisons
Last month, the Russell Group published a report commissioned from PwC and funded by Wellcome which considered the experiences of countries with very different R&D funding systems, to understand what the UK might learn from our competitors.
Alongside the UK, the report examined four countries: Canada, Germany, the Netherlands and South Korea, scoring them across five assessment criteria associated with a strong R&D system: strategic alignment to government priorities; autonomy; stability and sustainability; efficiency; and leveraging external investment. It also scored the countries on two measures of output: research excellence and innovation excellence.
The analysis can help to inform government decisions about how to strike a balance between these criteria. For example, on the face of it there’s a trade-off between prioritising institutional autonomy and ensuring strategic alignment to government priorities. But PwC found that providing universities with more freedom in how they allocate their research funding – for example, through flexible funding streams like Quality-Related (QR) funding – means they can also take strategic long-term decisions, which create advantage for the UK in key research fields for the future.
Over the years, QR funding and its equivalents in the devolved nations have enabled universities to make investments which have led to innovations and discoveries such as graphene, genomics, opto-electronics, cosmology research, and new tests and treatments for everything from bowel disease to diabetes, dementia and cancer.
Conversely, aligning too closely to changing political priorities can stifle impact and leave the system vulnerable. PwC found that, at its extreme, a disproportionate reliance on mission-led or priority-driven project grant funding inhibits the ability of institutions to invest outside of government’s immediate priority areas, resulting in less long-term strategic investment.
With a stretching economic growth mission to deliver, policymakers will be reaching for interventions which encourage private investment into the economy. The PwC report found long-term, stable government incentives are crucial in leveraging industry investment in R&D, alongside supporting a culture of industry-university collaboration. This has worked well in Germany and South Korea with a mix of incentives including tax credits, grants and loans to strengthen innovation capabilities.
Getting the balance right
The UK currently lags behind global competitors on the proportion of R&D funded by the business sector, at just over 58 per cent compared to the OECD average of 65 per cent. However, when considering R&D financed by business but performed by higher education institutions, the UK performs fifth highest in the OECD – well above the average.
This demonstrates that the current system is successfully leveraging private sector collaboration and investment into higher education R&D. We should now be pursuing opportunities to bolster this even further. Schemes such as the Higher Education Innovation Fund (HEIF) deliver a proven return on investment: every £1 invested in HEIF yields £14.8 in economic return at the sector level. PwC’s report noted that HEIF has helped develop “core knowledge exchange capabilities” within UK HEIs which are crucial to building successful partnerships with industry and spinning out new companies and technologies.
In a time of global uncertainty, economic instability and rapid technological change, investments in R&D still play a key role in tackling our most complex challenges. In its forthcoming spending review – the Russell Group submission is available here – as well as in the industrial strategy white paper and in developing reforms to the visa system, the government will need to balance a number of competing but interrelated objectives. Coordination across government departments will be crucial to ensure all the incentives are pointing in the right direction and to enable sectors such as higher education to maximise the contribution they can make to delivering the government’s missions.