Category: brain

  • Teaching math the way the brain learns changes everything

    Far too many students enter math class expecting to fail. For them, math isn’t just a subject–it’s a source of anxiety that chips away at their confidence and makes them question their abilities. A growing conversation around math phobia is bringing this crisis into focus. A recent article, for example, unpacked the damage caused by the belief that “I’m just not a math person” and argued that traditional math instruction often leaves even bright, capable students feeling defeated.

    When a single subject holds such sway over not just academic outcomes but a student’s sense of self and future potential, we can’t afford to treat this as business as usual. It’s not enough to explore why this is happening. We need to focus on how to fix it. And I believe the answer lies in rethinking how we teach math, aligning instruction with the way the brain actually learns.

    Context first, then content

    A key shortcoming of the traditional math curriculum–and a major contributor to students’ fear of math–is the lack of meaningful context. Our brains rely on context to make sense of new information, yet math is usually taught in isolation, divorced from how we naturally learn. The fix isn’t simply throwing in more “real-world” examples. What students truly need is genuine context, and visual examples are one of the best ways to provide it. When math concepts are presented visually, students can better grasp the structure of a problem and follow the logic behind each step, building deeper understanding and confidence along the way.

    In traditional math instruction, students are often taught a new concept by being shown a procedure and then practicing it repeatedly in hopes that understanding will eventually follow. But this approach is backward. Our brains don’t learn that way, especially when it comes to math. Students need context first. Without existing schemas to draw from, they struggle to make sense of new ideas. Providing context helps them build the mental frameworks necessary for real understanding.

    Why visual-first context matters

    Visual-first context gives students the tools they need to truly understand math. A curriculum built around visual-first exploration lets students have an interactive experience–poking and prodding at a problem, testing ideas, observing patterns, and discovering solutions. From there, students develop procedures organically rather than receiving them ready-made. Because a visual-first approach activates multiple parts of the brain, the understanding it builds is deeper and longer lasting. Shifting to a math curriculum that introduces new concepts through a visual context makes math more approachable and accessible by aligning with how the brain naturally learns.

    To overcome “math phobia,” we also need to rethink the heavy emphasis on memorization in today’s math instruction. Too often, students can solve problems not because they understand the underlying concepts, but because they’ve memorized a set of steps. This approach limits growth and deeper learning. Memorization of the right answers does not lead to understanding, but understanding can lead to the right answers.

    Take, for example, a third grader learning their times tables. The third grader can memorize the answer in each square of the times table along with its corresponding factors, but that doesn’t mean they understand multiplication. If, instead, they grasp how multiplication works–what it means–they can figure out the times tables on their own. The reverse isn’t true. Without conceptual understanding, students are limited to recall, which puts them at a disadvantage when trying to build on previous knowledge.

    Learning from other subjects

    To design a math curriculum that aligns with how the brain naturally learns new information, we can take cues from how other subjects are taught. In English, for example, students don’t start by memorizing grammar rules in isolation–they’re first exposed to those rules within the context of stories. Imagine asking a student to take a grammar quiz before they’ve ever read a sentence–that would seem absurd. Yet in math, we often expect students to master procedures before they’ve had any meaningful exposure to the concepts behind them.

    Most other subjects are built around context. Students gain background knowledge before being expected to apply what they’ve learned. When students are given a story or a visual context for the mind to process–breaking it down and making connections–they can approach problems like a puzzle or a game instead of a dreaded exercise. Math can do the same. By adopting the contextual strategies used in other subjects, math instruction can become more intuitive and engaging, moving beyond the traditional textbook filled with equations.

    Math doesn’t have to be a source of fear–it can be a source of joy, curiosity, and confidence. But only if we teach it the way the brain learns: with visuals first, understanding at the center, and every student in mind. Approaches that provide visual-first context let students engage with math in a way that mirrors how the brain naturally learns, making the subject more approachable and accessible for all learners.

  • 10+ Years of Lasting Impact and Local Commitment

    Over 60,000 students have benefited from the math program built on how the brain naturally learns

    A new analysis shows that students using ST Math at Phillips 66-funded schools are achieving more than twice the annual growth in math performance compared to their peers. The analysis, conducted by MIND Research Institute and covering 3,240 students in grades 3-5 across 23 schools, found that this accelerated growth gave these schools a 12.4 percentile point advantage in spring 2024 state math rankings.

    These significant outcomes are the result of a more than 10-year partnership between Phillips 66 and MIND Research Institute. Through this collaboration, ST Math, created by MIND Education and the only PreK–8 supplemental math program built on the science of how the brain learns, has been fully funded at 126 schools across 23 districts, reaching more than 60,000 students nationwide. ST Math empowers students to explore, make sense of, and build lasting confidence in math through visual problem-solving.

    “Our elementary students love JiJi and ST Math! Students are building perseverance and a deep conceptual understanding of math while having fun,” said Kim Anthony, Executive Director of Elementary Education, Billings Public Schools. “By working through engaging puzzles, students are not only fostering a growth mindset and resilience in problem-solving, they’re learning critical math concepts.”

    The initiative began in 2014 as Phillips 66 sought a STEM education partner that could deliver measurable outcomes at scale. Since then, the relationship has grown steadily, and now, Phillips 66 funds 100% of the ST Math program in communities near its facilities in California, Washington, Montana, Oklahoma, Texas, Illinois, and New Jersey. Once involved, schools rarely leave the program.

    To complement the in-class use of ST Math, Phillips 66 and MIND introduced Family Math Nights. These events, hosted at local schools, bring students, families, and Phillips 66 employee volunteers together for engaging, hands-on activities. The goal is to build math confidence in a fun, interactive setting and to equip parents with a deeper understanding of the ST Math program and new tools to support their child’s learning at home.

    “At Phillips 66, we believe in building lasting relationships with the communities we serve,” said Courtney Meadows, Manager of Social Impact at Phillips 66. “This partnership is more than a program. It’s a decade of consistent, community-rooted support to build the next generation of thinkers and improve lives through enriching educational experiences.”

    ST Math has been used by millions of students across the country and has a proven track record of delivering a fundamentally different approach to learning math. Through visual and interactive puzzles, the program breaks down math’s abstract language barriers to benefit all learners, including English Learners, Special Education students, and Gifted and Talented students.

    “ST Math offers a learning experience that’s natural, intuitive, and empowering—while driving measurable gains in math proficiency,” said Brett Woudenberg, CEO of MIND Education. “At MIND, we believe math is a gateway to brighter futures. We’re proud to partner with Phillips 66 in expanding access to high-quality math learning for thousands of students in their communities.”

    Explore how ST Math is creating an impact in Phillips 66 communities with this impact story: https://www.mindeducation.org/success-story/brazosport-isd-texas/

    About MIND Education
    MIND Education engages, motivates, and challenges students toward mathematical success through its mission to mathematically equip all students to solve the world’s most challenging problems. MIND is the creator of ST Math, a PreK–8 visual instructional program that leverages the brain’s innate spatial-temporal reasoning ability to solve mathematical problems; and InsightMath, a neuroscience-based K-6 curriculum that transforms student learning by teaching math the way every brain learns so all students are equipped to succeed. Since its inception in 1998, MIND Education and ST Math have served millions of students across the country. Visit MINDEducation.org.

    About Phillips 66
    Phillips 66 (NYSE: PSX) is a leading integrated downstream energy provider that manufactures, transports and markets products that drive the global economy. The company’s portfolio includes Midstream, Chemicals, Refining, Marketing and Specialties, and Renewable Fuels businesses. Headquartered in Houston, Phillips 66 has employees around the globe who are committed to safely and reliably providing energy and improving lives while pursuing a lower-carbon future. For more information, visit phillips66.com or follow @Phillips66Co on LinkedIn.

  • What really shapes the future of AI in education?

    This post originally appeared on the Christensen Institute’s blog and is reposted here with permission.

    A few weeks ago, MIT’s Media Lab put out a study on how AI affects the brain. The study ignited a firestorm of posts and comments on social media, given its provocative finding that students who relied on ChatGPT for writing tasks showed lower brain engagement on EEG scans, hinting that offloading thinking to AI can literally dull our neural activity. For anyone who has used AI, it’s not hard to see how AI systems can become learning crutches that encourage mental laziness.

    But I don’t think a simple “AI harms learning” conclusion tells the whole story. In this blog post (adapted from a recent series of posts I shared on LinkedIn), I want to add to the conversation by tackling the potential impact of AI in education from four angles. I’ll explore how AI’s unique adaptability can reshape rigid systems, how it both fights and fuels misinformation, how AI can be both good and bad depending on how it is used, and why its funding model may ultimately determine whether AI serves learners or short-circuits their growth.

    What if the most transformative aspect of AI for schools isn’t its intelligence, but its adaptability?

    Most technologies make us adjust to them. We have to learn how they work and adapt our behavior. Industrial machines, enterprise software, even a basic thermostat—they all come with instructions and patterns we need to learn and follow.

    Education highlights this dynamic in a different way. How does education’s “factory model” work when students don’t come to school as standardized raw inputs? In many ways, schools expect students to conform to the requirements of the system—show up on time, sharpen your pencil before class, sit quietly while the teacher is talking, raise your hand if you want to speak. Those social norms are expectations we place on students so that standardized education can work. But as anyone who has tried to manage a group of six-year-olds knows, a class of students is full of complicated humans who never fully conform to what the system expects. So, teachers serve as the malleable middle layer. They adapt standardized systems to make them work for real students. Without that human adaptability, the system would collapse.

    Same thing in manufacturing. Edgar Schein notes that engineers aim to design systems that run themselves. But operators know systems never work perfectly. Their job—and often their sense of professional identity—is about having the expertise to adapt and adjust when things inevitably go off-script. Human adaptability in the face of rigid systems keeps everything running.

    So, how does this relate to AI? AI breaks the mold of most machines and systems humans have designed and dealt with throughout history. It doesn’t just follow its algorithm and expect us to learn how to use it. It adapts to us, like how teachers or factory operators adapt to the realities of the world to compensate for the rigidity of standardized systems.

    You don’t need a coding background or a manual. You just speak to it. (I literally hit the voice-to-text button and talk to it like I’m explaining something to a person.) Messy, natural human language—the age-old human-to-human interface that our brains are wired to pick up on as infants—has become the interface for large language models. In other words, what makes today’s AI models amazing is their ability to use our interface, rather than asking us to learn theirs.

    For me, the early hype about “prompt engineering” never really made sense. It assumed that success with AI required becoming an AI whisperer who knew how to speak AI’s language. But in my experience, working well with AI is less about learning special ways to talk to AI and more about just being a clear communicator, just like a good teacher or a good manager.

    Now imagine this: what if AI becomes the new malleable middle layer across all kinds of systems? Not just a tool, but an adaptive bridge that makes other rigid, standardized systems work well together. If AI can make interoperability nearly frictionless—adapting to each system and context, rather than forcing people to adapt to it—that could be transformative. It’s not hard to see how this shift might ripple far beyond technology into how we organize institutions, deliver services, and design learning experiences.

    Consider a few concrete examples of how this might transform schools. First, our current system heavily relies on the written word as the medium for assessing students’ learning. To be clear, writing is an important skill that students need to develop to help them navigate the world beyond school. Yet at the same time, schools’ heavy reliance on writing as the medium for demonstrating learning creates barriers for students with learning disabilities, neurodivergent learners, or English language learners—all of whom may have a deep understanding but struggle to express it through writing in English. AI could serve as that adaptive layer, allowing students to demonstrate their knowledge and receive feedback through speech, visual representations, or even their native language, while still ensuring rigorous assessment of their actual understanding.

    Second, it’s obvious that students don’t all learn at the same pace—yet we’ve forced learning to happen at a uniform timeline because individualized pacing quickly becomes completely unmanageable when teachers are on their own to cover material and provide feedback to their students. So instead, everyone spends the same number of weeks on each unit of content and then moves to the next course or grade level together, regardless of individual readiness. Here again, AI could serve as that adaptive layer for keeping track of students’ individual learning progressions and then serving up customized feedback, explanations, and practice opportunities based on students’ individual needs.

    Third, success in school isn’t just about academics—it’s about knowing how to navigate the system itself. Students need to know how to approach teachers for help, track announcements for tryouts and auditions, fill out paperwork for course selections, and advocate for themselves to get into the classes they want. These navigation skills become even more critical for college applications and financial aid. But there are huge inequities here because much of this knowledge comes from social capital—having parents or peers who already understand how the system works. AI could help level the playing field by serving as that adaptive coaching layer, guiding any student through the bureaucratic maze rather than expecting them to figure it out on their own or rely on family connections to decode the system.

    Can AI help solve the problem of misinformation?

    Most people I talk to are skeptical of the idea in this subhead—and understandably so.

    We’ve all seen the headlines: deepfakes, hallucinated facts, bots that churn out clickbait. AI, many argue, will supercharge misinformation, not solve it. Others worry that overreliance on AI could make people less critical and more passive, outsourcing their thinking instead of sharpening it.

    But what if that’s not the whole story?

    Here’s what gives me hope: AI’s ability to spot falsehoods and surface truth at scale might be one of its most powerful—and underappreciated—capabilities.

    First, consider what makes misinformation so destructive. It’s not just that people believe wrong facts. It’s that people build vastly different mental models of what’s true and real. They lose any shared basis for reasoning through disagreements. Once that happens, dialogue breaks down. Facts don’t matter because facts aren’t shared.

    Traditionally, countering misinformation has required human judgment and painstaking research, both time-consuming and limited in scale. But AI changes the equation.

    Unlike any single person, a large language model (LLM) can draw from an enormous base of facts, concepts, and contextual knowledge. LLMs know far more facts from their training data than any person can learn in a lifetime. And when paired with tools like a web browser or citation database, they can investigate claims, check sources, and explain discrepancies.

    Imagine reading a social media post and getting a sidebar summary—courtesy of AI—that flags misleading statistics, offers missing context, and links to credible sources. Not months later, not buried in the comments—instantly, as the content appears. The technology to do this already exists.

    Of course, AI is not perfect as a fact-checker. When large language models generate text, they aren’t running precise queries against a database of facts; they’re making probabilistic guesses at what the right response should be based on their training, and sometimes those guesses are wrong. (Human experts work the same way: they draw on accumulated expertise to generate answers, and they too sometimes get things wrong.) AI also has its own blind spots and biases, inherited from its training data.

    But in many ways, both hallucinations and biases in AI are easier to detect and address than the false statements and biases that come from millions of human minds across the internet. AI’s decision rules can be audited. Its output can be tested. Its propensity to hallucinate can be curtailed. That makes it a promising foundation for improving trust, at least compared to the murky, decentralized mess of misinformation we’re living in now.

    This doesn’t mean AI will eliminate misinformation. But it could dramatically increase the accessibility of accurate information and reduce the friction of verifying what’s true.

    Of course, most platforms don’t yet include built-in AI fact-checking, and even if they did, that approach would raise important concerns. Do we trust the sources that those companies prioritize? The rules their systems follow? The incentives that guide how their tools are designed? But beyond questions of trust, there’s a deeper concern: when AI passively flags errors or supplies corrections, it risks turning users into passive recipients of “answers” rather than active seekers of truth. Learning requires effort. It’s not just about having the right information—it’s about asking good questions, thinking critically, and grappling with ideas.

    That’s why I think one of the most important things to teach young people about how to use AI is to treat it as a tool for interrogating the information and ideas they encounter, both online and from AI itself. Just like we teach students to proofread their writing or double-check their math, we should help them develop habits of mind that use AI to spark their own inquiry—to question claims, explore perspectives, and dig deeper into the truth.

    Still, this focuses on just one side of the story. As powerful as AI may be for fact-checking, it will inevitably be used to generate deepfakes and spin persuasive falsehoods.

    AI isn’t just good or bad—it’s both. The future of education depends on how we use it.

    Much of the commentary around AI takes a strong stance: either it’s an incredible force for progress or it’s a terrifying threat to humanity. These bold perspectives make for compelling headlines and persuasive arguments. But in reality, the world is messy. And most transformative innovations—AI included—cut both ways.

    History is full of examples of technologies that have advanced society in profound ways while also creating new risks and challenges. The Industrial Revolution made it possible to mass-produce goods that have dramatically improved the quality of life for billions. It has also fueled pollution and environmental degradation. The internet connects communities, opens access to knowledge, and accelerates scientific progress—but it also fuels misinformation, addiction, and division. Nuclear energy can power cities—or obliterate them.

    AI is no different. It will do amazing things. It will do terrible things. The question isn’t whether AI will be good or bad for humanity—it’s how the choices of its users and developers will determine the directions it takes. 

    Because I work in education, I’ve been especially focused on the impact of AI on learning. AI can make learning more engaging, more personalized, and more accessible. It can explain concepts in multiple ways, adapt to your level, provide feedback, generate practice exercises, or summarize key points. It’s like having a teaching assistant on demand to accelerate your learning.

    But it can also short-circuit the learning process. Why wrestle with a hard problem when AI will just give you the answer? Why wrestle with an idea when you can ask AI to write the essay for you? And even when students have every intention of learning, AI can create the illusion of learning while leaving understanding shallow.

    This double-edged dynamic isn’t limited to learning. It’s also apparent in the world of work. AI is already making it easier for individuals to take on entrepreneurial projects that would have previously required whole teams. A startup no longer needs to hire a designer to create its logo, a marketer to build its brand assets, or an editor to write its press releases. In the near future, you may not even need to know how to code to build a software product. AI can help individuals turn ideas into action with far fewer barriers. And for those who feel overwhelmed by the idea of starting something new, AI can coach them through it, step by step. We may be on the front end of a boom in entrepreneurship unlocked by AI.

    At the same time, however, AI is displacing many of the entry-level knowledge jobs that people have historically relied on to get their careers started. Tasks like drafting memos, doing basic research, or managing spreadsheets—once done by junior staff—can increasingly be handled by AI. That shift is making it harder for new graduates to break into the workforce and develop their skills on the job.

    One way to mitigate these challenges is to build AI tools that are designed to support learning, not circumvent it. For example, Khan Academy’s Khanmigo helps students think critically about the material they’re learning rather than just giving them answers. It encourages ideation, offers feedback, and prompts deeper understanding—serving as a thoughtful coach, not a shortcut.

    But the deeper issue AI brings into focus is that our education system often treats learning as a means to an end—a set of hoops to jump through on the way to a diploma. To truly prepare students for a world shaped by AI, we need to rethink that approach. First, we should focus less on teaching only the skills AI can already do well. And second, we should make learning more about pursuing goals students care about—goals that require curiosity, critical thinking, and perseverance. Rather than training students to follow a prescribed path, we should be helping them learn how to chart their own. That’s especially important in a world where career paths are becoming less predictable, and opportunities often require the kind of initiative and adaptability we associate with entrepreneurs.

    In short, AI is just the latest technological double-edged sword. It can support learning, or short-circuit it. Boost entrepreneurship—or displace entry-level jobs. The key isn’t to declare AI good or bad, but to recognize that it’s both, and then to be intentional about how we shape its trajectory. 

    That trajectory won’t be determined by technical capabilities alone. Who pays for AI, and what they pay it to do, will influence whether it evolves to support human learning, expertise, and connection, or to exploit our attention, take our jobs, and replace our relationships.

    What actually determines whether AI helps or harms?

    When people talk about the opportunities and risks of artificial intelligence, the conversation tends to focus on the technology’s capabilities—what it might be able to do, what it might replace, what breakthroughs lie ahead. But just focusing on what the technology does—both good and bad—doesn’t tell the whole story. The business model behind a technology influences how it evolves.

    For example, when advertisers are the paying customer, as they are for many social media platforms, products tend to evolve to maximize user engagement and time-on-platform. That’s how we ended up with doomscrolling—endless content feeds optimized to occupy our attention so companies can show us more ads, often at the expense of our well-being.

    That incentive could be particularly dangerous with AI. If you combine superhuman persuasion tools with an incentive to monopolize users’ attention, the results will be deeply manipulative. And this gets at a concern my colleague Julia Freeland Fisher has been raising: What happens if AI systems start to displace human connection? If AI becomes your go-to for friendship or emotional support, it risks crowding out the real relationships in your life.

    Whether or not AI ends up undermining human relationships depends a lot on how it’s paid for. An AI built to hold your attention and keep you coming back might try to be your best friend. But an AI built to help you solve problems in the real world will behave differently. That kind of AI might say, “Hey, we’ve been talking for a while—why not go try out some of the things we’ve discussed?” or “Sounds like it’s time to take a break and connect with someone you care about.”

    Some decisions made by the major AI companies seem encouraging. Sam Altman, OpenAI’s CEO, has said that adopting ads would be a last resort. “I’m not saying OpenAI would never consider ads, but I don’t like them in general, and I think that ads-plus-AI is sort of uniquely unsettling to me.” Instead, most AI developers like OpenAI and Anthropic have turned to user subscriptions, an incentive structure that doesn’t steer as hard toward addictiveness. OpenAI is also exploring AI-centric hardware as a business model—another experiment that seems more promising for user wellbeing.

    So far, we’ve been talking about the directions AI will take as companies develop their technologies for individual consumers, but there’s another angle worth considering: how AI gets adopted into the workplace. One of the big concerns is that AI will be used to replace people, not necessarily because it does the job better, but because it’s cheaper. That decision often comes down to incentives. Right now, businesses pay a lot in payroll taxes and benefits for every employee, but they get tax breaks when they invest in software and machines. So, from a purely financial standpoint, replacing people with technology can look like a smart move. In his book The Once and Future Worker, Oren Cass discusses this problem and suggests flipping that script—taxing capital more and labor less—so companies aren’t nudged toward cutting jobs just to save money. That change wouldn’t stop companies from using AI, but it would encourage them to deploy it in ways that complement, rather than replace, human workers.

    Currently, while AI companies operate without sustainable business models, they’re buoyed by investor funding. Investors are willing to bankroll companies with little or no revenue today because they see the potential for massive profits in the future. But that investor model creates pressure to grow rapidly and acquire as many users as possible, since scale is often a key metric of success in venture-backed tech. That drive for rapid growth can push companies to prioritize user acquisition over thoughtful product development, potentially at the expense of safety, ethics, or long-term consequences. 

    Given these realities, what can parents and educators do? First, they can be discerning customers. There are many AI tools available, and the choices they make matter. Rather than simply opting for what’s most entertaining or immediately useful, they can support companies whose business models and design choices reflect a concern for users’ well-being and societal impact.

    Second, they can be vocal. Journalists, educators, and parents all have platforms—whether formal or informal—to raise questions, share concerns, and express what they hope to see from AI companies. Public dialogue helps shape media narratives, which in turn shape both market forces and policy decisions.

    Third, they can advocate for smart, balanced regulation. As I noted above, AI shouldn’t be regulated as if it’s either all good or all bad. But reasonable guardrails can ensure that AI is developed and used in ways that serve the public good. Just as the customers and investors in a company’s value network influence its priorities, so too can policymakers play a constructive role as value network actors by creating smart policies that promote general welfare when market incentives fall short.

    In sum, a company’s value network—who its investors are, who pays for its products, and what they hire those products to do—determines what companies optimize for. And in AI, that choice might shape not just how the technology evolves, but how it impacts our lives, our relationships, and our society.
