Tag: edtech

  • Generative AI and the Near Future of Work: An EdTech Example –

    A friend recently asked me for advice on a problem he was wrestling with involving a 1EdTech interoperability standard. It was the same old problem: a standard that doesn't quite deliver true interoperability because people implement it differently. I suggested he try using a generative AI tool to fix his problem. (I’ll explain how shortly.)

    I don’t know if my idea will work yet—he promised to let me know once he tries it—but the idea got me thinking. Generative AI probably will change EdTech integration, interoperability, and the impact that interoperability standards can have on learning design. These changes, in turn, impact the roles of developers, standards bodies, and learning designers.

    In this post, I’ll provide a series of increasingly ambitious use cases related to the EdTech interoperability work of 1EdTech (formerly known as IMS Global). In each case, I’ll explore how generative AI could impact similar work going forward, how it changes the purpose of interoperability standards-making, and how it impacts the jobs and skills of various people whose work is touched by the standards in one way or another.

    Generative AI as duct tape: fixing QTI

    1EdTech’s Question and Test Interoperability (QTI) standard is one of its oldest standards that’s still widely used. The earliest version on the 1EdTech website dates back to 2002, while the most recent version was released in 2022. You can guess from the name what it’s supposed to do. If you have a test, or a test question bank, in one LMS, QTI is supposed to let you migrate it into another without copying and pasting. It’s an import/export standard.

    It never worked well. Everybody has their own interpretation of the standard, which means that importing somebody else’s QTI export is never seamless. When speaking recently about QTI to a friend at an LMS company, I commented that it only works about 80% of the time. My friend replied, “I think you’re being generous. It probably only works about 40% of the time.” 1EdTech has learned many lessons about achieving consistent interoperability in the decades since QTI was created. But it’s hard to fix a complex legacy standard like this one.

    Meanwhile, the friend I mentioned at the top of the post asked me recently about practical advice for dealing with this state of affairs. His organization imports a lot of QTI question banks from multiple sources. So his team spends a lot of time debugging those imports. Is there an easier way?

    I thought about it.

    “Your developers probably have many examples that they’ve fixed by hand by now. They know the patterns. Take a handful of before-and-after examples. Embed them into a prompt in a generative AI that’s good at software code, like HuggingChat. Then give the generative AI a novel input and see if it produces the correct output.” [As I was drafting this post, OpenAI announced that ChatGPT now has a code interpreter.]
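To make that advice concrete, here is a minimal sketch of the few-shot prompt-building step in Python. The example pair, attribute names, and "canonical form" below are all invented placeholders; real before/after pairs would come from the files the developers have already fixed by hand, and the finished prompt would be sent to whatever chat-style model you use.

```python
# Sketch: build a few-shot prompt that teaches an LLM your house style of
# QTI fixes from before/after pairs. The pair below is an invented
# placeholder, not real vendor output.

FIXED_PAIRS = [
    # (vendor export as received, hand-corrected version that imports cleanly)
    ('<responseDeclaration identifier="RESP">',
     '<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">'),
]

def build_qti_prompt(pairs, novel_input):
    """Embed known before/after fixes as examples, then append the new file."""
    parts = ["You convert vendor-specific QTI XML into our canonical QTI form.",
             "Here are examples of inputs and their corrected outputs:"]
    for i, (before, after) in enumerate(pairs, 1):
        parts.append(f"### Example {i}\nINPUT:\n{before}\nOUTPUT:\n{after}")
    parts.append(f"### Task\nINPUT:\n{novel_input}\nOUTPUT:")
    return "\n\n".join(parts)

prompt = build_qti_prompt(FIXED_PAIRS, '<responseDeclaration identifier="R1">')
# `prompt` would then go to a chat-completion API, and the reply would be
# diffed or validated against a QTI schema before import.
```

The interesting engineering work is not the prompt plumbing; it is curating which hand-fixed pairs best represent the patterns.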

    Generative AI is good at pattern matching. The differences in QTI implementations are likely to have patterns to them that an LLM can detect, even if those differences change over time (because, for example, a vendor’s QTI implementation evolved across versions).

    In fact, pattern matching on this scale could work very well with a smaller generative AI model. We’re used to talking about ChatGPT, Google Bard, and other big-name systems whose models have hundreds of billions of parameters. Think of parameters as computing legos. One major reason that ChatGPT is so impressive is that it uses a lot of computing legos. That also makes it expensive, slow, and computationally intensive. But if your goal is to match patterns against a set of relatively well-structured texts such as QTI files, you could probably train a much smaller model than ChatGPT to reliably translate between implementations for you. Smaller open models, like Vicuna, have only 7 billion parameters. That may sound like a lot, but it’s small enough to run on a personal computer (or possibly even a mobile phone). Think about it this way: the QTI task we’re trying to solve is roughly equivalent in complexity to the spell-checking and one-word type-ahead functions that you have on your phone today. A generative AI model for fixing QTI imports could probably be fine-tuned for a few hundred dollars and run for pennies.

    This use case has some other desirable characteristics. First, it doesn’t have to work at high volume in real time. It can be a batch process. Throw the dirty dishes in the dishwasher, turn it on, and take out the clean dishes when the machine shuts off. Second, the task has no significant security risks and wouldn’t expose any personally identifiable information. Third, nothing terrible happens if the thing gets a conversion wrong every now and then. Maybe the organization would have to fix 5% of the conversions rather than 100%. And overall, it should be relatively cheap. Maybe not as cheap as running an old-fashioned deterministic program that’s optimized for efficiency. But maybe cheap enough to be worth it. Particularly if the organization has to keep adding new and different QTI implementation imports. It might be easier and faster to adjust the model with fine-tuning or prompting than it would be to revise a set of if/then statements in a traditional program.
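The batch "dishwasher" flow described above can be sketched in a few lines. Here, `ai_convert` and `is_valid_qti` are stand-ins for the model call and a real schema or round-trip check; both are assumptions for illustration, not real APIs:

```python
# Sketch of the batch flow: run AI conversions offline, validate each
# result, and queue anything suspect for human review. The two helpers
# below are placeholders for the real model call and schema validation.

def ai_convert(qti_source: str) -> str:
    return qti_source.replace("RESP", "RESPONSE")  # placeholder for the model call

def is_valid_qti(xml: str) -> bool:
    return "RESPONSE" in xml  # placeholder for real schema validation

def batch_convert(sources):
    clean, needs_review = [], []
    for src in sources:
        out = ai_convert(src)
        (clean if is_valid_qti(out) else needs_review).append(out)
    return clean, needs_review

clean, review = batch_convert(['<item identifier="RESP"/>', "<garbage/>"])
```

The point of the `needs_review` queue is exactly the 5%-not-100% trade-off: humans only touch the conversions the validator flags.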

    How would the need for skilled programmers change? Somebody would still need to understand how the QTI mappings work well enough to keep the generative AI humming along. And somebody would have to know how to take care of the AI itself (although that process is getting easier every day, especially for this kind of a use case). The repetitive work they are doing now would be replaced by the software over time, freeing up the human brains for other things that human brains are particularly good at. In other words, you can’t get rid of your programmer but you can have that person engaging in more challenging, high-value work than import bug whack-a-mole.

    How does it change the standards-making process? In the short term, I’d argue that 1EdTech should absolutely try to build an open-source generative AI model of the type I’m describing rather than trying to fix QTI, a task it has not managed in over 20 years of trying. This strikes me as a far shorter path to achieving the original purpose for which QTI was intended, which is to move question banks from one system to another.

    This conclusion, in turn, leads to a larger question: Do we need interoperability standards bodies in the age of AI?

    My answer is a resounding “yes.”

    Going a step further: software integration

    QTI provides data portability but not integration. It’s an import/export format. The fact that Google Docs can open up a document exported from Microsoft Word doesn’t mean that the two programs are integrated in any meaningful way.

    So let’s consider Learning Tools Interoperability (LTI). LTI was quietly revolutionary. Before it existed, any company building a specialized educational tool would have to write separate integrations for every LMS.

    The nature of education is that it’s filled with what folks in the software industry would disparagingly call “point solutions.” If you’re teaching students how to program in Python, you need a Python programming environment simulator. But that tool won’t help a chemistry professor who really needs virtual labs and molecular modeling tools. And none of these tools are helpful for somebody teaching English composition. There simply isn’t a single generic learning environment that will work well for teaching all subjects. None of these tools will ever sell enough to make anybody rich.

    Therefore, the companies that make these necessary niche teaching tools will tend to be small. In the early days of the LMS, they couldn’t afford to write a separate integration for every LMS. Which meant that not many specialized learning tools were created. As small as these companies’ target markets already were, many of them couldn’t afford to limit themselves to the subset of, say, chemistry professors whose universities happened to use Blackboard. It didn’t make economic sense.

    LTI changed all that. Any learning tool provider could write an integration once and have their product work with every LMS. Today, 1EdTech lists 240 products that are officially certified as supporting the LTI interoperability standard. Many more support the standard but are not certified.

    Would LTI have been created in a world in which generative AI existed? Maybe not. The most straightforward analogy is Zapier, which connects different software systems via their APIs. ChatGPT and its ilk could act as an instant Zapier. A programmer could feed the API documentation of both systems to the generative AI, ask it to write an integration for a particular purpose, and then ask the same AI for help with any debugging.
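As a toy illustration of the kind of glue code one might ask a generative AI to draft from two systems' API docs, here is a hypothetical field mapping. Both payload shapes are invented for this example; real LTI grade passback is considerably more involved:

```python
# Toy example of AI-draftable glue code: reshape a (hypothetical) learning
# tool's grade record into a (hypothetical) LMS score payload. Field names
# are invented for illustration only.

def tool_to_lms(tool_payload: dict) -> dict:
    """Translate the tool's grade record into the LMS's expected shape."""
    return {
        "userId": tool_payload["student"]["id"],
        "scoreGiven": tool_payload["result"]["points"],
        "scoreMaximum": tool_payload["result"]["points_possible"],
    }

lms_record = tool_to_lms(
    {"student": {"id": "u42"}, "result": {"points": 8, "points_possible": 10}}
)
# A human still has to review the mapping: does a missing score mean zero,
# not attempted, or excused? The AI won't reliably know.
```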

    Again, notice that one still needs a programmer. Somebody needs to be able to read the APIs, understand the goals, think about the trade-offs, give the AI clear instructions, and check the finished program. The engineering skills are still necessary. But the work of actually writing the code is greatly reduced. Maybe by enough that generative AI would have made LTI unnecessary.

    But probably not. LTI connections pass sensitive student identity and grade information back and forth. They have to be secure and reliable. The IT department has legal obligations, not to mention user expectations, that a well-tested standard helps alleviate (though not eliminate). On top of that, it’s just a bad idea to have bits of glue code spread here, there, and everywhere, regardless of whether a human or a machine writes them. Somebody, an architect, needs to look at the big picture. They need to think about maintainability, performance, security, data management, and a host of other concerns. There is value in having a single integration standard that has been widely vetted and follows a pattern of practices that IT managers can handle the same way across a wide range of product integrations.

    At some point, if a software integration fails to pass student grades to the registrar or leaks personal data, a human is responsible. We’re not close to the point where we can turn over ethical or even intellectual responsibility for those challenges to a machine. If we’re not careful, generative AI will simply write spaghetti code much faster than we did in the old days.

    The social element of knowledge work

    More broadly, there are two major value components to the technical interoperability standards process. The first is obvious: technical interoperability. It’s the software. The second is where the deeper value lies. It’s in the conversation that leads to the software. I’ve participated in a 1EdTech specification working group. When the process went well, we learned from each other. Each person at that table brought a different set of experiences to an unsolved problem. In my case, the specification we were working on sent grade rosters from the SIS to the LMS and final grades back from the LMS to the SIS. It sounds simple. It isn’t. We each brought different experiences and lessons learned regarding many aspects of the problem, from how names are represented in different cultures to how SIS and LMS users think differently in ways that impact interoperability. In the short term, a standard is always a compromise. Each creator of a software system has to make adjustments that accommodate the many ways in which others thought differently when they built their own systems. But if the process works right, everybody goes home thinking a little differently about how their systems could be built better for everybody’s benefit. In the longer term, the systems we continue to build over time reflect the lessons we learn from each other.

    Generative AI could make software integration easier. But without the conversation of the standards-making process, we would lose the opportunity to learn from each other. And if AI can reduce the time and cost of the former, then maybe participants in the standards-making effort will spend more time and energy on the latter. The process would have to be rejiggered somewhat. But at least in some cases, participants wouldn’t have to wait until the standard was finalized before they started working on implementing it. When the cost of implementation is low enough and the speed is fast enough, the process can become more of an iterative hackathon. Participants can build working prototypes more quickly. They would still have to go back to their respective organizations and do the hard work of thinking through the implications, finding problems or trade-offs and, eventually, hardening the code. But at least in some cases, parts of the standards-making process could be more fluid and rapidly iterative than they have been. We could learn from each other faster.

    This same principle could apply inside any organization or partnership in which different groups are building different software components that need to work together. Actual knowledge of the code will still be important, to check and improve the AI’s work in some cases and to write code in others. Generative AI is not ready to replace high-quality engineers yet. But even as it improves, humans will still be needed.

    Anthropologist John Seely Brown famously traced a drop in Xerox copier repair quality to a change in the lunch schedule of its repair technicians. It turns out that technicians learn a lot from solving real problems in the field and then sharing war stories with each other. When the company changed the schedule so that technicians had less time together, repair effectiveness dropped noticeably. I don’t know if a software program was used to optimize the scheduling, but one could easily imagine that being the case. Algorithms are good at concrete problems like optimizing complex schedules. On the other hand, they have no visibility into what happens at lunch or around the coffee pot. Nobody writes those stories down. They can’t be ingested and processed by a large language model. Nor can they be put together in novel ways by quirky human minds to come up with new insights.

    That’s true in the craft of copier repair and definitely true in the craft of software engineering. I can tell you from direct experience that interoperability standards-making is much the same. We couldn’t solve the seemingly simple problem of getting the SIS to talk to the LMS until we realized that registrars and academics think differently about what a “class” or a “course” is. We figured that out by talking with each other and with our customers.

    At its heart, standards-making is a social process. It’s a group of people who have been working separately on solving similar problems coming together to develop a common solution. They do this because they’ve decided that the cost/benefit ratio of working together is better than the ratio they’ve achieved when working separately. AI lowers the costs of some work. But it doesn’t yet provide an alternative to that social interaction. If anything, it potentially lowers some of the costs of collaboration by making experimentation and iteration cheaper—if and only if the standards-making participants embrace and deliberately experiment with that change.

    That’s especially true the more 1EdTech tries to have a direct role in what it refers to as “learning impact.”

    The knowledge that’s not reflected in our words

    In 2019, I was invited to give a talk at a 1EdTech summit, which I published a version of under the title “Pedagogical Intent and Designing for Inquiry.” Generative AI was nowhere on the scene at the time. But machine learning was. At the same time, long-running disappointment and disillusionment with learning analytics—analytics that actually measure students’ progress as they are learning—was palpable.

    I opened my talk by speculating about how machine learning could have helped with SIS/LMS integration, much as I speculated earlier in the post about how generative AI might help with QTI:

    Now, today, we would have a different possible way of solving that particular interoperability problem than the one we came up with over a decade ago. We could take a large data set of roster information exported from the SIS, both before and after the IT professionals massaged it for import into the LMS, and aim a machine learning algorithm at it. We then could use that algorithm as a translator. Could we solve such an interoperability problem this way? I think that we probably could. I would have been a weaker product manager had we done it that way, because I wouldn’t have gone through the learning experience that resulted from the conversations we had to develop the specification. As a general principle, I think we need to be wary of machine learning applications in which the machines are the only ones doing the learning. That said, we could have probably solved such a problem this way and might have been able to do it in a lot less time than it took for the humans to work it out.

    I will argue that today’s EdTech interoperability challenges are different. That if we want to design interoperability for the purposes of insight into the teaching and learning process, then we cannot simply use clever algorithms to magically draw insights from the data, like a dehumidifier extracting water from thin air. Because the water isn’t there to be extracted. The insights we seek will not be anywhere in the data unless we make a conscious effort to put them there through design of our applications. In order to get real teaching and learning insights, we need to understand the intent of the students. And in order to understand that, we need insight into the learning design. We need to understand pedagogical intent.

    That new need, in turn, will require new approaches in interoperability standards-making. As hard as the challenges of the last decade have been, the challenges of the next one are much harder. They will require different people at the table having different conversations.

    Pedagogical Intent and Designing for Inquiry

    The core problem is that the key element for interpreting both student progress and the effectiveness of digital learning experiences—pedagogical intent—is not encoded in most systems. No matter how big your data set is, it doesn’t help you if the data you need aren’t in it. For this reason, I argued, fancy machine learning tricks aren’t going to give us shortcuts.

    That problem is the same, and perhaps even worse in some ways, with generative AI. All ChatGPT knows is what it’s read on the internet. And while it has made progress in specific areas at reading between the lines, the fact is that important knowledge, including knowledge about applied learning design, is simply extremely scarce in the data it can access, and even in the data living in our learning systems that it can’t access.

    The point of my talk was that interoperability standards could help by supplying critical metadata—context—if only the standards makers set that as their purpose, rather than simply making sure that quiz questions end up in the right place when migrating from one LMS to another.

    I chose to open the talk by highlighting the ambiguity of language that enables us to make art. I chose this passage from Shakespeare’s final masterpiece, The Tempest:

    O wonder!
    How many goodly creatures are there here!
    How beauteous mankind is! O brave new world
    That has such people in’t!

    William Shakespeare, The Tempest

    It’s only four lines. And yet it is packed with double entendres and the ambiguity that gives actors room to make art:

    Here’s the scene: Miranda, the speaker, is a young woman who has lived her entire life on an island with nobody but her father and a strange creature who she may think of as a brother, a friend, or a pet. One day, a ship becomes grounded on the shore of the island. And out of it comes, literally, a handsome prince, followed by a collection of strange (and presumably virile) sailors. It is this sight that prompts Miranda’s exclamation.

    As with much of Shakespeare, there are multiple possible interpretations of her words, at least one of which is off-color. Miranda could be commenting on the hunka hunka manhood walking toward her.

    “How beauteous mankind is!”

    Or. She could be commenting on how her entire world has just shifted on its axis. Until that moment, she knew of only two other people in all of existence, each of whom she had known her entire life and with each of whom she had a relationship that she understood so well that she took it for granted. Suddenly, there was literally a whole world of possible people and possible relationships that she had never considered before that moment.

    “O brave new world / That has such people in’t”

    So what is on Miranda’s mind when she speaks these lines? Is it lust? Wonder? Some combination of the two? Something else?

    The text alone cannot tell us. The meaning is underdetermined by the data. Only with the metadata supplied by the actor (or the reader) can we arrive at a useful interpretation. That generative ambiguity is one of the aspects of Shakespeare’s work that makes it art.

    But Miranda is a fictional character. There is no fact of the matter about what she is thinking. When we are trying to understand the mental state of a real-life human learner, then making up our own answer because the data are not dispositive is not OK. As educators, we have a moral responsibility to understand a real-life Miranda having a real-life learning experience so that we can support her on her journey.

    Pedagogical Intent and Designing for Inquiry

    Generative AI like ChatGPT can answer questions about different ways to interpret Miranda’s lines in the play because humans have written about this question and made their answers available on the internet. If you give the chatbot an unpublished piece of poetry and ask it for an interpretation, its answers are not likely to be reliably sophisticated. While larger models are getting better at reading between the lines—a topic for a future blog post—they are not remotely as good as humans are at this yet.

    Making the implicit explicit

    This limitation of language interpretation is central to the challenge of applying generative AI to learning design. ChatGPT has reignited fantasies about robot tutors in the sky. Unfortunately, we’re not giving the AI the critical information it needs to design effective learning experiences:

    The challenge that we face as educators is that learning, which happens completely inside the heads of the learners, is invisible. We cannot observe it directly. Accordingly, there are no direct constructs that represent it in the data. This isn’t a data science problem. It’s an education problem. The learning that is or isn’t happening in the students’ heads is invisible even in a face-to-face classroom. And the indirect traces we see of it are often highly ambiguous. Did the student correctly solve the physics problem because she understands the forces involved? Because she memorized a formula and recognized a situation in which it should be applied? Because she guessed right? The instructor can’t know the answer to this question unless she has designed a series of assessments that can disambiguate the student’s internal mental state.

    In turn, if we want to find traces of the student’s learning (or lack thereof) in the data, we must understand the instructor’s pedagogical intent that motivates her learning design. What competency is the assessment question that the student answered incorrectly intended to assess? Is the question intended to be a formative assessment? Or summative? If it’s formative, is it a pre-test, where the instructor is trying to discover what the student knows before the lesson begins? Is it a check for understanding? A learn-by-doing exercise? Or maybe something that’s a little more complex to define because it’s embedded in a simulation? The answers to these questions can radically change the meaning we assign to a student’s incorrect answer to the assessment question. We can’t fully and confidently interpret what her answer means in terms of her learning progress without understanding the pedagogical intent of the assessment design.

    But it’s very easy to pretend that we understand what the students’ answers mean. I could have chosen any one of many Shakespeare quotes to open this section, but the one I picked happens to be the very one from which Aldous Huxley derived the title of his dystopian novel Brave New World. In that story, intent was flattened through drugs, peer pressure, and conditioning. It was reduced to a small set of possible reactions that were useful in running the machine of society. Miranda’s words appear in the book in a bitterly ironic fashion from the mouth of the character John, a “savage” who has grown up outside of societal conditioning.

    We can easily develop “analytics” that tell us whether students consistently answer assessment questions correctly. And we can pretend that “correct answer analytics” are equivalent to “learning analytics.” But they are not. If our educational technology is going to enable a rich and authentic vision of learning rather than a dystopian reductivist parody of it, then our learning analytics must capture the nuances of pedagogical intent rather than flattening it.

    This is hard.

    Pedagogical Intent and Designing for Inquiry

    Consider the following example:

    A professor knows that her students tend to develop a common misconception that causes them to make practical mistakes when applying their knowledge. She very carefully crafts her course to address this misconception. She writes the content to address it. In her tests, she provides wrong answer choices—a.k.a. “distractors”—that students would choose if they had the misconception. She can tell, both individually and collectively, whether her students are getting stuck on the misconception by how often they pick the particular distractor that fits with their mistaken understanding. Then she writes feedback that the students see when they choose that particular wrong answer. She crafts it so that it doesn’t give away the correct answer but does encourage students to rethink their mistakes.

    Imagine if all this information were encoded in the software. The hierarchy would look something like this:

    • Here is learning objective (or competency) 1
      • Here is content about learning objective 1
        • Here is assessment question A about learning objective 1.
          • Here is distractor c in assessment question A. Distractor c addresses misconception alpha.
            • Here is feedback to distractor c. It is written specifically to help students rethink misconception alpha without giving away the answer to question A. This is critical because if we simply tell the student the answer to question A then we can’t get good data about the likelihood that the student has mastered learning objective 1.
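One way the hierarchy above could be made machine-readable is as a nested data structure. The field names below are illustrative only; they are not drawn from any actual 1EdTech specification:

```python
# Sketch: pedagogical intent as explicit, machine-readable structure.
# Objectives link to questions, questions to distractors, distractors to
# the misconceptions they diagnose and the feedback they trigger.

from dataclasses import dataclass, field

@dataclass
class Distractor:
    choice_id: str
    misconception: str  # which misconception picking this choice signals
    feedback: str       # nudges rethinking without revealing the answer

@dataclass
class AssessmentQuestion:
    question_id: str
    objective: str      # learning objective / competency assessed
    purpose: str        # e.g. "pre-test", "check", "summative"
    distractors: list = field(default_factory=list)

question_a = AssessmentQuestion(
    question_id="A",
    objective="objective-1",
    purpose="check",
    distractors=[
        Distractor("c", "misconception-alpha",
                   "Look again at the assumption behind your choice.")
    ],
)
```

With structures like this, a wrong answer is no longer just "incorrect"; it carries a diagnosis.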

    All of that information is in the learning designer’s head and, somehow, implicitly embedded in the content in subtle details of the writing. But good luck teasing it out by just reading the textbook if you aren’t an experienced teacher of the subject yourself.

    What if these relationships were explicit in the digital text? For individual students, we could tell which ones were getting stuck on a specific misconception. For whole courses, we could identify the spots that are causing significant numbers of students to get stuck on a learning objective or competency. And if that particular sticking point causes students to be more likely to fail either that course or a later course that relies on a correct understanding of a concept, then we could help more students persist, pass, stay in school, and graduate.
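To make the payoff concrete, here is a minimal sketch of what such misconception analytics could look like, assuming responses arrive as (student, question, choice) records. All identifiers and data are invented for illustration:

```python
# Sketch: with intent metadata attached, "correct answer analytics" can
# become misconception analytics. Given student responses and a
# (question, choice) -> misconception map, count how many students a
# given misconception is snagging.

from collections import Counter

CHOICE_TO_MISCONCEPTION = {("A", "c"): "misconception-alpha"}  # invented

def misconception_counts(responses):
    """responses: iterable of (student_id, question_id, choice_id)."""
    hits = Counter()
    for _student, question, choice in responses:
        tag = CHOICE_TO_MISCONCEPTION.get((question, choice))
        if tag:
            hits[tag] += 1
    return hits

counts = misconception_counts(
    [("s1", "A", "c"), ("s2", "A", "b"), ("s3", "A", "c")]
)
```

The same tally, aggregated across sections or terms, is what would flag the course-level sticking points described above.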

    That’s how learning analytics can work if learning designers (or learning engineers) have tools that explicitly encode pedagogical intent into a machine-readable format. They can use machine learning to help them identify and smooth over tough spots where students tend to get stuck and fall behind. They can find the clues that help them identify hidden sticking points and adjust the learning experience to help students navigate those rough spots. We know this can work because, as I wrote about in 2012, Carnegie Mellon University (among others) has been refining this science and craft for decades.

    Generative AI adds an interesting twist. The challenge with all this encoding of pedagogical intent is that it’s labor-intensive. Learning designers often don’t have time to focus on the work required to identify and improve small but high-value changes because they’re too busy getting the basics done. But generative AI that creates learning experiences modeled after the pedagogical metadata in the educational content it is trained on could provide a leg up. It could substantially speed up the work of writing the first-draft content so that designers can focus on the high-value improvements that humans are still better at than machines.

    Realistically, for example, generative AI is not likely to know the particular common misconceptions that block students from mastering a competency. Or how to probe for and remediate those misconceptions. But if it were trained on the right models, it could generate good first-draft content in a standards-based metadata format that could be imported into a learning platform. The format would have explicit placeholders for those critical probes and hints. Human experts, supported by machine learning, could focus their time on finding and remediating these sticking points in the learning process. Their improvements would be encoded with metadata, providing the AI with better examples of what effective educational content looks like. Which would enable the AI to generate better first-draft content.

    1EdTech could help bring about such a world through standards-making. But they’d have to think about the purpose of interoperability differently, bring different people to the table, and run a different kind of process.

    O brave new world that has such skilled people in’t

    I spoke recently to the head of product development for an AI-related infrastructure company. His product could enable me to eliminate hallucinations while maintaining references and links to original source materials, both of which would be important in generating educational content. I explained a more elaborate version of the basic idea in the previous section of this post.

    “That’s a great idea,” he said. “I can think of a huge number of applications. My last job was at Google. The training was terrible.”

    Google. The company that’s promoting the heck out of their free AI classes. The one that’s going to “disrupt the college degree” with their certificate programs. The one that everybody holds up as leading the way past traditional education and toward skills-based education.

    Their training is “terrible.”

    Yes. Of course it is. Because everybody’s training is terrible. Their learning designers have the same problem I described academic learning designers as having in the previous section. Too much to develop, too little time. Only much, much worse. Because they have far fewer course design experts (if you count faculty as course design experts). Those people are the first to get cut. And EdTech in the corporate space is generally even worse than academic EdTech. Worst of all? Nobody knows what anybody knows or what anybody needs to know.

    Academia, 1EdTech, and several other standards bodies, funded by corporate foundations, are pouring incredible amounts of time, energy, and money into building a data pipeline for tracking skills. Skill taxonomies move from repositories to learning environments, where evidence of student mastery is attached to those skills in the form of badges or comprehensive learner records, which are then sent off to repositories and wallets.
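    A toy sketch of that pipeline (the record shapes below are invented for illustration; real systems would use the actual Open Badges or Comprehensive Learner Record schemas, which are far richer):

    ```python
    # Hypothetical, simplified record shapes for the skills data pipeline:
    # taxonomy -> learning environment -> badge-like record -> wallet.
    skill_taxonomy = {
        "skill:data-cleaning": {"label": "Data cleaning"},  # assumed skill ID
    }

    def award_badge(learner_id, skill_id, evidence_url):
        """Attach evidence of mastery to a taxonomy skill as a badge-like record."""
        assert skill_id in skill_taxonomy, "skill must come from the taxonomy"
        return {
            "learner": learner_id,
            "skill": skill_id,
            "evidence": evidence_url,
        }

    def add_to_wallet(wallet, badge):
        """Forward the record to the learner's wallet/repository."""
        wallet.setdefault(badge["learner"], []).append(badge)
        return wallet

    wallet = {}
    badge = award_badge("learner-42", "skill:data-cleaning",
                        "https://example.org/evidence/1")
    add_to_wallet(wallet, badge)
    print(len(wallet["learner-42"]))  # 1
    ```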

    The problem is, pipelines are supposed to connect to endpoints. They move something valuable from the place where it is found to the place where it is needed. Many valuable skills are not well documented if they are documented at all. They appear quickly and change all the time. The field of knowledge management has largely failed to capture this information in a timely and useful way after decades of trying. And “knowledge” management has tended to focus on facts, which are easier to track than skills.

    In other words, the biggest challenge facing folks interested in job skills is not an ocean of well-understood skill information that needs to be organized but rather a problem of non-consumption. There isn’t enough real-world, real-time skill information flowing into the pipeline, and there are few people with real uses for it on the other side. Almost nobody in any company turns to their L&D departments to solve the kinds of skills problems that help people become more productive and advance in their careers. Certainly not at scale.

    But the raw materials for solving this problem exist. A CEO of HP once famously observed that the company knows a lot. It just doesn’t know what it knows.

    Knowledge workers do record new and important work-related information, even if it’s in the form of notes and rough documents. Increasingly, we have meeting transcripts thanks to videoconferencing and AI speech-to-text capabilities. These artifacts could be used to train a large language model on skills as they are emerging and needed. If we could dramatically lower the cost and time required to create just-in-time, just-enough skills training, then the pipeline of skills taxonomies and skill tracking would become a lot more useful. And we’d learn a lot about how it needs to be designed because we’d have many more real-world applications.
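    As a toy illustration of the first step, surfacing candidate skills from transcripts: real skill discovery would use an LLM over the transcript text; the naive “how to …” pattern below just stands in for it.

    ```python
    import re
    from collections import Counter

    # Toy sketch: surface candidate skill phrases from meeting transcripts.
    # A real system would use an LLM or a proper NLP pipeline; here a naive
    # pattern ("how to <phrase>") stands in for skill discovery.
    def candidate_skills(transcripts):
        pattern = re.compile(r"how to ([a-z ]+?)(?:[.,]|$)", re.IGNORECASE)
        counts = Counter()
        for text in transcripts:
            for match in pattern.findall(text):
                counts[match.strip().lower()] += 1
        return counts.most_common()  # most frequently mentioned skills first

    transcripts = [
        "Sam asked how to configure the deploy pipeline, and we walked through it.",
        "Half the meeting was about how to configure the deploy pipeline.",
        "Priya showed everyone how to read the error dashboards.",
    ]
    print(candidate_skills(transcripts))
    # [('configure the deploy pipeline', 2), ('read the error dashboards', 1)]
    ```

    Even this crude frequency count hints at the payoff: recurring “how to” questions are exactly the just-in-time training needs that never make it into a skills taxonomy today.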

    The first pipeline we need is from skill discovery to learning content production. It’s a huge one, we’ve known about it for many decades, and we’ve made very little progress on it. Groups like 1EdTech could help us to finally make progress. But they’d have to rethink the role of interoperability standards in terms of the purpose and value of data, particularly in an AI-fueled world. This, in turn, would not only help match worker skills with labor market needs more quickly and efficiently but also create a huge industry of AI-aided learning engineers.

    Summing it up

    So where does this leave us? I see a few lessons:

    • In general, lowering the cost of coding through generative AI doesn’t eliminate the need for technical interoperability standards groups like 1EdTech. But it could narrow the value proposition for their work as currently applied in the market.
    • Software engineers, learning designers, and other skilled humans have important skills and tacit knowledge that don’t show up in text. It can’t be hoovered up by a generative AI that swallows the internet. Therefore, these skilled individuals will still be needed for some time to come.
    • We often gain access to tacit knowledge and valuable skills when skilled individuals talk to each other. The value of collaborative work, including standards work, is still high in a world of generative AI.
    • We can capture some of that tacit knowledge and those skills in machine-readable format if we set that as a goal. While doing so is not likely to lead to machines replacing humans in the near future (at least in the areas I’ve described in this post), it could lead to software that helps humans get more work done and spend more of their time working on hard problems that quirky, social human brains are good at solving.
    • 1EdTech and its constituents have more to gain than to lose by embracing generative AI thoughtfully. While I won’t draw any grand generalizations from this, I invite you to apply the thought process of this blog post to your own worlds and see what you discover.

    Source link

  • Classroom Games and Tech – Ed-Tech to Engage and Inspire: Free Number Line Game from TpT

    Classroom Games and Tech – Ed-Tech to Engage and Inspire: Free Number Line Game from TpT

    For the first week of April my elementary math number line game is free. You can find it here. It’s a card game that requires some printing and cutting to make the components. If you have any feedback on the game, please let me know!

    I included this information about the value of such games at the end of the rule document:

    I knew from teaching math that number lines were important for visualization. I created a number line game years ago, but when I tested it with some first graders, I soon found my initial ideas had some issues. I put the game on the shelf. Then recently I heard Jo Boaler make a statement like this one, 

    Researchers even found that after four 15-minute sessions of playing a game with a number line, differences in knowledge between students from low-income backgrounds and those from middle-income backgrounds were eliminated (Siegler & Ramani, 2008). (Quoted from here)

    I don’t know that Jo Boaler would endorse this game, but after hearing her, I knew I had to return to it! I fixed the gameplay problems and tested it with groups of students from ages 7 to 11. They were immediately hooked! I have been so impressed with how even the youngest students had no problems playing, even when some versions had a number line with negative numbers!

    Source link

  • Classroom Games and Tech – Ed-Tech to Engage and Inspire: Google Slides Template for Making Card Games

    Classroom Games and Tech – Ed-Tech to Engage and Inspire: Google Slides Template for Making Card Games

     

    Here’s a template I use when I create prototypes for my card games. It has 9 cards on each page, outlined in gray dotted lines. 

    Click here to make a copy in your Google Drive.

    I’ve used this method for years, from early stage prototypes to later versions. I’ve even sold some print-and-play games that I laid out completely in Google Slides. (Here’s an example of a free version of one game I made completely in Google Slides.)

    Here are some tips, all of which are just pointing out features of Google Slides that make it useful for layouts in general:

    • Use Word Art for large letters or numbers, as shown on my example image. 
    • If you use ctrl-c and ctrl-v to copy and paste a card or contents of a card, you can use the arrow keys to move it around. Each tap of an arrow key moves a selected object 1/12″. Moving on a grid like that makes it easy to copy, paste and move objects over or down to the next card quickly, still keeping everything lined up.
    • Use the shapes in Google Slides’ shape menu as building blocks for more complex shapes. For quick prototypes, you’ll be surprised how easy it is to make some functional images and icons to help with early playtesting.
    • When you’re done laying out the cards, you can just print them and cut them out. If you need to make them available to others, you can download the whole slideshow as a PDF. I’ve found some fonts will change size slightly and mess up the layout when you convert, so take a look at the file before sending it off to anyone. 

    Contact me if you have questions about using this template. I’ll update this post with more information as I get questions.

    Source link

  • EdTech for International Education via the Gateway International Group

    EdTech for International Education via the Gateway International Group

    The Gateway International Group just launched a compilation of EdTech companies/platforms for International Education. Compiled and edited by Erin Niday and Tony Ogden, this compilation has the goal of highlighting those EdTech platforms that have the potential to transform next generational international learning and engagement. You can learn more at https://gatewayinternational.org/edtech/.

    Note: I’m an affiliate of the Gateway International Group but receive no compensation for this post.

    Source link

  • EdTech Roundup Going on Extended Hiatus, edCircuit Taking Over in the Meantime!

    EdTech Roundup Going on Extended Hiatus, edCircuit Taking Over in the Meantime!

     

    Hi Everyone – 

    I will be taking an extended break from the blog.  My current workload and new responsibilities as a father have left me without enough time to devote to the site.  I hope things balance back out in the future, but currently, I will be on break from the site until further notice. 

    In the meantime, our good friends over at edCircuit will be taking over the site and sharing some excellent posts and resources each week!  So even though I’ll be on break, we will still be sharing new material on a regular basis. 

    Thanks for understanding,

    Mike

    Source link

  • Reviews | Genially: Create Presentations, Infographics, and Visuals in Seconds

    Reviews | Genially: Create Presentations, Infographics, and Visuals in Seconds

     

    Genial.ly is a presentation creation platform that offers a wide range of possibilities for what teachers and students can create.  From excellent templates to interactive visuals, there are some really fun and exciting ways to easily create visuals of all kinds.  Plus, it’s a freemium resource, so teachers and students can get started creating completely for free. Continue reading on our Reviews Page.

    Source link

  • The Actual Work of an Edtech Sith Lord

    The Actual Work of an Edtech Sith Lord

    Coming to Terms With My Role as an Edtech Administrator and My Contribution to Education, Edtech, and Educators

    Recently, at the ISTE 2019 conference, I presented on building an educational technology professional development program on a budget. It was only the second time I had given this presentation and I was fortunate enough to be collaborating with my edtech sister, Kelly Martin. It was my first time presenting at what is the gold-standard conference in our field and I was feeling the incredibly trite combination of excited and nervous. Now that it is said and done, I’ve done a lot of thinking about the presentation, our message, the systems we have tried to build, and the pedagogical practice we have tried to improve. Our presentation, in many ways, speaks to a professional identity crisis I have been having since I decided to cross over to the Dark Side and become an Edtech Sith Lord, an administrator.

    I came into edtech as a teacher coach. It was my job to be the expert on the tools AND to coach teachers. I would partner with them, build lesson plans and activities with them, and be in the room as support when we tested those lesson plans out with students. I loved it. If I am 100% honest with myself, after three years of being an administrator, I still miss it. In fact, I am going to commit a teaching taboo and admit: I think I loved that job more than I loved my 14 years of being a classroom teacher, because the only thing I have loved more than helping middle school kids succeed by finding what they are capable of is helping teachers succeed by finding what they AND their students are capable of.

    So when I became an administrator, when I made the conscious decision to join the Dark Side and trade my green lightsaber for a red one, I knew what it was I wanted to accomplish and why I was doing it. I wanted to build a great educational technology department in a district that was just starting out with edtech. I had had a great model at Fairfield-Suisun Unified School District, led by Dr. Melissa Farrar. To this day, FSUSD still employs two of the best educational leadership role models I have met, Dr. Farrar and Kristen Witt. I was part of the team that first started the educational technology department in FSUSD, and naturally had some ideas on how I would do it “differently” or maybe “better.”

    I didn’t know how I was going to do it, but I had my what and my why in place. The how… well that has proved to be trickier, and more difficult than I expected. A thing I didn’t expect is how much being on The Dark Side was going to pull me farther and farther from direct contact with teachers. It has been a bit of a sacrifice, one that I am certain I would make again. Yet, it would have been easier if I had realized that going in.

    So now, after ISTE, after I have been allowed to represent myself as some sort of expert on building an educational technology department, I am reflecting even more on these questions: Am I effective? Am I an effective leader? What do I essentially do? How do I approach it? What do I believe in here?

    Teaching can be solitary work, but at least you have a whole school of other teachers, and a vast social media teacher community. One of the hardest things about my job is that to find a professional community you have to have friends outside of your district, because there’s only ever one of you. Even with that, I think what I still wrestle with most, having moved to the Dark Side, is missing teachers and classrooms.

    When I do get the chance to speak with men and women who do a similar job and we discuss how we interact with teachers and how we create professional development systems, there are common threads. What I have to say on this may not be terribly original. It could be summed up as “hire good people, and then get out of their way”, but since I have been reflecting on this, I thought I would share what the most important tasks of an Edtech Sith Lord are.

    Recruit Revolutionaries

    Every educational technology administrator I know in any public school district anywhere, who does not also have to do the IT portion of that work, is constantly messaging this to the entire organization: “We are not IT.” I like to tell people that IT works with boxes and wires, and edtech works with hearts and minds. Because we work with hearts and minds, we need the right people. Recruiting quality people is a quintessential part of building a good professional development program, educational technology or otherwise.

    One of the things you quickly find out when you’re recruiting for professional development is that even the best and most experienced teachers are not necessarily going to be the best professional developers. The skill-sets certainly overlap to a degree, the same way that there is overlap between pedagogy and andragogy, but they are not the same. Additionally, probably unsurprisingly, good teachers who are comfortable in front of a room full of second graders are not always comfortable in front of a room of their peers. So you do have to find people who are willing to do all the parts of the job. Good teachers are a must, but you cannot stop with that criterion.

    The reason I want to recruit “revolutionaries” is that an effective professional developer has to be willing to stand in front of a room full of teachers and say, “what you’re doing is good, but it could be great!” An effective professional developer is an agent of change. Being a champion of “it could be so much better” requires bravery, ardor, and perspicacity. Your people skills have to be on point. As a professional developer, you are yourself a recruiter to the cause of improved teaching practice, and you have to find a way to be both subtle and enthusiastic, to be a Pied Piper of teachers when you’re telling them “you can do better,” because that task is fraught with pushback and hurt if you do it with a heavy hand. You must achieve a balance of gentle yet relentless urging forward of your colleagues.

    Recruiting revolutionaries is no easy task, and they are usually in short supply. Another thing you have to be mindful of as a leader of revolutionaries, is that revolutionaries want change…and they want it now. Managing that expectation and engendering patience in them…also not easy. I wish I had better guidelines here, but I am not always patient myself, and sometimes my revolutionaries have had to teach me patience, but it is definitely a thing to think about. If you have done your recruiting right, you will find yourself being an Edtech Sith Lord who leads Edtech Jedi.

    Clear the Path

    The next thing I have learned over the last three years is that I need to clear the path for the revolution. In other words, I need to set up conditions so that my revolutionaries can get on with the work of proselytizing, being agents of change, and winning hearts and minds. What’s more, I need to ensure we don’t run out of the physical and emotional supplies they need to carry on. In short, you truly must support your revolutionaries in every conceivable way.

    Clearing the path can take many different forms. The most obvious is making sure that your team has the technology they need. You want them to be innovators and explorers so “standard issue” is often not enough. Hopefully, they will ask you, “Can we get some ____?” At first, or at least for me, my first impulse was to say, “yes.” But what you soon realize is that you are on the Dark Side, and you have peers and superiors on the Dark Side, and one of those higher Sith Lords is going to ask you why you spent $3,000 on 3D printers. You had better be ready to justify that cost using standards, superintendent goals, or board goals.

    In this case, one part of clearing the path is starting to ask your Jedi, your revolutionaries, “why,” and asking them to think about the pedagogical purpose for trying the cool new thing. Even when you yourself think it’s super cool and don’t want to ask why because you want to play with the new toys too, you have to ask that question. Another part of clearing the path is communicating and “educating” your peers and superiors behind the scenes to make connections between your experimental/innovative work and more conventional areas of education. If they already understand your department goals and vision to the point where they can guess why you’d be going to trainings or conferences, or purchasing technology they’re not themselves familiar with, then they’re less likely to question or push back.

    In fact, much of clearing the path is actually done away from your team. It might mean working with IT, principals, or union leaders. Sometimes clearing the path means finding paid professional development or peers for your Jedi and putting them in the same physical space to make connections and find support. And this last point, making sure your team has a professional community, is an example of clearing the emotional path for your team.

    I feel like, in order to do the work of professional development in education well, you have to really want to do it. If you have recruited revolutionary Jedi, and they are anxiously waiting to see change, then they might be in for some disappointment in the day-to-day. Especially in public education, changes are often incremental and slow. The word glacial comes to mind. However, if you can give your team a sense of belonging to something, remind and show them their accomplishments from time to time (hint: you will need this for yourself too as a Sith Lord) and provide opportunities for fun and bonding, then their emotional path will remain clear.

    Develop Your Developers

    This may not be as straightforward as it might sound. Obviously there is the normal goal-setting and driving people to develop their skills. In educational technology, we have the benefit of having many different certifications out there for our people to pursue. I work in a GSuiteEdu District, and I am very happy to say that we have added many Google Certified Trainers and Google Certified Innovators in our district, at all levels, and we have grown the number of Level 1 Google Certified Educators dramatically. This has been an outstanding achievement for our district, but this technical skill expertise is not enough.

    One of the things I have figured out, and it seems obvious when I read it, is to find out how people want to be developed, how they want to grow, and then find ways to grow them in those areas. This has two difficulties involved in it. The first difficulty is that sometimes you need to set aside how you want someone that you are leading to grow. Sometimes you have a need on your team, and you only have so many team members to fill it, and the team need can drive your actions in a way that isn’t always best for the person you’re trying to develop. There are, of course, certain basic team needs that must be fulfilled, but as new challenges or roles come along it’s good to be judicious and deliberate in assigning those roles and the accompanying development that goes along with them. The second difficulty comes when the team member isn’t really sure how they want to grow themselves. Allowing somebody the time and space for self-discovery and reflection can be difficult, especially if you are an impatient Sith Lord, but it will pay dividends in the long run.

    And then there’s this other thing, which seems to go opposite to the idea of developing people how they want to be developed. Sometimes you can see the potential for strengths in people; sometimes these strengths have no direct impact on the work of your team. Sometimes you can see that people are good at things even if they don’t know that they are good at those things, or, and this is a hard one, even if they don’t necessarily want to be good at those things.

    One of the members of my team is a natural diplomat, a clear-headed communicator, and has an overriding sense of fairness. It’s like he is a natural-born, level-headed leader. This is a role he shies away from. Every time he is in leadership he distinguishes himself so people keep asking him to do it. I think he would be a fabulous administrator, and it has taken me over a year to get him to a place to even consider it. For my part, I have had to be mindful and creative about how I use certain situations to help him see his strengths and the opportunities they might afford him.

    So in a way, being a Sith Lord is being a talent scout. This is pretty obvious at the recruiting phase when you’re looking for the initial attributes you want on your team. In addition, as you work with your individual team members–and you really should approach developing your team members as individuals–you need to be looking for their strengths so that you can build on them, and their areas of growth to mitigate them. The difficulty that comes as a team leader is when you know you need to push somebody up to a new position or a new challenge which will require them to leave your team. That can be hard, and downright annoying, but you develop yourself as a leader when you find new people to recruit and develop. You have to remember that teams succeed because of systems AND people. Build both, and in the long run the work will succeed, individuals will succeed, and the accomplishment will be satisfying.



    Source link

  • Inside an Expanding EdTech Company – EdTech Digest

    Inside an Expanding EdTech Company – EdTech Digest

    Why the CEO of a pioneering firm assembled a brand-new leadership team.  

    PANEL DISCUSSION | moderated by Victor Rivero

    Jamie Candee is relentless in her pursuit to put the educator first in everything that her online teaching and learning company does.

    So it follows that Jamie, recently featured as one of the Top 100 Influencers in EdTech by EdTech Digest, would expand her executive team with industry experts and educators:

    Marcus Lingenfelter, Senior VP of Strategic Initiatives and Partnerships;

    Karen Barton, Ph.D., Senior VP of Research and Design;

    Jason Scherschligt, VP of Product Strategy and Experience; and

    Christy Spivey, VP of Curriculum and Assessment Development

    …join her company, Edmentum, as it experiences strong growth and demand for partnerships with educators across the globe.

    (pictured above, from left: Marcus, Karen, Jason, Christy)   

    Built on 50 years of experience in education, the company’s solutions currently support educators and students in more than 40,000 schools nationwide.

    “I am beyond ecstatic to welcome all of our new executive team members,” says Jamie.

    “Our team has been trailblazers in education technology, and we are just getting started. As we enter a new chapter, we have assembled the right team to better serve educators worldwide,” she says.

    In this panel-in-print, we get up close with the individuals on her newly expanded team to get a better idea of the people leading the technology, and how their mindset and their approach will be key components of their future success.

    They have a lot to say.

    At a later point, we’ll follow up to see how they’ve done—and where they’re headed next.

    For now, enjoy an enlightening conversation.

    As former teachers, why do you think it’s important to have an educator’s perspective in the edtech business?

    Karen Barton: Educators, students, and their contexts vary, and sometimes vary in ways designers may not have anticipated, even in ways that may have unintentional outcomes. Understanding the contexts in which any solution is being utilized is critical to ensuring the solution best meets the needs of the educators and learners.

    Christy Spivey: The most important thing we can do is listen to educators and what they need. Any decision that is made for a new product must be made with the best interest of the teacher in mind. That’s a philosophy we embrace at Edmentum. What we need to do is extend that to having great empathy for educators and build programs that delight them and allow them to have a better connection to improving learning for their students.

    What was the biggest lesson you learned from your previous positions that will inform your current work?

    Karen Barton: 3 things: 1) The interactions we have with one another matter. They matter because when those interactions are positive, there is engagement, trust, joy, and success in our work, in the educators we serve, and in our personal lives and communities. 2) Data is only as powerful as our collection of it and should not be limited by the modeling paradigms of the past. And, 3) we should understand that even the best designs, methods, and intentions may not be aligned with the way in which educators leverage our solutions. As such, we should be sure to focus on what educators and learners really need – first – and work to meet those needs with flexibility, innovation, and usefulness.

    Marcus Lingenfelter: No significant challenges are overcome by any one individual, organization, or governmental entity – ergo, strategic partnerships are critical to mission success. Consider the partners enlisted to put American Neil Armstrong on the moon. Competing aerospace companies Boeing, Northrop Grumman, and Lockheed Martin aligned their unique capabilities to the demanding requirements to realize technological and engineering capabilities previously regarded as science fiction. Education’s challenges today are equally daunting and require a similar approach that aligns interests and capabilities of multiple organizations to ensure mission success. That mission – student success!

    Jason Scherschligt: Many times in my career I’ve been reminded of the value of observing and understanding the challenges of those who use your product. If you want to lead a product, you need to get out of your office and experience what your users experience. Educators work in complex environments with a unique set of demands and pressures. I want our product teams to obsess over the needs of educators. Getting to know educators in their natural environment is the best way to do that.

    Christy Spivey: Most of my career has been spent in Education. First as a classroom teacher and then at Edmentum. Over the years, I have worked with many educators and school districts, and all of them have one thing in common – needing more time to help students achieve their goals. With this experience, I am excited to continue to work with educators to make sure that everything we create exceeds educators’ and students’ expectations. 


    Marcus

    Marcus Lingenfelter, Edmentum’s newly-appointed Senior Vice President of Strategic Initiatives and Partnerships, has two decades of leadership experience in postsecondary education including campus roles at the University of Virginia and Penn State University along with cabinet-level positions at Widener University and Harrisburg University of Science and Technology. He previously served as Senior Vice President of Advancement for the National Math and Science Initiative (NMSI) – the non-profit established to dramatically improve math and science educational outcomes for the country.


    What would you say is the biggest need for educators today?

    Karen: Empowerment – the freedom to meet the needs of their learners in a variety of ways; and the training and support to ensure they are equipped to do so. 

    Christy: Educators need to feel supported and to have their voices heard. If we can provide educators with tools and data that give them the information they need to maximize their time with their students, then we are supporting teachers in a way that gives them the ability to use their talents in the classroom.

    Why should educators feel optimistic about the future of education technology?

    Marcus: Education technology, when properly deployed, has the ability to put a highly effective state-certified teacher in front of every student regardless of location or personal circumstances. Whether it is reaching students located in exceptionally rural parts of the country where teachers can’t be recruited or into the urban centers that experience high teacher turnover rates, technology has the ability to positively impact some of education’s most intractable challenges. Therefore, educators concerned about equity and access for all students should feel tremendous optimism about what the present and future hold for impacting student outcomes via education technology.

    Jason: Several trends in technology and culture are converging in ways that will help educators and students succeed. These include big data sets and powerful analytics, which will help us understand the outcomes produced by our instruction and assessment, and new interactive capabilities, like virtual and augmented reality, which will enable teachers to provide really creative and engaging learning experiences. At our company, we won’t chase technology for sizzle, but apply it on a solid base of research to genuinely help teachers teach and students learn. 

    What trends in education have you most excited?  

    Karen: I’m most excited about the shift away from a sole focus on state summative tests and a focus on supporting educators in collecting evidence through multiple and varied measures and meeting the needs of all learners.

    Christy: I’m really excited about the focus on equity and access in education today. We live in a time where the power of technology should be providing access to all kinds of instruction for all student populations. There’s no reason we shouldn’t be fighting as edtech professionals to ensure every student has access to this technology. I’m also excited about the discussions around new ways to assess student learning. If we really want to know how students learn, then we need to rethink how we assess that knowledge. 


    Karen

    Karen Barton, Ph.D. joins Edmentum as the Senior Vice President of Research and Design, bringing 20 years of experience in education to the role. She previously served as Vice President of Learning Analytics at Discovery Education and most recently as Vice President of Assessment Solutions at NWEA. In her new role, Karen will lead Edmentum’s research, academic program design, and psychometric efforts, with a focus on ensuring every Edmentum program is valid, reliable, endorsed by educators, and produces positive student outcomes.


    What is technology’s role in education?  

    Karen: To support educators in providing engaging and accessible material to all students, freeing their preparation time to focus on the needs of individual students (not just prepping material for the whole class), and providing data, recommendations, and relevant content for each student: where they are in their learning journey and what they need to move forward. For students, it means access to greater experiences and learning in the medium of their future, along with exposure to digital opportunities (careers, access to information, working digitally, etc.). 

    Jason: I think it’s fundamentally augmentative. You can never take people out of education, because the entire enterprise of education is intended to help people become more capable and societies become stronger. So, technology in education exists to help teachers and students thrive, rather than to replace anyone. 

    What four edtech trends in the coming years are “trends to watch” — ones that will require some shaping and leadership? Name and define each trend, and briefly describe the sort of leadership that may be required to navigate it. 

    Karen:

    • Innovations in data modeling to expand our understanding of student learning, strategies, and needs.
    • Collaborative problem solving to build skills necessary for working in teams in career and even in college courses.
    • Adaptive learning, that is, not only adapting assessments but adapting recommendations of instruction. This should incorporate rich models of data and learning. It will be important for these systems to incorporate educator and student perspectives relative to learning needs.
    • Tighter integration of assessment and learning, where assessments close to the classroom become part of the larger system of assessments connected to state-level assessments and accountability. Ensuring that the purposes and usefulness of classroom-based assessments do not lose their value and meaning when put into an accountability context will be critical.

    Jason:

    These four really come to mind:

    • Really granular competency-based education (CBE), where every morsel of learning and assessment is closely aligned to specific standards and outcomes.
    • Adaptive learning, where we use artificial intelligence to deliver the right kind of instruction at the right time for the student.
    • Gamification, where the psychology of motivation and competition is applied to enable learning.
    • This one’s a bit of a personal interest, but technology applied to teaching the arts and humanities. Historically, edtech has emphasized math and hard sciences, where right and wrong answers might be easy to calculate, but I’m especially interested in watching how technology is applied to more nebulous and complex artifacts like poems and plays and paintings and history.

    Jason

    Jason Scherschligt joins Edmentum as Vice President of Product Strategy and Experience. He brings more than 18 years of experience leading innovative product management and user experience approaches in organizations like the Star Tribune, Capella University, Jostens, and GoKart Labs. Jason will focus on designing and managing industry-changing, captivating education products that truly empower educators, engage students, and provide education leadership with the insights they need to develop and maintain education programs that create access, equity, engagement, and positive learning outcomes.


    What is the responsibility of a more established company such as yours to the edtech space? What makes you say that? 

    Marcus: As the nation’s original distance learning platform (PLATO), Edmentum most certainly has a responsibility to lead. However, such leadership is not just in the “edtech space” per se, but rather for helping all our education partners realize positive student learning outcomes for the benefit of all concerned. The challenges facing our communities – local and global – are significant and require everyone to lift their gaze to get the bigger perspective and then act responsibly, in collaboration with others, to solve the problems.

    Christy: For any edtech company, there is a big responsibility to make sure that all programs and solutions we provide really have the educator in mind. We can only do that if we work with educators directly and listen to what they need. We also have a responsibility to use technology to bring more access and equity to student learning. At our company, we want to be at the forefront of pushing for innovative and proven programs that work for all students and educators. 


    Christy

    Christy Spivey assumes the role of Vice President of Curriculum and Assessment Development for Edmentum. During her tenure, Christy has served in a variety of roles shaping the company’s curriculum development strategy. Drawing on her experience in the classroom, she ensures educators are involved in every step of Edmentum’s design and development to provide programs that meet the needs of educators everywhere. Christy leads a team of deeply committed curriculum and assessment designers focused on creating a new paradigm in differentiated instruction, blended learning, and active learning models.


    Anything else you might care to add or emphasize, any parting words of edtech-relevant wisdom, or commentary about the future of edtech?

    Karen: Educational technology has the potential to revolutionize our education systems, from primary school through university and career training, transforming how we teach, train, learn, and collaborate, and bringing greater societal balance. To do so, we need to build platforms, data, content, and assessments that balance our attention across the diversity of learners and contexts, and work to address the discomfort of change that comes with progress: change that will require edtech companies to attend to the legislation governing educational models, and to be very cognizant and intentional about data privacy.

    Jason: I often think about Marc Andreessen’s aphorism that software is eating the world: almost every interaction or transaction we conduct somehow involves software, and that includes educational experiences. We can’t escape that, but we can shape it. Education remains humanity’s best hope for its future, and I’m excited that we get to shape how teachers apply software to enable student achievement.

    Christy: Today’s students should be going to school and interacting with technology in the same way they do in their social lives and time outside of school. We have a responsibility to support educators who want to make technology a seamless part of their instruction to engage students in learning. We need to have empathy for the complexity of teaching and learning and find ways to make both easier for students and educators.

    Victor Rivero is the Editor-in-Chief of EdTech Digest. Write to: [email protected]
