Tag: ALDA

  • See and Try the ALDA Rapid Prototype –

    As regular readers know, I recently announced a design/build workshop series for an AI Learning Design Assistant (ALDA). The idea is this:

    • If we can reduce the time it takes to design a course by about 20%, organizations that need to build enough courses to strain their budgets and resources will see “huge” productivity and quality benefits.
    • We should be able to use generative AI to achieve that goal fairly easily without taking ethical risks and without needing to spend massive amounts of time or money.
    • Beyond the immediate value of ALDA itself, learning the AI techniques we will use—which are more sophisticated than learning to write better ChatGPT prompts but far less involved than trying to build our own ChatGPT—will help the participants learn to accomplish other goals with AI.

    This may sound great in theory, but like most tech blah blah blah, it’s very abstract.

    Today I’m going to share with you a rapid prototype of ALDA. I’ll show you a demo video of it in action and I’ll give you the “source code” so you can run it—and modify it—yourself. (You’ll see why I’ve put “source code” in scare quotes as we get further in.) You will have a concrete demo of the very basic ALDA idea. You can test it yourself with some colleagues. See what works well and what falls apart. And, importantly, see how it works and, if you like, try to make it better. While the ALDA project is intended to produce practically useful software, its greatest value is in what the participants learn (and the partnerships they forge between workshop teams).

    The Miracle

    The ALDA prototype is a simple AI assistant for writing a first draft of a single lesson. In a way, it is a computer program that runs on top of ChatGPT. But only in a way. You can build it entirely in the prompt window using a few tricks that I would hardly call programming. You need a ChatGPT Plus subscription. But that’s it.

    It didn’t occur to me to build an ALDA proof-of-concept myself until Thursday. I thought I would need to raise the money first, then contract the developers, and then build the software. As a solo consultant, I don’t have the cash in my back pocket to pay the engineers I’m going to work with up-front.

    Last week, one of the institutions that are interested in participating asked me if I could show a demo as part of a conversation about their potential participation. My first thought was, “I’ll show them some examples of working software that other people have built.” But that didn’t feel right. I thought about it some more. I asked ChatGPT some questions. We talked it through. Two days later, I had a working demo. ChatGPT and I wrote it together. Now that I’ve learned a few things, it would take me less than half a day to make something similar from scratch. And editing it is easy.

    Here’s a video of the ALDA rapid prototype in action:

    ALDA Rapid Prototype Demo and Tips

    This is the starting point for the ALDA project. Don’t think of it as what ALDA is going to be. Think of it as a way to explore what you would want ALDA to be.

    The purpose of the ALDA rapid prototype

    Before I give you the “source code” and let you play with it yourselves, let’s review the point of this exercise and some warnings about the road ahead.

    Let’s review the purpose of the ALDA project in general and this release in particular. The project is designed to discover the minimum amount of functionality—and developer time, and money—required to build an app on top of a platform like ChatGPT to make a big difference in the instructional design process. Faster, better, cheaper. Enough that people and organizations begin building more courses, building them differently, keeping them more up-to-date and higher quality, and so on. We’re trying to build as little application as is necessary.

    The purpose of the prototype is to design and test as much of our application as we can before we bring in expensive programmers and build the functionality in ways that will be more robust but harder to change.

    While you will be able to generate something useful, you will also see the problems and limitations. I kept writing more and more elaborate scripts until ChatGPT began to forget important details and make more mistakes. Then I peeled back enough complexity to get it back to the best performance I can squeeze out of it. The script will help us understand the gap between ChatGPT’s native capabilities and the ones we need to get the value we want ALDA to provide.

    Please play with the script. Be adventurous. The more we can learn about that before we start the real development work, the better off we’ll be.

    The next steps

    Back in September—when the cutting edge model was still GPT-3—I wrote a piece called “AI/ML in EdTech: The Miracle, the Grind, and the Wall.” While I underestimated the pace of evolution somewhat, the fundamental principle at the core of the post still holds. From GPT-3 to ChatGPT to GPT-4, the progression has been the same. When you set out to do something with them, the first stage is The Miracle.

    The ALDA prototype is the kind of thing you can create at the Miracle stage. It’s fun. It makes a great first impression. And it’s easy to play with, up to a point. The more time you spend with it, the more you see the problems. That’s good. Once we have a clearer sense of its limitations and what we would like it to do better or differently, we can start doing real programming.

    That’s when The Grind begins.

    The early gains we can make with developer help shouldn’t be too hard. I’ll describe some realistic goals and how we can achieve them later in this piece. But The Grind is seductive. Once you start trying to build your list of additions, you quickly discover that the hill you’re climbing gets a lot steeper. As you go further, you need increasingly sophisticated development skills. If you charge far enough along, weird problems that are hard to diagnose and fix start popping up.

    Eventually, you can come to a dead end. A problem you can’t surmount. Sometimes you see it coming. Sometimes you don’t. If you hit it before you achieve your goals for the project, you’re dead.

    This is The Wall. You don’t want to hit The Wall.

    The ALDA project is designed to show what we can achieve by staying within the easier half of The Grind. We’re prepared to climb the hill after The Miracle, but we’re not going too far up. We’re going to optimize our cost/benefit ratio.

    That process starts with rapid prototyping.

    How to rapidly prototype and test the ALDA idea

    If you want to play with the ALDA script, I suggest you watch the video first. It will give you some valuable pointers.

    To run the ALDA prototype, do the following:

    1. Open up your ChatGPT Plus window. Make sure it’s set to GPT-4.
    2. Add any plugin that can read a PDF on the web. I happened to use “Ai PDF,” and it worked for me. But there are probably a few that would work fine.
    3. Find a PDF on the web that you want to use as part of the lesson. It could be an article that you want to be the subject of the lesson.
    4. Paste the “source code” that I’m going to give you below and hit “Enter.” (You may lose the text formatting when you paste the code in. Don’t worry about it. It doesn’t matter.)

    Once you do this, you will have the ALDA prototype running in ChatGPT. You can begin to build the lesson.

    Here’s the “source code”:

    You are a thoughtful, curious apprentice instructional designer. Your job is to work with an expert to create the first draft of curricular materials for an online lesson. The steps in this prompt enable you to gather the information you need from the expert to produce a first draft.

    Step 1: Introduction

    • “Hello! My name is ALDA, and I’m here to assist you in generating curricular materials for a lesson. I will do my best work for you if you think of me as an apprentice.
    • “You can ask me questions that help me think more clearly about how the information you are giving me should influence the way we design the lesson together. Questions help me think more clearly. 
    • “You can also ask me to make changes if you don’t like what I produce.
    • “Don’t forget that, in addition to being an apprentice, I am also a chatbot. I can be confidently wrong about facts. I also may have trouble remembering all the details if our project gets long or complex enough. 
    • “But I can help save you some time generating a first draft of your lesson as long as you understand my limitations.” 
    • “Let me know when you’re ready to get started.”

    Step 2: Outline of the Process

    • “Here are the steps in the design process we’ll go through:”
    • [List steps]
    • “When you’re ready, tell me to continue and we’ll get started.”

    Step 3: Context and Lesson Information

    • “To start, could you provide any information you think would be helpful to know about our project? For example, what is the lesson about? Who are our learners and what should I know about them? What are your learning goals? What are theirs? Is this lesson part of a larger course or other learning experience? If so, what should I know about it? You can give me a little or a lot of information.”
    • [Generate a summary of the information provided.]
    • [Generate implications for the design of the lesson.]
    • “Here’s the summary of the Context: [Summary]. 
    • Given this information, here are some implications for the learning design [Implications]. Would you like to add to or correct anything here? Or ask me follow-up questions to help me think more specifically about how this information should affect the design of our lesson?”

    Step 4: Article Selection

    • “Thank you for providing details about the Context and Lesson Information. Now, please provide the URL of the article you’d like to base the lesson on.”
    • [Provide the citation for the article and a one-sentence summary]
    • “Citation: [Citation]. One-sentence summary: [One-sentence summary. Do not provide a detailed description of the article.] Is this the correct article?”

    Step 5: Article Summarization with Relevance

    • “I’ll now summarize the article, keeping in mind the information about the lesson that we’ve discussed so far.
    • “Given the audience’s [general characteristics from Context], this article on [topic] is particularly relevant because [one- or two-sentence explanation].”
    • [Generate a simple, non-academic language summary of the article tailored to the Context and Lesson Information]
    • “How would you like us to use this article to help create our lesson draft?”

    Step 6: Identifying Misconceptions or Sticking Points

    • “Based on what I know so far, here are potential misconceptions or sticking points the learners may have for the lesson: [List of misconceptions/sticking points]. Do you have any feedback or additional insights about these misconceptions or sticking points?”

    Step 7: Learning Objectives Suggestion

    • “Considering the article summary and your goals for the learners, I suggest the following learning objectives:”
    • [List suggested learning objectives]
    • “Do you have any feedback or questions about these objectives? If you’re satisfied, please tell me to ‘Continue to the next step.’”

    Step 8: Assessment Questions Creation

    • “Now, let’s create assessment questions for each learning objective. I’ll ensure some questions test for possible misconceptions or sticking points. For incorrect answers, I’ll provide feedback that addresses the likely misunderstanding without giving away the correct answer.”
    • [For each learning objective, generate an assessment question, answers, distractors, explanations for distractor choices, and feedback for students. When possible, generate incorrect answer choices that test the student for the misunderstandings or sticking points identified in the misconceptions step. For incorrect answers, provide feedback that helps the student rethink the question without giving away the correct answer. For incorrect answers that test specific misconceptions or sticking points, provide feedback that helps the student identify the misconception or sticking point without giving away the correct answer.]
    • “Here are the assessment questions, answers, and feedback for [Learning Objective]: [Questions and Feedback]. Do you have any feedback or questions about these assessment items? If you’re satisfied, please tell me to ‘Continue to the next step.’”

    Step 9: Learning Content Generation

    • “Now, I’ll generate the learning content based on the article summary and the lesson outline. This content will be presented as if it were in a textbook, tailored to your audience and learning goals.”
    • [Generate textbook-style learning content adjusted to account for the information provided by the user. Remember to write it for the target audience of the lesson.]
    • “Here’s the generated learning content: [Content]. Do you have any feedback or questions about this content? If you’re satisfied, please tell me to ‘Continue to the next step.’”

    Step 10: Viewing and Organizing the Complete Draft

    • “Finally, let’s organize everything into one complete lesson. The lesson will be presented in sections, with the assessment questions for each section included at the end of that section.”
    • [Organize and present the complete lesson. INCLUDE LEARNING OBJECTIVES. INSERT EACH ASSESSMENT QUESTION, INCLUDING ANSWER CHOICES, FEEDBACK, AND ANY OTHER INFORMATION, IMMEDIATELY AFTER RELEVANT CONTENT.]
    • “Here’s the complete lesson: [Complete Lesson]. Do you have any feedback or questions about the final lesson? If you’re satisfied, please confirm, and we’ll conclude the lesson creation process.”

    The PDF I used in the demo can be found here. But feel free to try your own article.

    Note there are only four syntactic elements in the script: quotation marks, square brackets, bullet points, and step headings. (I read that all caps help ChatGPT pay more attention, but I haven’t seen evidence that it’s true.) If you can figure out how those elements work in the script, then you can prototype your own workflow.
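    As a sketch of how those elements fit together, here is a minimal, hypothetical two-step skeleton (the step names and wording are illustrative, not taken from the script above):

```
Step 1: [Step Name]

• “Text in quotation marks is spoken to the user verbatim.”
• [Text in square brackets is an instruction telling the model what to generate.]
• “When you’re ready, tell me to continue.”

Step 2: [Next Step Name]

• [Generate a draft based on the user’s answers so far.]
• “Here’s the draft: [Draft]. Any feedback before we continue?”
```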

    I’m giving this version away. This is partly for all you excellent, hard-working learning designers who can’t get your employer to pay $25,000 for a workshop. Take the prototype. Try it. Let me know how it goes in the comments thread of this post. Let me know if it’s useful to you in its current form. If so, how much and how does it help? If not, what’s the minimum feature list you’d need for ALDA to make a practical difference in your work? Let’s learn together. If ALDA is successful, I’ll eventually find a way to make it affordable to as many people as possible. Help me make it successful by giving me feedback.

    I’ll tell you what’s at the top of my own personal goal list for improving it.

    Closing the gap

    Since I’m focused on meeting that “useful enough” threshold, I’ll skip the thousand cool features I can think of and focus on the capabilities I suspect are most likely to take us over that threshold.

    Technologically, the first thing ALDA needs is robust long-term memory. It loses focus when prompts or conversations get too long. It needs to be able to accurately use and properly reference articles and other source materials. It needs to be able to “look back” on a previous lesson as it writes the next one. This is often straightforward to do with a good developer and will get easier over the next year as the technology matures.

    The second thing it could use is better models. Claude 2 gives better answers than GPT-4 when I walk it through the script manually. Claude 3 may be even better when it comes out. Google will release its new Gemini model soon. OpenAI can’t hold off on GPT-5 for too long without risking losing its leadership position. We may also get Meta’s LLama 3 and other strong open-source contenders in the next six months. All of these will likely provide improvements over the output we’re getting now.

    The third thing I think ALDA needs is marked-up examples of finished output. Assessments are particularly hard for the models to do well without strong, efficacy-tested examples that have the parts and their relationships labeled. I know where to get great examples but need technical help to get them. Also, if the content is marked up, it can be converted to other formats and imported into various learning systems.
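    To make the idea concrete, here is a toy Python sketch of what a marked-up example combined with prompt enrichment could look like. The markup tags, the example item, and the helper function are all hypothetical illustrations of the technique, not the project’s actual format:

```python
# A minimal sketch of "prompt enrichment": prepend one labeled,
# high-quality example so the model can imitate its structure.
# The markup tags (stem/answer/distractor/feedback) are illustrative.

EXAMPLE_ITEM = """\
<item objective="Identify the independent variable in an experiment">
  <stem>A researcher varies study time and measures test scores.
        What is the independent variable?</stem>
  <answer>Study time</answer>
  <distractor misconception="confuses measured outcome with manipulated input">
    <text>Test scores</text>
    <feedback>Think about which quantity the researcher controls directly.</feedback>
  </distractor>
</item>
"""

def build_enriched_prompt(objective: str, misconception: str) -> str:
    """Assemble a few-shot prompt: instructions, one worked example,
    then the new task. A real system would include several examples."""
    return (
        "You write assessment items in the markup format shown below.\n"
        "Distractors must target a named misconception, and feedback must\n"
        "not reveal the correct answer.\n\n"
        f"Example:\n{EXAMPLE_ITEM}\n"
        f"Now write one item for the objective: {objective}\n"
        f"Target this misconception: {misconception}\n"
    )

prompt = build_enriched_prompt(
    objective="Distinguish correlation from causation",
    misconception="assumes correlation implies causation",
)
print(prompt)
```

    Because the example is labeled, a downstream converter could also translate the model’s output into whatever import format a given learning platform needs.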

    These three elements—long-term memory usage, “few-shot” examples of high-quality marked-up output, and the inevitable next versions of the generative AI models—should be enough to enable ALDA to have the capabilities that I think are likely to be the most impactful:

    • Longer and better lesson output
    • Better assessment quality
    • Ability to create whole modules or courses
    • Ability to export finished drafts into formats that various learning systems can import (including, for example, interactive assessment questions)
    • Ability to draw on a collection of source materials for content generation
    • Ability to rewrite the workflows to support different use cases relatively easily

    But the ALDA project participants will have a big say in what we build and in what order. In each workshop in the series, we’ll release a new iteration based on the feedback from the group as they build content with the previous one. I am optimistic that we can accomplish all of the above and more based on what I’m learning and the expert input I’m getting so far.

    Getting involved

    If you play with the prototype and have feedback, please come back to this blog post and add your observations to the comments thread. The more detailed, the better. If I have my way, ALDA will eventually make its way out to everyone. Any observations or critiques you can contribute will help.

    If you have the budget, you can sign your team up to participate in the design/build workshop series. The cost, which gets you all source code and artifacts in addition to the workshops and the networking, is $25,000 for the group for half a dozen half-day virtual design/build sessions, including quality networking with great organizations. You can find a downloadable two-page prospectus and an online participation application form here. Applications will be open until the workshop is filled. I already have a few participating teams lined up and a handful more that I am talking to.

    To contact me for more information, please fill out this form:

    You can also write me directly at [email protected].

    Please join us.

  • Announcing a Design/Build Workshop Series for an AI Learning Design Assistant (ALDA) –

    Want to build an AI tool that will seriously impact your digital learning program? Right now? For a price that you may well have in your professional development budget?

    I’m launching a project to prove we can build a tool that will change the economics of learning design and curricular materials in months rather than years. Its total cost will be low enough to be paid for by workshop participation fees.

    Join me.

    The learning design bottleneck

    Many of my friends running digital course design teams tell me they cannot keep up with demand. Whether their teams are large or small, centralized or instructor-led, higher education or corporate learning and development (L&D), the problem is the same: several friends at large shops have told me that their development of new courses and redesigns of old ones have all but ground to a halt. They don’t have time or money to fix the problem.

    I’ve been asking, “Suppose we could accelerate your time to develop a course by, say, 20%?” Twenty percent is my rough, low-end guess about the gains. We should be able to get at least that much benefit without venturing into the more complex and riskier aspects of AI development. “Would a 20% efficiency gain be significant?” I ask.

    Answer: “It would be huge.”

    My friends tend to cite a few benefits:

    • Unblocked bottlenecks: A 20% efficiency gain would be enough for them to start building (or rebuilding) courses at a reasonable speed again.
    • Lower curricular materials costs: Organizations could replace more licensed courses with ones that they own. No more content license costs. And you can edit it any way you need to.
    • Better quality: The tool would free up learning designers to build better courses rather than running just to get more courses finished.
    • More flexibility with vendors: Many departments hire custom course design shops. A 20% gain in efficiency would give them more flexibility in deciding when and how to invest their budgets in this kind of consulting.

    The learning design bottleneck is a major business problem for many organizations. Relatively modest productivity gains would make a substantial difference for them. Generative AI seems like a good tool for addressing this problem. How hard and expensive would it be to build a tool that, on average, delivers a 20% gain in productivity?

    Not very hard, not very expensive

    Every LMS vendor, courseware platform provider, curricular materials vendor, and OPM provider is currently working on tools like this. I have talked to a handful of them. They all tell me it’s not hard—depending on your goals. Vendors have two critical constraints. First, the market is highly suspicious of black-box vendor AI and very sensitive to AI products that make mistakes. EdTech companies can’t approach the work as an experiment. Second, they must design their AI features to fit their existing business goals. Every feature competes with other priorities that their clients are asking for.

    The project I am launching—AI Learning Design Assistant (ALDA)—is different. First, it’s design/build. The participants will drive the requirements for the software. Second, as I will spell out below, our software development techniques will be relatively simple and easy to understand. In fact, the value of ALDA is as much in learning patterns to build reliable, practical, AI-driven tools as it is in the product itself. And third, the project is safe.

    ALDA is intended to produce a first draft for learning designers. No students need to see content that has not been reviewed by a human expert or interact directly with the AI at all. The process by which ALDA produces its draft will be transparent and easy to understand. The output will be editable and importable into the organization’s learning platform of choice.

    Here’s how we’ll do it:

    • Guided prompt engineering: Your learning designers probably already have interview questions for the basic information they need to design a lesson, module, or course. What are the learning goals? How will you know if students have achieved those goals? What are some common sticking points or misconceptions? Who are your students? You may ask more or less specific and more or less elaborate versions of these questions, but you are getting at the same ideas. ALDA will start by interviewing the user, who is the learning designer or subject-matter expert. The structure of the questions will be roughly the same. While we will build out one set of interview questions for the workshop series, changing the design interview protocol should be relatively straightforward for programmers who are not AI specialists.
    • Long-term memory: One of the challenges with using a tool like ChatGPT on its own is that it can’t remember what you talked about from one conversation to the next and it might or might not remember specific facts that it was trained on (or remember them correctly). We will be adding a long-term memory function. It can remember earlier answers in earlier design sessions. It can look up specific documents you give it to make sure it gets facts right. This is an increasingly common infrastructure component in AI projects. We will explore different uses of it when we build ALDA. You’ll leave the workshop with the knowledge and example code of how to use the technique yourself.
    • Prompt enrichment: Generative AI often works much better when it has a few really good, rich examples to work from. We will provide ALDA with some high-quality lessons that have been rigorously tested for learning effectiveness over many years. This should increase the quality of ALDA’s first drafts. Again, you may want your learning designs to be different. Since you will have the ALDA source code, you’ll be able to put in whatever examples you want.
    • Generative AI export: We may or may not get to building this feature depending on the group’s priorities in the time we have, but the same prompt enrichment technique we’ll use to get better learning output can also be used to translate the content into a format that your learning platform of choice can import directly. Our enrichment examples will be marked up in software code. A programmer without any specific AI knowledge can write a handful of examples translating that code format into the one that your platform needs. You can change it, adjust it, and enrich it if you change platforms or if your platform adds new features.
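    As a sketch of the long-term memory idea from the list above: production systems typically use vector embeddings and a retrieval store, but simple word overlap (a stand-in so this runs with no dependencies) shows the retrieve-then-prompt shape. The notes and function names are made up for illustration:

```python
# A toy sketch of "long-term memory": store notes from earlier design
# sessions and retrieve the most relevant ones to splice into the next
# prompt. Real systems rank notes with vector embeddings; plain word
# overlap stands in for that here.

from collections import Counter

memory: list[str] = []

def remember(note: str) -> None:
    """Persist a fact from an earlier session."""
    memory.append(note)

def recall(query: str, k: int = 2) -> list[str]:
    """Return the k stored notes sharing the most words with the query."""
    q = Counter(query.lower().split())
    def overlap(note: str) -> int:
        return sum((Counter(note.lower().split()) & q).values())
    return sorted(memory, key=overlap, reverse=True)[:k]

remember("The learners are first-year nursing students.")
remember("Lesson 3 covered medication dosage calculations.")
remember("The program uses a mastery-based grading policy.")

context = recall("What did the dosage lesson cover?", k=1)
print(context)  # the dosage note ranks highest on word overlap
```

    In a full system, the retrieved notes would be inserted into the interview prompt so the assistant “remembers” earlier design decisions across sessions.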

    The consistent response from everyone in EdTech I’ve talked to who is doing this kind of work is that we can achieve ALDA’s performance goals with these techniques. If we were trying to get 80% or 90% accuracy, that would be different. But a 20% efficiency gain with an expert human reviewing the output? That should be very much within reach. The main constraints on the ALDA project are time and money. Those are deliberate. Constraints drive focus.

    Let’s build something useful. Now.

    The collaboration

    Teams that want to participate in the workshop will have to apply. I’m recruiting teams that have immediate needs to build content and are willing to contribute their expertise to making ALDA better. There will be no messing around. Participants will be there to build something. For that reason, I’m quite flexible about who is on your team or how many participate. One person is too few, and eight is probably too many. My main criterion is that the people you bring are important to the ALDA-related project you will be working on.

    This is critical because we will be designing ALDA together based on the experience and feedback from you and the other participants. In advance of the first workshop, my colleagues and I will review any learning design protocol documentation you care to share and conduct light interviews. Based on that information, you will have access to the first working iteration of ALDA at the first workshop. For this reason, the workshop series will start in the spring. While ALDA isn’t going to require a flux capacitor to work, it will take some know-how and effort to set up.

    The workshop cohort will meet virtually once a month after that. Teams will be expected to have used ALDA and come up with feedback and suggestions. I will maintain a rubric for teams to use based on the goals and priorities for the tool as we develop them together. I will take your input to decide which features will be developed in the next iteration. I want each team to finish the workshop series with the conviction that ALDA can achieve those performance gains for some important subset of their course design needs.

    Anyone who has been to one of my Empirical Educator Project (EEP) or Blursday Social events knows that I believe that networking and collaboration are undervalued at most events. At each ALDA workshop, you will have time and opportunities to meet with and work with each other. I’d love to have large universities, small colleges, corporate L&D departments, non-profits, and even groups of students participating. I may accept EdTech vendors if and only if they have more to contribute to the group effort than just money. Ideally, the ALDA project will lead to new collaborations, partnerships, and even friendships.

    Teaching AI about teaching and learning

    The workshop also helps us learn together how to teach AI about teaching and learning. AI research is showing us how much better the technology can be when it’s trained on good data. There is so much bad pedagogy on the internet. And the content that is good is not marked up in a way that makes it easy to teach AI the patterns. What does a good learning objective or competency look like? How do you write hints or assessment feedback that helps students learn but doesn’t give away the answers? How do you create alignment among the components of a learning design?

    The examples we will be using to teach the AI have not only been fine-tuned for effectiveness using machine learning over many years; they are also semantically coded to capture some of these nuances. These are details that even many course designers haven’t mastered.

    I see a lot of folks rushing to build “robot tutors in the sky 2.0” without a lot of care to make sure the machines see what we see as educators. They put a lot of faith in data science but aren’t capturing the right data because they’re ignoring decades of learning science. The ALDA project will teach us how to teach the machines about pedagogy. We will learn to identify the data structures that will empower the next generation of AI-powered learning apps. And we will do that by becoming better teachers of ALDA using the tools of good teaching: clear goals, good instructions, good examples, and good assessments. Much of it will be in plain English, and the rest will be in a simple software markup language that any computer science undergraduate will know.

    Wanna play?

    The cost for the workshop series, including all source code and artifacts, is $25,000 for your team. You can find an application form and prospectus here. Applications will be open until the workshop is filled. I already have a few participating teams lined up and a handful more that I am talking to.

    You can also find a downloadable two-page prospectus and an online participation application form here. To contact me for more information, please fill out this form:

    [Update: I’m hearing from a couple of you that your messages to me through the form above are getting caught in the spam filter. Feel free to email me at [email protected] if the form isn’t getting through.]

    I hope you’ll join us.
