Tag: Learning

  • What Are Student Learning Outcomes?

    What Are Student Learning Outcomes?

    Learning outcomes describe the abilities, skills, and knowledge used to assess student learning. They should outline what students know and can demonstrate upon completing a learning experience or set of experiences. When developing student learning outcomes for educators to set as curriculum objectives, consider the following recommendations:

    How to Build Student Learning Outcomes

    Choose between 3-5 learning outcomes: Choose enough learning outcomes that student progress can be measured, without making assessment overly complicated for educators. It is also worth noting that not all educational activities will assess all learning outcomes; each educational activity can assess students’ development and comprehension against one or two learning objectives per class. Fewer than three objectives likely means the student learning objectives are not robust enough for an entire course.

    Learning outcomes should be straightforward: The outcomes identified and described in your plan should be concise and simple. Avoid complex phrasing and compound statements that mesh more than one idea together. Each learning outcome should focus on the development of one skill or the achievement of one goal.

    Learning outcomes should be expressed in the future tense: Proper implementation requires that learning outcomes be expressed in the future tense: each statement should express what an individual student will be able to do as the result of specific instruction or an educational activity. Outcomes should involve active learning and be observable, so they can be quantified when examining key student success metrics through learning assessment. They should also require students to create and make use of information literacy skills.

    Learning outcomes should be realistic: To be successful, student learning outcomes must be attainable for the students for whom they are designed. Outcomes need to be designed with students’ abilities, initial skill sets, cognitive development, and the institutional time frame (a week, a semester, etc.) available to attain them in mind. They should also align with the material being taught.

    Learning outcomes should align with the curriculum: The learning outcomes developed should be consistent with the curriculum objectives of the program and discipline in which they are taught. This is especially important when interpreting assessment results to analyze where changes in instruction should be made. Curriculum mapping is one effective way to ensure that chosen learning outcomes correspond to the designated curriculum. A curriculum map is a diagram that plots learning outcomes against specific program courses, which helps ensure that learning goals are reached in a timely manner.

    Methods of Constructing Learning Outcomes

    Implementing taxonomies: Taxonomies of learning experiences and student outcomes can be useful outlines for developing thorough and insightful lists of student outcomes. Taxonomies classify and compartmentalize the different types of student learning, usually dividing learning into three domains. The first is the cognitive domain, which has six levels, ranging from simple recall or recognition of facts at the lowest level, through increasingly complex and abstract mental levels, to the highest order, evaluation. The second is the affective domain, which involves feelings, emotions, and attitudes: the ways in which humans deal with things emotionally, such as feelings, values, appreciation, enthusiasm, motivations, and attitudes. The third is the psychomotor domain, which refers to the motor skills learners are expected to have acquired and mastered at each stage of development.

    Bloom’s Taxonomy of Educational Objectives (1956) is one traditional framework for structuring learning outcomes. The levels of performance for Bloom’s cognitive domain are knowledge, comprehension, application, analysis, synthesis, and evaluation, arranged in ascending order of cognitive complexity, with evaluation representing the highest level. The first step, knowledge, focuses on knowing and remembering important facts, concepts, terms, principles, or theories. The second step, comprehension, focuses on understanding specific learning concepts or curriculum objectives. The third step, application, focuses on applying skills and knowledge to solve problems. The fourth step, analysis, focuses on identifying the structure and organization of specific concepts or subjects, including the relationships and moving elements within them. The fifth step, synthesis, focuses on creating and integrating new ideas into a solution, proposing an action plan, or formulating a new classification scheme through critical thinking. The sixth and final step, evaluation, focuses on judging the quality of knowledge, whether broadly or for a specific learning concept, based on its adequacy, use, value, or logic.

    Using power verbs: When constructing learning outcomes, use concrete action verbs that describe specific actions that are observable and measurable.

    Using a Curriculum Map: Once learning outcomes have been developed and approved, a curriculum map can help show how those outcomes are being met in each course at an institution. A curriculum map is a straightforward visualization: list the learning outcomes in the rows and the program courses in the columns to show which courses contribute to each learning outcome. In each cell, place a letter indicating how the course relates to the outcome: use the letters “I,” “R,” and “E” to identify which courses in the program “introduce,” “reinforce,” or “emphasize” the corresponding learning outcomes. With curriculum maps in place, educators can watch for unnecessary redundancies, inconsistencies, misalignments, weaknesses, and gaps in their learning outcomes and optimize them for student success during program review. A minimal code sketch of this idea follows.
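
    To make the mapping concrete, here is a minimal sketch of a curriculum map in Python. The outcomes, courses, and I/R/E codes are invented for illustration; the point is that a simple outcomes-by-courses grid makes gaps easy to spot.

    ```python
    # Minimal curriculum-map sketch. Outcomes, courses, and I/R/E codes
    # ("introduce", "reinforce", "emphasize") are hypothetical examples.
    curriculum_map = {
        "Interpret primary sources": {"HIST 101": "I", "HIST 210": "R", "HIST 350": "E"},
        "Construct evidence-based arguments": {"HIST 101": "I", "HIST 210": "R"},
        "Evaluate historiographical debates": {"HIST 210": "I"},
    }

    # Scan the grid for outcomes that are introduced but never emphasized.
    for outcome, courses in curriculum_map.items():
        if "E" not in courses.values():
            print(f"Gap: no course emphasizes '{outcome}'")
    ```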

    Measuring Student Learning Outcomes

    Assessment of student learning outcomes: Assessment is a systematic, ongoing way of collecting and interpreting information in order to analyze the effectiveness of instruction. The academic assessment process can also provide greater insight into how well learning outcomes correspond to the goals developed to support the institution’s mission and purpose. An ideal learning outcomes assessment process aims to answer two questions: what is an institution doing, and how well is it doing it? Assessment begins with the expression of learning outcomes and course learning goals. The key to writing measurable outcomes is describing three components of the student-centered assessment cycle: first, analyzing the outcome; second, determining the method of assessment; and third, recognizing the criteria for success.

    Program and performance outcomes: Program and performance outcomes describe the goals of a program rather than what students should know, do, or value at the end of a given time period. Program outcomes can be as one-dimensional and simple as the completion of a task or activity, although such outcomes are not as meaningful as they could be and do not give the educator enough information for improvement. To accomplish the latter, educators and department heads should try to assess how effectively a given program has accomplished what it set out to do. Performance outcomes usually have quantitative targets and specific timelines.

    Source link

  • The Benefits of Distance Learning

    The Benefits of Distance Learning

    What is distance learning?

    Distance learning refers to the education of students who may not always be physically present at a school. Historically, this involved correspondence between an individual and an academic institution by mail. Today, it involves learning through online tools and platforms. A distance learning program can take place entirely online or combine online learning with traditional classroom instruction (called blended or hybrid learning). Massive open online courses (MOOCs), offering large-scale interactive participation and learning resources, are more recent developments in distance learning. During the COVID-19 pandemic and subsequent campus closures, educators and institutions relied heavily on distance learning methods to complete the semester.

    Types of distance learning

    Within the scope of distance education there are two very important concepts: synchronous and asynchronous learning.

    Synchronous learning

    Synchronous learning requires some form of live communication during class time. It offers a less flexible learning plan because classes are conducted on a set schedule, using videoconferencing or live online webinars.

    • Fixed-time online courses are the most common type of distance education. Students sign into their online educational portal to access distance learning resources, including live class video streams. Using this method, students and instructors make use of live chats and discussion boards for communication.
    • Video conferencing takes advantage of tools and platforms, like Zoom, that have expansive capabilities and can be used globally. Video conferencing provides learning opportunities for students by allowing them to see their instructors and peers in real time, creating a sense of community in the virtual classroom.

    Synchronous distance learning most closely mirrors the typical in-class experience. Delivering course content virtually in real time creates a sense of intimacy and timeliness that is particularly effective for increasing student engagement. Depending on supporting technology, such as learning management platforms, educators can also respond directly to questions and discussions, provide feedback and use interactive polling and click-on-target questions to gauge comprehension and ensure students are moving in the right direction. This includes the ability for attending students to access lecture slides, engage with their peers in discussion threads, and answer interactive questions.

    Synchronous learning provides opportunities to apply concepts and collaborate. It’s especially useful when teaching material that requires immediate feedback or clarification to keep students on track. There are important social benefits as well. Given the new normal students and faculty find themselves contending with, the opportunity to connect with peers, work together and see each other can go a long way in alleviating the sense of isolation many may feel when learning in a virtual environment.

    Asynchronous learning

    Asynchronous learning allows students to work at their own pace, and normally follows a clearly defined syllabus with weekly deadlines for homework and other assignments. Students have regular access to their peers and their instructors, although this is typically managed through email and discussion boards.

    • Open schedule online courses give students the greatest amount of freedom. Deadlines are pre-set, and students are encouraged to be self-sufficient and complete their assignments on their own timelines. Without dedicated class time, students complete their coursework whenever they choose to allot the time to do so. Final exams normally occur at the end of the semester and are open for several days, giving students some flexibility as to when they take them.
    • Hybrid distance education combines synchronous and asynchronous methods of online learning. Students must adhere to specific predetermined deadlines for assignment completion. The majority of the coursework is completed online, but in some cases the student can speak with an instructor through live chats or video conferencing. Hybrid distance education may also include attending a physical classroom for certain periods of time; conversely, it may involve covering specific modules in person and then returning to distance learning to complete additional modules and assignments.

    Asynchronous learning is particularly beneficial for students with varying levels of Internet access who find it difficult to follow a specific schedule. Accessing all course materials, readings and assignments in a single place allows students to explore topics in detail and at their own pace. Discussion forums and one-to-one communications through email are simple ways to create engagement, even if much of the learning is self-directed. Asynchronous learning also provides the opportunity for instructors to promote peer collaboration, creating specific assignments that require students to work with each other or review each other’s work outside the confines of a class schedule.

    Without the benefit of live interaction, it’s especially important for students and instructors alike to communicate, or even over-communicate, as the case may be. Among the disadvantages of asynchronous learning are student apathy and isolation. Taking time to set course expectations, provide clear assignment instructions, and respond to student emails and discussion thread posts is essential.

    How distance learning impacts students

    There are many advantages to distance education. Online courses provide a more accessible learning experience for students. Accessibility in higher education means all students are provided with an equal opportunity to access course materials, and it should be top of mind for educators in planning how to deliver their courses. It is not realistic to assume that all students have access to online materials outside of the traditional classroom, and even when they do, it’s important to take time to orient students properly. A Top Hat survey of more than 3,000 students found that 28 percent reported difficulty navigating and using online learning resources and tools. Accessibility goes hand-in-hand with flexibility: letting students choose how and where learning takes place can reduce barriers to success in higher ed.

    Forming an accessible course starts with giving careful consideration to ensuring all students can benefit from your teaching model.

    In online learning environments, students may feel isolated from their peers and campus communities. Participation has therefore become even more important with the shift to remote education. With in-person learning, instructors can gauge by a show of hands who understood the course material. In an online environment, opportunities for participation, such as discussion questions interspersed throughout lecture presentations, can help bridge the gap. Engagement in the classroom may start with icebreaker activities and diagnostic assessments. From there, instructors should consider introducing more collaborative activities such as case studies and debates to ensure students have ample opportunity to put theory into practice.

    Academic success isn’t the only concern students face. Stable housing and regular access to food, along with physical and mental health resources, are also top of mind for today’s college students, particularly in the midst of the coronavirus pandemic. Empathetic teaching practices, such as shortening lecture modules to provide students with key takeaways and making those lectures available for students to review on their own, are essential in creating supportive learning communities. Empowering students starts with respecting their individual needs and circumstances. It’s also important to dedicate time to connecting with students beyond the actual class schedule. As part of the responsibilities of teaching in an online learning environment, instructors should set aside time to answer students’ questions, provide feedback, and connect with them on a more personal level, much as they might on a social media site.

    The future of distance learning

    Students were okay with “good enough” online education at the height of the pandemic and subsequent school closures, according to Top Hat’s COVID-19 State of Flux Survey results. But they will be less likely to put up with subpar learning in the coming semester. The good news is that many students see value in the flexibility of virtual learning. In fact, around a third of students would prefer a blended approach, with both in-person and online components. The key to success is improving the online experience and ensuring students see the return on their academic investment.

    It is clear that distance learning is here to stay. The fall semester is approaching and pressure on institutions to be ready to teach effectively is increasing. Regardless of what the situation on college campuses looks like in the fall, it is paramount to ensure students see the value of investing their time and effort in courses that may need to be delivered online.

    Even when institutions reopen their physical doors and life returns to ‘normal,’ the ability to teach online, in-person or some combination of the two will yield important benefits in terms of flexibility, as well as dimensionalizing the learning experience. As educators and students grow more comfortable and more confident with the virtual classroom, so do the opportunities to infuse learning with new experiences and new possibilities.

    Source link

  • 2024 Top Tools for Learning Votes – Teaching in Higher Ed

    2024 Top Tools for Learning Votes – Teaching in Higher Ed

    Each year, I look forward to reviewing the results of Jane Hart’s Top 100 Tools for Learning and to submitting my votes for a personal Top Tools for Learning list. I haven’t quite been writing up my list every single year (I missed 2020 and 2023), but I did submit a top 10 list in 2015, 2016, 2017, 2018, 2019, 2021, and 2022. I avoid looking at the prior year’s lists until I have identified my votes for the current year.

    My 2024 Top Tools for Learning

    Below are my top 10 Tools for Learning for 2024. The biggest change in my learning tools involves using social media less, most specifically that service that used to have an association with a blue bird and can most closely be associated with a cesspool these days.

    Overcast

    This podcast catcher is a daily part of my life and learning. Overcast has key features like smart speed and voice boost, which you can have for free with some non-intrusive ad placements, or you can pay a small fee for a pro subscription to hide the ads from view. Overcast received a major design overhaul in March of 2022, which led me to reorganize my podcast playlists to take full advantage of the new features.

    Unread

    While Overcast is for the spoken word, Unread is primarily for written pieces. Powered by Really Simple Syndication (RSS), Unread presents me with headlines of unread stories across all sorts of categories, which I can tap (on my iPad) to read, or scroll past to automatically mark as read. I use Unread in conjunction with Inoreader, a robust RSS aggregator that can be used as an RSS reader in its own right or paired with a dedicated reader such as Unread. A small sketch of how simple RSS polling can be appears below.
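
    For the curious, here is a minimal sketch of the machinery underneath this workflow, using the third-party feedparser library; the feed URL is a placeholder, not a real feed.

    ```python
    # Minimal RSS polling sketch (pip install feedparser).
    # The feed URL below is a placeholder.
    import feedparser

    feed = feedparser.parse("https://example.com/feed.xml")
    print(feed.feed.get("title", "(untitled feed)"))
    for entry in feed.entries[:5]:
        # Each entry carries the headline and link a reader like Unread displays.
        print(entry.title, "->", entry.link)
    ```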

    LinkedIn

    The biggest change from prior years’ surveys has to do with social media. The bird app just isn’t what it used to be. Most of my professional learning via social media now takes place on LinkedIn. If you’re on LinkedIn, please follow me and the Teaching in Higher Ed page.

    YouTube

    Once I found out that I could subscribe to new YouTube videos in my RSS reader, Inoreader, it changed how often I watch YouTube videos. That, plus subscribing to YouTube Premium, which gives us ad-free viewing as a family, means I spend a lot more time with YouTube. I even have my own YouTube channel, where I occasionally post videos, most recently about my course redesign and use of LiaScript.
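
    If you want to try this yourself, YouTube exposes a per-channel RSS feed at a well-known URL pattern. A tiny sketch, with a placeholder channel ID:

    ```python
    # Subscribe to a YouTube channel by RSS; the channel ID is a placeholder.
    import feedparser

    channel_id = "UCxxxxxxxxxxxxxxxxxxxxxx"  # placeholder channel ID
    url = f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"
    for entry in feedparser.parse(url).entries:
        print(entry.title)
    ```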

    Loom

    The expression tells us that it is better to show than tell in many contexts. Loom is a simple screencasting tool. Record what’s on your screen (with or without your face included via your webcam), and as soon as you press stop, a link is automatically copied to your clipboard, ready to paste anywhere you want. I use Loom for simple explanations, to have asynchronous conversations with colleagues and students, to record how-to videos, and to invite students to share what they’re learning. If you verify your Loom account as an educator, you get the pro features for free.

    Kindle App

    I primarily read digitally and find the Kindle iPad app to be the easiest route for reading. I read more, in total, when I am disciplined about using the Kindle hardware, but wind up grabbing my iPad most nights.

    Readwise

    It is so easy to highlight sections of what I’m reading in the Kindle app and have those highlights sync over to a service called Readwise. The service “makes it easy to revisit and learn from your ebook and article highlights.”

    Canva

    My use of the graphic design website Canva has evolved over the years. I started by using it to create graphics and printable signs for classes. Now I also use it to create presentations (which can include embedded content, slides, videos, etc.). For some presentations I’m doing in the coming weeks, I’m experimenting with using Beautiful.ai for my presentations. I still think Canva is great, but am having fun trying something new.

    Raindrop.io

    Probably more than any other app, I use Raindrop on a daily basis. It is a digital bookmarking tool. I wrote about how I use Raindrop in late 2020. I continue to see daily benefits with having such a simple-yet-robust way of making sense of all the information coming at me on a daily basis.

    Craft

    I don’t change my core productivity apps very often. In the case of Craft, once I made the switch, I never looked back. This app has both date-based and topic-based note-taking, as well as individual and collaborative features. From their website: “Craft is where people go to ideate, organize, and share their best work.”

    Those are my top ten for the year, in no particular order. The first draft of this post had eleven items, since I lost count as I was going. Zoom is so much a part of almost every day that it winds up getting forgotten, given its ubiquity in my life. I’m leaving it in this post, even though it takes me over my count of ten.

    Zoom

    I use Zoom so often that one year I left it off my top ten list entirely, because it is just always there. Recent enhancements I have grown to appreciate are the built-in timer app, the AI transcripts and summaries, and the ability to present slides while people are in breakout rooms.

    Your Turn

    Would you like to submit a vote with your Top Tools for Learning? You can fill out a form, write a blog post, or even share your picks on Twitter. The 2024 voting will continue through Friday, August 30, 2024 and the results will be posted by Monday, September 2, 2024.

    Source link

  • CBE Learning Platform Architecture White Paper –

    CBE Learning Platform Architecture White Paper –

    Earlier this year, I had the pleasure of consulting for the Education Design Lab (EDL) on their search for a Learning Management System (LMS) that would accommodate Competency-Based Education (CBE). While many platforms, especially in the corporate Learning and Development space, talked about skill tracking and pathways in their marketing, the EDL team found a bewildering array of options that looked good in theory but failed in practice. My job was to help them separate the signal from the noise.

    It turns out that only a few defining architectural features of an LMS will determine its fitness for CBE. These features are significant but not prohibitive development efforts. In fact, many of the firms we talked to, once they understood the true core requirements, said they could modify their platforms to accommodate CBE but do not currently see enough demand among customers to invest the resources required.

    This white paper, which outlines the architectural principles I discovered during the engagement, is based on my consulting work with EDL and is released with their blessing. In addition to the white paper itself, I provide some suggestions for how to move the vendors and a few comments about other missing pieces in the CBE ecosystem that may be underappreciated.

    The core principles

    The four basic principles for an LMS or learning platform to support CBE are simple:

    • Separate skill tree: Most systems have learning objectives that are attached to individual courses. The course is about the learning objectives. One of the goals of CBE is to create more granular tracking of progress that may run across courses. A skill learned in one course may count toward another. So a CBE platform must include a skill tree as a first-class citizen of the architecture, separate from the course. (A minimal sketch of this idea appears after the list.)
    • Mastery learning: This heading includes a range of features, from standardized and simplified grading (e.g., competent/not-yet) to gates in which learners may only pass to the next competency after mastering the one they’re on. Many learning platforms already have these features. But they are not tied to a separate skill tree in a coherent way that supports mastery learning. This is not a huge development effort if the skill tree exists. And in a true CBE platform, it could mean being able to get rid of the grade book, which is a hideous, painful, never-ending time sink for LMS product developers.
    • Integration: In a traditional learning platform, the main integration points are with the registrar or talent management system (tracking registrations and final scores) and external tools that plug into the environment. A CBE platform must import skills, export evidence of achievement, and sometimes work as a delivery platform that gets wrapped into somebody else’s LMS (e.g., a university course built and run on their learning platform but appearing in a window of a corporate client’s learning platform). Most of these are not hard if the first two requirements are developed but they can require significant amounts of developer time.
    • Evidence of achievement: CBE standards increasingly lean toward rich packages that provide not only certification of achievement but also evidence of it. That means the learner’s work must be exportable. This can get complicated, particularly if third-party tools are integrated to provide authentic assessments.
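
    To illustrate the first two principles, here is a minimal sketch, with invented names, of a skill tree that lives outside any course, a many-to-many course-to-skill mapping, and a mastery gate. It is a thought experiment, not a description of any vendor’s implementation.

    ```python
    # Sketch: skills as first-class objects, separate from courses.
    # All identifiers are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Skill:
        skill_id: str
        name: str
        prerequisites: list = field(default_factory=list)  # prerequisite skill_ids

    # The skill tree exists independently of any course...
    skills = {
        "S1": Skill("S1", "Summarize a technical document"),
        "S2": Skill("S2", "Critique a technical document", prerequisites=["S1"]),
    }

    # ...and courses merely reference skills, so a skill mastered in one
    # course can count toward another.
    course_skill_map = {
        "WRIT 101": ["S1"],
        "WRIT 201": ["S1", "S2"],
    }

    mastered = {"S1"}  # mastery-style grading: competent / not yet

    def can_attempt(skill_id: str) -> bool:
        """Mastery gate: a learner may attempt a skill only after
        mastering all of its prerequisites."""
        return all(p in mastered for p in skills[skill_id].prerequisites)

    print(can_attempt("S2"))  # True: S1 is mastered
    ```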

    The full white paper is here:

    Getting the vendors to move

    Vendors are beginning to move toward support for CBE, albeit slowly and piecemeal. I emphasize that the problem is not a lack of capability on their part to support CBE. It’s a lack of perceived demand. Many platform vendors can support these changes if they understand the requirements and see strong demand for them. CBE-interested organizations can take steps to accelerate vendor progress.

    First, provide the vendors with this white paper early in the selection process and tell them that your decision will be partly driven by their demonstrated ability to support the architecture described in the paper. Ask pointed questions and demand demos.

    Second, go to interoperability standards bodies like 1EdTech and work with them to establish a CBE reference architecture. Nothing in the white paper requires new interoperability standards any more than it requires a radical, ground-up rebuild of a learning platform. But if a standards body were to put them together into one coherent picture and offer a certification suite to test for the integrations, it could help. (Testing for the platform-internal functionality like competency dashboards is often outside the remit of interoperability groups, although there’s no law preventing them from taking it on.)

    Unfortunately, the mere existence of these standards and tests doesn’t guarantee that vendors will flock to implement CBE-friendly architectures. But the creation process can help rally a group that demonstrates demand, while the standard itself makes the bar vendors have to meet clear and verifiable.

    What’s still missing

    Beyond the learning platform architecture, I see two pieces that seem to be under-discussed amid the impressive amount of CBE interoperability and coalition-building work that’s been happening lately. I already wrote about the first, which is capturing real job skills in real-time at a level of fidelity that will convince employers your competencies are meaningful to them. This is a hard problem, but it is becoming solvable with AI.

    The second one is tricky to even characterize but it has to do with the content production pipeline. Curricular materials publishers, by and large, are not building their products in CBE-friendly ways. Between the weak third-party content pipeline and the chronic shortage of learning design talent relative to the need, CBE-focused institutions often either tie themselves in knots trying to solve this problem or throw up their hands, focusing on authentic certification and mentoring. But there’s a limit to how much you can improve retention and completion rates if you don’t have strong learning experiences, including formative assessments that enable you to track students’ progress toward competency, address the sticking points in learning particular skills, and so on. This is a tough bind since institutions can’t ignore the quality of learning materials, can’t rely on third parties, and can’t keep up with demand themselves.

    Adding to this problem is a tendency to follow the CBE yellow brick road to what may look like its logical conclusion of atomizing everything. I’m talking about reusable learning objects. I first started experimenting with them at scale in 1998. By 2002, I had given up, writing instead about instructional design techniques to make recyclable learning objects. And that was within corporate training—as it is, not as we imagine it—which tends to focus on a handful of relatively low-level skills for limited and well-defined populations. The lack of a healthy Learning Object Repository (LOR) market should tell us something about how well reusable learning object strategy holds up under stress.

    And yet, CBE enthusiasts continue to find it attractive. In theory, it fits well with the view of smaller learning chunks that show up in multiple contexts. In practice, the LOR usually does not solve the right problems in the right way. Version control, discoverability, learning chunk size, and reusability are all real problems that have to be addressed. But because real-world learning design needs often can’t be met with content legos, starting from a LOR and adding complexity to fix its shortcomings usually brings a lot of pain without commensurate gain.

    There is a path through this architectural mess, just like there is a path through the learning platform mess. But it’s a complicated one that I won’t lay out in detail here.

    Source link

  • AI Learning Design Workshop: Solving for CBE –

    AI Learning Design Workshop: Solving for CBE –

    I recently announced a design/build workshop series for an AI Learning Design Assistant (ALDA). The idea is simple:

    • If we can reduce the time it takes to design a course by about 20%, organizations that need to build enough courses to strain their budgets and resources will see “huge” productivity and quality gains.
    • We should be able to use generative AI to achieve that goal fairly easily without taking ethical risks and without needing to spend massive amounts of time or money.
    • Beyond the immediate value of ALDA itself, learning the AI techniques we will use—which are more sophisticated than learning to write better ChatGPT prompts but far less involved than trying to build our own ChatGPT—will help the participants learn to accomplish other goals with AI.

    In today’s post, I’m going to provide an example of how the AI principles we will learn in the workshop series can be applied to other projects. The example I’ll use is Competency-Based Education (CBE).

    Can I please speak to your Chief Competency Officer?

    The argument for more practical, career-focused education is clear. We shouldn’t just teach the same dusty old curriculum with knowledge that students can’t put to use. We should prepare them for today’s world. Teach them competencies.

    I’m all for it. I’m on board. Count me in. I’m raising my hand.

    I just have a few questions:

    • How many companies are looking at formally defined competencies when evaluating potential employees or conducting performance reviews?
    • Of those, how many have specifically evaluated catalogs of generic competencies to see how well they fit with the skills their specific job really requires?
    • Of those, how many regularly check the competencies to make sure they are up-to-date? (For example, how many marketing departments have adopted generative AI prompt engineering competencies in any formal way?)
    • Of those, how many are actively searching for, identifying, and defining new competency needs as they arise within their own organizations?

    The sources I turn to for such information haven’t shown me that these practices are being implemented widely yet. When I read the recent publications on SkillsTech from Northeastern University’s Center for the Future of Higher Education and Talent Strategy (led by Sean Gallagher, my go-to expert on these sorts of changes), I see growing interest in skills-oriented thinking in the workplace with still-immature means for acting on that interest. At the moment, the sector seems to be very focused on building a technological factory for packaging, measuring, and communicating formally defined skills.

    But how do we know that those little packages are the ones people actually need on the job, given how quickly skills change and how fluid the need to acquire them can be? I’m not skeptical about the worthiness of the goal. I’m asking whether we are solving the hard problems that are in the way of achieving it.

    Let’s make this more personal. I was a philosophy major. I often half-joke that my education prepared me well for a career in anything except philosophy. What were the competencies I learned? I can read, write, argue, think logically, and challenge my own assumptions. I can’t get any more specific or fine-grained than that. I know I learned more specific competencies that have helped me with my career(s). But I can’t tell you what they are. Even ones that I may use regularly.

    At the same time, very few of the jobs I have held in the last 30 years existed when I was an undergraduate. I have learned many competencies since then. What are they? Well, let’s see…I know I have a list around here somewhere….

    Honestly, I have no idea. I can make up phrases for my LinkedIn profile, but I can’t give you anything remotely close to a full and authentic list of competencies I have acquired in my career. Or even ones I have acquired in the last six months. For example, I know I have acquired competencies related to AI and prompt engineering. But I can’t articulate them in useful detail without more thought and maybe some help from somebody who is trained and experienced at pulling that sort of information out of people.

    The University of Virginia already has an AI in Marketing course up on Coursera. In the next six months, Google, OpenAI, and Facebook (among others) will come out with new base models that are substantially more powerful. New tools will spring up. Practices will evolve within marketing departments. Rules will be put in place about using such tools with different marketing outlets. And so, competencies will evolve. How will the university be able to refresh that course fast enough to keep up? Where will they get their information on the latest practices? How can they edit their courses quickly enough to stay relevant?

    How can we support true Competency-Based Education if we don’t know which competencies specific humans in specific jobs need today, including competencies that didn’t exist yesterday?

    One way for AI to help

    Let’s see if we can make our absurdly challenging task of keeping an AI-in-marketing CBE course up-to-date more tractable by applying a little AI. We’ll only assume access to tools that are coming on the market now—some of which you may already be using—and ALDA.

    Every day I read about new AI capabilities for work. Many of them, interestingly, are designed to capture information and insights that would otherwise be lost. A tool to generate summaries and to-do lists from videoconferences. Another to annotate software code and explain what it does, line-by-line. One that summarizes documents, including long and technical documents, for different audiences. Every day, we generate so much information and witness so many valuable demonstrations of important skills that are just…lost. They happen and then they’re gone. If you’re not there when they happen and you don’t have the context, prior knowledge, and help to learn them, you probably won’t learn from them.

    With the AI enhancements that are being added to our productivity tools now, we can increasingly capture that information as it flies by. Zoom, Teams, Slack, and many other tools will transcribe, summarize, and analyze the knowledge in action as real people apply it in their real work.

    This is where ALDA comes in. Don’t think of ALDA as a finished, polished, carved-in-stone software application. Think of it as a working example of an application design pattern. It’s a template.

    Remember, the first step in the ALDA workflow is a series of questions that the chatbot asks the expert. In other words, it’s a learning design interview. A learning designer would normally conduct an interview with a subject-matter expert to elicit competencies. But in this case, we make use of the transcripts generated by those other AI tools as a direct capture of the knowledge-in-action that such interviews are designed to tease out.

    ALDA will incorporate a technique called “Retrieval-Augmented Generation,” or “RAG.” Rather than relying on—or hallucinating—the generative AI’s own internal knowledge, it can access your document store. It can help the learning designer sift through the work artifacts and identify, for example, the AI skills the marketing team had to apply when planning and executing their most recent social media campaign. A toy sketch of the retrieval step follows.
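
    As a rough illustration of the pattern (not ALDA’s actual implementation), here is a toy retrieval step. TF-IDF similarity stands in for the embeddings and vector store a production system would use, but the shape is the same: retrieve the most relevant passages, then stuff them into the prompt.

    ```python
    # Toy RAG retrieval sketch. The "document store" is a list of invented
    # meeting-transcript snippets; TF-IDF stands in for embeddings.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    documents = [
        "Transcript: the team drafted prompts to segment the campaign audience.",
        "Transcript: legal reviewed disclosure rules for AI-generated ad copy.",
        "Transcript: Q3 budget review and headcount planning.",
    ]
    query = "What AI skills did the marketing team apply in the campaign?"

    vectors = TfidfVectorizer().fit_transform(documents + [query])
    scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

    # Prepend the top passages to the prompt so the generative model
    # grounds its draft in captured work artifacts.
    context = "\n".join(documents[i] for i in scores.argsort()[::-1][:2])
    prompt = (
        "Using only the context below, list the competencies applied.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    print(prompt)
    ```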

    Using RAG and the documents we’ve captured, we develop a new interview pattern that creates a dialog between the human expert, the distilled expert practices in the document store, and the generative AI (which may be connected to the internet and have its own current knowledge). That dialogue will look a little different from the one we will script in the workshop series. But that’s the point. The script is the scaffolding for the learning design process. The generative AI in ALDA helps us execute that process, drawing on up-to-the-minute information about applied knowledge we’ve captured from subject-matter experts while they were doing their jobs.

    Behind the scenes, ALDA has been given examples of what its output should look like. Maybe those examples include well-written competencies, knowledge required to apply those competencies, and examples of those competencies being properly applied. Maybe we even wrap your ALDA examples in a technical format like Rich Skill Descriptors. Now ALDA knows what good output looks like.

    That’s the recipe. If you can use AI to get up-to-date information about the competencies you’re teaching and to convert that information into a teachable format, you’ve just created a huge shortcut. You can capture real-time workplace applied knowledge, distill it, and generate the first draft of a teachable skill.

    The workplace-university CBE pipeline

    Remember my questions early in this post? Read them again and ask yourself whether the workflow I just described could change the answers in the future:

    • How many companies are looking at formally defined competencies when evaluating potential employees or conducting performance reviews?
    • Of those, how many have specifically evaluated catalogs of generic competencies to see how well they fit with the skills their specific job really requires?
    • Of those, how many regularly check the competencies to make sure they are up-to-date? (For example, how many marketing departments have adopted relevant AI prompt engineering competencies in any formal way?)
    • Of those, how many are actively searching for, identifying, and defining new competency needs as they arise?

    With the AI-enabled workflow I described in the previous section, organizations can plausibly identify critical, up-to-date competencies as they are being used by their employees. They can share those competencies with universities, which can create and maintain up-to-date courses and certification programs. The partner organizations can work together to ensure that students and employees have opportunities to learn the latest skills as they are being practiced in the field.

    Will this new learning design process be automagic? Nope. Will it give us a robot tutor in the sky that can semi-read our minds? Nuh-uh. The human educators will still have plenty of work to do. But they’ll be performing higher-value work better and faster. The software won’t cost a bazillion dollars, you’ll understand how it works, and you can evolve it as the technology gets better and more reliable.

    Machines shouldn’t be the only ones learning

    I think I’ve discovered a competency that I’ve learned in the last six months. I’ve learned how to apply simple AI application design concepts such as RAG to develop novel and impactful solutions to business problems. (I’m sure my CBE friends could express this more precisely and usefully than I have.)

    In the months between now, when my team finishes building the first iteration of ALDA, and when the ALDA workshop participants finish the series, technology will have progressed. The big AI vendors will have released at least one generation of new, more powerful AI foundation models. New players will come on the scene. New tools will emerge. But RAG, prompt engineering, and the other skills the participants develop will still apply. ALDA itself, which will almost certainly use tools and models that haven’t been released yet, will show how the competencies we learn still apply and how they evolve in a rapidly changing world.

    I hope you’ll consider enrolling your team in the ALDA workshop series. The cost, including all source code and artifacts, is $25,000 for the team. You can find an application form and prospectus here. Applications will be open until the workshop is filled. I already have a few participating teams lined up and a handful more that I am talking to.

    You can also find a downloadable two-page prospectus and an online participation application form here. To contact me for more information, please fill out this form:

    You can also write me directly at [email protected].

    Please join us.

    Source link

  • Announcing a Design/Build Workshop Series for an AI Learning Design Assistant (ALDA) –

    Announcing a Design/Build Workshop Series for an AI Learning Design Assistant (ALDA) –

    Want to build an AI tool that will seriously impact your digital learning program? Right now? For a price that you may well have in your professional development budget?

    I’m launching a project to prove we can build a tool that will change the economics of learning design and curricular materials in months rather than years. Its total cost will be low enough to be paid for by workshop participation fees.

    Join me.

    The learning design bottleneck

    Many of my friends running digital course design teams tell me they cannot keep up with demand. Whether their teams are large or small, centralized or instructor-led, higher education or corporate learning and development (L&D), the problem is the same; several friends at large shops have told me that their development of new courses and redesigns of old ones have all but ground to a halt. They don’t have time or money to fix the problem.

    I’ve been asking, “Suppose we could accelerate your time to develop a course by, say, 20%?” Twenty percent is my rough, low-end guess about the gains. We should be able to get at least that much benefit without venturing into the more complex and riskier aspects of AI development. “Would a 20% efficiency gain be significant?” I ask.

    Answer: “It would be huge.”

    My friends tend to cite a few benefits:

    • Unblocked bottlenecks: A 20% efficiency gain would be enough for them to start building (or rebuilding) courses at a reasonable speed again.
    • Lower curricular materials costs: Organizations could replace more licensed courses with ones that they own. No more content license costs. And you can edit the courses any way you need to.
    • Better quality: The tool would free up learning designers to build better courses rather than running just to get more courses finished.
    • More flexibility with vendors: Many departments hire custom course design shops. A 20% gain in efficiency would give them more flexibility in deciding when and how to invest their budgets in this kind of consulting.

    The learning design bottleneck is a major business problem for many organizations. Relatively modest productivity gains would make a substantial difference for them. Generative AI seems like a good tool for addressing this problem. How hard and expensive would it be to build a tool that, on average, delivers a 20% gain in productivity?

    Not very hard, not very expensive

    Every LMS vendor, courseware platform provider, curricular materials vendor, and OPM provider is currently working on tools like this. I have talked to a handful of them. They all tell me it’s not hard—depending on your goals. Vendors have two critical constraints. First, the market is highly suspicious of black-box vendor AI and very sensitive to AI products that make mistakes. EdTech companies can’t approach the work as an experiment. Second, they must design their AI features to fit their existing business goals. Every feature competes with other priorities that their clients are asking for.

    The project I am launching—AI Learning Design Assistant (ALDA)—is different. First, it’s design/build. The participants will drive the requirements for the software. Second, as I will spell out below, our software development techniques will be relatively simple and easy to understand. In fact, the value of ALDA is as much in learning patterns to build reliable, practical, AI-driven tools as it is in the product itself. And third, the project is safe.

    ALDA is intended to produce a first draft for learning designers. No students need to see content that has not been reviewed by a human expert or interact directly with the AI at all. The process by which ALDA produces its draft will be transparent and easy to understand. The output will be editable and importable into the organization’s learning platform of choice.

    Here’s how we’ll do it:

    • Guided prompt engineering: Your learning designers probably already have interview questions for the basic information they need to design a lesson, module, or course. What are the learning goals? How will you know if students have achieved those goals? What are some common sticking points or misconceptions? Who are your students? You may ask more or less specific and more or less elaborate versions of these questions, but you are getting at the same ideas. ALDA will start by interviewing the user, who is the learning designer or subject-matter expert. The structure of the questions will be roughly the same. While we will build out one set of interview questions for the workshop series, changing the design interview protocol should be relatively straightforward for programmers who are not AI specialists.
    • Long-term memory: One of the challenges with using a tool like ChatGPT on its own is that it can’t remember what you talked about from one conversation to the next and it might or might not remember specific facts that it was trained on (or remember them correctly). We will be adding a long-term memory function. It can remember earlier answers in earlier design sessions. It can look up specific documents you give it to make sure it gets facts right. This is an increasingly common infrastructure component in AI projects. We will explore different uses of it when we build ALDA. You’ll leave the workshop with the knowledge and example code of how to use the technique yourself.
    • Prompt enrichment: Generative AI often works much better when it has a few really good, rich examples to work from. We will provide ALDA with some high-quality lessons that have been rigorously tested for learning effectiveness over many years. This should increase the quality of ALDA’s first drafts. Again, you may want your learning designs to be different. Since you will have the ALDA source code, you’ll be able to put in whatever examples you want. (A minimal sketch of this few-shot technique appears after the list.)
    • Generative AI export: We may or may not get to building this feature depending on the group’s priorities in the time we have, but the same prompt enrichment technique we’ll use to get better learning output can also be used to translate the content into a format that your learning platform of choice can import directly. Our enrichment examples will be marked up in software code. A programmer without any specific AI knowledge can write a handful of examples translating that code format into the one that your platform needs. You can change it, adjust it, and enrich it if you change platforms or if your platform adds new features.
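
    To give a flavor of the prompt-enrichment technique, here is a minimal few-shot sketch. The lesson markup and field names are invented for illustration; any chat-style LLM API accepts a messages list along these lines.

    ```python
    # Minimal prompt-enrichment (few-shot) sketch. The gold example and its
    # markup are invented; the point is that high-quality, semantically
    # marked-up examples travel with every request so the model imitates
    # their structure.
    GOLD_EXAMPLE = """\
    <lesson>
      <objective verb="classify">Classify triangles by side length.</objective>
      <assessment>Label six triangles as scalene, isosceles, or equilateral.</assessment>
      <feedback misconception="confuses isosceles with equilateral">
        Equilateral means all three sides match, not just two.
      </feedback>
    </lesson>"""

    def build_messages(interview_notes: str) -> list:
        """Assemble a chat-style request: instructions, one gold example,
        then the designer's notes gathered during the design interview."""
        return [
            {"role": "system",
             "content": "You draft lessons in the exact markup of the example."},
            {"role": "user",
             "content": f"Example of a well-formed lesson:\n{GOLD_EXAMPLE}"},
            {"role": "user",
             "content": f"Interview notes:\n{interview_notes}\n"
                        "Draft a lesson in the same format."},
        ]

    messages = build_messages("Goal: students distinguish primary vs secondary sources.")
    for m in messages:
        print(m["role"].upper(), "-", m["content"][:60])
    ```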

    The consistent response from everyone in EdTech I’ve talked to who is doing this kind of work is that we can achieve ALDA’s performance goals with these techniques. If we were trying to get 80% or 90% accuracy, that would be different. But a 20% efficiency gain with an expert human reviewing the output? That should be very much within reach. The main constraints on the ALDA project are time and money. Those are deliberate. Constraints drive focus.

    Let’s build something useful. Now.

    The collaboration

    Teams that want to participate in the workshop will have to apply. I’m recruiting teams that have immediate needs to build content and are willing to contribute their expertise to making ALDA better. There will be no messing around. Participants will be there to build something. For that reason, I’m quite flexible about who is on your team or how many participate. One person is too few, and eight is probably too many. My main criterion is that the people you bring are important to the ALDA-related project you will be working on.

    This is critical because we will be designing ALDA together based on the experience and feedback from you and the other participants. In advance of the first workshop, my colleagues and I will review any learning design protocol documentation you care to share and conduct light interviews. Based on that information, you will have access to the first working iteration of ALDA at the first workshop. For this reason, the workshop series will start in the spring. While ALDA isn’t going to require a flux capacitor to work, it will take some know-how and effort to set up.

    The workshop cohort will meet virtually once a month after that. Teams will be expected to have used ALDA and come up with feedback and suggestions. I will maintain a rubric for teams to use based on the goals and priorities for the tool as we develop them together. I will take your input to decide which features will be developed in the next iteration. I want each team to finish the workshop series with the conviction that ALDA can achieve those performance gains for some important subset of their course design needs.

    Anyone who has been to one of my Empirical Educator Project (EEP) or Blursday Social events knows that I believe that networking and collaboration are undervalued at most events. At each ALDA workshop, you will have time and opportunities to meet with and work with each other. I’d love to have large universities, small colleges, corporate L&D departments, non-profits, and even groups of students participating. I may accept EdTech vendors if and only if they have more to contribute to the group effort than just money. Ideally, the ALDA project will lead to new collaborations, partnerships, and even friendships.

    Teaching AI about teaching and learning

    The workshop also helps us learn together about how to teach AI about teaching and learning. AI research is showing us how much better the technology can be when it’s trained on good data. There is so much bad pedagogy on the internet. And the content that is good is not marked up in a way that makes its patterns easy for an AI to learn. What does a good learning objective or competency look like? How do you write hints or assessment feedback that helps students learn but doesn’t give away the answers? How do you create alignment among the components of a learning design?

    The examples we will be using to teach the AI have not only been fine-tuned for effectiveness using machine learning over many years; they are also semantically coded to capture some of these nuances. These are details that even many course designers haven’t mastered.

    I see a lot of folks rushing to build “robot tutors in the sky 2.0” without a lot of care to make sure the machines see what we see as educators. They put a lot of faith in data science but aren’t capturing the right data because they’re ignoring decades of learning science. The ALDA project will teach us how to teach the machines about pedagogy. We will learn to identify the data structures that will empower the next generation of AI-powered learning apps. And we will do that by becoming better teachers of ALDA using the tools of good teaching: clear goals, good instructions, good examples, and good assessments. Much of it will be in plain English, and the rest will be in a simple software markup language that any computer science undergraduate will know.

    Wanna play?

    The cost for the workshop series, including all source code and artifacts, is $25,000 for your team. You can find an application form and prospectus here. Applications will be open until the workshop is filled. I already have a few participating teams lined up and a handful more that I am talking to.

    You can also find a downloadable two-page prospectus and an online participation application form here. To contact me for more information, please fill out this form:

    [Update: I’m hearing from a couple of you that your messages to me through the form above are getting caught in the spam filter. Feel free to email me at [email protected] if the form isn’t getting through.]

    I hope you’ll join us.

    Source link

  • Journal of Open, Flexible and Distance Learning (JOFDL) Vol 22(2) – Sijen

    Journal of Open, Flexible and Distance Learning (JOFDL) Vol 22(2) – Sijen

    It is my privilege to serve alongside Alison Fields as co-editor of the Journal of Open, Flexible and Distance Learning, an international high-quality peer-reviewed academic journal. I also have a piece in this issue entitled ‘Definitions of the Terms Open, Distance, and Flexible in the Context of Formal and Non-Formal Learning‘.

    Issue 26(2) of the Journal of Open, Flexible and Distance Learning (JOFDL) is now available to the world. It begins with an editorial looking at readership and research trends in the journal post-COVID, followed by a thought-provoking Invited Article about the nature of distance learning by Professor Jon Dron. This general issue continues with seven articles on different aspects of research after COVID-19.
    Alison Fields and Simon Paul Atkinson, JOFDL Joint Editors.

    Editorial

    Post-pandemic Trends: Readership and Research After COVID-19
    Alison Fields, Simon Paul Atkinson (pp. 1-6)

    Invited Article

    Technology, Teaching, and the Many Distances of Distance Learning
    Jon Dron (pp. 7-17)

    Position Piece

    Definitions of the Terms Open, Distance, and Flexible in the Context of Formal and Non-Formal Learning
    Simon Paul Atkinson (pp. 18-28)

    Articles – Primary Studies

    The Role of Non-Verbal Communication in Asynchronous Talk Channels
    Hulbert and Koh

    An Initial Assessment of Soft Skills Integration in Emergency Remote Learning During the COVID-19 Pandemic: A Learners’ Perspective
    Leomar Miano

    Supporting English Language Development of English Language Learners in Virtual Kindergarten: A Parents’ Perspective

    Parents’ Experience with Remote Learning during COVID-19 Lockdown in Zimbabwe
    Lockias Chitanana

    First-year Secondary Students’ Perceptions of the Impact of iPad Use on Their Learning in a BYOD Secondary International School
    Martin Watts & Ioannis Andreadis

    Teaching, Engaging, and Motivating Learners Online Through Weekly, Tailored, and Relevant Communication: Academic Content, Information for the Course, and Motivation (AIM)


    Source link

  • Book on Writing Good Learning Outcomes – Sijen

    Book on Writing Good Learning Outcomes – Sijen

    Introducing a short guide entitled: “Writing Good Learning Outcomes and Objectives”, aimed at enhancing the learner experience through effective course design. Available at https://amazon.com/dp/0473657929

    The book has sections on the function and purpose of intended learning outcomes, as well as guidance on how to write them with validation in mind. Other sections explore the use of different educational taxonomies, some pitfalls to avoid, and the importance of context. There is also a section on ensuring your intended learning outcomes are assessable. The final section deals with how you might design an entire course structure based on well-structured outcomes, breaking those outcomes down into session-level objectives that are not themselves formally assessed.

    #ad #education #highereducation #learningdesign #coursedesign #learningoutcomes #instructionaldesign


    Source link

  • Trends in Higher Ed Employee Learning and Development – CUPA-HR

    Trends in Higher Ed Employee Learning and Development – CUPA-HR

    by CUPA-HR | February 1, 2023

    Employee learning and development (L&D) offerings at higher ed institutions have changed significantly over the last three years. To find out what other institutions are doing in this area, Krista Vaught, assistant director of employee learning and engagement at Vanderbilt University, conducted a survey in the summer of 2022. Survey responses from L&D professionals at 115 institutions reveal the following trends in program delivery, attendance, topics and outcomes.

    Program Delivery

    Since 2020, synchronous online sessions have been offered by most institutions (89 of the 115 surveyed), followed by self-paced modules (85). Some institutions indicated that at certain points they did not host live workshops at all, limiting employees to online, self-paced learning.

    Prior to the pandemic, synchronous, in-person workshops were the primary delivery method at most institutions. Now, synchronous online is the primary method at 35 percent of institutions surveyed, asynchronous online at 30 percent of institutions, synchronous in-person at 18 percent of institutions and hybrid at 17 percent of institutions.

    Attendance

    Attendance and participation have fluctuated. In the early 2020 shift to remote work, there was a sense that employees had newfound time to pursue L&D, at least initially. From March 2020 to December 2021, 31 percent of institutions surveyed saw increased participation, while 27 percent said it was mixed or hard to tell. Eighteen percent said it increased then decreased, and 17 percent said it decreased.

    What did institutions see in 2022? Results were mixed again. Twenty-five percent said attendance and participation were about the same as prior to 2022, 23 percent said they decreased, 21 percent said they increased, and 27 percent said results were mixed or hard to tell.

    What’s causing the fluctuations and challenges in attendance and participation?

    • Time and availability
    • Burnout
    • Increased workload as employees transition back to more on-campus work or take on additional responsibilities because of turnover, leaving less time to pursue learning
    • Unsupportive supervisors who see learning as taking away time from work rather than part of work
    • Employee preference for different delivery methods (in-person versus virtual)
    • Learning opportunities are not always prioritized, resulting in last-minute no-shows

    Topics

    According to respondents, the most popular workshop topics fall into two broad areas: management and leadership, and wellness and communication.

    Assessing Outcomes

    Follow-up surveys are the most popular tool for assessing outcomes of workshops, followed by attendance and participation numbers.

    Prioritizing Learning and Development 

    In the ongoing competition for talent, L&D can be a game changer, both in attracting new talent and retaining the talent you already have. By investing in and prioritizing programs to support managers, develop leaders and promote better communication, institutions can create a workplace that’s hard to leave.

    Interested in more data and insights HR pros can use when brainstorming L&D initiatives, making the case for them, and designing and assessing them? Head over to the full article, Higher Education Learning and Development Trends in 2022 – Where We Are Now and Where We’re Headed (members-only), in the winter issue of Higher Ed HR Magazine.

    To learn how one institution launched a multi-faceted retention initiative, including manager and leadership development opportunities, watch the recording of the recent CUPA-HR webinar Solving the Retention Puzzle.

    Related Resources:

    CUPA-HR Learning Framework and Resources

    Management and Supervisor Training Toolkit (CUPA-HR Knowledge Center)

    Creating Your Individual Development Plan (E-Learning Course)

    Understanding Higher Education (E-Learning Course)



    Source link

  • 2022 Top Tools for Learning Votes – Teaching in Higher Ed

    2022 Top Tools for Learning Votes – Teaching in Higher Ed

    Each year, I look forward to reviewing the results of Jane Hart’s Top 300 Tools for Learning and to submitting my votes for a personal Top Tools for Learning list. I haven’t quite been writing up my list every single year (missed 2020), but I did submit a top 10 list in 2015, 2016, 2017, 2018, 2019, and 2021. I haven’t come across too many others’ 2022 Top Tools for Learning votes, yet, but did enjoy reviewing Mike Taylor’s list.

    I avoid looking at the prior year’s lists until I have identified my votes for the current year. Once my list was finished for 2022, however, I did compare and realized that I had left Zoom off this year. Given that I use Zoom pretty much daily for meetings, teaching, speaking engagements, and podcast interviews, I suspect Zoom has become so integral to my life that it’s like water I can’t see because I’m swimming in it.

    Something that I am still looking forward to getting more practice with is a technique shared by Kevin Kelly on Episode 406 of the Teaching in Higher Ed podcast. Kevin shared about how to turn a Zoom chat into a useful summary and included a sample summary from an AAEEBL Meetup in the show notes for the episode.
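
    As a rough illustration (a generic sketch, not Kevin Kelly’s actual technique), a saved Zoom chat transcript can be grouped into a per-participant summary with a little Python. The line format in the comment is an assumption; Zoom’s export format varies by client version, so the pattern may need adjusting.

    ```python
    # Sketch: group a saved Zoom chat transcript by participant.
    # Assumes lines shaped roughly like:
    #   00:14:02 From Alice Chen to Everyone: I loved the jigsaw activity
    # (Multi-line messages and other export variants are ignored here.)
    import re
    from collections import defaultdict

    LINE = re.compile(
        r"^\d{2}:\d{2}:\d{2}\s+From\s+(?P<name>.+?)\s+to\s+.+?:\s*(?P<msg>.+)$"
    )

    def summarize_chat(path: str) -> dict:
        """Return a mapping of participant name -> list of messages."""
        messages = defaultdict(list)
        with open(path, encoding="utf-8") as f:
            for line in f:
                match = LINE.match(line.strip())
                if match:
                    messages[match["name"]].append(match["msg"])
        return messages

    if __name__ == "__main__":
        for name, msgs in summarize_chat("meeting_saved_chat.txt").items():
            print(f"{name} ({len(msgs)} messages):")
            for msg in msgs:
                print(f"  - {msg}")
    ```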

    Another thing I realize as I reflect on the current and prior years of voting is how much every tool I use fits into a personal knowledge mastery (PKM) system, which I have learned so much about from Harold Jarche over the years. Harold Jarche writes:

    Personal knowledge mastery is a set of processes, individually constructed, to help each of us make sense of our world and work more effectively. PKM keeps us afloat in a sea of information – guided by professional communities and buoyed by social networks.

    PKM is the number one skill set for each of us to make sense of our world, work more effectively, and contribute to society. The PKM framework – Seek > Sense > Share – helps professionals become knowledge catalysts. Today, the best leaders are constant learners.

    Harold was on Episode 213 of the Teaching in Higher Ed podcast, if you would like to learn more about PKM. There is also an entire collection of PKM episodes.

    My 2022 Top Tools for Learning

    Below are my top 10 Tools for Learning for 2022. Jane Hart’s survey methodology has shifted over the years. She now asks us to list each tool and then identify which of three categories we most often use it for: personal learning, workplace learning, or education. Mine overlap quite a bit across those categories, but I’ve done my best to pick the context in which I use each tool most often.

    1. Overcast | Personal Learning | PKM-Seek

    This podcast “catcher” app is a daily part of my life and learning. Overcast received a major design overhaul in March of 2022, which led me to reorganize my podcast playlists to take full advantage of the new features. In October of 2021, I wrote up my podcast favorites, in case you’re interested.

    2. Unread | Personal Learning | PKM-Seek

    While Overcast is for the spoken word, Unread is primarily for written pieces. Powered by Really Simple Syndication (RSS), Unread presents me with headlines of unread stories across all sorts of categories, which I can tap (on my iPad) to read, or scroll past to automatically mark as read. I use Unread in conjunction with Inoreader, a robust RSS aggregator that can either serve as an RSS reader itself or feed a dedicated reader such as Unread.
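
    For the curious, the way RSS brings stories to a reader like Unread is easy to see in code. Here is a minimal sketch using the third-party feedparser library (pip install feedparser); the feed URL is only an example, and any blog or journal feed you follow would work.

    ```python
    # Sketch: pull the latest headlines from one RSS feed.
    import feedparser

    feed = feedparser.parse("https://teachinginhighered.com/feed/")  # example feed

    print(feed.feed.get("title", "Untitled feed"))
    for entry in feed.entries[:5]:  # five most recent items
        print(f"- {entry.title}\n  {entry.link}")
    ```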

    On a related note, if you like the idea of information flowing to you (via RSS) versus you having to go find it – and you like to cook – check out the app Mela. I switched to it in the past year and haven’t looked back.

    3. Twitter | Personal Learning | PKM-Seek

    I continue to benefit from a strong personal learning network (PLN), which for me is at its most vibrant on Twitter. Whether it’s for something as simple as getting good tv/movie recommendations when I am under the weather, or for the deeper purpose of learning from those in the disability community, I find a tangible benefit with almost every visit. Yes, there are major problems on social media platforms, including Twitter. But for me, the key has been who I follow and how I engage in community with others there.

    4. Raindrop | Workplace Learning | PKM-Sense

    While the first three tools I mentioned were all about seeking information, Raindrop is all about sense-making (in the present and future) for me. It is a digital bookmarking tool. I wrote about how I use Raindrop in late 2020. I continue to see daily benefits from having such a simple-yet-robust way of making sense of all the information coming at me. Raindrop recently added the ability to highlight text on a page you have bookmarked, but I haven’t experimented with that feature much yet. If I want to do something with annotations and highlighting, I tend to gravitate toward Hypothes.is, a social annotation tool.

    5. PollEverywhere | Education | PKM-Sense

    When I started my professional career in the early 1990s, I worked for a computer training company. One regular thing that would happen with less-experienced instructors was that they would stand at the front of the class, asking if everyone “got it” or was “with them.” As you can imagine, many times people either didn’t realize that they were lost, or they were too embarrassed to admit it.

    Polling tools like PollEverywhere help surface the confusion of people who don’t realize they have misunderstood something, as well as of those who are reluctant to share their confusion publicly. PollEverywhere also has features to support team collaboration and asynchronous and/or synchronous polling, and it can integrate with a learning management system (LMS). I primarily use PollEverywhere for formative assessment, allowing people to respond anonymously to the questions being posed. I subscribe to the Present plan, which allows me to have up to 700 people responding at one time on a given poll question. People in an education context who need to create reports and access archived poll responses would likely need an Individual Instructor premium account or a department- or university-wide plan.

    6. Padlet | Education | PKM-Sense

    One of many collaborative tools I enjoy using is Padlet, a virtual cork board. I use Padlet to create a shared vision for a class or a team, to create a crowd-sourced music playlist for an event or class, as a parking lot, and to collectively come up with ways to extend learning. This year for our faculty gathering, we have Padlet boards for virtual collaboration and have also printed out posters (with QR codes that point back to the Padlet boards) that people can respond to in person using sticky notes. I love the blend of the analog and the digital that is possible using this approach.
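
    Generating the QR codes for those posters is simple to script. Here is a tiny sketch using the third-party qrcode library (pip install "qrcode[pil]"); the Padlet URL is a placeholder, not a real board.

    ```python
    # Sketch: render a QR code that points back to a Padlet board.
    import qrcode

    board_url = "https://padlet.com/example/faculty-gathering"  # hypothetical URL
    img = qrcode.make(board_url)
    img.save("padlet_poster_qr.png")  # drop this image onto the printed poster
    ```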

    7. Loom | Education | PKM-Share

    Over the past couple of years, Loom has become a part of my daily computing life. It is a simple screencasting tool. Record what’s on your screen (with or without your face included via your webcam), and as soon as you press stop, a link is automatically copied to your computer’s clipboard, ready to paste anywhere you want. I use Loom for simple explanations, to have asynchronous conversations with colleagues and students, to record how-to videos, and to invite students to share what they’re learning. If you verify your Loom account as an educator, you get the pro features for free.

    8. Canva | Workplace Learning | PKM-Share

    My use of the graphic design website Canva has evolved over the years. I started by using it to create graphics and printable signs for classes. Now I also use it to create presentations (which can include embedded content, slides, videos, etc.). As I revisited Canva’s features while writing this post, I discovered even more things I wasn’t aware Canva can do.

    I find the Pro version worthwhile both for work and for Teaching in Higher Ed: being able to include an entire team and give everyone access to brand kits for consistent colors, logos, and other brand assets is a game-changer. We haven’t experimented as much with branded templates or with comments and sharing, but there is so much to gain from using Canva collaboratively. The free plan is also quite generous and worth signing up for, even if you don’t wind up upgrading to Pro or Canva for Teams.

    9. WordPress | Workplace Learning | PKM-Share

    The Teaching in Higher Ed website has been on a hosted WordPress site for so long, I can’t even remember where it resided prior to WordPress. My friend and web developer, Naomi Kasa, has helped keep the site beautiful and functional. One of my favorite features of the site is the page Naomi created with all my upcoming and past speaking engagements. It is great having all that information in one place and to see the collection of resources keep growing over time. Take a look at my resources page for a recent speaking engagement and how I embedded a Canva presentation, which includes use of embedded content and video.

    10. Blubrry | Workplace Learning | PKM-Share

    If you are going to have a podcast and you want to efficiently and effectively get it released to the majority of the various podcast players, you are going to need a podcast hosting company. We have used Blubrry for years now and appreciate its reliability, ease of use, and integration with WordPress.

    Your Turn

    Would you like to submit a vote with your Top Tools for Learning? You can fill out a form, write a blog post, or even share your picks on Twitter. The 2022 voting will continue through Thursday, August 25 and the results will be posted by Tuesday, August 30, 2022.



    Source link