Tag: Workshop

  • Live Workshop on Promoting Your Book Online for Academics

    Live Workshop on Promoting Your Book Online for Academics

    Jennifer van Alstyne and Dr. Sheena Howard designed this live interactive virtual event for professors and researchers like you. Especially if you’ve ever felt like, “I don’t need to do this for me, but I should do this for my book” when it comes to your online presence. Or, if you worry about self-promotion but know your writing / research can help more people if you’re open to sharing it.

Join Dr. Sheena C. Howard and Jennifer van Alstyne for a 90-minute virtual event to help you amplify your work, attract media opportunities, and share your book in meaningful ways.

    We hope you can join us on April 12, 2025 for Promoting Your Book Online for Academics. You’re invited! 💌

    What: 1.5 hour interactive workshop
    When: April 12, 2025 at 11:30am Pacific Time / 2:30pm Eastern Time
    Where: Live on Zoom (there will be a replay)
    With: Jennifer van Alstyne and Dr. Sheena Howard

Promoting Your Book Online for Academics is on April 12, 2025 at 2:30pm Eastern / 11:30am Pacific Time. It will be recorded in case you can’t make it live.

    You should sign up if you’re open to

• Sharing your book (or your research project)
• Exploring opportunities for your book to be featured in the media (even if you aren’t sure where to start)
• Helping more people with the writing / research you already do
• Attracting funding
• Building partnerships or collaborations for your equity-focused work

Promoting Your Book Online for Academics is a live event for academic authors. But it’s not just for your monograph or edited collection. If you’ve written a report, created a resource, or have research outputs you want to share, this interactive workshop is for you.

By the end of this workshop, you’ll know what’s an effective use of your time for media and your online presence.


Hi, I’m Jennifer van Alstyne (@HigherEdPR). I’ve been working 1-on-1 with professors on their online presence since 2018. When I look back on the transformations my clients have gone through, there’s often an emotional journey, not just the capacity-building work we do for your online presence. Most of my clients are authors. The professor writers I work with want their words to reach the right people, but often feel unsure about how to go about that online.

Your book deserves to reach the people you wrote it for. When I ask professors who haven’t promoted their book, “Do you hope more readers find this book?” the answer is often “Yes,” even if the book is older. Even when the book didn’t sell as well as you may have hoped. Even when your book is out of print, there are things you can do to have agency in sharing it online.

In 2021, Dr. Sheena Howard and I teamed up for an intimate live event that helped academics around the world. We’ve been wanting to do another one since. But we wanted something that was really going to help you. Authors have opened up to each of us about what stopped them from sharing their books, sometimes for years. When we were brainstorming whom we most wanted to help with this Promoting Your Book Online for Academics event, these are some of the stories that came up:

    I thought I’d have more support in marketing my book from the press…but it seems to be mostly on me.

    My publisher asked me to build up my social media presence for my new book…I’m not really a social media person.

    My books in the past didn’t do well…I’m worried my new book won’t do well either.

I shared my book once. But I haven’t shared it again on socials since.

    I am unsure if it is too early (or too late) to promote my book.

    If I want to promote my book, when should I be reaching out to media? Before the book launches? After the book launches? I don’t know where to start.

    I don’t think anyone will care about my book.

    I want to go on podcasts to talk about my book, but I haven’t done anything toward that, no.

    Do any of those feel like you? I hope you’ll join us.

    Your book deserves to be out there. You have agency in telling your book’s story. Here’s what’s on the Agenda for this workshop:

• Goal-setting for your digital success as an academic, so you know where to focus your time and energy
    • Sharing your book or research project in meaningful ways on social media (in ways that don’t feel icky)
    • Using media to boost research impact and funding (and how being in the media can help you build relationships)
    • Media opportunities for your book and research even if you’re just starting to explore this path (digital, print, TV, YouTube, podcasts)
    • Live profile and online presence reviews
    • Q&A

    Sign up for Promoting Your Book Online for Academics.

Dr. Sheena C. Howard (@drsheenahoward) is a Professor of Communication. She helps professors get media coverage and visibility through Power Your Research (without the expense of a publicist). She’s been featured by ABC, PBS, BBC, NPR, NBC, The LA Times, The New York Times, The Washington Post, and more for her research on representation, identity, and social justice. Her book Black Comics: Politics of Race and Representation won an Eisner Award. She also wrote The Encyclopedia of Black Comics, which profiles over 100 Black people in the comics industry. And her book Why Wakanda Matters was a clue on Jeopardy.

She’s a writer without limits. I’ve recommended Sheena to some of my clients because she’s someone who helps people move past the limits we sometimes set for ourselves as writers, the worries or beliefs that hold us back. She’s worked closely with writers and creatives to build their capacity and have agency in their media presence so they can make an impact when it matters. You want visibility that makes a difference for you. That invites readers. That can attract opportunities when they’re aligned with what you want for yourself and the world.

This event is for you even if you want to handle your online presence yourself. You won’t have to work with us after the workshop ends. This live event is about implementable strategies and finding focus for what makes sense for sharing your book or research project.

    Frequently asked questions you may be wondering about.

    Where is the workshop?

    This is a live virtual interactive event on Zoom on April 12, 2025 at 11:30am Pacific Time / 2:30pm Eastern Time.

    What if I can’t make it live?

    At our last event, some people knew they wouldn’t be able to attend live when they signed up. A couple people also couldn’t make it live unexpectedly. If you’re unable to join us live on April 12, 2025, you’ll have everything you need.

Jennifer will email you the event replay when it’s finished processing. You’ll get a copy of the take-home worksheet to help you take action, plus the resource guide. That email will also have your private scheduling link for a follow-up meeting with Jennifer if you’d find it supportive to have space to chat about your online presence.

    How much is the workshop?

    This event is $300 USD.

    You can sign up on Dr. Sheena Howard’s Calendly to pay with PayPal.

    Or, email Jennifer for a custom invoice at [email protected]

    Outside of the United States? We had people register from around the world last time. If you run into an issue checking out, Jennifer is happy to create an invoice for you through Wise. Email [email protected]

    This event is non-refundable. If something comes up and you’re unable to join us live on April 12, 2025, you’ll have everything you need.

Jennifer will email you the event replay when it’s finished processing. You’ll get a copy of the take-home worksheet to help you take action, plus the resource guide. That email will also have your private scheduling link for a follow-up meeting with Jennifer if you’d find it supportive to have space to chat about your online presence.

    Can I use professional development funds or research funds to pay for this event?

    Yes. If a custom invoice would be helpful for you, please reach out to [email protected]

    I’m interested in working with Jennifer and Sheena privately. Is this event still for me?

Jennifer and Sheena team up for online presence VIP Days. And some of our clients have worked with us separately, depending on their goals.

    While I’m happy to see how we can work together, this is not a sales event. At our last event, people found having a bit of private space after the event was helpful. So we wanted to be sure you get that private follow up consultation too. If you’re interested in working with us, please do sign up for that Zoom call. We can save time to chat about what may be helpful for you.

    This workshop isn’t in my budget…I still want a stronger online presence for my book / research.

Yay, I’m glad you found this page because I want that for you. You deserve a stronger online presence if that’s something you want for yourself. Best wishes for your online presence, you’ve got this! There are free resources here on The Social Academic blog to help you have a stronger online presence for your book and your research. You can search by category to find what’s helpful for you. You might start with resources related to Authors and Books.

    I don’t think this event is right for me, can I share it with a friend?

    Yes! I’d love that. If this event isn’t right for you, but you think it may be helpful for your friend or colleague, please share it with them. We appreciate you!


Questions about this event? Please don’t hesitate to reach out. I’m happy to answer your question or address any hesitation or concern.

    Email me at [email protected].
    Or, send me a message on LinkedIn.

    Source link

  • Running a Workshop: Guidelines for Engagement and Impact – Faculty Focus

    Running a Workshop: Guidelines for Engagement and Impact – Faculty Focus

    Source link


  • AI Learning Design Workshop: Solving for CBE –

    AI Learning Design Workshop: Solving for CBE –

    I recently announced a design/build workshop series for an AI Learning Design Assistant (ALDA). The idea is simple:

• If we can reduce the time it takes to design a course by about 20%, organizations that need to build enough courses to strain their budgets and resources will see “huge” productivity and quality benefits.
    • We should be able to use generative AI to achieve that goal fairly easily without taking ethical risks and without needing to spend massive amounts of time or money.
    • Beyond the immediate value of ALDA itself, learning the AI techniques we will use—which are more sophisticated than learning to write better ChatGPT prompts but far less involved than trying to build our own ChatGPT—will help the participants learn to accomplish other goals with AI.

    In today’s post, I’m going to provide an example of how the AI principles we will learn in the workshop series can be applied to other projects. The example I’ll use is Competency-Based Education (CBE).

    Can I please speak to your Chief Competency Officer?

    The argument for more practical, career-focused education is clear. We shouldn’t just teach the same dusty old curriculum with knowledge that students can’t put to use. We should prepare them for today’s world. Teach them competencies.

    I’m all for it. I’m on board. Count me in. I’m raising my hand.

    I just have a few questions:

    • How many companies are looking at formally defined competencies when evaluating potential employees or conducting performance reviews?
    • Of those, how many have specifically evaluated catalogs of generic competencies to see how well they fit with the skills their specific job really requires?
    • Of those, how many regularly check the competencies to make sure they are up-to-date? (For example, how many marketing departments have adopted generative AI prompt engineering competencies in any formal way?)
    • Of those, how many are actively searching for, identifying, and defining new competency needs as they arise within their own organizations?

    The sources I turn to for such information haven’t shown me that these practices are being implemented widely yet. When I read the recent publications on SkillsTech from Northeastern University’s Center for the Future of Higher Education and Talent Strategy (led by Sean Gallagher, my go-to expert on these sorts of changes), I see growing interest in skills-oriented thinking in the workplace with still-immature means for acting on that interest. At the moment, the sector seems to be very focused on building a technological factory for packaging, measuring, and communicating formally defined skills.

    But how do we know that those little packages are the ones people actually need on the job, given how quickly skills change and how fluid the need to acquire them can be? I’m not skeptical about the worthiness of the goal. I’m asking whether we are solving the hard problems that are in the way of achieving it.

    Let’s make this more personal. I was a philosophy major. I often half-joke that my education prepared me well for a career in anything except philosophy. What were the competencies I learned? I can read, write, argue, think logically, and challenge my own assumptions. I can’t get any more specific or fine-grained than that. I know I learned more specific competencies that have helped me with my career(s). But I can’t tell you what they are. Even ones that I may use regularly.

    At the same time, very few of the jobs I have held in the last 30 years existed when I was an undergraduate. I have learned many competencies since then. What are they? Well, let’s see…I know I have a list around here somewhere….

    Honestly, I have no idea. I can make up phrases for my LinkedIn profile, but I can’t give you anything remotely close to a full and authentic list of competencies I have acquired in my career. Or even ones I have acquired in the last six months. For example, I know I have acquired competencies related to AI and prompt engineering. But I can’t articulate them in useful detail without more thought and maybe some help from somebody who is trained and experienced at pulling that sort of information out of people.

    The University of Virginia already has an AI in Marketing course up on Coursera. In the next six months, Google, OpenAI, and Facebook (among others) will come out with new base models that are substantially more powerful. New tools will spring up. Practices will evolve within marketing departments. Rules will be put in place about using such tools with different marketing outlets. And so, competencies will evolve. How will the university be able to refresh that course fast enough to keep up? Where will they get their information on the latest practices? How can they edit their courses quickly enough to stay relevant?

    How can we support true Competency-Based Education if we don’t know which competencies specific humans in specific jobs need today, including competencies that didn’t exist yesterday?

    One way for AI to help

Let’s see if we can make our absurdly challenging task of keeping an AI-in-marketing CBE course up-to-date more tractable by applying a little AI. We’ll only assume access to tools that are coming on the market now—some of which you may already be using—and ALDA.

    Every day I read about new AI capabilities for work. Many of them, interestingly, are designed to capture information and insights that would otherwise be lost. A tool to generate summaries and to-do lists from videoconferences. Another to annotate software code and explain what it does, line-by-line. One that summarizes documents, including long and technical documents, for different audiences. Every day, we generate so much information and witness so many valuable demonstrations of important skills that are just…lost. They happen and then they’re gone. If you’re not there when they happen and you don’t have the context, prior knowledge, and help to learn them, you probably won’t learn from them.

    With the AI enhancements that are being added to our productivity tools now, we can increasingly capture that information as it flies by. Zoom, Teams, Slack, and many other tools will transcribe, summarize, and analyze the knowledge in action as real people apply it in their real work.

    This is where ALDA comes in. Don’t think of ALDA as a finished, polished, carved-in-stone software application. Think of it as a working example of an application design pattern. It’s a template.

Remember, the first step in the ALDA workflow is a series of questions that the chatbot asks the expert. In other words, it’s a learning design interview. A learning designer would normally conduct an interview with a subject-matter expert to elicit competencies. But in this case, we make use of the transcripts generated by those other AI tools as a direct capture of the knowledge-in-action that those interviews are designed to tease out.

    ALDA will incorporate a technique called “Retrieval-Augmented Generation,” or “RAG.” Rather than relying on—or hallucinating—the generative AI’s own internal knowledge, it can access your document store. It can help the learning designer sift through the work artifacts and identify the AI skills the marketing team had to apply when that group planned and executed their most recent social media campaign, for example.

    Using RAG and the documents we’ve captured, we develop a new interview pattern that creates a dialog between the human expert, the distilled expert practices in the document store, and the generative AI (which may be connected to the internet and have its own current knowledge). That dialogue will look a little different from the one we will script in the workshop series. But that’s the point. The script is the scaffolding for the learning design process. The generative AI in ALDA helps us execute that process, drawing on up-to-the-minute information about applied knowledge we’ve captured from subject-matter experts while they were doing their jobs.
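
To make this concrete, here is a minimal sketch of the retrieval step, not ALDA’s actual code. It assumes the document store is just a list of captured transcript snippets, and it stubs out the model call (call_llm is a placeholder, not a real API); the relevance scoring is deliberately naive word overlap where a real system would use embeddings and a vector database.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
# document_store: captured transcripts/work artifacts; call_llm(): placeholder
# for whatever generative AI endpoint you actually use.
from typing import List


def relevance(query: str, doc: str) -> float:
    """Naive word-overlap score. A real system would use embeddings."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q) or 1)


def retrieve(query: str, docs: List[str], k: int = 3) -> List[str]:
    """Return the k artifacts most relevant to the designer's question."""
    return sorted(docs, key=lambda d: relevance(query, d), reverse=True)[:k]


def call_llm(prompt: str) -> str:
    """Placeholder for a call to your model of choice (hosted or local)."""
    raise NotImplementedError


def draft_from_artifacts(question: str, document_store: List[str]) -> str:
    """Ground the model's answer in retrieved work artifacts instead of its own memory."""
    context = "\n\n".join(retrieve(question, document_store))
    prompt = (
        "You are assisting a learning designer.\n"
        "Using only the work artifacts below, answer the designer's question.\n\n"
        f"Work artifacts:\n{context}\n\n"
        f"Designer's question: {question}"
    )
    return call_llm(prompt)
```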

    Behind the scenes, ALDA has been given examples of what its output should look like. Maybe those examples include well-written competencies, knowledge required to apply those competencies, and examples of those competencies being properly applied. Maybe we even wrap your ALDA examples in a technical format like Rich Skill Descriptors. Now ALDA knows what good output looks like.
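
Here is a similarly hedged sketch of that prompt enrichment step: a couple of worked examples go into the prompt ahead of the captured notes so the model imitates their structure. The example competencies below are illustrative placeholders I made up, not real Rich Skill Descriptors and not ALDA’s actual examples.

```python
# Few-shot prompt enrichment sketch (illustrative placeholders, not real RSDs).
EXAMPLES = [
    {
        "competency": "Drafts audience-appropriate social media copy with a generative AI assistant.",
        "knowledge": "Prompt structure, brand voice guidelines, platform constraints.",
        "evidence": "Produces platform-specific variants of a campaign post and explains the prompts used.",
    },
    {
        "competency": "Evaluates AI-generated marketing copy for factual accuracy and tone.",
        "knowledge": "Common failure modes of generative AI, fact-checking workflow.",
        "evidence": "Annotates a generated draft, flagging unsupported claims before publication.",
    },
]


def build_enriched_prompt(raw_notes: str) -> str:
    """Wrap captured workplace notes with worked examples of good output."""
    shots = "\n\n".join(
        f"Competency: {e['competency']}\nKnowledge: {e['knowledge']}\nEvidence: {e['evidence']}"
        for e in EXAMPLES
    )
    return (
        "Draft one competency in exactly the same format as the examples.\n\n"
        f"Examples:\n{shots}\n\n"
        f"Workplace notes:\n{raw_notes}\n\n"
        "Competency:"
    )
```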

    That’s the recipe. If you can use AI to get up-to-date information about the competencies you’re teaching and to convert that information into a teachable format, you’ve just created a huge shortcut. You can capture real-time workplace applied knowledge, distill it, and generate the first draft of a teachable skill.

    The workplace-university CBE pipeline

    Remember my questions early in this post? Read them again and ask yourself whether the workflow I just described could change the answers in the future:

    • How many companies are looking at formally defined competencies when evaluating potential employees or conducting performance reviews?
    • Of those, how many have specifically evaluated catalogs of generic competencies to see how well they fit with the skills their specific job really requires?
    • Of those, how many regularly check the competencies to make sure they are up-to-date? (For example, how many marketing departments have adopted relevant AI prompt engineering competencies in any formal way?)
    • Of those, how many are actively searching for, identifying, and defining new competency needs as they arise?

    With the AI-enabled workflow I described in the previous section, organizations can plausibly identify critical, up-to-date competencies as they are being used by their employees. They can share those competencies with universities, which can create and maintain up-to-date courses and certification programs. The partner organizations can work together to ensure that students and employees have opportunities to learn the latest skills as they are being practiced in the field.

    Will this new learning design process be automagic? Nope. Will it give us a robot tutor in the sky that can semi-read our minds? Nuh-uh. The human educators will still have plenty of work to do. But they’ll be performing higher-value work better and faster. The software won’t cost a bazillion dollars, you’ll understand how it works, and you can evolve it as the technology gets better and more reliable.

    Machines shouldn’t be the only ones learning

    I think I’ve discovered a competency that I’ve learned in the last six months. I’ve learned how to apply simple AI application design concepts such as RAG to develop novel and impactful solutions to business problems. (I’m sure my CBE friends could express this more precisely and usefully than I have.)

    In the months between now, when my team finishes building the first iteration of ALDA, and when the ALDA workshop participants finish the series, technology will have progressed. The big AI vendors will have released at least one generation of new, more powerful AI foundation models. New players will come on the scene. New tools will emerge. But RAG, prompt engineering, and the other skills the participants develop will still apply. ALDA itself, which will almost certainly use tools and models that haven’t been released yet, will show how the competencies we learn still apply and how they evolve in a rapidly changing world.

    I hope you’ll consider enrolling your team in the ALDA workshop series. The cost, including all source code and artifacts, is $25,000 for the team. You can find an application form and prospectus here. Applications will be open until the workshop is filled. I already have a few participating teams lined up and a handful more that I am talking to.

You can also find a downloadable two-page prospectus and an online participation application form here. To contact me for more information, please fill out this form:

    You can also write me directly at [email protected].

    Please join us.

    Source link

  • Announcing a Design/Build Workshop Series for an AI Learning Design Assistant (ALDA) –

    Announcing a Design/Build Workshop Series for an AI Learning Design Assistant (ALDA) –

    Want to build an AI tool that will seriously impact your digital learning program? Right now? For a price that you may well have in your professional development budget?

    I’m launching a project to prove we can build a tool that will change the economics of learning design and curricular materials in months rather than years. Its total cost will be low enough to be paid for by workshop participation fees.

    Join me.

    The learning design bottleneck

    Many of my friends running digital course design teams tell me they cannot keep up with demand. Whether their teams are large or small, centralized or instructor-led, higher education or corporate learning and development (L&D), the problem is the same; several friends at large shops have told me that their development of new courses and redesigns of old ones have all but ground to a halt. They don’t have time or money to fix the problem.

    I’ve been asking, “Suppose we could accelerate your time to develop a course by, say, 20%?” Twenty percent is my rough, low-end guess about the gains. We should be able to get at least that much benefit without venturing into the more complex and riskier aspects of AI development. “Would a 20% efficiency gain be significant?” I ask.

    Answer: “It would be huge.”

    My friends tend to cite a few benefits:

    • Unblocked bottlenecks: A 20% efficiency gain would be enough for them to start building (or rebuilding) courses at a reasonable speed again.
• Lower curricular materials costs: Organizations could replace more licensed courses with ones that they own. No more content licensing costs, and you can edit the content any way you need to.
    • Better quality: The tool would free up learning designers to build better courses rather than running just to get more courses finished.
    • More flexibility with vendors: Many departments hire custom course design shops. A 20% gain in efficiency would give them more flexibility in deciding when and how to invest their budgets in this kind of consulting.

    The learning design bottleneck is a major business problem for many organizations. Relatively modest productivity gains would make a substantial difference for them. Generative AI seems like a good tool for addressing this problem. How hard and expensive would it be to build a tool that, on average, delivers a 20% gain in productivity?

    Not very hard, not very expensive

    Every LMS vendor, courseware platform provider, curricular materials vendor, and OPM provider is currently working on tools like this. I have talked to a handful of them. They all tell me it’s not hard—depending on your goals. Vendors have two critical constraints. First, the market is highly suspicious of black-box vendor AI and very sensitive to AI products that make mistakes. EdTech companies can’t approach the work as an experiment. Second, they must design their AI features to fit their existing business goals. Every feature competes with other priorities that their clients are asking for.

    The project I am launching—AI Learning Design Assistant (ALDA)—is different. First, it’s design/build. The participants will drive the requirements for the software. Second, as I will spell out below, our software development techniques will be relatively simple and easy to understand. In fact, the value of ALDA is as much in learning patterns to build reliable, practical, AI-driven tools as it is in the product itself. And third, the project is safe.

    ALDA is intended to produce a first draft for learning designers. No students need to see content that has not been reviewed by a human expert or interact directly with the AI at all. The process by which ALDA produces its draft will be transparent and easy to understand. The output will be editable and importable into the organization’s learning platform of choice.

    Here’s how we’ll do it:

• Guided prompt engineering: Your learning designers probably already have interview questions for the basic information they need to design a lesson, module, or course. What are the learning goals? How will you know if students have achieved those goals? What are some common sticking points or misconceptions? Who are your students? You may ask more or less specific and more or less elaborate versions of these questions, but you are getting at the same ideas. ALDA will start by interviewing the user, who is the learning designer or subject-matter expert. The structure of the questions will be roughly the same. While we will build out one set of interview questions for the workshop series, changing the design interview protocol should be relatively straightforward for programmers who are not AI specialists. (A small sketch of this interview flow follows this list.)
    • Long-term memory: One of the challenges with using a tool like ChatGPT on its own is that it can’t remember what you talked about from one conversation to the next and it might or might not remember specific facts that it was trained on (or remember them correctly). We will be adding a long-term memory function. It can remember earlier answers in earlier design sessions. It can look up specific documents you give it to make sure it gets facts right. This is an increasingly common infrastructure component in AI projects. We will explore different uses of it when we build ALDA. You’ll leave the workshop with the knowledge and example code of how to use the technique yourself.
    • Prompt enrichment: Generative AI often works much better when it has a few really good, rich examples to work from. We will provide ALDA with some high-quality lessons that have been rigorously tested for learning effectiveness over many years. This should increase the quality of ALDA’s first drafts. Again, you may want your learning designs to be different. Since you will have the ALDA source code, you’ll be able to put in whatever examples you want.
    • Generative AI export: We may or may not get to building this feature depending on the group’s priorities in the time we have, but the same prompt enrichment technique we’ll use to get better learning output can also be used to translate the content into a format that your learning platform of choice can import directly. Our enrichment examples will be marked up in software code. A programmer without any specific AI knowledge can write a handful of examples translating that code format into the one that your platform needs. You can change it, adjust it, and enrich it if you change platforms or if your platform adds new features.
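
For a feel of how simple the guided interview can be, here is a minimal sketch under the same caveats as the rest of this post: the question list, the ask_model() stub, and the prompt wording are all assumptions for illustration, not the ALDA implementation.

```python
# Guided design-interview sketch (illustrative only).
INTERVIEW_QUESTIONS = [
    "What are the learning goals for this lesson?",
    "How will you know if students have achieved those goals?",
    "What are some common sticking points or misconceptions?",
    "Who are your students?",
]


def ask_model(prompt: str) -> str:
    """Placeholder for a call to your chat model of choice."""
    raise NotImplementedError


def run_design_interview() -> str:
    """Collect answers from the designer, then request a first-draft lesson outline."""
    answers = {q: input(q + " ") for q in INTERVIEW_QUESTIONS}
    briefing = "\n".join(f"- {q} {a}" for q, a in answers.items())
    prompt = (
        "You are a learning design assistant. Using the design interview below, "
        "produce a first-draft lesson outline for a human designer to review.\n\n"
        f"{briefing}"
    )
    return ask_model(prompt)
```

Swapping in a different question list or different prompt wording is exactly the kind of change a programmer who is not an AI specialist could make.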

    The consistent response from everyone in EdTech I’ve talked to who is doing this kind of work is that we can achieve ALDA’s performance goals with these techniques. If we were trying to get 80% or 90% accuracy, that would be different. But a 20% efficiency gain with an expert human reviewing the output? That should be very much within reach. The main constraints on the ALDA project are time and money. Those are deliberate. Constraints drive focus.

    Let’s build something useful. Now.

    The collaboration

    Teams that want to participate in the workshop will have to apply. I’m recruiting teams that have immediate needs to build content and are willing to contribute their expertise to making ALDA better. There will be no messing around. Participants will be there to build something. For that reason, I’m quite flexible about who is on your team or how many participate. One person is too few, and eight is probably too many. My main criterion is that the people you bring are important to the ALDA-related project you will be working on.

    This is critical because we will be designing ALDA together based on the experience and feedback from you and the other participants. In advance of the first workshop, my colleagues and I will review any learning design protocol documentation you care to share and conduct light interviews. Based on that information, you will have access to the first working iteration of ALDA at the first workshop. For this reason, the workshop series will start in the spring. While ALDA isn’t going to require a flux capacitor to work, it will take some know-how and effort to set up.

    The workshop cohort will meet virtually once a month after that. Teams will be expected to have used ALDA and come up with feedback and suggestions. I will maintain a rubric for teams to use based on the goals and priorities for the tool as we develop them together. I will take your input to decide which features will be developed in the next iteration. I want each team to finish the workshop series with the conviction that ALDA can achieve those performance gains for some important subset of their course design needs.

    Anyone who has been to one of my Empirical Educator Project (EEP) or Blursday Social events knows that I believe that networking and collaboration are undervalued at most events. At each ALDA workshop, you will have time and opportunities to meet with and work with each other. I’d love to have large universities, small colleges, corporate L&D departments, non-profits, and even groups of students participating. I may accept EdTech vendors if and only if they have more to contribute to the group effort than just money. Ideally, the ALDA project will lead to new collaborations, partnerships, and even friendships.

    Teaching AI about teaching and learning

The workshop also helps us learn together about how to teach AI about teaching and learning. AI research is showing us how much better the technology can be when it’s trained on good data. There is so much bad pedagogy on the internet. And the content that is good is not marked up in a way that makes it easy to teach the AI the underlying patterns. What does a good learning objective or competency look like? How do you write hints or assessment feedback that helps students learn but doesn’t give away the answers? How do you create alignment among the components of a learning design?

    The examples we will be using to teach the AI have not only been fine-tuned for effectiveness using machine learning over many years; they are also semantically coded to capture some of these nuances. These are details that even many course designers haven’t mastered.

    I see a lot of folks rushing to build “robot tutors in the sky 2.0” without a lot of care to make sure the machines see what we see as educators. They put a lot of faith in data science but aren’t capturing the right data because they’re ignoring decades of learning science. The ALDA project will teach us how to teach the machines about pedagogy. We will learn to identify the data structures that will empower the next generation of AI-powered learning apps. And we will do that by becoming better teachers of ALDA using the tools of good teaching: clear goals, good instructions, good examples, and good assessments. Much of it will be in plain English, and the rest will be in a simple software markup language that any computer science undergraduate will know.

    Wanna play?

    The cost for the workshop series, including all source code and artifacts, is $25,000 for your team. You can find an application form and prospectus here. Applications will be open until the workshop is filled. I already have a few participating teams lined up and a handful more that I am talking to.

You can also find a downloadable two-page prospectus and an online participation application form here. To contact me for more information, please fill out this form:

    [Update: I’m hearing from a couple of you that your messages to me through the form above are getting caught in the spam filter. Feel free to email me at [email protected] if the form isn’t getting through.]

    I hope you’ll join us.

    Source link