Tag: Ways

  • 5 Ways to Advocate for Your Students During The U.S. Election

    Election Day in the U.S. is just around the corner. On top of carrying multiple academic and employment responsibilities, some students will also be voting for the first time. Others, such as those from marginalized or historically underrepresented populations, may be overwhelmed by what the election results could mean for them. In the lead-up to Election Day, a healthy dose of empathy will be essential in ensuring students have a chance to fulfill their civic duty—and the opportunity to consider its consequences.

    Being flexible with due dates, considering students’ well-being, and ensuring learners are armed with the resources needed to vote are the most important things you can do as Election Day nears. Read on to learn how professors advocated for their students during the 2020 election—and how you can do the same.

    Consider making November 4 and 5 free of assignments (or even classes)

    Exams can cause some students a great deal of stress and anxiety. Lillian Horin, Biological and Biomedical Sciences PhD student at Harvard University, urges educators to keep BIPOC students in mind when scheduling high-stakes tests.

    Consider swapping your exams or problem sets (Psets) with a trip to the ballot box. Jacob Light, Economics PhD student at Stanford University, writes that this simple gesture may allow students to exercise their civic duty.

    Other students like Anna-Sophia Boguraev, Bioengineering PhD student at Harvard Medical School and MIT, say that TAs have the power to amplify student concerns and requests—none of which should be ignored.

    If your assignments can’t wait, build in flexibility and timeliness

    Self-paced learning can allow students to visit the polls and complete coursework at a time that works for them—so says Jesse Fox, Associate Professor of Communication at Ohio State University.

    Election Day can also be a good opportunity to let students catch their breath in your course. Give students a chance to study and review material that they haven’t had a chance to look over, suggests Scott Grunow, Instructor in English and Religious Studies at the University of Illinois at Chicago.

    Should your institution provide little leeway in your assessment choices, at least incorporate real-time events into your discussions. Derek Bruff, Associate Director, Center for Teaching Excellence at the University of Virginia, notes that relating course content to the election can help students see the value of what they’re learning.

    Real-time political events and policy proposals can make for discipline-specific conversations. This also allows students to apply what they’ve learned in your class to the real world, as Andrea Gomez Cervantes, Assistant Professor in the Department of Sociology at Wake Forest University, proposes.

    Mobilize your students to show up at the polls

    Gen Z students are motivated to vote. In the 2018 midterm elections, the student turnout rate increased by 20 percentage points compared to the 2014 midterms.1 Ensure students are equipped with the resources to vote as soon as possible, writes Wendy Christensen, Sociology Professor at William Paterson University.

    Similarly, ask students about their voting plans. Consider working with your class to ensure they know where to go on November 5, suggests Margaret Boyle, Associate Professor of Romance Languages and Literatures at Bowdoin College.

    Ensure your voter registration information and resources appeal to all students, regardless of what political party they support. Meghan Novisky, Assistant Professor of Criminology at Cleveland State University, emphasizes the importance of using non-partisan guidelines.

    Some scholars like Sara Wheeler-Smith, Associate Professor of Management at Manhattan College, even plan to offer a grading incentive for visiting the polls.

    Incorporate guest lectures and learn from your colleagues

    Navigating election week with students in mind might be an unfamiliar undertaking. Consider leaning on faculty at your institution for support, writes Heather Mayer, Director of Educational Technology at Everett Community College.

    Some students may be undecided voters, while others may have missed the presidential debates. Incorporate forms of debate in your classroom—with the support of scholars from other institutions, as Yujin Jung, Political Science PhD student at the University of Missouri, plans to do.

    Keep in mind the importance of mental and physical health

    Check-ins with students have gained new meaning in the midst of an election. Andrea Kelley, Sociology Professor at the University of Michigan, tends to her students’ socioemotional needs before assigning readings and lectures.

    Election Day can come with a range of emotions for many students. Cate Denial, Distinguished Professor of American History, Chair of the History department, and Director of the Bright Institute at Knox College, removes the expectation for students to pay attention and participate in class.

    References

    1. Thomas, N. et al. (2018). Democracy Counts 2018: Increased Student and Institutional Engagement. Tufts University. https://idhe.tufts.edu/sites/default/files/DemocracyCounts2018.pdf


  • Bloom’s Revised Taxonomy: 3 Ways To Reshape The Pyramid

    Bloom’s Taxonomy is probably the most widespread and enduringly popular model in education. It was created in 1956 by Dr. Benjamin Bloom and colleagues at the Board of Examinations, University of Chicago. In 2001, the pyramid was revised by Lorin Anderson, a former student of Bloom’s, working with David Krathwohl, resulting in Bloom’s Revised Taxonomy.

    Bloom’s Revised Taxonomy focuses on learning outcomes. The framework demands that the very first thing instructors think about is what students should know by the end of the course. Each learning objective is then paired with the actions needed to reach it. And Bloom’s Revised Taxonomy is hierarchical, requiring your students to achieve each level in succession—to understand a concept, you must first remember it; to apply a concept, you must first understand it; and so on.

    There’s no doubt that this way of classifying educational objectives has been extremely useful to millions of teachers over the years. But for those who haven’t had conclusively positive results evaluating Bloom’s Revised Taxonomy or incorporating it into instruction, it’s worth considering some ways to think outside the pyramid to improve teaching and learning. Here are three things to bear in mind when using Bloom’s Revised Taxonomy in your lesson planning.

    1. Cultivate judgment rather than transmit information

    The instructional strategies behind Bloom’s Revised Taxonomy require educators to begin with “lower order” tasks, arguing that students need to master these first. This means we front-load our courses with information: information that can be recalled, defined, or identified—the objectives in the lowest layer of the pyramid.

    But constructivist theories of learning—and our own classroom experiences—tell us that learning does not happen through information transference alone. A learner is not an empty vessel into which we pour definitions. He or she is not going to truly understand something without interpreting it, questioning it, or relating to it.

    So when designing your course, try to incorporate ways to strengthen and take advantage of students’ faculties of judgment.

    What would it mean to cultivate judgment during a course? Start by doing. Get your students to take action in some relevant way—through a lab experiment, for example, or through field research. Another way to do this is role-play. When I taught history, we started out by taking on the identities of various countries, coming to decisions supported by research and analysis. The historical facts—and there were many—were all taught in this context. In this way, facts are put into the service of learning, rather than becoming an initial goal in themselves.

    2. Start, rather than end, with creativity

    As educator Shelley Wright has pointed out, Bloom’s Revised Taxonomy gives the impression there’s a “scarcity of creativity.”1 Only those strong enough or talented enough to work their way up to the summit of the pyramid can be creative. The truth is that everybody is naturally creative—just think of a seven-year-old at play—except that this way of being in the world is often squelched or squandered. Ken Robinson, for instance, has strongly argued that creativity is typically “educated out” of us.2

    What could it mean to start with creativity? Have your students create on day one. (OK, maybe day two or three.) Wright explains how this works for her media studies class. Instead of beginning by laying out design principles and the history of media, she gets the students to make an advert mockup. Then they compare their mockups to published adverts. Wright helps them analyze differences and introduces, through student-facilitated research, the major principles and concepts of design that help them explain their own creation and those of others.

    A create-first approach could work just as well in courses that are theory-rich and fact-heavy, such as philosophy, literature, or science. In environmental science, for example, ask students to propose a solution to deforestation or ocean acidification. Then, starting from their contributions, explore the principles, factors, concepts, and contingencies at play, including the ones that were omitted. Have students compare their solutions to others’. Get them to elicit the principles involved, survey the recent literature in the area, and articulate and fully describe the concepts and the facts.

    3. Promote awareness instead of entrenching hierarchy

    The stratification of Bloom’s Revised Taxonomy into “lower” and “higher” order objectives sets up a value proposition. It leads educators to think that certain kinds of learning necessarily reflect superior kinds of cognition.

    But as Roland Case argues, tasks at every level of Bloom’s Revised Taxonomy can be performed thoughtfully or thoughtlessly.3 It is possible to defend a position in a completely superficial way. It is possible to propose a plan that lacks good judgment or analysis. It is possible to create something without building from a base of relevant knowledge. Indeed, that is why it’s necessary to practice and develop judgment, critical thinking skills and creative problem-solving.

    When Anderson and Krathwohl revised Bloom’s Taxonomy, they accounted for this with a second scale for assessment called the Knowledge Dimension, which sits as a second axis alongside the cognitive domain. Each of the revised categories (Remember, Understand, Apply, Analyze, Evaluate, Create) should be assessed according to whether factual, conceptual, procedural, or metacognitive knowledge is demonstrated.

    If no category is higher or lower than any other, then leveling makes no sense. With proper consideration for The Knowledge Dimension, we are far from a pyramid… and always have been! But who knew? As Leslie Wilson points out, “what most educators are given in training is a simple chart listing levels and related accompanying verbs.”4

    Bloom’s Taxonomy revised: A pyramid alternative

    And so, if we want to engage students’ creativity, cultivate judgment, and make sure that each stage of learning is fully developed and attuned to the right outcome, then reorganizing the existing structure of Bloom’s Revised Taxonomy could go a long way. Instead of a pyramid, how about a mandala?

    Bloom's Taxonomy in a Mandala or Rose format. CC-BY-SA 3.0 K. Aainsqatsi

    Strange things happen when we feel beholden to a structure. If lesson planning with Bloom’s Revised Taxonomy hasn’t been working for you or your class, rethink the assumptions behind how it should be applied. Reconsider the way you’re assessing student learning. Break down the hierarchy and rebuild.

    Illustration credits: CC-BY 2.0 Vanderbilt University; CC-BY-SA 3.0 K. Aainsqatsi.


    References

    1. Wright, S. (2012, May 15). Flipping Bloom’s Taxonomy [Blog post]. Retrieved from https://plpnetwork.com/2012/05/15/flipping-blooms-taxonomy/
    2. Parker, Q. (2018, October 3). The Possibilities of an Agile Classroom: Sir Ken Robinson [Blog post]. Retrieved from https://tophat.com/blog/sir-ken-robinson-qa/
    3. Case, R. Unfortunate Consequences of Bloom’s Taxonomy. Retrieved from https://tc2.ca/uploads/PDFs/Critical%20Discussions/unfortunate_consequences_blooms_taxonomy.pdf
    4. Wilson, L. O. (2017, January 20). Understanding the New Version of Bloom’s Taxonomy [Blog post]. Retrieved from https://thesecondprinciple.com/teaching-essentials/beyond-bloom-cognitive-taxonomy-revised/


  • How it Breaks in Subtle Ways

    In my last post, I explained how generative AI memory works and why it will always make mistakes without a fundamental change in its foundational technology. I also gave some tips for how to work around and deal with that problem to safely and productively incorporate imperfect AI into EdTech (and other uses). Today, I will draw on the memory issue I wrote about last time as a case study of why embracing our imperfect tools also means recognizing where they are likely to fail us and thinking hard about dealing realistically with their limitations.

    This is part of a larger series I’m starting on a term of art called “product/market fit.” The simplest explanation of the idea is the degree to which the thing you’re building is something people want and are willing to pay the cost for, monetary or otherwise. In practice, achieving product/market fit is complex, multifaceted, and hard. This is especially true in a sector like education, where different contextual details often create the need for niche products, where the buyer, adopter, and user of the product are not necessarily the same, and where measurable goals to optimize your product for are hard to find and often viewed with suspicion.

    Think about all the EdTech product categories that were supposed to be huge but disappointed expectations. MOOCs. Learning analytics. E-portfolios. Courseware platforms. And now, possibly OPMs. The list goes on. Why didn’t these product categories achieve the potential that we imagined for them? There is no one answer. It’s often in the small details specific to each situation. AI in action presents an interesting use case, partly because it’s unfolding right now, partly because it seems so easy, and partly because it’s odd and unpredictable, even to the experts. I have often written about “the miracle, the grind, and the wall” with AI. We will look at a couple of examples of moving from the miracle to the grind. These moments provide good lessons in the challenges of product/market fit.

    In my next post, I’ll examine product/market fit for universities in a changing landscape, focusing on applying CBE to an unusual test case. In the third post, I’ll explore product/market fit for EdTech interoperability standards and facilitating the growth of a healthier ecosystem.

    Khanmigo: the grind behind the product

    Khan Academy’s Kristen DiCerbo did us all a great service by writing openly about the challenges of producing a good AI lesson plan generator. They started with prompt engineering. Well-written prompts are miracles. They’re like magic spells. Generating a detailed lesson plan in seconds with a well-written prompt is possible. But how good is that lesson plan? How well did Khanmigo’s early prompts produce the lesson plans?

    Kristen writes,

    At first glance, it wasn’t bad. It produced what looked to be a decent lesson plan—at least on the surface. However, on closer inspection, we saw some issues, including the following:

    • Lesson objectives just parroted the standard
    • Warmups did not consistently cover the most logical prerequisite skills
    • Incorrect answer keys for independent practice
    • Sections of the plan were unpredictable in length and format
    • The model seemed to sometimes ignore parts of the instructions in the prompt

    Prompt Engineering a Lesson Plan: Harnessing AI for Effective Lesson Planning

    You can’t tell the quality of the AI’s lesson plans without having experts examine them closely. You also want feedback from people who will actually use those lesson plans. I guarantee they will find problems that you will miss. Every time. Remember, the ultimate goal of product/market fit is to make something that the intended adopters will actually want. People will tolerate imperfections in a product. But which ones? What’s most important to them? How will they use the product? You can’t answer these questions confidently without the help of actual humans who would be using the product.

    At any rate, Khan Academy realized their early prompt engineering attempts had several shortcomings. Here’s the first:

    Khanmigo didn’t have enough information. There were too many undefined details for Khanmigo to infer and synthesize, such as state standards, target grade level, and prerequisites. Not to mention limits to Khanmigo’s subject matter expertise. This resulted in lesson plans that were too vague and/or inaccurate to provide significant value to teachers.

    Prompt Engineering a Lesson Plan: Harnessing AI for Effective Lesson Planning

    Read that passage carefully. With each type of information or expertise, ask yourself, “Where could I find that? Where is it written down in a form the AI can digest?” The answer is different for each one. How can the AI learn more about what state standards mean? Or about target grade levels? Prerequisites? Subject-matter expertise for each subject? No matter how much ChatGPT seems to know, it doesn’t know everything. And it is often completely ignorant about anything that isn’t well-documented on the internet. A human educator has to understand all these topics to write good lesson plans. A synthetic one does too. But a synthetic educator doesn’t have experience to draw on. It only has whatever human educators have publicly published about their experiences.

    Think about the effort involved in documenting all these various types of knowledge for a synthetic educator. (This, by the way, is very similar to why learning analytics disappointed as a product category. The software needed to know too many things that weren’t available in the systems in order to make sense of the data.)

    Here’s the second challenge that the Khanmigo team faced:

    We were trying to accomplish too much with a single prompt. The longer a prompt got and the more detailed its instructions were, the more likely it was that parts of the prompt would be ignored. Trying to produce a document as complex and nuanced as a comprehensive lesson plan with a single prompt invariably resulted in lesson plans with neglected, unfocused, or entirely missing parts.

    Prompt Engineering a Lesson Plan: Harnessing AI for Effective Lesson Planning

    I suspect this is a subtle manifestation of the memory problem I wrote about in my last post. Even with a relatively short text like a complex prompt, the AI couldn’t hold onto all the details. The Khanmigo team ended up breaking the prompt into smaller pieces. This produced better results because the AI could “concentrate on”—that is, remember the details of—one step at a time. I’ll add that this approach provides more opportunities to put humans in the loop. An expert—or a user—can examine and modify the output of each step.
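    To make that pattern concrete, here is a minimal sketch of the step-by-step approach with a human checkpoint between steps. It assumes the OpenAI Python client; the model name, section list, and prompt wording are illustrative placeholders, not Khanmigo’s actual pipeline.

    ```python
    # A sketch of prompt chaining: one focused prompt per lesson-plan section,
    # with a human review checkpoint between steps. Assumes `pip install openai`
    # and an OPENAI_API_KEY in the environment. Section names and prompt wording
    # are hypothetical; this is not Khanmigo's actual pipeline.
    from openai import OpenAI

    client = OpenAI()

    SECTIONS = [
        "learning objectives",
        "warm-up covering prerequisite skills",
        "guided practice",
        "independent practice with an answer key",
    ]

    def draft_section(section: str, approved_so_far: str) -> str:
        """Generate one section at a time, so the model only has to hold
        the details of a single step plus the human-approved context."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You draft one section of a lesson plan at a time."},
                {"role": "user",
                 "content": (f"Approved sections so far:\n{approved_so_far}\n\n"
                             f"Draft only the {section}.")},
            ],
        )
        return response.choices[0].message.content

    approved = ""
    for section in SECTIONS:
        draft = draft_section(section, approved)
        print(f"\n--- Draft: {section} ---\n{draft}")
        # Human in the loop: accept the draft or substitute an edited version.
        edited = input("Press Enter to accept, or paste a revision: ").strip()
        approved += f"\n\n{section.upper()}\n{edited or draft}"

    print(approved)
    ```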

    We fantasize about AI doing work for us. In some cases, it’s not just a fantasy. I use AI to be more productive literally every day. But it fails me often. We can’t know what it will take for AI to solve any particular problem without looking closely at the product’s capabilities and the user’s very specific needs. This is product/market fit.

    Learning design in the real world

    Developing skill in product/market fit is hard. Think about all those different topics the Khanmigo team needed to know, and whose relevance to lesson planning they needed to understand well enough to diagnose the gaps in the AI’s understanding.

    Refining a product is also inherently iterative. No matter how good you are at product design, how well you know your audience, and how brilliant you are, you will be wrong about some of your ideas early on. Because people are complicated. Organizations are complicated. The skills workers need are often complicated and non-obvious. And the details of how the people need to work, individually and together, are often distinctive in ways that are invisible to them. Most people only know their own context. They take a lot for granted. Good product people spend their time uncovering these invisible assumptions and finding the commonalities and the differences. This is always a discovery process that takes time.

    Learning design is a classic case of this problem. People have been writing and adopting learning design methodologies longer than I’ve been alive. The ADDIE model—”Analyze, Design, Develop, Implement, and Evaluate”—was created by Florida State University for the military in the 1970s. “Backward Design” was invented in 1949 by Ralph W. Tyler. Over the past 30 years, I’ve seen a handful of learning design or instructional design tools that attempt to scaffold and enforce these and other design methodologies. I’ve yet to see one get widespread adoption. Why? Poor product/market fit.

    While the goal of learning design (or “instructional design,” to use the older term) is to produce a structured learning experience, the thought process of creating it is non-linear and iterative. As we develop and draft, we see areas that need tuning or improving. We move back and forth across the process. Nobody ever follows learning design methodologies strictly in practice. And I’m talking about trained learning design professionals. Untrained educators stray even further from the model. That’s why the two most popular learning design tools, by far, are Microsoft Word and Google Docs.

    If you’ve ever used ChatGPT and prompt engineering to generate the learning design of a complex lesson, you’ve probably run into unexpected limits to its usefulness. The longer you spend tinkering with the lesson, the more your results start to get worse rather than better. It’s the same problem the Khanmigo team had. Yes, ChatGPT and Claude can now have long conversations. But both research and experience show us that they tend to forget the stuff in the middle. By itself, ChatGPT is useful in lesson design to a point. But I find that when writing complex documents, I paste different pieces of my conversation into Word and stitch them together.

    And that’s OK. If that process saves me design time, that’s a win. But there are use cases where the memory problems are more serious in ways that I haven’t heard folks talking about yet.

    Combining documents

    Here’s a very common use case in learning design:

    First, you start with a draft of a lesson or a chapter that already exists. Maybe it’s a chapter from an OpenStax textbook. Maybe it’s a lesson that somebody on your team wrote a while ago that needs updating. You like it, but you don’t love it.

    You have an article with much of the information you want to add to the new version you want to create. If you were using a vendor’s textbook, you’d have to require the students to read the outdated lesson and then read the article separately. But this is content you’re allowed to revise. If you’re using the article in a way that doesn’t violate copyright—for example, because you’re using it to capture publicly known facts that have changed rather than something novel in the article itself—you can simply use the new information to revise the original lesson. That was often too much work the old way. But now we have ChatGPT, so, you know…magic.

    While you’re at it, you’d like to improve the lesson’s diversity, equity, and inclusion (DEI). You see opportunities to write the chapter in ways that represent more of your students and include examples relevant to their lived experiences. You happen to have a document with a good set of DEI guidelines.

    So you feed your original chapter, new article, and DEI guidelines to the AI. “ChatGPT, take the original lesson and update it with the new information from the article. Then apply the DEI guidelines, including examples in topics X, Y, and Z that represent different points of view. Abracadabra!”

    You can write a better prompt than this one. But no matter how carefully you engineer your prompt, you will be disappointed with the results. Don’t take my word for it. Try it yourself.
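    If you do try it, here is roughly what that naive single-prompt version looks like in code. It is a sketch under stated assumptions: the file names, model name, and prompt wording are hypothetical. The structural problem is visible at a glance, because three whole documents and all of the instructions compete for the model’s attention in a single request.

    ```python
    # The naive single-prompt document combination described above, sketched
    # with the OpenAI Python client. File names, model name, and wording are
    # hypothetical. Everything lands in one context window, the same
    # overloaded-prompt failure mode the Khanmigo team described.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()

    lesson = Path("original_lesson.md").read_text()
    article = Path("new_article.md").read_text()
    guidelines = Path("dei_guidelines.md").read_text()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                "Update the LESSON with the new information in the ARTICLE, "
                "then apply the DEI GUIDELINES, changing nothing else.\n\n"
                f"LESSON:\n{lesson}\n\n"
                f"ARTICLE:\n{article}\n\n"
                f"DEI GUIDELINES:\n{guidelines}"
            ),
        }],
    )
    print(response.choices[0].message.content)
    ```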

    Why does this happen? Because the generative AI doesn’t “remember” these three documents perfectly. Remember what I wrote in my last article:

    The LLMs can be “trained” on data, which means they store information like how “beans” vs. “water” modify the likely meaning of “cool,” what words are most likely to follow “Cool the pot off in the,” and so on. When you hear AI people talking about model “weights,” this is what they mean.

    Notice, however, that none of the original sentences are stored anywhere in their original form. If the LLM is trained on Wikipedia, it doesn’t memorize Wikipedia. It models the relationships among the words using combinations of vectors (or “matrices”) and probabilities. If you dig into the LLM looking for the original Wikipedia article, you won’t find it. Not exactly. The AI may become very good at capturing the gist of the article given enough billions of those tensor/workers. But the word-for-word article has been broken down and digested. It’s gone.

    How You Will Never Be Able to Trust Generative AI (and Why That’s OK)

    Your lesson and articles are gone. They’ve been digested. The AI remembers them, but it’s designed to remember the meaning, not the words. It’s not metaphorically sitting down with the original copy and figuring out where to insert new information or rewrite a paragraph. That may be fine. Maybe it will produce something better. But it’s a fundamentally different process than human editing. We won’t know if the results it generates have good product/market fit until we test it out with folks.
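    A toy experiment can make the “meaning, not words” point tangible. The sketch below uses an off-the-shelf sentence-embedding model rather than an LLM’s internal weights, but it illustrates the same principle: paraphrases land close together in vector space, yet nothing in the vectors lets you reconstruct the original wording.

    ```python
    # Illustrating "remembering meaning, not words" with sentence embeddings.
    # Assumes `pip install sentence-transformers`. This is not how an LLM
    # stores its training data internally; it is a toy demonstration that
    # vectors capture gist while the exact wording is not recoverable.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    original = "Cool the pot off in the water."
    paraphrase = "Let the pot chill in some water."
    unrelated = "The election results were announced on Tuesday."

    vecs = model.encode([original, paraphrase, unrelated])

    def cosine(a, b):
        """Cosine similarity: closer to 1.0 means closer in meaning-space."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print("original vs. paraphrase:", round(cosine(vecs[0], vecs[1]), 2))  # high
    print("original vs. unrelated: ", round(cosine(vecs[0], vecs[2]), 2))  # low
    # Nothing in `vecs` lets you regenerate the sentences word-for-word.
    ```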

    To the degree that you need to preserve the fidelity of the original documents, you’ve got a problem. And the more you push generative AI to do this kind of fine-tuning work across multiple documents, the worse it gets. You’re running headlong into one of your synthetic co-worker’s fundamental limitations. Again, you might get enough value from it to achieve a net gain in productivity. But you might not because this seemingly simple use case is pushing hard on functionality that hasn’t been designed, tested, and hardened for this kind of use.

    Engineering around the problem

    Any product/market fit problem has two sides: product and market. On the market side, how good will be good enough? I’ve specifically positioned my ALDA project as producing a first draft with many opportunities for a human in the loop. This is a common approach we’re seeing in educational content generation right now, for good reasons. We’re reducing the risk to the students. Risk is one reason the market might reject the product.

    Another is failing to deliver the promised time savings. If the combination of the documents is too far off from the humans’ goal, it will be rejected. Its speed will not make up for the time required for the human to fix its mistakes. We have to get as close to the human need as possible, mitigate the consequences of the mistakes, and test to see if we’ve achieved a cost/benefit for the users good enough that they will adopt the product.

    There is no perfect way to solve the memory problem. You will always need a human in the loop. But we could take a good step forward if we could get the designs solid enough to be directly imported into the learning platform and fine-tuned there, skipping the word processor step. Being able to do so requires tackling a host of problems, including (but not limited to) the memory issue. We don’t need the AI to get the combination of these documents perfect, but we do need it to get close enough that our users don’t need to dump the output into a full word processor to rewrite the draft.

    When I raised this problem to a colleague who is a digital humanities scholar and an expert in AI, he paused before replying. “Nobody is working on this kind of problem right now,” he said. “On one side, AI experts are experimenting with improving the base models. On the other side, I see articles all the time about how educators can write better prompts. Your problem falls in between those two.”

    Right. As a sector, we’re not discussing product/market fit for particular needs. The vendors are, each within their own circumscribed world. But on the customer side? I hear people tell me they’re conducting “experiments.” It sounds a bit like when university folk told me they were “working with learning analytics,” which turned out to mean that they were talking about working with learning analytics. I’m sure there are many prompt engineering workshops and many grants being written for fancy AI solutions that sound attractive to the National Science Foundation or whoever the grantor happens to be. But in the middle ground? Making AI usable to solve specific problems? I’m not seeing much of that yet.

    The document combination problem can likely be addressed well enough through a combination of approaches that improve the product and mitigate the consequences of the imperfections to make them more tolerable for the market. After consulting with some experts, I’ve come up with a combination of approaches to try first. Technologically, I know it will work. It doesn’t depend on cutting-edge developments. Will the market accept the results? Will the new approach be better than the old one? Or will it trip over some deal-breaker, like so many products before it?

    I don’t know. I feel pretty good about my hypothesis. But I won’t know until real learning designers test it on real projects.

    We have a dearth of practical, medium-difficulty experiments with real users right now. That is a big, big problem. It doesn’t matter how impressive the technology is if its capabilities aren’t the ones the users need to solve real-world problems. You can’t fix this gap with symposia, research grants, or even EdTech companies that have the skills but not necessarily the platform or business model you need.

    The only way to do it is to get down into the weeds. Try to solve practical problems. Get real humans to tell you what does and doesn’t work for them in your first, second, third, fourth, and fifth tries. That’s what the ALDA project is all about. It’s not primarily about the end product. I am hopeful that ALDA itself will prove to be useful. But I’m not doing it because I want to commercialize a product. I’m doing it to teach and learn about product/market fit skills with AI in education. We need many more experiments like this.

    We put too much faith in the miracle, forgetting the grind and the wall are out there waiting for us. Folks in the education sector spend too much time staring at the sky, waiting for the EdTech space aliens to come and take us all to paradise.

    I suggest that at least some of us should focus on solving today’s problems with today’s technology, getting it done today, while we wait for the aliens to arrive.


  • 3 Ways To Take New Photos For Your Website

    You want a new photo of yourself for your personal website. What’s the best way to take new photos? Here are 3 ways for you to get beautiful photos for your website.

    Hi! I’m Jennifer van Alstyne. Welcome to my blog about understanding your digital footprint in research and Higher Education. I help professors have an amazing online presence through my company The Academic Designer LLC. I empower academics to feel confident when showing up online.



    Take a selfie

    A woman wearing a hijab and a green sweater holds her phone and smiles at the camera while taking a selfie photo.

    A selfie is a photo that you take of yourself. You can use selfies on your personal website! The most popular way to take a selfie is with your phone camera. You can use the front camera of your phone, but you’ll take higher quality photos if you use the rear camera.

    Look for a space with good lighting. You may find great natural lighting outside in the last hour before sunset. Or, if you’re an early riser (unlike me), try just before sunrise. Photographers call these periods the ‘golden hour’ because the light can be perfect.

    Avoid a distracting background in your photo. When in doubt, you can take a photo against a neutral background like a wall. Then you can edit the photo to remove the background like I did for the selfie above.

    Use a cleaning cloth to gently wipe the camera lens clean before you start your photo shoot.

    Try holding the phone at different angles (slightly higher, slightly lower). You can also hold the phone out to your side and turn your head towards it. Don’t forget to smile.

    Your phone camera may have settings to help you take a better selfie. For instance, there may be a portrait mode. Or, an option to take a wide selfie.

    You can use your phone’s self-timer setting to take hands-free and full body selfies. You may want to prop up your phone using books. You could also invest in an inexpensive tripod or selfie stick.

    A black and white dog wearing blue sunglasses lies on its back on the grass, taking a selfie with a phone held in its paw.

    Devices For Taking Selfies

    1. Mobile phone
    2. Camera
    3. High-resolution webcam

    Other Tools To Help

    • Selfie stick
    • Tripod
    • Remote
    • Ring light
    • Books, a box, or other items to prop up your phone

    Ask a friend or family member

    A man holds a phone taking a photo of his friend who is smiling. They are in Los Angeles. The Hollywood sign is visible in the background, but unreadable because the background of the photo is very blurry.

    One of the best ways to get a new photo of yourself for your website is to ask a friend or family member to help.

    If you have a friend who is good at taking photos, great! You may not. That’s OK. What’s more important is that you have someone with you that you feel comfortable with.

    Set expectations by letting them know it may take a bit of time and that you want to try out different poses (or locations). Ask them to try taking photos at different angles. Try watching a YouTube video together about creative portrait photos with a phone.

    My best tip is to ask your friend to take photos holding the phone vertically and horizontally. That gives you flexibility with what you use the photos for.

    It’s possible that this photo shoot may not result in the photos you want. So don’t put too much pressure on your friend. Do encourage them to take lots of photos so you can choose the best ones. If you find photos you like in the set, ask your friend if they’d like to be credited as photographer when you share the photo online.


    Hire a professional photographer

    Photography and video production studio with a yellow chair, plant, and pink backdrop. There are large lights on tripods.

    Hiring a professional portrait photographer is your best option if you can afford it. The mistake you want to avoid is asking for a headshot when booking your photo shoot.

    Professional photographers have the gear, skills, and experience to take your new website photos. You want to hire an experienced local photographer.

    You may have a place to take your photos in mind. Professionals often include one location in your package, or shoot in their studio. Be sure to ask for their recommendations.

    Photographs are the intellectual property of the photographer. You’ll need to sign a contract or licensing agreement for permission to use the photos. Help ensure you get the right license by sharing how you want to use your photos when booking your shoot. Most photographers have professional liability insurance.

    You’re looking for a portrait photographer, not a headshot photographer. Be clear about how many photos you’re looking for, as your photographer may offer a ‘brand photo shoot’ or a ‘website photo package.’

    Portrait photographers typically charge $100-$400/hour in the United States. Local Facebook groups are often an effective way to find recommendations for photographers in your area.

    Here are 3 ways to get new photos for your personal website

    A graphic for 'New Photos For Your Website' with tips including take a selfie, ask a friend for help, and hire a professional photographer. On the graphic are a cutout of a Sony professional camera, the backs of 2 cell phone cameras, and a selfie photo of Jennifer van Alstyne smiling.


    My name is Jennifer van Alstyne. I’ve been helping professors make beautiful personal websites to share their research since 2018. I’d love to help you!

    Let’s talk on a no pressure Zoom call about working with me on your online presence and website.

