Category: Artificial Intelligence

  • A Game-Changing App for Faculty Researchers!

    Consensus – A Game-Changing App for Faculty Researchers

    Today, I started using a new AI app for my research. This app, Consensus, is a game changer for faculty researchers. I wish I had had this app in graduate school – it would definitely have made life easier!

    Step 1 – Here are some screenshots of the software. You can type a question in the box (yes, a question) and the system does the work. Yes, the work that you would usually have to do!

    Step 2 – Then, AI does the rest. You receive AI-powered answers for your results. Consensus analyzes your results (before you even view them) and then summarizes the studies collectively.

    Step 3 – You can view the AI-powered answers, which review each article for you.

    *I would encourage you to review each article independently as well.

    Step 4 – View the study snapshots! Yes, a snapshot of the population, sample size, methods, outcomes measured, and more! Absolutely amazing!

    Step 5 – Click the “AI Synthesis” button to synthesize your results. Even better!

    Step 6 – Use the “powerful filters” button. You can view the “best” research results by: a) population, b) sample size, c) study design, d) journal quality, and other variables. 

    I plan to make a video soon, but please take a look at this video to discover exactly how Consensus can help you in your research! 

    ***

    Check out my book – Retaining College Students Using Technology: A Guidebook for Student Affairs and Academic Affairs Professionals.

    Remember to order copies for your team as well!


    Thanks for visiting! 


    Sincerely,


    Dr. Jennifer T. Edwards
    Professor of Communication

    Executive Director of the Texas Social Media Research Institute & Rural Communication Institute

  • Three Essential AI Tools and Practical Tips for Automating HR Tasks – CUPA-HR

    by Julie Burrell | March 27, 2024

    During his recent keynote at CUPA-HR’s Higher Ed HR Accelerator, Commissioner Keith Sonderling of the Equal Employment Opportunity Commission observed, “now, AI exists in HR in every single stage of employment,” from writing job descriptions, to sourcing candidates and scheduling interviews, and well into the career lifecycle of employees.

    At some colleges and universities, AI is now a routine part of the HR workflow. At the University of North Texas at Dallas, for example, AI has significantly sped up the recruitment and hiring timeline. “It helped me staff a unit in an aggressive time frame,” says Tony Sanchez, chief human resources officer, who stresses that they use AI software with privacy protections. “AI parsed resumes, prescreened applicants, and allowed scheduling directly to the hiring manager’s calendar.”

    Even as AI literacy is becoming a critical skill, many institutions of higher education have not yet adopted AI as a part of their daily operations. But even if you don’t have your own custom AI like the University of Michigan’s, free AI tools can still be a powerful daily assistant. With some common-sense guardrails in place, AI can help you automate repetitive tasks, make software like Excel easier to use, analyze information and polish your writing.

    Three Free Chatbots to Use Now

    AI development is moving at a breakneck pace, which means that even the freely available tools below are more useful than they were just a few months ago. Try experimenting with multiple AI chatbots by having different browser windows open and asking each chatbot to do the same task. Just don’t pick a favorite yet. With AI companies constantly trying to outperform each other, one might work better depending on the day or the task. And before you start, be sure to read the section on AI guardrails below — you never want to input proprietary or private information into a public chatbot.

    ChatGPT, the AI trailblazer. The free version allows unlimited chats after signing up for an account. Right now, ChatGPT is text-based, which means it can help you with emails and communications, or even draft longer materials like reports. It can also solve math problems and answer questions (but beware of fabricated answers).

    You can customize ChatGPT to make it work better for you by clicking on your username in the bottom left-hand corner. For example, you can tell it that you’re an HR professional working in higher education, and it will tailor its responses to what it knows about your job.

    Google’s powerful AI chatbot, Gemini (formerly known as Bard). You’ll need to have or sign up for a free Google account, and it’s well worth it. Gemini can understand and interact with text just like ChatGPT does, but it’s also multimodal. You can drag and drop images and it will be able to interpret them. Gemini can also make tables, which can be exported to Google Sheets. And it generates images for free. For example, if you have an image in mind that you want your marketing team to design, you can get started by asking Gemini to create a draft of it. But for now, Gemini won’t create images of people.

    Claude, often considered the best AI writer. Take Claude for a spin by asking it to write a job description or memo for you. Be warned that the free version of Claude has a daily usage limit, and you won’t know you’ve hit it until you hit it. According to Claude, your daily limit depends on demand, and your quota resets every morning.

    These free AI tools aren’t as powerful as their paid counterparts — each about $20 per month — but they do offer a sense of what AI can do.

    Practical Tips for Using AI in HR 

    For a recent Higher Ed HR Magazine article, I asked higher education HR professionals how they used AI to increase efficiency. Rhonda Beassie, associate vice president for people and procurement operations at Sam Houston State University, shared that she and her team are using AI for both increased productivity and upskilling, such as:

    • Creating first drafts of and benchmarking job descriptions.
    • Making flyers, announcements and other employee communications.
    • Designing training presentations, including images, text, flow and timing.
    • Training employees for deeper use of common software applications.
    • Providing instructions on developing macros and VLOOKUP formulas in Microsoft Excel, and on troubleshooting questions about them.
    • Troubleshooting software. Beassie noted that employees “can simply say to the AI, ‘I received an error message of X. How do I need to change the script to correct this?’ and options are provided.”
    • Creating reports pulled from their enterprise system.
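    The Excel items above are a good flavor of the kind of task a chatbot can walk you through. As a point of reference, here is a minimal Python sketch (not from the article; the table and column layout are hypothetical) of what an exact-match VLOOKUP does — the behavior you would describe to a chatbot when asking it to troubleshoot a formula:

```python
# Illustration of VLOOKUP-style exact-match lookup:
# =VLOOKUP(key, table, col_index, FALSE) returns the value in the
# requested column of the first row whose first column equals the key.

def vlookup(key, table, col_index):
    """Exact-match lookup; col_index is 1-based, as in Excel."""
    for row in table:
        if row[0] == key:
            return row[col_index - 1]
    return "#N/A"  # Excel's error value when no row matches

# Hypothetical employee table: (ID, name, department)
employees = [
    ("E001", "Ana", "Payroll"),
    ("E002", "Ben", "Benefits"),
    ("E003", "Cara", "Recruiting"),
]

print(vlookup("E002", employees, 3))  # Benefits
print(vlookup("E999", employees, 2))  # #N/A
```

    Describing the expected behavior this concretely — key column, return column, what should happen on no match — is exactly the context that makes an AI troubleshooting session productive.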

    AI chatbots are also great at:

    • Being a thought partner. Ask a chatbot to help you respond to a tricky email, to find the flaws in your argument or to point out things you’ve missed in a piece of writing.
    • Revising the tone, formality or length of writing. You can ask chatbots to make something more or less formal or friendly (or whatever tone you’re trying to strike), remove the jargon from a piece of writing, or lengthen or shorten something.
    • Summarizing webpages, articles or book chapters. You can cut and paste a URL into a chatbot and ask it to summarize the page for you. You can also cut and paste a fairly large amount of text into a chatbot and ask it for a summary. Try using parameters, such as “Summarize this into one sentence,” or “Please give me a bulleted list of the main takeaways.” The summaries aren’t always perfect, but will usually do in a pinch.
    • Summarizing YouTube videos. (Currently, the only free tool that can do this is Gemini.) Just cut and paste in the URL and ask it to summarize a video for you. Likewise, these summaries aren’t always exactly accurate.
    • Writing in your voice. Ask a chatbot to learn your voice and style by entering things you’ve written. Then ask it to compose a communication, like a memo or email you need to write, in your voice. This takes some time up front to train the AI, and it may not remember your voice from day to day or task to task.

    Practice Your Prompts

    Just 10 minutes a day can take you far in getting comfortable with these tools if you’re new to them. Learning prompting, which may take an upfront investment of more time, can unlock powerful capabilities in AI tools. The more complex the task you ask AI to do, the more time you need to spend crafting a prompt.

    The best prompts will ask a chatbot to assume a role and perform an action, using specific context. For example, “You are a human resources professional at a small, liberal arts college. You are writing a job description for an HR generalist. The position’s responsibilities include leading safety and compliance training; assisting with payroll; conducting background checks; troubleshooting employee questions in person and virtually. The qualifications for the job are one to two years in an HR office, preferably in higher education, and a BA.”
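    If you find yourself reusing this role/action/context pattern, it can help to treat it as a fill-in-the-blank template. Here is a small, hedged Python sketch (the function and field names are mine, not from any chatbot's documentation) that assembles the example prompt above; the resulting string is what you would paste into a chatbot:

```python
def build_prompt(role, action, context):
    """Assemble a role/action/context prompt, per the pattern above."""
    return f"You are {role}. {action} Context: {context}"

# Reassembling the HR generalist example from the article.
prompt = build_prompt(
    role="a human resources professional at a small, liberal arts college",
    action="Write a job description for an HR generalist.",
    context=("Responsibilities: leading safety and compliance training; "
             "assisting with payroll; conducting background checks; "
             "troubleshooting employee questions in person and virtually. "
             "Qualifications: one to two years in an HR office, preferably "
             "in higher education, and a BA."),
)
print(prompt)
```

    Keeping the three parts separate makes it easy to swap in a different role or task while reusing the same context, which is handy when you are comparing several chatbots on the same prompt.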

    Anthropic has provided a very helpful prompt library for Claude, which will also work with most AI chatbots.

    AI Guardrails

    There are real risks to using AI, especially the free tools listed above. You can read about them in detail here, or even ask AI to tell you, but the major dangers are:

    • Freely available AI will not protect your data privacy. Unless you have internal or enterprise software with a privacy agreement at your institution, assume everything you share with AI is public. Protected or confidential information should not be entered into a prompt.
    • AI fabricates, or hallucinates, as it’s sometimes called. It will make up facts that sound deceptively plausible. If you need accurate information, it’s best to consult an expert or trusted sources.
    • You don’t own copyright on AI-created work. In the United States, only human-produced work can be copyrighted.
    • Most of these tools are trained only up to a certain date, often a year or more ago for free chatbots. If you need up-to-the-minute information, use your favorite web browser.

  • Dr. Jennifer T. Edwards: A Texas Professor Focused on Artificial Intelligence, Health, and Education: Preparing Our Higher Education Institutions for the Future

    As we prepare for the upcoming year, I have to stop and think about the future of higher education. The pandemic changed our students, faculty, staff, and our campus as a whole. The Education Advisory Board (EAB) provides colleges and universities across the country with resources and ideas to help the students of the future.

    I confess, I have been a complete fan of EAB and their resources for the past ten years. Their resources are at the forefront of higher education innovation.

    🏛 – Dining Halls and Food Spaces

    🏛 – Modern Student Housing

    🏛 – Hybrid and Flexible Office Spaces

    🏛 – Tech-Enabled Classrooms

    🏛 – Libraries and Learning Commons

    🏛 – Interdisciplinary Research Facilities


    Higher education institutions should also focus on their faculty and staff. When I ask my peers whether they are comfortable with the numerous changes happening across their institutions, most admit they are not. We need to prepare our teams for the future of higher education.

    Here Are the Millennial Professor’s Call to Action Statements for the Higher Education Industry

    🌎 – Higher Education Conferences and Summits Need to Provide Trainings Focused on Artificial Intelligence (AI) for Their Attendees

    🌎 – Higher Education Institutions Need to Include Faculty and Staff as Part of Their Planning Process (an Important Part)

    🌎 – Higher Education Institutions Need to Provide Wellness and Holistic Support for Faculty and Staff Who are Having Problems With Change (You Need Us and We Need Help)

    🌎 – Higher Education Institutions Need to Be Comfortable with Uncommon Spaces (Flexible Office Spaces)

    🌎 – Faculty Need to Embrace Collaboration Opportunities with Faculty at Their Institutions and Other Institutions

    Higher education will continue to transition in an effort to meet the needs of our current and incoming students. 

    For our particular university, we are striving to modify all of these items simultaneously. It is a challenge, but the changes are well worth the journey.

    Here’s the challenge for this post: “In your opinion, which one of the items on the list is MOST important for your institution?”


  • Artificial Intelligence Sparks the Interest of Federal Policymakers – CUPA-HR

    by CUPA-HR | November 15, 2023

    A growing interest in artificial intelligence and its potential impact on the workforce has sparked action by policymakers at the federal level. As employers increasingly turn to AI to fill workforce gaps, as well as improve hiring and overall job quality, policymakers are seeking federal policies to better understand the use and development of the technology. Recent actions include an executive order from the Biden administration and a Senate committee hearing on AI, both of which are detailed below.

    Executive Order on AI Use and Deployment

    On October 30, the Biden administration released an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The order urges responsible AI deployment that satisfies workforce development needs and ethical considerations.

    The executive order directs several agency heads to issue guidance and regulations to address the use and deployment of AI and other technologies in several policy areas. Some orders of particular interest to higher education HR include:

    • The secretary of labor is directed to submit a report analyzing ways agencies can support workers who may be displaced by AI.
    • The secretaries of labor, education and commerce are directed to expand education and training opportunities to provide pathways to careers related to AI.
    • The secretary of labor is ordered to publish principles and best practices for employers to help mitigate harmful impacts and maximize potential benefits of AI as it relates to employees’ well-being.
    • The secretary of labor is directed to issue guidance clarifying that employers using AI to monitor employees’ work are required to comply with protections that ensure workers are compensated for hours worked as defined under the Fair Labor Standards Act.
    • The secretary of labor is directed to publish guidance for federal contractors on nondiscrimination in hiring practices that involve the use of AI and other technology.
    • The director of the National Science Foundation is directed to “prioritize available resources to support AI-related education and AI-related workforce development through existing programs.”
    • The secretary of education is ordered to develop resources and guidance regarding AI, including resources addressing “safe, responsible and nondiscriminatory uses of AI in education.”
    • The secretary of state is ordered to establish a program to “identify and attract top talent in AI and other critical and emerging technologies at universities [and] research institutions” and “to increase connections with that talent to educate them on opportunities and resources for research and employment in the United States.”
    • The secretary of homeland security is directed to continue the department’s rulemaking process to modernize the H-1B program and to consider a rulemaking that would ease the process of adjusting noncitizens’ status to lawful permanent resident status if they are experts in AI and other emerging technologies.

    The executive order directs the agency heads to produce their respective guidance and resources within the next year. As these policies and resources begin to roll out, CUPA-HR will keep members updated on any new obligations or requirements related to AI.

    Senate HELP Committee Hearing on AI and the Future of Work

    On October 31, 2023, the Senate Employment and Workplace Safety Subcommittee held a hearing titled “AI and the Future of Work: Moving Forward Together.” The hearing provided policymakers and witnesses the opportunity to discuss the use of AI as a complementary tool in the workforce to skill and reskill American workers and help them remain a valuable asset to the labor market.

    Democrats and Republicans on the committee agreed that AI has the potential to alter the workforce in positive ways but that the growth of the use of the technology needs to be supported by a framework of regulations that do not smother its potential. According to witnesses, employers using AI currently face a patchwork of state and local laws that complicate the responsible use and growth of AI technologies. They argued that a federal framework to address the safe, responsible use of AI could help employers avoid such complications and allow AI use to continue to grow.

    Democrats on the committee also asked whether education opportunities and skills-based training on AI can help provide an employment pathway for workers. Witnesses argued that AI education is needed at the elementary and secondary level to ensure future workers are equipped with the skills needed to work with AI, and that skills-based training models to reskill workers have proven successful.

    CUPA-HR will continue to track any developments in federal AI regulations and programs and will inform members of updates.



  • Anticipating Impact of Educational Governance – Sijen

    It was my pleasure last week to deliver a mini-workshop at the Independent Schools of New Zealand Annual Conference in Auckland. I intended it to be more dialogue than monologue, and I’m not sure it landed quite where I had hoped. It is an exciting time to be thinking about educational governance, and my key message was ‘don’t get caught up in the hype’.

    Understanding media representations of “Artificial Intelligence”.

    Mapping types of AI in 2023

    We need to be wary of the hype around the term AI, Artificial Intelligence. I do not believe there is such a thing. Certainly not in the sense the popular press purports it to exist, or deems to have sprouted into existence with the advent of ChatGPT. What there is, is a clear exponential increase in the capabilities demonstrated by computational algorithms. These computational capabilities do not represent intelligence in the sense of sapience or sentience. They are not informed by senses derived from an organic nervous system. However, as we perceive these systems to mimic human behaviour, it is important to remember that they are machines.

    This does not negate the criticisms of those researchers who argue that there is an existential risk to humanity if A.I. is allowed to continue to grow unchecked in its capabilities. The language in this debate presents a challenge too. We need to acknowledge that intelligence means something different to the neuroscientist and the philosopher, and again to the psychologist and the social anthropologist. These semiotic discrepancies become unbridgeable when we start to talk about consciousness.

    In my view, there are no current Theory of Mind applications… yet. Sophia (Hanson Robotics) is designed to emulate human responses, but it does not display either sapience or sentience.

    What we are seeing, in 2023, is the extension of the ‘memory’, or scope of data inputs, into larger and larger multi-modal language models, which are programmed to see everything as language. The emergence of these polyglot super-savants is remarkable, and we are witnessing the unplanned and (in my view) cavalier mass deployment of these tools.

    Three ethical spheres for Governing Boards to reflect on in 2023

    Ethical and Moral Implications

    Educational governing bodies need to stay abreast of the societal impacts of Artificial Intelligence systems as they become more pervasive. This is more important than having a detailed understanding of the underlying technologies or the way each school’s management decides to establish policies. Boards are required to ensure such policies are in place, are realistic, can be monitored, and are reported on.

    Policies should already exist around the use of technology in supporting learning and teaching, and these can, and should, be reviewed to ensure they stay current. There are also policy implications for admissions and recruitment, selection processes (both of staff and students) and where A.I. is being used, Boards need to ensure that wherever possible no systemic bias is evident. I believe Boards would benefit from devising their own scenarios and discussing them periodically.

  • honest authors, being human – Sijen

    I briefly had a form up on my website for people to be able to contact me if they wanted to use any of my visualizations, visuals of theory in practice. I had to take it down because ‘people’ proved incapable of reading the text above it which clearly stated its purpose. They insisted on trying to persuade me they had something to flog. Often these individuals, generalists, were most likely using AI to generate blog posts on some vaguely related theme.

    I have rejected hundreds of approaches in recent years from individuals (I assume they were humans) who suggested they could write blogs for me. My site has always been a platform for me to disseminate my academic outputs, reflections, and insights. It has never been about monetizing my outputs or building a huge audience. I recognize that I could be doing a better job of networking; I am consistently attracting a couple of hundred different individuals to the site each week. But I am something of a misanthrope, so it goes against the grain to crave attention.

    We should differentiate between the spelling and grammar assistance built into many desktop writing applications and the large language models (LLMs) that generate original text based on an initial prompt. I have not been able to adjust to the nascent AI applications (Jasper, ChatGPT) in supporting my own authorship. I have used some of these applications as a kind of long-form search engine, but stylistically it just doesn’t work for me. I use the spelling and grammar checking functionality of writing tools but don’t allow it to complete my sentences for me. I regularly use generative AI applications to create illustrative artwork (Midjourney) and always attribute those outputs, just as I would if I were to download someone’s work from Unsplash.com or other similar platforms.

    For me, in 2023, the key argument is surely about the human-authenticity equation. To post blogs using more than a spelling and grammar checker and not declare this authorship assistance strikes me as dishonest. It’s simply not your work or your thoughts; you haven’t constructed an argument. I want to know what you, based on your professional experience, have to say about a specific issue. I would like it to be written in flowing prose, but I can forgive the clumsy language used by others and myself. If it’s yours.

    It makes a difference to me knowing that a poem has been born out of 40 years of human experience rather than being the product of the undoubtedly clever linguistic manipulation of large language models devoid of human experience. That is not to say that these digital artefacts are not fascinating or have no value. They are truly remarkable; a song generated by AI can be a pleasure to listen to, but not being able to trace the experiences conveyed in the song back to an individual simply makes it different. The same is true of artworks and all writing. We need to learn to differentiate between computer intelligence and human intelligence. Where the aim is ‘augmentation’, such enhancements should be identifiable.

    I want to know that if I am listening, looking, or reading any artefact, it is either generated by, or with assistance from, large generative AI models, or whether it is essentially the output of a human. This blog was created without LLM assistance. I wonder why other authors don’t declare the opposite when it’s true.

    Image credit: Midjourney 14/06/23

  • What it is and How to Use it in the Classroom – Sovorel

    I recently published a book to help all educators deal with the new technological phenomenon that came about on 30 November 2022, known as ChatGPT by OpenAI (https://chat.openai.com). My book, ChatGPT AI in Education: What it is and How to Use it in the Classroom, available as a paperback or ebook on Amazon at https://www.amazon.com/ChatGPT-AI-Education-What-Classroom-ebook/dp/B0BRWXPVB7, covers all of the main aspects of this AI as applied to education. Here is the book’s Table of Contents:

    What is AI and ChatGPT

    What is AI

    What is ChatGPT

    Exactly What Can ChatGPT Do?

    ChatGPT Limitations

    How Can ChatGPT Be Used in Education

    How to Use ChatGPT in the Classroom

    1. Use ChatGPT as an Essay/Assignment Creation Checker
    2. Prompt Skill Development Competition
    3. Reflect and Improve
    4. In-Class Preparatory Process
    5. Full Incorporation Option
    6. Reflection of Why
    7. Maximize the Localization and Personalization of the Assignment
    8. Use More Dynamic Assessment Techniques
    9. Feedback Provider
    10. Scaffolding Creator
    11. Instructor Assistance
    12. Virtual Guest Speaker
    13. Virtual Experiment Conductor or Guide
    14. Research Assistant

    Ethical Considerations

    Plagiarism and Academic Dishonesty

    AI Policy and Privacy Concerns

    Educational Institutions’ Policy on Use of AI

    Privacy Concerns

    Teachers’ Jobs Taken Over by AI

    Future of ChatGPT and AI

    More Integration

    This is Just the Beginning

    Call to Action

    Additional Resources

    AI Guides

    Videos

    Glossary

    References

    About the Author

    Feedback

    Other Available Books

    In addition to the book, I have provided a large number of guides, information, and infographics via Twitter (https://twitter.com/BrentAAnders) as well as multiple videos through the Sovorel YouTube Channel: https://www.youtube.com/@sovorel-EDU/videos



  • Empower Learners for the Age of AI: a reflection – Sijen

    During the Empower Learners for the Age of AI (ELAI) conference earlier in December 2022, it became apparent to me personally that not only does artificial intelligence (AI) have the potential to revolutionize the field of education, but that it already is. But beyond the hype and enthusiasm there are enormous strategic policy decisions to be made, by governments, institutions, faculty and individual students. Some of the ‘end is nigh’ messages circulating on social media in the light of the recent release of ChatGPT are fanciful click-bait; some, however, fire a warning shot across the bow of complacent educators.

    It is certainly true to say that if your teaching approach is to deliver content knowledge and assess the retention and regurgitation of that same content knowledge then, yes, AI is another nail in that particular coffin. If you are still delivering learning experiences the same way that you did in the 1990s, despite Google Search (b.1998) and Wikipedia (b.2001), I am amazed you are still functioning. What the emerging fascination with AI is delivering is an accelerated pace to the self-reflective processes that all university leadership should be undertaking continuously.

    AI advocates argue that by leveraging the power of AI, educators can personalize learning for each student, provide real-time feedback and support, and automate administrative tasks. Critics argue that AI dehumanises the learning process, is incapable of modelling the very human behaviours we want our students to emulate, and that AI can be used to cheat. Like any technology, AI also has its disadvantages and limitations. I want to unpack these from three different perspectives, the individual student, faculty, and institutions.


    Get in touch with me if your institution is looking to develop its strategic approach to AI.


    Individual Learner

    For learners whose experience is often orientated around learning management systems, or virtual learning environments, existing learning analytics are being augmented with AI capabilities. Where in the past students might be offered branching scenarios that were preset by learning designers, the addition of AI functionality offers the prospect of algorithms that more deeply analyze a student’s performance and learning approaches, and provide customized content and feedback tailored to their individual needs. This is often touted as especially beneficial for students who may have learning disabilities or those who are struggling to keep up with the pace of a traditional classroom, but surely the benefit is universal when realised. We are not quite there yet. Identifying ‘actionable insights’ is possible; the recommended actions are harder to define.

    The downside for the individual learner will come from poorly conceived and implemented AI opportunities within institutions. Being told to complete a task by a system, rather than by a tutor, will be received very differently depending on the epistemological framework that you, as a student, operate within. There is a danger that companies presenting solutions that may work for continuing professional development will fail to recognise that a 10-year-old has a different relationship with knowledge. As an assistant to faculty, AI is potentially invaluable; as a replacement for tutor direction, it will not work for the majority of younger learners within formal learning programmes.

    Digital equity becomes important too. There will undoubtedly be students today, from K-12 through to University, who will be submitting written work generated by ChatGPT. Currently free, for ‘research’ purposes (them researching us), ChatGPT is being raved about across social media platforms for anyone who needs to author content. But for every student that is digitally literate enough to have found their way to the OpenAI platform and can use the tool, there will be others who do not have access to a machine at home, or the bandwidth to make use of the internet, or even to have the internet at all. Merely accessing the tools can be a challenge.

    The third aspect of AI implementation for individuals is around personal digital identity. Everyone, regardless of their age or context, needs to recognise that ‘nothing in life is free’. Whenever you use a free web service you are inevitably being mined for data, which in turn allows the provider of that service to sell your presence on their platform to advertisers. Teaching young people about the two fundamental economic models that operate online, subscription services and surveillance capitalism, MUST be part of every curriculum. I would argue this needs to be introduced in primary schools and built on in secondary. We know that AI data models require huge datasets to be meaningful, so our data is what fuels these AI processes.

    Faculty

    Undoubtedly faculty will gain through AI algorithms’ ability to provide real-time feedback and support: to continuously monitor a student’s progress and offer immediate feedback and suggestions for improvement. On a cohort basis this is proving invaluable already, allowing faculty to adjust the pace or focus of content and learning approaches. A skilled faculty member can also, within the time allowed to them, differentiate their instruction, helping students to stay engaged and motivated. Monitoring students’ progress through well-structured learning analytics is already available through online platforms.

    What of in-classroom teaching spaces? One of the sessions at ELAI showcased AI operating in a classroom, interpreting students’ body language and interactions, and even tracking their eyes. Teachers will tell you that class size is a prime determinant of student success. Smaller classes mean that teachers can ‘read the room’ and adjust their approaches accordingly. AI could push class sizes beyond anything an individual faculty member could plausibly claim to manage.

    One could imagine a school built with extensive surveillance capability: every classroom equipped with full audio and visual capture, physical-behaviour algorithms, eye tracking and audio analysis. In that future, advocates would suggest that the role of the faculty becomes more that of a stage manager than a subject authority. Critics would argue that a classroom without a meaningful human presence is a factory.

    Institutions

    The attraction of AI for institutions is the promise to automate administrative tasks currently carried out by teaching faculty, such as grading assignments and producing progress reports. This in theory frees up those educators to focus on other important tasks, such as providing personalised instruction and support.

    However, one concern touched on at ELAI was the danger of AI reinforcing existing biases and inequalities in education. An AI algorithm is only as good as the data it has been trained on. If that data is biased, its decisions will also be biased. This could lead to unfair treatment of certain students and could further exacerbate existing disparities in education. AI will work well with homogeneous cohorts where perpetuating accepted knowledge and approaches is the expectation, and less well with diverse cohorts where the point is to challenge assumptions.
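    The mechanism is easy to demonstrate. As a minimal sketch (the student groups, labels and counts below are entirely hypothetical, and the “model” is just a majority-vote stand-in for any system that fits whatever patterns its training data contains):

    ```python
    from collections import Counter, defaultdict

    # Hypothetical training records: (student_group, outcome) pairs.
    # Group B is under-represented and mostly labelled "fail" -- an
    # artefact of how the data was collected, not a fact about students.
    training = (
        [("A", "pass")] * 80 + [("A", "fail")] * 20
        + [("B", "fail")] * 9 + [("B", "pass")] * 1
    )

    def train_majority_model(records):
        """Learn the most common outcome per group -- a stand-in for any
        model that reproduces the regularities in its training data."""
        by_group = defaultdict(Counter)
        for group, outcome in records:
            by_group[group][outcome] += 1
        return {g: c.most_common(1)[0][0] for g, c in by_group.items()}

    model = train_majority_model(training)
    print(model)  # {'A': 'pass', 'B': 'fail'}
    ```

    Nothing in the algorithm is “unfair”; the skew in the data alone is enough to make the system write off every group-B student. Real models are more sophisticated, but the dependence on the training distribution is the same.
    
    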

    This is a problem. In a world in which we need students to be digitally and AI literate, to challenge assumptions while also recognising that some sources are verified and others are not, institutions that implement AI trained on existing cohorts are likely to restrict the intellectual growth of those that follow.

    Institutions rightly express concerns about the cost of implementing AI in education and the costs associated with monitoring its use. While the initial investment in AI technologies may be significant, the long-term cost savings and potential benefits may make it worthwhile. No one can be certain how the market will unfold. It is possible that many AI applications become so cheap under some model of surveillance capitalism as to be negligible, even free. However, many AI applications, such as ChatGPT, consume enormous computing power, little of which is cacheable or reusable, and these are likely to remain costly.

    Institutions wanting to explore the use of AI are likely to find they are being presented with additional, or ‘upgraded’ modules to their existing Enterprise Management Systems or Learning Platforms.

    Conclusion

    It is true that AI has the potential to revolutionise the field of education by providing personalised instruction and support, real-time feedback, and automation of administrative tasks. However, institutions need to be wary of the potential for bias, aware of privacy issues, and very attentive to the nature of the learning experiences they enable.


    Get in touch with me if your institution is looking to develop its strategic approach to AI.


    Image created using DALL-E
