Category: Artificial Intelligence

  • Earning Our AI Literacy License – Faculty Focus

  • The Student Assistant Supports Learning and Teaching

    Reading Time: 3 minutes

    AI is becoming a bigger part of our daily lives, and students are already using it to support their learning. In fact, our studies show that 90% of faculty feel GenAI is going to play an increasingly important role in higher ed.

    Embracing AI responsibly, with thoughtful innovation, can help students take charge of their educational journey. So, we turn to the insights and expertise of you and your students — to develop AI tools that support and empower learners, while maintaining ethical practices, accuracy and a focus on the human side of education.

    Training the Student Assistant together

    Since introducing the Student Assistant in August 2024, we have made sure that faculty, alongside students, play a central role in training it.

    Students work directly with the tool through conversations. Instructors review these exchanges to ensure the Student Assistant guides students through a collaborative, critical-thinking process — helping them find answers on their own rather than providing them directly.

    “I was extremely impressed with the training and evaluation process. The onboarding process was great, and the efforts taken by Cengage to ensure parity in the evaluation process was a good-faith sign of the quality and accuracy of the Student Assistant.” — Dr. Loretta S. Smith, Professor of Management, Arkansas Tech University

    Supporting students through our trusted sources

    The Student Assistant uses only Cengage-authored course materials — it does not search the web.

    By drawing on content aligned directly with the instructor’s chosen textbook, the Student Assistant provides reliable, real-time guidance that helps students bridge knowledge gaps — without ever relying on external sources that may lack credibility.

    Unlike tools that rely on potentially unreliable web sources, the Student Assistant ensures that every piece of guidance aligns with course objectives and instructor expectations.

    Here’s how:

    • It uses assigned Cengage textbooks, eBooks and resources, ensuring accuracy and relevance for every interaction
    • The Student Assistant avoids pulling content from the web, eliminating the risks of misinformation or content misalignment
    • It does not store or share student responses, keeping information private and secure

    By staying within our ecosystem, the Student Assistant fosters academic integrity and ensures students are empowered to learn with autonomy and confidence.
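The closed-corpus behaviour described above can be illustrated with a small sketch. This is not Cengage's implementation — the function, the topic index and the wording are hypothetical — but it shows the idea of drawing guidance only from assigned materials and nudging rather than answering outright:

```python
# Illustrative sketch only -- not Cengage's implementation. Guidance is
# drawn solely from an index of assigned course materials, never the web.

COURSE_MATERIALS = {  # hypothetical index built from an assigned textbook
    "supply and demand": "Chapter 3 explains how price balances supply and demand.",
    "elasticity": "Chapter 4 introduces price elasticity of demand.",
}

def retrieve_guidance(question: str) -> str:
    """Point the student at assigned material; never answer from the open web."""
    q = question.lower()
    for topic, passage in COURSE_MATERIALS.items():
        if topic in q:
            # Guide toward the relevant section rather than handing over an answer.
            return f"Revisit this section of your textbook: {passage}"
    # No match: nudge the student to reframe, still within course scope.
    return "Try rephrasing your question using a key term from this week's reading."

print(retrieve_guidance("I'm confused about elasticity"))
```

A question outside the assigned corpus (say, about cryptocurrency in an economics intro course) gets a redirect back to course terms instead of an external lookup.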

    “The Student Assistant is user friendly and adaptive. The bot responded appropriately and in ways that prompt students to deepen their understanding without giving away the answer.” — Lois McWhorter, Department Chair for the Hutton School of Business at the University of the Cumberlands

    Personalizing the learning journey

    56% of faculty cited personalization as a top use case for GenAI to help enhance the learning experience.

    The Student Assistant enhances student outcomes by offering a personalized educational experience. It provides students with tailored resources that meet their unique learning needs right when they need them. With personalized, encouraging feedback and opportunities to connect with key concepts in new ways, students gain a deeper understanding of their coursework. This helps them close learning gaps independently and find the answers on their own, empowering them to take ownership of their education.

    “What surprised me most about using the Student Assistant was how quickly it adapted and adjusted to feedback. While the Student Assistant helped support students with their specific questions or tasks, it did so in a way that allowed for a connection. It was not simply a bot that pointed you to the correct answer in the textbook; it assisted students similar to how a professor or instructor would help a student.” — Dr. Stephanie Thacker, Associate Professor of Business for the Hutton School of Business at the University of the Cumberlands

    Helping students work through the challenges

    The Student Assistant is available 24/7 to help students practice concepts without the need to wait for feedback, enabling independent learning before seeking instructor support.

    With just-in-time feedback, students can receive guidance tailored to their course, helping them work through challenges on their own schedule. By guiding students to discover answers on their own, rather than providing them outright, the Student Assistant encourages critical thinking and deeper engagement.

    “Often students will come to me because they are confused, but they don’t necessarily know what they are confused about. I have been incredibly impressed with the Student Assistant’s ability to help guide students to better understand where they are struggling. This will not only benefit the student but has the potential to help me be a better teacher, enable more critical thinking and foster more engaging classroom discussion.” — Professor Noreen Templin, Department Chair and Professor of Economics at Butler Community College

    Want to start using the Student Assistant for your courses?

    The Student Assistant, embedded in MindTap, is available in beta with select titles, such as “Management,” “Human Psychology” and “Principles of Economics” — with even more coming this fall. Find the full list of titles that currently feature the Student Assistant, plus learn more about the tool and AI at Cengage right here.

  • Here’s where AI, VR and AR are boosting learning in higher ed

    Australian universities and TAFEs are embracing and combining emerging technologies like artificial intelligence (AI), virtual reality (VR) and augmented reality (AR).

    These innovations are reshaping the further and higher education sectors, offering more engaging, accessible, data-based learning experiences for students, educators and institutions alike.

    As students and institutions seek value amidst economic and work-life challenges, these technologies are crucial in delivering sustainable and scalable skilling and workforce-development goals. Integrating AI, VR, and AR can provide more personalised and cost-effective learning pathways for students facing daily pressures, making education more accessible and financially viable.

    The transformative role of AI in personalised learning

    AI is becoming a game-changer in Australian education by enabling personalised learning and providing data-driven insights. AI-powered platforms can analyse the complex interplay of factors impacting student performance and customise immersive content delivery to improve persistence, resilience and success.

    This integrated approach can serve personalised springboard content that matches students’ strengths, promotes growth in areas of weakness, and builds both capability and confidence.

    In this way, AI is not just about student learning; it also directly benefits teachers and professional staff. It streamlines the development of educational materials, from video and interactive content to branched lessons and adaptive learning paths.

    A few Australian higher and vocational education institutions have already demonstrated this by exploring the affordances of AI-driven platforms to offer personalised learning programs tailored to students’ career goals and development needs.

    Researchers from the University of South Australia are proving how AI can enhance students’ learning outcomes, equip teachers with advanced education tools and overhaul the education sector for good.

    At the University of Sydney, AI-driven learning platforms offer personalised learning experiences via the university’s generative AI platform, Cogniti, which shows that generative AI is a powerful way to support teachers and their teaching, not supplant them.

    Immersive learning through VR

    Virtual reality also continues to revolutionise Australian further and higher education, providing immersive learning environments that make complex subjects more accessible and engaging.

    From medical schools to engineering programs and advanced manufacturing, VR allows students to engage with practical scenarios that realistically present workplace problems, assess skills application and assess complex tasks.

    At TAFE NSW, VR shows tremendous promise in scaling high-quality, safe, immersive learning-by-doing training.

    Its Ultimo campus utilises a high-tech, remarkably lifelike canine mannequin to provide aspiring veterinary nurses with invaluable hands-on training.

    Recently imported from the USA, this highly advanced model enables animal studies and veterinary nursing students to develop essential clinical skills, including intubation, CPR, bandaging and ear cleaning.

    With VR as a training tool, TAFE NSW Ultimo plumbing students can learn, in a safe and protected environment, to recognise the risk of return electrical current travelling via copper pipes into a residence, which can cause serious, even fatal, electric shock.

    Additionally, its welding students were able to identify and solve potentially hazardous scenarios when preparing for welding work.

    AR brings practical training to life

    AR is another immersive technology revolutionising Australian education by deepening the interaction between students and their learning materials. AR overlays digital content in the real world, making abstract concepts more tangible and understandable.

    AR is broadly applicable across diverse fields such as healthcare, technical trades, and construction, allowing students to practice and refine their skills in a controlled, simulated environment.

    At TAFE Queensland, welding students use AR to identify and solve potentially hazardous scenarios when preparing for welding work. 

    With a screen inside the helmet, students position their virtual welding torch, with sparks flying like in real life, against a plastic board and press the torch trigger to see the welds they have made.

    The screen flashes red when they are incorrect and gives them a score at the end. Using AR in welding has reduced raw material wastage by 68 per cent at a time of scarcity.

    TAFE Box Hill Institute’s Advanced Welder Training Centre is equipped with the latest augmented reality simulators, allowing students to use best-practice technology and quality systems in a hands-on environment.

    It was developed in collaboration with Weld Australia, which represents Australian welding professionals, and will help address the current shortage of qualified and skilled welders in Australia.

    Monash University’s Engineering Student Pilot Plant is designed to reflect real-world industrial environments and requirements.

    AR experiences are being developed in Vuforia Studio using 3D CAD models of the pilot plant, enabling visualisation of proposed equipment before installation.

    These AR interfaces will integrate with Internet of Things (IoT) devices, Digital Twin models and process simulations, creating an AR-based Human Machine Interface (HMI) that enhances on-site accessibility by providing remote, simultaneous interaction with the physical equipment and its Digital Twin.

    The future of Australian further and higher education

    The future of further and higher education in Australia will likely see these advanced digital technologies integrated further into the curriculum, offering new opportunities and skills for students to thrive in a competitive, tech-driven environment.

    Australia’s educational institutions have a rich history of effectively using educational technology to further learning and teaching.

    Assessing and leveraging rapidly evolving tools like AR and Gen AI will ensure they remain at the forefront of global education by providing students with the relevant and engaging learning experiences they need to succeed.

    Tony Maguire is regional director of ANZ at global learning technology company D2L.

  • HESA’s AI Observatory: What’s new in higher education (January 31, 2025)

    Transformation of education

    Leading Through Disruption: Higher Education Leaders Assess AI’s Impacts on Teaching and Learning

    Rainie, L. and Watson, E. AAC&U and Elon University.

    Report from a survey of 337 college and university leaders that provides a status report on the fast-moving changes taking place on US campuses. Key takeaways: faculty use of AI tools trails significantly behind student use; more than a third of leaders surveyed perceive their institution to be below average or behind others in using GenAI tools; 59% say that cheating has increased on their campus since GenAI tools became widely available; and 45% think the impact of GenAI on their institutions in the next five years will be more positive than negative.

    Four objectives to guide artificial intelligence’s impact on higher education

    Aldridge, S. Times Higher Education. January 27th, 2025

    The four objectives are: 1) ensure that curricula prepare students to use AI in their careers and to add human-skills value that helps them succeed alongside the expanded use of AI; 2) employ AI-based capacities to enhance the effectiveness and value of the education delivered; 3) leverage AI to address specific pedagogical and administrative challenges; and 4) address the pitfalls and shortcomings of using AI in higher ed, and develop mechanisms to anticipate and respond to emerging challenges.

    Global perspectives

    DeepSeek harnesses links with Chinese universities in talent war

    Packer, H. Times Higher Education. January 31st, 2025

    The success of artificial intelligence platform DeepSeek, which was developed by a relatively young team including graduates and current students from leading Chinese universities, could encourage more students to pursue opportunities at home amid a global race for talent, experts have predicted.

    Teaching and learning

    Trends in AI for student assessment – A roller coaster ride

    MacGregor, K. University World News. January 25th, 2025

    Insights from (and recording of) the University World News webinar “Trends in AI for student assessment”, held on January 21st. 6% of audience members said that they did not face significant challenges in using GenAI for assessment, 53% identified “verifying the accuracy and validity of AI-generated results” as a challenge, 49% said they lacked training or expertise in using GenAI tools, 45% identified “difficulty integrating AI tools within current assessment systems”, 41% were challenged in addressing ethical concerns, 30% found “ensuring fairness and reducing bias in AI-based assessments” challenging, 25% identified “protecting student data privacy and security” as a challenge, and 19% said “resistance to adopting AI-driven assessment” was challenging.

    Open access

    Charting a course for open education resources in an AI era

    Wang, T. and Mishra, S. University World News. January 24th, 2025

    The digital transformation of higher education has positioned open educational resources (OER) as essential digital public goods for the global knowledge commons. As emerging technologies, particularly artificial intelligence (AI), reshape how educational content is created, adapted and distributed, the OER movement faces both unprecedented opportunities and significant challenges in fulfilling its mission of democratising knowledge access.

    The Dubai Declaration on OER, released after the 3rd UNESCO World OER Congress held in November 2024, addresses pressing questions about AI’s role in open education.

  • HESA’s AI Observatory: What’s new in higher education (January 17, 2025)

    Transformation of education

    The McDonaldisation of higher education in the age of AI

    Yoonil Auh, J. University World News. December 11th, 2024.

    Reflection on how AI’s impact on higher education aligns with the principles of McDonaldisation (efficiency, calculability, predictability and control), what opportunities and challenges it creates, and how institutions are responding

    Decolonization

    AI and digital neocolonialism: Unintended impacts on universities

    Yoonil Auh, J. University World News. July 12th, 2024. 

    The evolution of AI risks reinforcing neocolonial patterns, underscoring the complex ethical implications associated with their deployment and broader impact

    Workforce preparation

    As workers seek guidance on AI use, employers value skilled graduates

    Ascione, L. eCampusNews. December 9th, 2024.

    A new Wiley survey highlights that 40% of respondents struggle to understand how to integrate AI into their work and 75% lack confidence in AI use, while 34% of managers feel equipped to support AI integration

    California students want careers in AI. Here’s how colleges are meeting that demand

    Brumer, D. and Garza, J. Cal Matters. October 20th, 2024. 

    California’s governor announced the first statewide partnership with a tech firm, Nvidia, to bring AI curriculum, resources and opportunities to California’s public higher education institutions. The partnership will bring AI tools to community colleges first.

    Let’s equip the next generation of business leaders with an ethical compass

    Côrte-Real, A. Times Higher Education. October 22nd, 2024. 

    In a world driven by AI, focusing on human connections and understanding is essential for achieving success. While AI can standardize many processes, it is the unique human skills – such as empathy, creativity, and critical thinking – that will continue to set individuals and organizations apart.

    How employer demand trends across two countries demonstrate need for AI skills

    Stevens, K. EAB. October 10th, 2024. 

    Study reviewing employer demands in the US and in Ireland to better understand how demand for AI skills differ across countries, and examine if these differences are significant enough to require targeted curricular design by country

    Research

    We’re living in a world of artificial intelligence – it’s academic publishing that needs to change

    Moorhouse, B. Times Higher Education. December 13th, 2024.

    Suggestions to shift mindsets towards GenAI tools to restore trust in academic publishing

    Teaching and learning

    The AI-Generated Textbook That’s Making Academics Nervous

    Palmer, K. Inside Higher Ed. December 13th, 2024. 

    A comparative literature professor at UCLA used AI to generate the textbook for her medieval literature course notably with the aim to make course material more financially accessible to her students – but the academic community reacted strongly

    GenAI impedes student learning, hitting exam performance

    Sawahel, W. University World News. December 12th, 2024.

    A study conducted in Germany using GenAI detection systems showed that students who used GenAI scored significantly lower in essays

    The renaissance of the essay starts here

    Gordon, C. and Compton, M. Times Higher Education. December 9th, 2024. 

    A group of academics from King’s College London, the London School of Economics and Political Science, the University of Sydney and Richmond American University came together to draft a manifesto on the future of the essay in the age of AI, where they highlight problems and opportunities related to the use of essays, and propose ways to rejuvenate its use

    These AI tools can help prepare future programmers for the workplace

    Rao, R. Times Higher Education. December 9th, 2024.

    Reflection on how curricula should incorporate the use of AI tools, with a specific focus on programming courses

    The future is hybrid: Colleges begin to reimagine learning in an AI world

    McMurtrie, B. The Chronicle of Higher Education. October 3rd, 2024.

    Reflection on the state of AI integration in teaching and learning across the US

    Academic integrity

    Survey suggests students do not see use of AI as cheating

    Qiriazi, V. et al. University World News. December 11th, 2024. 

    Overview of topics discussed at the recent plenary of the Council of Europe Platform on Ethics, Transparency and Integrity in Education

    Focusing on GenAI detection is a no-win approach for instructors

    Berdahl, L. University Affairs. December 11th, 2024

    Reflection on potential equity, ethical, and workload implications of AI detection 

    The Goldilocks effect: finding ‘just right’ in the AI era

    MacCallum, K. Times Higher Education. October 28th, 2024. 

    Discussion on when AI use is ‘too much’ versus when it is ‘just right’, and how instructors can allow students to use GenAI tools while still maintaining ownership of their work

  • HESA’s AI Observatory: What’s new in higher education (December 1, 2024)

    Good evening,

    In my last AI blog, I wrote about the recent launch of the Canadian AI Safety Institute, and other AISIs around the world. I also mentioned that I was looking forward to learning more about what would be discussed during the International Network for AI Safety meeting that would take place on November 20th-21st.

    Well, here’s the gist of it. Representatives from Australia, Canada, the European Commission, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the US gathered last week in San Francisco to “help drive technical alignment on AI safety research, testing and guidance”. They identified their first four areas of priority:

    • Research: We plan, together with the scientific community, to advance research on risks and capabilities of advanced AI systems as well as to share the most relevant results, as appropriate, from research that advances the science of AI safety.
    • Testing: We plan to work towards building common best practices for testing advanced AI systems. This work may include conducting joint testing exercises and sharing results from domestic evaluations, as appropriate.
    • Guidance: We plan to facilitate shared approaches such as interpreting tests of advanced systems, where appropriate.
    • Inclusion: We plan to actively engage countries, partners, and stakeholders in all regions of the world and at all levels of development by sharing information and technical tools in an accessible and collaborative manner, where appropriate. We hope, through these actions, to increase the capacity for a diverse range of actors to participate in the science and practice of AI safety. Through this Network, we are dedicated to collaborating broadly with partners to ensure that safe, secure, and trustworthy AI benefits all of humanity.

    Cool. I mean, of course these priority areas are all key to the work that needs to be done… But the network does not provide concrete details on how it actually plans to fulfill these priority areas. I guess now we’ll just have to wait and see what actually comes out of it all.

    On another note – earlier in the Fall, one of our readers asked us if we had any thoughts about how a win from the Conservatives in the next federal election could impact the future of AI in the country. While I unfortunately do not own a crystal ball, let me share a few preliminary thoughts. 

    In May 2024, the House of Commons released the Report of the Standing Committee on Human Resources, Skills and Social Development and the Status of Persons with Disabilities regarding the Implications of Artificial Intelligence Technologies for the Canadian Labour Force.

    TL;DR, the recommendations of the Standing Committee notably include: to review federal labour legislation to protect diverse workers’ rights and privacy; to collaborate with provinces, territories and labour representatives to develop a framework to support ethical adoption of AI in workplaces; to invest in AI skills training; to offer financial support to SMEs and non-profits for AI adoption; to investigate ways to utilize AI to increase operational efficiency and productivity; and for Statistics Canada to monitor labour market impacts of AI over time.

    Honestly – these are quite respectable recommendations, that could lead to significant improvements around AI implementation if they were to be followed through. 

    Going back to the question about the Conservatives, then… The Standing Committee report includes a Dissenting Report from the Conservative Party, which states that the report “does not go sufficiently in depth in how the lack of action concerning these topics [regulations around privacy, the poor state of productivity and innovation and how AI can be used to boost efficiencies, etc.] creates challenges to our ability to manage AI’s impact on the Canadian workforce”. In short, it says do more – without giving any recommendation whatsoever about what that more should be.

    On the other side, we know that one of the reasons why Bill C-27 is stagnating is opposition. The Conservatives notably accused the Liberal government of seeking to “censor the Internet” – the Conservatives are opposed to governmental influence (i.e., regulation) on what can or can’t be posted online. But we also know that one significant risk of the rise of AI is the growth of disinformation, deepfakes, and more. So… maybe a certain level of “quality control” or fact-checking would be a good thing?

    All in all, it seems like the Conservatives would in theory support a growing use of AI to fight Canada’s productivity crisis and reduce red tape. In another post earlier this year, Alex talked about what a Poilievre government science policy could look like, and we both agree that the Conservatives at least appear to be committed to investing in technology. However, how they would regulate the tech to ensure ethical use remains to be seen. If you have any more thoughts on that, though, I’d love to hear them. Leave a comment or send me a quick email!

    And if you want to continue discussing Canada’s role in the future of AI, make sure to register for HESA’s AI-CADEMY so you do not miss our panel “Canada’s Policy Response to AI,” where we’ll have the pleasure of welcoming Rajan Sawhney, Minister of Advanced Education (Government of Alberta), Mark Schaan, Deputy Secretary to the Cabinet on AI (Government of Canada), and Elissa Strome, Executive Director of the Pan-Canadian AI Strategy (CIFAR), and where we’ll discuss what governments’ role should be in shaping the development of AI.

    Enjoy the rest of your weekend, all!

    – Sandrine Desforges, Research Associate

    sdesforges@higheredstrategy.com 

  • Department of Labor Publishes AI Framework for Hiring Practices

    by CUPA-HR | October 16, 2024

    On September 24, the Department of Labor (DOL), along with the Partnership on Employment & Accessible Technology (PEAT), published the AI & Inclusive Hiring Framework. The framework is intended to be a tool to support the inclusive use of artificial intelligence in employers’ hiring technology, specifically for job seekers with disabilities.

    According to DOL, the framework was created in support of the Biden administration’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence. Issued in October 2023, the executive order directed the Secretary of Labor, along with other federal agency officials, to issue guidance and regulations to address the use and deployment of AI and other technologies in several policy areas. Notably, it also directed DOL to publish principles and best practices for employers to help mitigate harmful impacts and maximize potential benefits of AI as it relates to employees’ well-being.

    The new AI framework includes 10 focus areas covering issues that impact the recruitment and hiring of people with disabilities, with information on maximizing the benefits and managing the risks of assessing, acquiring and employing AI hiring technology.

    The 10 focus areas are:

    1. Identify Employment and Accessibility Legal Requirements
    2. Establish Roles, Responsibilities and Training
    3. Inventory and Classify the Technology
    4. Work with Responsible AI Vendors
    5. Assess Possible Positive and Negative Impacts
    6. Provide Accommodations
    7. Use Explainable AI and Provide Notices
    8. Ensure Effective Human Oversight
    9. Manage Incidents and Appeals
    10. Monitor Regularly

    Under each focus area, DOL and PEAT provide key practices and considerations for employers to implement as they work through the AI framework. It is important to note, however, that the framework does not have force of law and that employers do not need to implement every practice or goal for every focus area at once. The goal of the framework is to lead employers to inclusive practices involving AI technology over time.

    DOL encourages HR personnel — along with hiring managers, DEIA practitioners, and others — to familiarize themselves with the framework. CUPA-HR will keep members apprised of any future updates relating to the use of AI in hiring practices and technology.



    Source link

  • AI in Practice: Using ChatGPT to Create a Training Program

    by Julie Burrell | September 24, 2024

    Like many HR professionals, Colorado Community College System’s Jennifer Parker was grappling with an increase in incivility on campus. She set about creating a civility training program that would be convenient and interactive. However, she faced a considerable hurdle: the challenges of creating a virtual training program from scratch, solo. Parker’s creative answer to one of these challenges — writing scripts for her under-10-minute videos — was to put ChatGPT to work for her. 

    How did she do it? This excerpt from her article, A Kinder Campus: Building an AI-Powered, Repeatable and Fun Civility Training Program, offers several tips.

    Using ChatGPT for Training and Professional Development

    I love using ChatGPT. It is such a great tool. Let me say that again: it’s such a great tool. I look at ChatGPT as a brainstorming partner. I don’t use it to write my scripts, but I do use it to get me started or to fix what I’ve written. I ask questions that I already know the answer to. I’m not using it for technical guidance in any way.

    What should you consider when you use ChatGPT for scriptwriting and training sessions?

    1. Make ChatGPT an expert. In my prompts, I often use the phrase, “Act like a subject matter expert on [a topic].” This helps define both the need and the audience for the information. If I’m looking for a list of reasons why people are uncivil on college campuses, I might prompt with, “Act like an HR director of a college campus and give me a list of ways employees are acting uncivil in the workplace.” Using the phrase above puts parameters on the types of answers ChatGPT will offer, as well as shaping the perspective of the answers as being for and about higher ed HR.
    2. Be specific about what you’re looking for. “I’m creating a training on active listening. This is for employees on a college campus. Create three scenarios in a classroom or office setting of employees acting unkind to each other. Also provide two solutions to those scenarios using active listening. Then, create a list of action steps I can use to teach employees how to actively listen based on these scenarios.” Being as specific as possible can help get you where you want to go. Once I get answers from ChatGPT, I can then decide if I need to change direction, start over or just get more ideas. There is no wrong step. It’s just you and your partner figuring things out.
    3. Sometimes ChatGPT can get stuck in a rut. It will start giving you the same or similar answers no matter how you reword things. My solution is to start a new conversation. I also change the prompt. Don’t be afraid to play around, to ask a million questions, or even tell ChatGPT it’s wrong. I often type something like, “That’s not what I’m looking for. You gave me a list of______, but what I need is ______. Please try again.” This helps the system to reset.
    4. Once I get close to what I want, I paste it all in another document, rewrite, and cite my sources. I use this document as an outline to rewrite it all in my own voice. I make sure it sounds like how I talk and write. This is key. No one wants to listen to ChatGPT’s voice. And I guarantee that people will know if you’re using its voice — it has a very conspicuous style. Once I’ve honed my script, I ensure that I find relevant sources to back the information up and cite the sources at the end of my documents, just in case I need to refer to them.
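Parker's first and third tips lend themselves to a small helper that assembles the chat messages. This is a minimal sketch: the function name and message format are illustrative assumptions, not from the article, and the SDK call in the closing comment is just one way such a list could be sent.

```python
# Minimal sketch of the prompt patterns above. The helper and its
# message format are illustrative assumptions, not from the article.

def build_script_prompt(persona, task, retry_feedback=None):
    """Assemble a chat message list using the 'act like an expert' pattern."""
    messages = [
        # Tip 1: cast the model as a subject-matter expert to frame the audience.
        {"role": "user", "content": f"Act like {persona}. {task}"}
    ]
    if retry_feedback:
        # Tip 3: when answers get repetitive, say what was wrong and ask again.
        messages.append({
            "role": "user",
            "content": f"That's not what I'm looking for. {retry_feedback} Please try again.",
        })
    return messages

msgs = build_script_prompt(
    "an HR director of a college campus",
    "Give me a list of ways employees are acting uncivil in the workplace.",
)
# A list like this could then be sent to a chat API, for example the OpenAI
# SDK's client.chat.completions.create(model=..., messages=msgs).
```

Scripting the pattern this way keeps the persona and the reset phrasing consistent across the many short videos a solo trainer has to produce.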

    What you’ll see here is an example of how I used ChatGPT to help me write the scripts for the micro-session on conflict. It’s an iterative but replicable process. I knew what the session would cover, but I wanted to brainstorm with ChatGPT.

    Once I’ve had multiple conversations with the chatbot, I go back through the entire script and pick out what I want to use. I make sure it’s in my own voice and then I’m ready to record. I also used ChatGPT to help with creating the activities and discussion questions in the rest of the micro-session.

    I know using ChatGPT can feel overwhelming but rest assured that you can’t really make a mistake. (And if you’re worried the machines are going to take over, throw in a “Thank you!” or “You’re awesome!” occasionally for appeasement’s sake.)

    About the author: Jennifer Parker is assistant director of HR operations at the Colorado Community College System.

    More Resources

    • Read Parker’s full article on creating a civility training program with help from AI.
    • Learn more about ChatGPT and other chatbots.
    • Explore CUPA-HR’s Civility in the Workplace Toolkit.


