    What gets lost when the first draft is always polished?

    Something has shifted.

    I recognised immediately that the email in my inbox had been written with the help of AI. The message was structured, measured, and neatly aligned with the conventions of academic communication.

    There was nothing inappropriate about it. If anything, it stood out for its clarity. My immediate response was not irritation but reflection.

    Something fundamental in the landscape of student communication had shifted, and the implications were not merely technological.

    What struck me most was not the technology itself, but what it revealed about students’ relationship to academic communication.

    For many, AI has become the safest place to begin shaping their academic voice – a space where they can get help without fear of sounding incompetent, impolite, or out of place.

    However, as AI becomes the first point of contact, it also reshapes how students build confidence, seek support, and come to understand themselves as legitimate participants in academic life.

    Email writing has long played a low-key, unacknowledged role in higher education. It shapes the quality of interactions between students and staff, influences how confident students feel when seeking support, and often serves as a first step in their academic identity formation.

    Yet it has rarely been treated as an academic or professional skill requiring structured teaching. Instead, tone, clarity, and cultural norms tend to be acquired informally, often with uneven outcomes.

    The shift in student email practices, however, did not occur overnight. It emerged in distinct waves. The earliest emails may have missed certain academic conventions, but they reflected something important – the unfiltered, often anxious, and deeply human nature of student expression.

    “Hello Dear Tutor, please can you help me with XYZ!!!”

    What followed was a period in which translation apps and early AI-adjacent tools began shaping student messages. These drafts were typically literal, tonally inconsistent, or overly formal – sometimes more cumbersome than the messages students hoped to refine.

    By 2024, a third wave appeared – emails that were grammatically precise but noticeably mechanical, with phrasing that felt oddly detached or mismatched to the emotional context.

    The most recent wave, however, is markedly different. Today’s AI-assisted emails display tone, register, and interpersonal warmth that align strikingly well with UK higher education communication norms.

    They are clearer, easier to respond to, and more polished than many human-written counterparts – yet their increasing fluency raises questions about what may slowly be lost when AI mediates the first layer of human communication.

    Over the course of a three-year structured personal tutor pilot for postgraduate taught students, I delivered lecture-style workshops on email etiquette and intercultural communication across three cohorts – sessions designed to help students navigate tone, clarity, structure, and cultural awareness in their correspondence.

    These workshops were well received and often revealed the specific areas where students felt uncertain. But over time, as AI-generated drafts became more polished, student feedback pointed to a shift – they were less concerned with the mechanics of email writing and more interested in developing the judgement required for sensitive or complex interactions.

    Students increasingly arrived with structurally sound drafts but lacked confidence in how to handle the interpersonal dimensions of communication. This change has influenced how I teach these sessions and reflects a wider sector trend – as AI takes on the mechanical aspects of communication, pedagogical attention is moving toward the interpretive and relational elements that technology cannot replicate.

    Unequal support

    Universities have never been wholly indifferent to the communication needs of students. The presence of English for Academic Purposes tutors, academic skills tutors, study skills teams, and personal tutoring structures demonstrates a longstanding recognition of the complexities students face, particularly in an international setting.

    These roles bring linguistic, cultural, and interpersonal expertise into academic environments in ways that strengthen student experience and foster meaningful connection.

    However, what has changed recently is where students turn first for support. A student unsure about how to phrase a sensitive request, or anxious about sounding impolite, once came to a tutor for reassurance. Increasingly, their first point of reference is an AI tool.

    Students describe this shift not as a shortcut but as a source of confidence – a way to express themselves without fearing misinterpretation, judgement, or linguistic missteps.

    For international students navigating unfamiliar communicative conventions, and for home students who have not been explicitly taught professional email writing, the appeal is obvious.

    AI offers a sense of linguistic security – an initial framework that enables communication to begin – echoing findings from Jisc and Advance HE surveys which show that students often feel uncertain about tone, clarity, and how their messages will be interpreted.

    As sector guidance increasingly frames AI literacy as part of students’ digital capabilities, there is a risk that communication is treated as a technical skill rather than a relational one – something that can be optimised rather than learned through reflection and support.

    What AI can’t do

    In many cases, AI-generated messages are easier for staff to interpret. They arrive with clearer structure, fewer ambiguities, and a tone that aligns more closely with academic expectations. This has practical benefits – misunderstandings are reduced, responses come more quickly, and students feel more able to reach out.

    Yet beneath these advantages lie deeper pedagogical questions. Before the widespread use of AI, a substantial part of my work as a personal tutor involved helping students formulate messages that required sensitivity, boundary-setting, or careful negotiation. These conversations were not specifically about vocabulary or grammar – they were about judgement – understanding what to disclose, how much to say, how to position oneself, and how to communicate with integrity in complex interpersonal contexts.

    Consider the example of a student struggling with group dynamics. An AI tool might produce a concise, polite email requesting guidance, but this does not equip the student to navigate the underlying challenge – describing the issue accurately, advocating for fairness, protecting relationships with peers, and anticipating the consequences of escalation.

    These are sophisticated communicative decisions that depend on confidence, self-awareness, and the ability to read social context. AI, at its current stage, cannot develop those capacities.

    This distinction is critical. Although AI can generate language, it cannot cultivate the reflective judgement students require to express themselves authentically and ethically, which is arguably imperative to the development of their critical thinking skills.

    Dependency or gap?

    Some worry that students may become overly reliant on AI or that their individuality may be flattened by formulaic phrasing. These concerns are not without merit, but they require nuance.

    Many students who lean heavily on AI do so because they were never formally taught the communicative norms of academia. Their “dependency” reflects systemic gaps in the provision of communication education more than any inherent shortcoming.

    Others fear that AI obscures intercultural difference. While AI can smooth the surface of communication, it does not – and cannot – replace the development of intercultural understanding. Students still need to learn how communication functions within and across cultural contexts, even if AI helps them begin those conversations with greater confidence.

    With appropriate guidance, students can learn not to accept AI-generated outputs uncritically but to review, adapt, and personalise them. The tool becomes a starting point in a process of learning and refinement rather than an end product in itself.

    A starting point

    The most constructive approach is to understand AI as one element within a broader ecology of communication support. Students may begin with an AI-generated draft, but human judgement remains indispensable. Here, the educator’s role shifts toward helping students decide when AI-generated phrasing is appropriate, when it is insufficient, and when a situation requires a more nuanced or personalised approach.

    This does not diminish the importance of academic communication support. Rather, it redefines it. Workshops and tutorials that once focused on sentence-level clarity may now need to prioritise higher-order skills – articulating uncertainty, navigating conflict, expressing boundaries, and understanding the ethics of communication in digital environments.

    Small reflective exercises can deepen this learning, prompting students to consider why they phrased a request in a particular way, what outcomes they were hoping for, and how AI either supported or limited their intentions.

    Teaching judgement

    If we accept that AI will remain embedded in student communication practices, then support systems must evolve accordingly. This does not mean discouraging AI use. Rather, it means situating AI within a broader pedagogical framework that helps students engage with it critically and responsibly.

    Educators might place greater emphasis on discerning when AI-generated phrasing is contextually appropriate, understanding how tone functions in sensitive or emotionally charged interactions, recognising when AI output should be revised, supplemented, or entirely set aside, and developing a professional voice that reflects authenticity, clarity, and ethical judgement.

    These shifts align communication support more closely with the realities of academic and professional life, where clarity can be assisted by technology but meaning must still be crafted by the human mind.

    AI has undoubtedly made some aspects of student communication easier. It has reduced uncertainty, accelerated clarity, and enabled students to express themselves in ways that feel more confident and secure. Nevertheless, the deeper work of communication – the development of judgement, voice, and relational understanding – remains firmly within the human domain.

    The task for higher education is not to resist AI but to understand how it reshapes the conditions under which communicative skills are learned. Our responsibility, therefore, is to ensure that students are able not only to send clear messages but also to navigate the complexities of academic and interpersonal life with the confidence and integrity that will serve them both personally and professionally.

    The key issue, then, is not whether students use AI to write emails, but whether higher education continues to take responsibility for teaching the judgement that surrounds communication. As AI absorbs the mechanics of clarity and tone, educators must focus more intentionally on helping students navigate uncertainty, vulnerability, boundaries, and ethical self-expression.

    The development of judgement, voice, and interpersonal understanding remains firmly within the human domain – and at the heart of higher education’s purpose.

    AI may draft the email, but it cannot teach students who they are in the process of writing it.