
AI shatters the pretence that academic polish was ever anything but gatekeeping

Universities are punishing students for doing what journals like Nature and Science explicitly permit their authors to do. This isn't confusion; it's gatekeeping masquerading as pedagogy. And in the gap between what universities demand and what the professional academic world has already accepted, we can see the death throes of an entire model of knowledge production.

Having read several universities' generative AI policies, my conclusion is clear: they are well-intentioned but fundamentally incoherent. They ask students to use a tool designed to generate content while permitting it to generate only the scaffolding, never the building itself, a distinction that is nearly impossible to monitor or self-regulate.

    The policies permit students to use AI for “ideation support” and to “create structure or outline,” but insist that “core ideas” and “core reasoning” must be the student’s own. This creates an invisible line. If a student uses AI to brainstorm ten potential essay angles and chooses one, is that idea theirs or the AI’s? If AI provides an outline, hasn’t it done significant analytical heavy lifting?

    The university is saying: “As long as you do the assembly, it’s your house.” But most educators would argue that designing the blueprint is the most creative and critical part of the process.

    What journals are already doing

Meanwhile, the professional institutions students are being trained to join have quietly resolved this confusion. The table below summarises how major publishers have done so:

Publisher policies on generative AI

| Publisher | Key points of policy | Disclosure requirements | Authorship rules | Implications for authors |
| --- | --- | --- | --- | --- |
| Taylor & Francis | Welcomes AI for idea generation, language support, and dissemination. Warns of risks (fabrication, bias). | Authors must disclose AI use in manuscripts. Editors/reviewers also guided. | AI tools cannot be authors; responsibility lies with humans. | Transparent but permissive: disclosure is mandatory, so authors should prepare a clear AI-use statement. |
| Elsevier | Allows AI for efficiency, readability, and language improvement. Prohibits AI from generating scientific content or conclusions. | Disclosure required if AI used in writing. Human oversight mandatory. | AI cannot be credited as author; authors retain full accountability. | Strictest stance: disclosure always required, and AI cannot generate substantive content. |
| Springer Nature | Permits AI-assisted copy editing without disclosure. Requires disclosure if AI used for substantive text generation, data analysis, or methods. | Must document AI use in methods (or equivalent). Copy-editing alone does not require disclosure. | AI cannot meet authorship criteria; only humans accountable. | More permissive: minor copy-editing can be done without disclosure, but substantive use must be declared. |
| Wiley | Provides detailed guidelines for ethical AI use. Supports creativity and workflow efficiency but stresses originality and integrity. | Transparency required when AI used in manuscript preparation. | AI cannot be listed as author; human authorship must be preserved. | Balanced: disclosure required, but policy emphasises ethical creativity rather than prohibition. |
| SAGE | Recognises AI's potential for idea generation, editing, and structuring. Emphasises limits: AI cannot replicate human creativity/critical thinking. | Authors must disclose AI use. Editors/reviewers guided on ethical use. | AI cannot be listed as author; responsibility remains with humans. | Similar to Taylor & Francis: disclosure is mandatory, but AI can be used for supportive tasks. |

The journals are saying: the "polish" is a technical skill that can be outsourced. What matters is the intellectual substance of the research, the diamond itself.

    This is a seismic shift. It validates what decolonial pedagogy has long argued: that the obsession with academic register is not about intellectual rigour but about gatekeeping a form of linguistic expression, what Pierre Bourdieu would call cultural capital.

    The gatekeeping model

    This confusion is not accidental. It is symptomatic of a deeper crisis in which the university can no longer coherently perform its dual function of credentialing the professional class while legitimating that process as meritocratic.

The traditional model fuses the what (the idea) and the how (the writing). Assessment then unconsciously rewards fluency in the academic code over intellectual originality. This systematically disadvantages anyone not already socialised into academic register: working-class students, first-generation students, non-native speakers, and those from non-Western educational traditions.

And here is where the class dimension becomes unavoidable: wealthier students have always had access to human "AI" in the form of private tutors, professional editors, and writing coaches. The university's AI policy effectively punishes working-class students for accessing the free version of what wealth has always bought.

The defence of this model often claims that "writing and idea development are interconnected." But this argument privileges a specific type of complexity, the kind that aligns with Western academic traditions, and dismisses other forms of knowledge as lesser. Bob Marley was not a great writer in the academic sense, but he demonstrated through song that he was capable of profound philosophical thought. No one listens to Redemption Song and thinks it would be better as a peer-reviewed journal article.

The university's insistence that writing and thinking are inseparable is not a pedagogical truth; it is epistemological imperialism that has mistaken the technology of one culture for universal human cognition.

    Disobedience

> The old world is dying and the new world struggles to be born. In this interregnum a great variety of morbid symptoms appear. – Antonio Gramsci

    What I am proposing (an idea-centric model that assesses intellectual substance separately from its expression in academic register) is not just an alternative assessment strategy. It is an act of epistemic disobedience.

Universities are preparing students for a world that no longer exists. They treat the achievement of "academic register" as a core learning outcome, the very "polish" that supposedly proves the "diamond" is real. Meanwhile, the journals students aspire to publish in are saying: "We care about the diamond; you can use a machine to help with the polish."

    University AI policy, with all its confusions and contradictions, is a morbid symptom of this crisis. It is the institution desperately trying to maintain its gatekeeping function while the professional world it claims to prepare students for has already moved on.

    The journals have accidentally revealed that the emperor has no clothes: academic register was always about exclusion, not excellence.

    Universities must now choose. They can admit that “polish” was always just gatekeeping and redesign pedagogy around the substance of thought. Or they can maintain the fiction and continue punishing students (disproportionately working-class, non-native, and non-Western students) for seeing through it.

    As the old world of easily policed, surface-level assessments dies, we must embrace the struggle of the new. For this new world to be born, we must stop gatekeeping altogether – and start building gateways.
