The Future of AI Is Uncertain, And It’s Up to Us
  • Jack Goodman, Founder of Studiosity, reviews AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor.

Is artificial intelligence (AI) going to transform our universities? Or will it destroy the need for a tertiary education? Right now, it’s impossible to tell.

If you read the media, you’re likely to think things will end up at one extreme or the other. That’s because we are living in an age of AI hype, where exaggerated claims about the technology – both on the plus side from the biggest AI engineering firms, and on the downside from those concerned about a dystopian future – are dominating the conversation.

For those of us who aren’t computer scientists or software engineers with domain expertise, wouldn’t it be helpful to have a guide to help us unpack what’s going on and figure out how to engage with this technology that may prove to be world-altering?

If you’re a head of state or a billionaire, then you probably already have an AI advisor. For the rest of us, Arvind Narayanan and Sayash Kapoor, two computer scientists at Princeton University, have kindly written AI Snake Oil as a layman’s roadmap to the current and likely future trajectory of the technology. (Alongside the book the pair have launched a website that’s full of the most current commentary and analysis.)

Narayanan and Kapoor are concerned with the full gamut of AI, not just the ‘generative’ variety that has garnered so much attention since its ‘debut’ with the arrival of ChatGPT. They helpfully separate AI into three main streams: Predictive AI, Generative AI and Content Moderation AI. All three, they argue, suffer from exaggerated claims of effectiveness, a lack of scientific evidence, and fantastical predictions about their future capabilities.

For the purposes of a higher education audience, it’s generative AI that’s of most interest, because that’s the technology that can simulate the intellectual output of an educated brain – whether in the form of text or visual imagery. The authors put genAI into its historical context: few of us realise that the neural network theory underpinning it dates back to the 1950s, and that the field has already been through repeated cycles of hype and disappointment.

Sadly, the authors aren’t particularly interested in the impact of genAI on higher education, apart from noting off-handedly that AI-generated text appears to be largely undetectable, and that financially strapped universities hoping the technology will deliver endless efficiency dividends may be sadly disappointed. At various points they mention how they encourage active engagement with AI to understand what it can and cannot do, all from the perspective of their lives at Princeton. That’s not particularly helpful given how outlandishly wealthy, privileged, and tiny that university is.

Also, the authors miss an opportunity to explore different types of genAI technologies, particularly those that may be designed to encourage learning versus others that improve human productivity by offloading cognitive effort. No doubt the latter are already transforming human work, but whether they have a place in higher education is a different question.

There is a concept in AI known as ‘alignment’: the problem of ensuring that AI systems act in accordance with human interests, and the associated risk that an uncontrolled AI may, as it approaches more powerful levels of general intelligence, act against those interests and harm (or even kill) us. The topic is controversial, and the authors devote an entire chapter to how we should think about, and respond to, technology companies’ pursuit of artificial general intelligence (AGI).

From the perspective of higher education, our sector may be better served in the immediate term by thinking about alignment in terms of the interests of educational institutions and the (mostly American) technology companies that are at the vanguard of developing genAI. The culture of incrementalism that has traditionally served universities well may not be so effective when dealing with such a rapidly approaching paradigm shift in humans’ relationship with technology.

The conclusion of AI Snake Oil is a little surprising. The authors make clear that humanity’s relationship with AI will be determined by all of us – individuals and institutions, as well as regulators and politicians. No doubt there is an opportunity for universities and their leaders to take a leading role in shaping this conversation, using their institutional resources and cultural authority to help inform the public and guide us all toward a better relationship with ever more powerful computers.

We all need to be educated, informed, and willing to speak up – so that we don’t end up living in a world where AI is dominated by the largest and most powerful corporations the planet has ever seen. That will be the worst of all possible outcomes.

Studiosity is a learning technology company that works with 100+ universities globally, serving 2.2 million university students across the UK, Australia, New Zealand, Canada, and the Middle East. Jack founded Studiosity in Sydney in 2003 with a vision to make the highest quality academic study support accessible to every student, regardless of their geographic or socio-economic circumstances.