
The subliminal beauty of AI: Set chaos to prevent crises

December 11 marks the founding date of OpenAI. Now, more than a year since the debut of ChatGPT, what does it hold for our future?

The OpenAI logo is displayed on a cell phone, with an image generated by ChatGPT's Dall-E text-to-image model on a computer monitor. (AP)

By Pranay Lal

LAST PUBLISHED 11.12.2023  |  12:00 PM IST

In 1950, Alan Turing, the British mathematician and computer scientist, provocatively asked in his seminal paper, "Can machines think?" Ever since, computer scientists have had a hunch that machines could learn on their own and build artificial intelligence (AI).

The world got a taste of this on November 30, 2022, when ChatGPT was released. It was crafted by OpenAI at the company's historic Italianate-style office in San Francisco; in its seven-year span, OpenAI has evolved into an AI pioneer.


Set up on December 11, 2015, OpenAI wasn't the first to boldly pursue AI. DeepMind had claimed that distinction five years earlier (it was acquired by Google in 2014). OpenAI started with a small team and a modest investment of $1 billion. It received backing from Silicon Valley elites such as Elon Musk (who parted ways in February 2019), Sam Altman (in the news recently), and Peter Thiel. At launch, ChatGPT faced the ire of sceptics and techno-luddites, including me. It was deemed of meagre intellect; early users labelled it clumsy, erratic, and error-prone on social media. Unsure of the powers of this new technology, several nations, including China, Iran, North Korea, and Russia, along with four others, introduced blanket bans on AI.

But the benefits of early prototypes were visible even before November last year. After Hurricane Ian ravaged southwest Florida on September 28, 2022, thousands lost their homes. A week later, about 3,500 residents in the three hardest-hit counties received an unusual message on their smartphones. A Google algorithm deployed by the non-profit GiveDirectly had used satellite images to identify them, and the message carried a surprise: $700 in cash assistance, with no strings attached.

Much earlier, in the hidden world of tech insiders, the power of AI was well known. Before COVID-19 surfaced, two modest universities in Spain harnessed an AI tool developed by a Canadian company. Their AI detected the early signals of the emergence of the virus in Wuhan, China. While lacking the variables to construct predictive models of its spread and repercussions, the team leveraged machine learning and natural language processing with data from past epidemics. What they lacked were inputs from the ground: insights into human behaviour, travel patterns, historical demography, and the intimate dance between the biology of the pathogen and the susceptibility of the populations. Had they had these variables, their prediction of the epidemic spiralling out of control could have been better.

At its core, AI, and by extension ChatGPT, operates on the principles of machine learning. It processes vast amounts of data, identifying patterns and relationships that often elude human perception, and it works faster and at a scale beyond human-operated programmes. AI feasts on data: feed it very large datasets from clinical trials, climate records, stock markets, and currencies, and it can sift through them in seconds, extracting meaningful insights and recognising patterns that would otherwise escape a human eye. From these patterns, AI develops predictive models that make educated guesses about future outcomes.
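The pattern-to-prediction loop described above can be sketched in miniature. This is a deliberately toy illustration, not how any production AI system works: an ordinary least-squares fit "learns" a trend from a small invented dataset and extrapolates it one step forward.

```python
# A minimal sketch: a model "learns" a pattern from past data and uses it
# to make an educated guess about a future outcome. All numbers here are
# hypothetical.

def fit_line(xs, ys):
    """Closed-form least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Invented yearly case counts: the "pattern" is a steady rise.
years = [0, 1, 2, 3, 4]
cases = [10, 12, 14, 16, 18]

slope, intercept = fit_line(years, cases)
prediction = slope * 5 + intercept  # educated guess for year 5
print(round(prediction))  # 20
```

Real machine-learning systems fit vastly more complex functions over millions of variables, but the principle is the same: extract a regularity from past data, then project it.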

Based on input instructions, ChatGPT organises, summarises, and writes new text. Every publicly released AI engine "reads" a massive quantum of existing data and text and "learns" how words appear in context with other words. In effect, these engines have learned to predict the next most likely word in response to a user request, a bit like the auto-complete capabilities on search engines. AI platforms respond to an extensive array of queries and tasks, and the more one uses them, the better their responses become. At first, a user needs to validate each piece of information; gradually, the output gets refined and begins to align with social and ethical standards. ChatGPT is only the first among many AI platforms to emerge.
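The auto-complete analogy can be made concrete with a toy sketch. This is not ChatGPT's actual mechanism, which uses a large neural network trained on enormous corpora; it is just a bigram counter over a tiny invented corpus that predicts the most frequently observed next word.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word follows which in a small
# corpus, then suggest the most common successor, like auto-complete.

corpus = ("the model reads text and the model finds patterns "
          "and the model predicts the next word").split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed successor of `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # model
```

In this corpus, "model" follows "the" three times, so it wins; a large language model does something loosely analogous, but with probabilities computed over long stretches of context rather than a single preceding word.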


AI can take open databases such as PubMed, a popular repository of bioscience and health research, and layer them with maps, remote-sensed data, and information on past outbreaks and transmission patterns to produce scenarios and models. Although this appears simple, AI is only as good as the person using it. To get robust information out of AI, two safeguards are needed. First, guard against manipulation: users can tweak the algorithms to bias them, which makes AI susceptible to misuse and can concentrate power in the hands of a few. In the wrong hands, AI can be catastrophic, warranting vigilant oversight. Second, the operator must recognise that the algorithm is prone to flaws, biases, and fabrications (what techies call hallucinations). A human operator's own biases, preferences, or preconceived notions can get magnified, and in crisis situations these biases can lead to flawed decision-making. How can researchers straddle these shaky foundations with unforeseen fault lines?
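The "layering" of datasets described above can be illustrated with a deliberately crude sketch. The region names and figures are invented; a real pipeline would join far richer sources, but the shape of the computation, joining two datasets on a shared key and ranking the result, is the same.

```python
# Hypothetical sketch of layering datasets: join past-outbreak records
# with population figures to produce a crude per-capita risk ranking.
# All names and numbers are invented for illustration.

outbreaks = {"RegionA": 120, "RegionB": 45, "RegionC": 300}     # past cases
population = {"RegionA": 10_000, "RegionB": 2_000, "RegionC": 50_000}

def risk_per_capita(cases, pop):
    """Cases per person, for regions present in both datasets."""
    return {r: cases[r] / pop[r] for r in cases if r in pop}

ranking = sorted(risk_per_capita(outbreaks, population).items(),
                 key=lambda kv: kv[1], reverse=True)
print(ranking[0][0])  # RegionB: fewest cases, but highest per capita
```

Note how the ranking inverts the raw counts: RegionC has the most cases but the lowest per-capita risk. This is also where operator bias creeps in, since whoever chooses the datasets and the scoring formula shapes the answer.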

Running models using AI is a risky proposition. In the real world, variables are constantly changing, and no two events will be alike. Relying on the shaky foundations of bad data and a biased AI (and AI operator) can be a recipe for disaster. Researchers need to validate and re-validate their methods, data, and queries when they present them to AI platforms. They need to delicately balance methods, tinker with variables, and understand the conditions for each prediction. Even with the same datasets, different researchers can arrive at different results.

Over time, models will become more rigorous, and simulations will improve. The power of AI is that it offers decision-makers options when they see none. In this lies the subliminal beauty of AI.

Pranay Lal is a biochemist, a public health specialist, and a natural history writer. He is passionate about ecological restoration and reversing climate change. The views expressed are personal.
