Generative AI: How it works and what it’s good for
It could be the biggest buzzword of the 2020s. AI seems to be in every product, performing a myriad of technical and creative tasks. It’s getting harder and harder to ignore.
While we know AI stands for artificial intelligence, the term is simultaneously meaningless and loaded with cultural baggage. Contrary to cyberpunk fiction, AI will not take over the world and enslave humanity. But, despite what tech companies want us to believe, it will also not revolutionise productivity so we can all retire to a life of leisure.
If consumers are going to jump on the AI bandwagon, we should cut through our preconceptions and understand how it works – and what it's actually good at.
How AI works
Discriminative versus generative AI
Artificial intelligence is a broad field of research that’s been expanding for decades. Depending on how you define it, it’s been developing since the invention of the modern computer – in 1950, Alan Turing proposed a test to work out whether a computer can think.
Until recently, most AI that’s penetrated the world of everyday consumers has been used to sort or categorise data. For example, email services use AI techniques to detect spam, and facial recognition systems use AI to match photos to existing records. These are called discriminative AI models, and they’re also used extensively in industries like medical research and finance.
The trendy AI models being hyped more recently are generative rather than discriminative. They’re designed to generate new data rather than analyse existing data.
Generative AI has also existed for a long time, but it hasn’t been able to create anything particularly useful until the past decade, when computing power became cheap and abundant enough to support it.
Large language models
Most of the major generative AI products we’re seeing are based on large language models (LLMs). The best-known LLM is the generative pre-trained transformer (GPT), developed by research organisation OpenAI with backing from Microsoft. LLMs have also been created by Google, Meta and many other companies.
An LLM is trained on an enormous collection of text – hundreds of billions of words gathered from every source you can imagine, from digitised books to social media posts to transcribed video archives. The model analyses that mountain of text to learn which words tend to appear together and in what order. It can then produce natural-sounding language on demand by repeatedly predicting a plausible next word.
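To make the "predict the next word" idea concrete, here's a minimal sketch in Python. It builds a toy bigram model – it only looks at which single word follows which – from a hypothetical three-sentence corpus. This is a drastic simplification: real LLMs use transformer networks with billions of parameters trained on vastly more text, but the core loop of "pick a statistically likely next word, append it, repeat" is the same.

```python
import random
from collections import defaultdict

# A hypothetical miniature corpus, standing in for the billions of
# words a real LLM is trained on.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Count which word follows which. This "bigram" table is a vastly
# simplified stand-in for the statistics a transformer learns.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 6) -> str:
    """Generate text by repeatedly picking a plausible next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Run it a few times and you'll get different grammatical-looking phrases like "the cat sat on the rug" – fluent recombinations of the training text, with no understanding behind them. That's the same reason a chatbot's confident-sounding answer can be wrong.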
Because LLMs can generate believable language, they’re uniquely suited to powering chatbots – computer programs that mimic human conversation – such as ChatGPT, Copilot and Gemini.
Image and video generation
Modern “AI art” tools use a similar premise to text-based AI models. A text-to-image model analyses a trove of captioned images to detect patterns in how images look and how the captions relate to the pictures. The tool can then create a likely-looking image or video when given a text prompt.
What generative AI is good for
Generative AI is a creative tool. It’s especially useful for brainstorming and finding inspiration. Whatever you’re trying to achieve, it’s easier to get started once you have some ideas.
Another handy use is to generate subtitles for video and alt text for online images to make your content accessible to more people.
But, for most consumers, right now, AI is best used as a fun toy. Because AI models don't follow the same rules as humans, a deliberately absurd request for comedic effect will rarely disappoint.
Don’t presume accuracy
When you input a prompt into an LLM-based chatbot, the model invents a response that looks sensible. It will have correct grammar, read well and be stated with confidence, but there’s no actual reasoning – it’s a statistical estimation of what a response might look like.
Because there’s no inherent truth to LLM outputs, you shouldn’t use them in an application that requires a correct answer or where accuracy matters. If you do, make sure you carefully fact-check the output first.
Mistakes are easier to spot in pictures generated by text-to-image models. We’ve all seen pictures online with odd features, such as too many fingers on a hand.
AI practitioners have adopted the medical term “hallucination” as a euphemism for the false information AI models produce.
Protect your privacy
Companies that build generative AI are always looking for more data to improve their models. They’ll take any opportunity to gather new data, including the prompts you feed into chatbots.
Don’t type anything into a prompt line that could identify you or someone you know. If you mention both your name and address to the same AI program, even on different days, it could connect those two data points and remember them forever – and for every user. Even using regional language, like talking about a bach or replying with “chur bro”, tells the AI which country you’re from.
In all likelihood, platforms are already feeding the words and photos you post online into AI models, but you might as well limit how much private information they can access.
Don’t be afraid to keep your distance
You may have heard you’ll get left behind somehow if you don’t keep up with the rapid developments in AI. But, if it all leaves you feeling confused, apprehensive or panicked, you’re not alone. Millions around the world are grappling with the implications of AI, and crucially, the fad may not last.
There’s a real chance that, by this time in 2026, the generative AI bubble will have burst. For one, the leading company in this space and developer of the GPT model, OpenAI, is burning through billions of dollars a year and has no clear path to profitability.
The law threatens to be another barrier to financial sustainability for AI development. Powerful rights holders such as Getty Images and The New York Times are angry their copyrighted property is being used without permission to build AI models, and lawsuits are beginning to pile up.
The giant American tech companies have such an existential demand for constant growth that they need the AI trend to keep gathering steam. Their last two big gambles – cryptocurrency and the metaverse (3D digital spaces, often utilising virtual reality) – are failing to catch on as long-term societal shifts. Consumers were also told to jump on those trends, and when we didn’t, it was fine. There’s no reason to believe it’ll be any different for generative AI.
So, if you want to take a breath and step away for the moment, feel free. You’re not missing much.