What is artificial intelligence?
AI is a broad term for a range of technologies that mimic human cognitive functions and can perform tasks that would typically require human intelligence. An AI model can work without explicit step-by-step instructions, handle uncertain or incomplete information, and generalize from known scenarios or data to previously unseen ones.
Machine learning
Machine learning is a pathway to AI – a method of developing an AI model. Machine learning models, a subset of AI systems, use statistical techniques to learn for themselves, rather than being told what to do through explicit programming. They learn by being trained on large volumes of data, mainly through a mix of supervised, unsupervised and reinforcement learning. Read our blog on how AI is trained for more on this topic.
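The contrast with explicit programming can be sketched in a few lines. This toy example uses supervised learning in its simplest form – a 1-nearest-neighbour classifier – and the training data is an illustrative assumption, not a real dataset or a production technique:

```python
# A minimal sketch of supervised machine learning: instead of hand-coding
# rules, the model infers a decision from labelled examples.

# Illustrative training data: (hours of study, hours of sleep) -> exam outcome.
training_data = [
    ((8.0, 7.0), "pass"),
    ((7.0, 8.0), "pass"),
    ((2.0, 4.0), "fail"),
    ((1.0, 6.0), "fail"),
]

def predict(features):
    """Label a new example by finding the closest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    _, label = min(training_data, key=lambda item: distance(item[0], features))
    return label

print(predict((6.5, 7.5)))  # -> "pass" (closest to a passing example)
print(predict((1.5, 5.0)))  # -> "fail" (closest to a failing example)
```

Nothing in `predict` mentions studying or sleeping; the behaviour comes entirely from the data, which is the essential idea behind machine learning.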
Deep learning
Deep learning is a subset of machine learning methods powered by multi-layered artificial neural networks – that's the 'deep' component – inspired by the human brain's own biological neural networks. The layers enable the model to train on a set of features and then use that knowledge to recognize, or 'classify', new inputs. Deep learning methods are applied in a wide range of AI applications, including facial recognition and natural language processing.
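The 'layers' idea can be made concrete with a toy two-layer network. Real networks learn their weights from data during training; here the weights are hand-set (an illustrative assumption) so the structure is easy to follow:

```python
# A minimal sketch of a multi-layered network. It computes XOR, a function
# that a single layer cannot represent - the hidden layer is what makes
# the 'deep' composition of features possible.

def relu(x):
    """Activation function: passes positives through, zeroes out negatives."""
    return max(0.0, x)

def tiny_network(x1, x2):
    """Two-layer network with hand-set (illustrative) weights."""
    # Hidden layer: two units, each a weighted sum passed through ReLU.
    h1 = relu(x1 + x2)        # fires when either input is on
    h2 = relu(x1 + x2 - 1.0)  # fires only when both inputs are on
    # Output layer combines the hidden features.
    return h1 - 2.0 * h2

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", tiny_network(a, b))  # -> 0.0, 1.0, 1.0, 0.0
```

A deep learning model stacks many such layers and, crucially, learns the weights itself from training examples.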
Natural language processing
Natural language processing (NLP) combines computational linguistics with statistical and machine learning models (especially deep learning) to process text or voice data and deal appropriately with its semantics (or meaning), including its sentiment. This is what gives machines the ability to derive meaning from human language in a valuable and structured way – and the ability to create text or speech.
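One classic NLP task mentioned above – sentiment analysis – can be sketched with a tiny hand-made word lexicon. Production systems use statistical or deep learning models trained on large corpora; the lexicon and example sentences here are illustrative assumptions:

```python
# A minimal sketch of sentiment analysis: score text by summing
# per-word sentiment values from a (toy, hand-made) lexicon.

SENTIMENT_LEXICON = {
    "great": 1, "love": 1, "helpful": 1,
    "slow": -1, "broken": -1, "terrible": -1,
}

def sentiment(text):
    """Strip basic punctuation, sum word scores, and label the text."""
    words = text.lower().replace(".", "").replace(",", "").split()
    score = sum(SENTIMENT_LEXICON.get(word, 0) for word in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The support team was great, I love this product."))   # -> positive
print(sentiment("The app is slow and the search feature is broken."))  # -> negative
```

The gap between this toy and real NLP – handling negation, sarcasm, context and ambiguity – is exactly where machine learning and deep learning models come in.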
Generative AI
Another application of deep learning, generative AI (GenAI) models can create new data such as text, images or music based on patterns in their training data.
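The core generative idea – learn patterns from training data, then sample new data from those patterns – can be illustrated with a toy Markov chain. Real GenAI models use deep neural networks rather than word-pair counts, and the training sentences below are an illustrative assumption:

```python
import random

# A minimal sketch of generation: 'train' by recording which word follows
# which in the training text, then sample a new sequence from those patterns.

training_text = (
    "the model learns patterns from data "
    "the model generates new data from patterns"
)

# Training step: build a table of observed word-to-word transitions.
words = training_text.split()
transitions = {}
for current, following in zip(words, words[1:]):
    transitions.setdefault(current, []).append(following)

def generate(start, length, seed=0):
    """Sample a new word sequence by walking the learned transitions."""
    rng = random.Random(seed)  # fixed seed for repeatable output
    output = [start]
    for _ in range(length - 1):
        options = transitions.get(output[-1])
        if not options:
            break
        output.append(rng.choice(options))
    return " ".join(output)

print(generate("the", 6))
```

The output is new text that was never in the training data verbatim, yet every word-to-word step was learned from it – the same principle, at vastly greater scale and sophistication, behind modern GenAI.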
Exploding into our consciousness with the release of ChatGPT, these models are quickly being adopted by businesses for a wide range of applications. This rapid adoption has raised concerns about GenAI, including its security and privacy, tendency to hallucinate, and the way its reliability or usefulness can be affected by bias and other data quality issues. To maintain the trust of their stakeholders, businesses developing and using GenAI need to take the principles of responsible AI seriously.
Large language models
A type of GenAI focused on human language, large language models (LLMs) respond to text prompts in a human-like way. OpenAI’s GPT, Google’s Gemini and Meta’s Llama are well-known examples, with many more emerging as demand grows.
LLMs can write essays (or poetry, or any content you like) and answer questions. Though they learn and use language very differently from us, their ability to mimic our interactions makes LLMs valuable tools for organizations looking to improve customer service and engagement at scale. You can build LLM-powered chatbots or virtual assistants (something we’ve done, for example, to provide product support for our Trados portfolio) to answer queries, schedule appointments, place orders, generate reports and much more.
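The shape of such a chatbot is straightforward: the application keeps the conversation history and delegates reply generation to a model. In this sketch, `canned_llm` is a stub standing in for a real LLM API call (an illustrative assumption, not any particular vendor's API):

```python
# A minimal sketch of the wiring behind an LLM-powered support chatbot.
# The history uses role-tagged messages; `canned_llm` is a keyword-based
# stand-in for the model call you would make in practice.

def canned_llm(history):
    """Stub replacing an LLM call; answers from fixed keywords."""
    last_message = history[-1]["content"].lower()
    if "order" in last_message:
        return "I can help place an order. What product do you need?"
    if "appointment" in last_message:
        return "Sure - what date and time work for you?"
    return "Could you tell me a little more about your request?"

def chatbot_turn(history, user_message, llm=canned_llm):
    """Append the user's message, ask the model, and record its reply."""
    history.append({"role": "user", "content": user_message})
    reply = llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
print(chatbot_turn(history, "I'd like to place an order."))
print(chatbot_turn(history, "Can I also book an appointment?"))
```

Keeping the full history and passing it to the model on each turn is what lets a real LLM-backed assistant hold a coherent multi-turn conversation.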
But as a type of GenAI, LLMs have the same vulnerabilities, and need expert human oversight to ensure their responsible use.