Tridion – AI done differently

Joe Pairman | 30 Nov 2023 | 6 mins

AI promises to put information at our fingertips, whether we write or consume content. But implementing AI has its own challenges of cost, trust, and technical complexity. This blog explains how RWS is making AI-assisted content management accessible and trustable.

When you look up content to do your job effectively, Artificial Intelligence (AI) and Large Language Models (LLMs) can act as a kind of robot search assistant, in all sorts of roles. For example:

  • Academics converse with them to understand what’s in research papers
  • Patients look up health issues in LLM tools
  • Lawyers even use them to research relevant cases

Sometimes that doesn’t work, though. A well-known case involved an LLM that completely made up legal cases, which a lawyer then relied on in court. The judge was not impressed, and the lawyer had a very bad day.

For all the promise of LLMs, they bring plenty of worry too. Many Tridion customers would like AI to help them deliver content in an easier way, if only they could rely on it. They also hope it can help them write content easily, without getting them into trouble.

So what do we do?

For many vendors, the first reaction was to dress up their tools with some AI functionality: a barely disguised chat that would try to talk about your expert area or write content for you. It made a great party trick (tech specs in the style of Shakespeare, anyone?), but the novelty soon faded. It turned out that cleaning up content after an uncontrolled AI was like tidying the living room after your toddler has gone to bed.
 
Clearly, there had to be smarter ways to use this tech. Keep reading: we’ll show that smarter approach in a video further down in this blog post.

What are LLMs good at?

At RWS, we looked at LLMs as we’d look at any tool: seeing what they were designed to do well, what they could also be used for, and what they really weren’t good at.
 
It was clear that these generative AI text applications were great for … generating language! That is, for taking a source and improvising on that source in ways that closely mimicked real natural language. They were so good at language that they seemed to apply complex syntax that academic linguists struggle to capture formally.
 
They were also good at “language-adjacent” tasks, for example the kinds of tasks that frustrate authors, such as creating tables. Is a reference table language in itself? Not really; it’s a graphical arrangement of language. But LLMs, with their great ability to mimic patterns, turn out to be quite clever at working with tables.

What are LLMs not so good at?

Where they fall down is in accurately and logically applying knowledge. If you ask an LLM to help you solve a complex task, it may instruct you wrongly, with irrelevant, misleading information. If the task is a common one in the training data, there’s more chance of the instructions being right. But LLMs are designed to produce likely responses to a prompt, rather than provably correct responses. So, the bare output from an LLM is going to be wrong at least some of the time. And it’s often plausibly wrong, looking correct at first. Sometimes, cleaning up takes longer than any time initially saved. Used that way, an LLM is like an enthusiastic toddler “helping” you cook.

LLMs – the RAG approach (retrieval-augmented generation)

How do we use that fantastic linguistic ability of LLMs in a more disciplined, adult way? Firstly, they need to concentrate. When you ask one of the popular chat tools for an answer, it cooks up its response from the whole store of data it was ever trained on. That’s way too wide. But we can narrow its focus by prompting it with the data that we think is relevant.
 
But why would you ask an LLM for help if you already knew the answer? That’s where the application builder (ourselves, in this case) comes in. Given your question, a good search can already get closer to an answer, returning a pool of relevant results. When the application passes those results to the LLM, it can summarize a likely answer for you in a friendly way. Don’t feel the answer quite addresses the problem? Keep chatting, and the LLM will get you closer, just like a conversation with a colleague where you figure things out faster together.
 
This approach is what people mean by “RAG”, or retrieval-augmented generation. Ask a question, and the tool retrieves some relevant data and uses it to augment the prompt, generating a more useful response.
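
To make that flow concrete, here is a minimal sketch of the RAG pattern in Python. It is an illustration only, not Tridion’s implementation: the keyword-overlap retriever and the call_llm stub are placeholder assumptions standing in for a real search engine and a real LLM API.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Assumptions: call_llm stands in for a real LLM API, and the
# keyword-overlap scorer stands in for a real search engine.

DOCUMENTS = [
    "To reset the device, hold the power button for ten seconds.",
    "The warranty covers manufacturing defects for two years.",
    "Firmware updates are delivered automatically over Wi-Fi.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a hosted model API)."""
    return f"[model response grounded in a {len(prompt)}-char prompt]"

def answer(question: str) -> str:
    # 1. Retrieve: narrow the model's focus to relevant passages.
    context = "\n".join(retrieve(question, DOCUMENTS))
    # 2. Augment: put the retrieved passages into the prompt.
    prompt = ("Answer the question using ONLY the context below.\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    # 3. Generate: the LLM phrases an answer from that context.
    return call_llm(prompt)

print(answer("How do I reset the device?"))
```

The “keep chatting” experience described above is just this loop repeated, with the conversation so far folded into each new prompt.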

How do we productize this approach in Tridion?

It’s simple enough in principle, but getting it right takes a lot of work. Firstly, it would be very wasteful and expensive to simply pass a whole bunch of articles from search results into an LLM. We use a vector database to pull out just the most relevant passages, which keeps the prompt small and saves resources and money. That much is standard for RAG. What is not standard is the way we search for the results in the first place.
 
You’ll have had personal experience of searches that match the words you entered while the results are miles away from the meaning you had in mind. Tridion gets the results much closer to your intent using the semantic AI approaches we’ve been pioneering.
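
As an illustration of how that narrowing works under the hood, the sketch below ranks passages by cosine similarity between embedding vectors, so a query can match a passage by meaning even when they share no keywords. The tiny hand-made vectors are assumptions for the sake of a runnable example; a real system would get embeddings from a trained model and store them in a vector database.

```python
import math

# Toy embeddings: in a real system these come from an embedding
# model and live in a vector database; these hand-made 3-d vectors
# are illustrative stand-ins only.
PASSAGES = {
    "Hold the power button to reset the device.": [0.9, 0.1, 0.0],
    "The warranty covers defects for two years.": [0.1, 0.8, 0.2],
    "Updates arrive automatically over Wi-Fi.":   [0.2, 0.1, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: how closely two vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k passages whose embeddings are nearest the query."""
    ranked = sorted(PASSAGES,
                    key=lambda p: cosine(query_vec, PASSAGES[p]),
                    reverse=True)
    return ranked[:k]

# A query vector that "means" roughly the same as the reset passage,
# even if the user's wording shares no keywords with it.
print(top_k([0.85, 0.15, 0.05]))
```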
 
There’s also work in getting performance and security right. Traditionally, enterprise software users tended to trust a tool until the moment it let them down. Now, reliability is something many content teams rightly worry about. Data security was a popular topic in a recent discussion I had with the Tridion Docs User Group about AI. Maybe in some industries AI gets a free pass, but for Tridion’s customers, no way.
 
Knowing this, we’ve taken security as seriously as we always do. Not only do we have our ISO 27001 and ISO 9001 certifications to maintain; we also understand closely the risks our customers face if security is compromised. Practically, what does that mean for the AI tools we’re developing? We build them on a firm foundation, the aptly named AWS Bedrock framework. And we continue to build secure by design, test iteratively, and use tools such as Veracode to harden everything before release.

How does this benefit our customers?

Let’s look at a video we’ve created showing how Tridion and AI can help a company like yours. It demonstrates innovative ways to query a large corpus of content through a Trustable Chat, then shows faster, more intuitive ways to navigate through content, and finally explores how AI can support content authors. This is how we’re building our AI foundation, and it gives a brief look at how end users of content can benefit from the Trustable Chat approach we will ship in an upcoming version of Tridion.
 
The video also explores how to save writers’ time and perhaps even inspire them. But why did we select this particular use case?

Exploring editors’ pain points, and how AI can help

As we looked at customers’ pain points and matched them against AI’s potential, it became clear that we needed to develop a long-term view. We’ve long had a vision of content projects that build themselves, with human supervision. Generative AI adds another concrete piece to that architecture, but reaching the vision will take several technical steps, over several releases.
 
Like any good roadmap, our AI roadmap looks at what we need to build and learn, and breaks that into chunks that all our customers will benefit from. Each step brings concrete value to authors and editors, working back from content-reuse suggestions and handy tools for manipulating tables and the like, to something we are building right now with the linguistic power of an LLM.

Accessibility, a logical place to start

Given that LLMs are so good at natural-sounding language, what authoring pain point can they solve most easily, without lengthy training and complicated setup? Let’s think back to those early party tricks, like writing serious content in the style of Shakespeare. We might not have Elizabethans in our customer base, but we do have many end users who appreciate another kind of stylistic take on language — making it readable!
 
Increasingly, our customers’ content teams are expected to meet readability requirements such as specific grade levels or Flesch-Kincaid scores. Sometimes, as with public bodies and NGOs, this is mandated; other times it is strongly encouraged. Standard accessibility guidelines include a substantial section on readability.
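
To make “grade level” concrete, here is the standard Flesch-Kincaid grade formula in a small Python sketch. The vowel-group syllable counter is a deliberately naive assumption for brevity; real readability tools use more careful syllable counting.

```python
import re

def count_syllables(word: str) -> int:
    """Naive heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (n_words / sentences)
            + 11.8 * (syllables / n_words)
            - 15.59)

# Short sentences with short words score at a low grade level.
print(round(fk_grade("Hold the power button. The device will restart."), 1))
```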
 
Moreover, we don’t know a single customer that doesn’t want their content to be easily readable. That’s why we are productizing this feature in an upcoming release of Tridion, knowing that many more useful AI-driven features will follow.
 
I hope this post has given you a hint of our commitment to an ambitious AI roadmap that doesn’t settle for simple party tricks but brings real, trustable value!
Joe Pairman

Director of Product Management
Joe is Director of Product Management for Tridion. He is currently shaping strategic design for a more accessible and impactful product, drawing on his experience leading teams and bringing structured content operations to technology companies, banks, and pharmaceutical firms.