Tridion provides a cutting-edge component content management system (CCMS) and the tools around it. However, to derive maximum benefit from it, the content itself needs to be agile, i.e., able to connect with other content and with users across different silos. That is precisely why Tridion has partnered with leading companies in their respective domains. Recently, Tridion held a summit with its partners to discuss and learn how partner solutions work with and enhance the Tridion experience.
Tridion partners’ panel convened to discuss the role of FAIR in structured content authoring
Tridion partners convened to form a panel that discussed FAIR principles in the context of structured content authoring. The session was moderated by Max Swisher, Director of Technology. The panel included representatives from our partners: Klaus Fleischmann from Kaleidoscope, Jamila Elouazani from Acrolinx, Michael Mannhardt from Congree, Andreas Blumauer from Semantic Web Company, Dr. Donna Alexander from TWi, and Karsten Schrempp from Pantopix – all of whom apply FAIR principles in one way or another.
Max Swisher introduced the topic: FAIR stands for Findability, Accessibility, Interoperability, and Reusability of digital assets. The FAIR Guiding Principles for scientific data management and stewardship were published in 2016. The principles emphasize machine-actionability, as humans increasingly rely on computational tools to use data that grows in volume and complexity.
The order of the acronym mirrors how usage flows: one first needs to find the data, then understand how to access it; often the data needs to integrate or work with other data; and finally, it needs to be reusable. Metadata supports each stage, enabling the user to achieve their goals.
Data standards are critical to enabling the FAIR principles because when data flows from one system to another, it needs to be in a standard format to relate and work together. There are five dimensions of content standardization: output types, paragraphs, components, sentences, and words. For content to be FAIR, it must be standardized in all five dimensions.
The topic was then opened to the panel that discussed various topics in the context of FAIR, such as:
- Standardizing content and data
- Use of metadata
- Taxonomies and terminologies
- The role FAIR plays in overcoming the issue of silos
- Its role in a Component Content Management System (CCMS)
- FAIR across different languages
- Examples of how FAIR works in real life, e.g., during migrations
For details on these topics, please refer to the full audio recording; some highlights follow below.
Building a semantic content hub based on knowledge graphs
We had the privilege to learn from Tridion partner Semantic Web Company’s CEO and co-founder Andreas Blumauer about the semantic content hub and all that it entails. Andreas covered the topic with great breadth and depth; some key points from his session are summarized here – for the full discussion, please refer to the audio recording.
Andreas started with a classic example in which several departments of the same company discuss the same product or technology in silos. He gave the example of a requirement for a text-mining solution that understands the content in the company’s enterprise content management system. For the legal team, that means software that follows the EU guidelines on ethics, while for the marketing team, it means software that can run text analytics on all standard Office document formats. The content hub thus connects these various perspectives on the same product.
Need for a semantic content hub
Today, companies usually work with loosely coupled data silos that contain valuable facts about business objects but are not interconnected, which leads to inefficiency and a lack of practical knowledge transfer. In a traditional keyword-based search engine, results are based purely on keyword matching; the actual intent of the user and the context of the search are not understood.
Metadata does help with findability, but its efficiency is often limited to the silo (the original context) for which it was built. Andreas shared an example in which a user types in their symptoms, e.g., muscle pain, headaches, and fatigue. Even advanced search techniques such as ML, classic NLP, and entity extraction won’t deliver the desired results, as these rely on passive metadata that lacks context.
When the same query runs against a semantic content hub, the search gains context from domain knowledge models, or knowledge graphs. The hub connects content from various sources intelligently, turning extracted metadata, like concepts and entities, into active metadata. The semantic content hub connects the three symptoms – “muscle pain”, “headaches”, and “fatigue” – to infer a possible vitamin D deficiency and offer a solution, such as supplements that can be taken to address it.
Semantic knowledge models thus bridge the silos between existing metadata and transform those into interoperable active metadata that can be linked and processed by other applications and systems.
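The linking behavior described above can be illustrated with a toy knowledge graph. This is a minimal sketch using invented triples (not a real medical ontology or the PoolParty data model): facts are stored as subject–predicate–object triples, and the query intersects the conditions linked to every symptom.

```python
# Toy knowledge graph as (subject, predicate, object) triples.
# All concepts and relations below are invented for illustration.
TRIPLES = {
    ("muscle pain", "is_symptom_of", "vitamin D deficiency"),
    ("headaches", "is_symptom_of", "vitamin D deficiency"),
    ("fatigue", "is_symptom_of", "vitamin D deficiency"),
    ("fatigue", "is_symptom_of", "anemia"),
    ("vitamin D supplements", "treats", "vitamin D deficiency"),
}

def conditions_for(symptoms):
    """Return conditions linked to *all* of the given symptoms."""
    sets = [
        {o for s, p, o in TRIPLES if s == sym and p == "is_symptom_of"}
        for sym in symptoms
    ]
    return set.intersection(*sets) if sets else set()

def treatments_for(condition):
    """Follow the 'treats' relation backwards to find remedies."""
    return {s for s, p, o in TRIPLES if p == "treats" and o == condition}

for cond in conditions_for(["muscle pain", "headaches", "fatigue"]):
    print(cond, "->", treatments_for(cond))
```

A single symptom like “fatigue” matches several conditions; only the combination narrows the result – which is exactly the contextualization a keyword search cannot perform.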
Concept of Active metadata
Active metadata is based on the FAIR principles, which, as mentioned earlier, follow a standard (founded on Semantic Web standards). Metadata are assigned a globally unique and persistent identifier. Metadata use vocabularies that follow the FAIR principles and are richly described with a plurality of accurate and relevant attributes.
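As a sketch of what such a record might look like, the snippet below shows a hypothetical FAIR-style metadata record. The field names are loosely modeled on Dublin Core / DCAT terms and the URIs are invented placeholders, not a real Tridion schema; the check simply verifies that the FAIR-relevant attributes are present.

```python
# Hypothetical FAIR-style metadata record; field names loosely follow
# Dublin Core / DCAT conventions, and all URIs are invented examples.
record = {
    "@id": "https://example.org/id/doc-0042",  # globally unique, persistent identifier
    "dct:title": "Pump X200 maintenance guide",
    "dct:subject": "https://example.org/taxonomy/maintenance",  # controlled-vocabulary URI
    "dct:license": "https://creativecommons.org/licenses/by/4.0/",
    "dcat:accessURL": "https://example.org/content/doc-0042",
}

# Attributes a record needs to be findable, accessible, and reusable.
REQUIRED = {"@id", "dct:subject", "dct:license", "dcat:accessURL"}

def fair_gaps(rec):
    """Return the FAIR-relevant fields a metadata record is missing."""
    return REQUIRED - rec.keys()

print(fair_gaps(record))  # an empty set: the record carries the key FAIR attributes
```

Because the subject points to a vocabulary URI rather than a free-text tag, other systems can resolve and link it – that is what turns passive metadata into active metadata.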
Uses of a semantic content hub
Semantic search overcomes classic search limitations such as:
- Many names for the same thing (Synonyms)
- Ambiguity (Homographs)
- Lack of background knowledge
- Various languages and contexts
- Various data models and schemas
- Implicit semantics
- Missing logical links
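The first two limitations above can be sketched in a few lines. This is a toy term-to-concept mapper with an invented thesaurus: synonyms collapse onto one canonical concept, and a homograph is disambiguated by counting overlaps between its sense definitions and the words surrounding the query.

```python
# Toy thesaurus: many surface terms map to one canonical concept (synonyms).
# All entries are invented for illustration.
SYNONYMS = {"car": "automobile", "auto": "automobile", "automobile": "automobile"}

# Homograph: "jaguar" the animal vs. the car brand, told apart by context words.
HOMOGRAPHS = {
    "jaguar": {
        "animal": {"habitat", "prey", "wildlife"},
        "car brand": {"engine", "dealer", "model"},
    }
}

def to_concept(term, context_words):
    """Resolve a search term to a concept, using context to break ties."""
    term = term.lower()
    if term in SYNONYMS:
        return SYNONYMS[term]
    if term in HOMOGRAPHS:
        senses = HOMOGRAPHS[term]
        # pick the sense whose definition shares the most words with the context
        return max(senses, key=lambda s: len(senses[s] & set(context_words)))
    return term

print(to_concept("auto", []))                      # -> automobile
print(to_concept("jaguar", ["engine", "dealer"]))  # -> car brand
```

A production system would of course use curated taxonomies and statistical disambiguation rather than hand-written sets, but the principle – resolving terms to concepts before matching – is the same.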
Andreas demonstrated how a semantic content hub using knowledge graphs answers questions, using the example of an Austrian bank’s CEO commenting on the pandemic-hit year 2020 – e.g., what do CEOs of banks with positive stock prices talk about? He showed us how knowledge graphs analyze the data, link concepts, and answer this question.
Later, he demonstrated how terms turn into concepts. Using the same bank example, he showed how knowledge graphs understand the context in which the word ‘resilience’ is searched and deliver the most relevant results accordingly.
Knowledge hubs based on knowledge graphs
Andreas mentioned that knowledge graphs (KGs) are ideal for building knowledge hubs for several reasons. To name a few, KGs combine unstructured and structured data, serve as a linking engine for data and knowledge management, and generate automated unified views of heterogeneous and originally non-linked data sources, such as “Customer 360”.
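The “automated unified view” idea can be illustrated with a minimal sketch: two invented sources (a CRM record and support tickets, with made-up field names) are merged into one customer view, with a shared identifier acting as the linking key – a simplified stand-in for what a KG does with URIs across many sources.

```python
# Sketch: merge two heterogeneous sources into a unified "Customer 360" view.
# All records and field names are invented for illustration.
crm = [{"customer_id": "c1", "name": "Acme GmbH", "segment": "enterprise"}]
support = [
    {"cust": "c1", "ticket": "T-17", "status": "open"},
    {"cust": "c1", "ticket": "T-18", "status": "closed"},
]

def customer_360(crm_rows, support_rows):
    """Link rows from both sources on the shared customer identifier."""
    view = {r["customer_id"]: {**r, "tickets": []} for r in crm_rows}
    for t in support_rows:
        if t["cust"] in view:  # the shared ID acts as the linking key
            view[t["cust"]]["tickets"].append(t["ticket"])
    return view

print(customer_360(crm, support)["c1"]["tickets"])  # -> ['T-17', 'T-18']
```

In a real knowledge graph the link would be a globally unique URI rather than a local key, so the same mechanism scales across systems that never agreed on a schema.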
Andreas demonstrated how concept extraction makes semantics explicit, using smart tagging for Tridion based on PoolParty. He used his own company, SWC, as a case study, explaining how it benefited from a semantic content hub, and highlighted challenges like complex products, dynamic markets, and human resources.
The SWC knowledge hub builds on existing resources such as implicit, procedural, and conceptual knowledge. Ultimately, the knowledge hub provides all employees with easy access to essential information about the company’s products, services, technologies, and customers.
During the remainder of the presentation, Andreas touched upon semantic content hub topics such as:
- Use cases
- Knowledge hub system architecture
- Enterprise knowledge hub as a service
- Faceted search using taxonomies
- Integrated recommender system
- Efficient knowledge modeling with PoolParty
- PoolParty platform’s components and features
- Actual pharmaceutical client case study
- Development steps towards knowledge hub
Delivering content dynamically with Tridion DXD
Finding content and answering questions are great, but neither is effective without a seamless delivery experience. A Tridion partner panel, including Joe Girling from Congility, Bruno Fraissinede from Fluid Topics, Eric Tengstrand from Etteplan, Karsten Schrempp from Pantopix, and Lesia Kalley from Acrolinx, convened to discuss the role of dynamic content delivery and how Tridion’s DXD solution masters this art. Ginika Ibeagha, Senior Manager, Tridion Professional Services, moderated the session.
Need for dynamic content delivery
Ginika opened by noting that while content is important, delivery channels enhance the consumer experience. She used music as an example: delivery evolved from purely live performance to the gramophone, then radio, then tape/CD/MP3, and now to an app on a smartphone – serving music to users the way they prefer. Tridion DXD serves this exact role for content, delivering it the way users want.
The key, as everyone agreed, is that the content needs to be where the customers are or are going to be. ‘Going to be’ is very important; as technology evolves, it changes consumer behavior and the way people want to consume content – AR/VR could potentially play an increasing role in the future (this will be discussed in detail in our upcoming ‘guided experience’ blog and presentation).
Tridion DXD has three components for headless content delivery:
- Content delivery databases: content is published and stored here via the content deployer
- Content delivery microservices: to enable various services such as query, discovery, indexing, etc.
- Presentation environment: includes web applications, a GraphQL content API, and content interaction libraries
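To make the headless pattern concrete, the sketch below builds the JSON payload a client would POST to a GraphQL content endpoint. The endpoint URL, query fields, and schema here are hypothetical illustrations, not the actual Tridion DXD content API.

```python
import json

# Hypothetical delivery endpoint and schema, for illustration only.
ENDPOINT = "https://delivery.example.com/content/api"

QUERY = """
query ($id: Int!) {
  component(itemId: $id) {
    title
    content
  }
}
"""

def build_request(item_id):
    """Build the JSON payload a GraphQL client would POST to the endpoint."""
    return json.dumps({"query": QUERY, "variables": {"id": item_id}})

payload = build_request(123)
print(payload)
```

The point of the pattern: the client asks for exactly the fields it needs, and the same endpoint can serve a website, a mobile app, or any future channel.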
Tridion dynamic delivery supports multi-channel delivery, just-in-time delivery, semantic AI capabilities, and human-in-the-loop workflows. The latest technology enables users to access data from anywhere using handheld devices (tablet/smartphone) and go directly to the relevant information they want. Tridion’s dynamic delivery not only pushes data out but also collects customer feedback and sends it back to the data store.
The forum was then opened to the panel, and various points were discussed based on the panel members’ experience with their respective customers. Some of the topics included:
- Information is fragmented and inconsistent (different structure and format)
- Delivering ‘the information’ from the content
- Link content from different data silos to deliver contextual information
- Let users browse the content in the format they want
- Delivery should be dynamic and yet simple enough to meet the requirements of all the different use cases
- And many more
For more details, please refer to the session’s audio recording, in which our partners share different perspectives based on their experience across several projects using DXD.