Building machines that think and act as humans do

Building truly intelligent machines requires the modeling of human cognition

Cognition is the key to understanding the world around us, but first, let’s talk about natural language.

Humans use natural language to encode a compressed version of the world around us into symbols (words), e.g., “Horse.” We also use structure (grammar) to depict how symbols are linked in time, space, and causality, e.g., “The horse is in the barn today because it’s raining.”

Arguably, language is the protocol of the cognitive process that constructs and reconstructs the surrounding environment in the human mind; it is the “voice” of thought. To understand language is to model the world.

Natural language understanding remains one of the most challenging pursuits of artificial intelligence. 

Today’s leading AI models, particularly Large Language Models (LLMs), are nothing short of astounding; they represent a significant advancement in AI research and engineering. Yet these mathematical techniques cannot understand language, because the linguistic structures that carry context and meaning get stripped away in the process. What we have in the end are approximations of semantics but no fundamental comprehension. Natural language understanding is hard.

Language forms constructs that help us make sense of the world.  

The human brain is a story-storing and story-simulating machine. Storytelling is the recounting of events. Events describe relationships between concepts, which in turn capture relationships between symbols and signals. This is the information hierarchy and abstraction. While there are many forms of storytelling (words, imagery, body language, music, and others), the key to developing human-relatable intelligence in machines lies in understanding how events form and function.

Modeling cognition means building and integrating a tall tower of information hierarchy and abstraction (Titan Cognitive Engine)

Human-level AI needs a human-relatable world representation.

As we see it, the key to advancing toward a more generalizable artificial intelligence lies in developing a representation that can account for the explicit and implicit relationships between different concepts and the events that bind them. 

That said, we do not believe that any system can efficiently learn a comprehensive world model exclusively from data, nor do we think a fully explicit representation is the answer. Instead, we believe a minimal structure is needed to guide how an AI learns and reasons in a human-relatable way.

We conceive that any world model architecture must contain a set of repeatable building blocks that represent the fundamental aspects of the world. In the Titan Cognitive Engine, we capture these blocks and their relationships in the information abstraction and hierarchy:  Symbol representation, Concept representation, Event representation, and Agent representation. 

Human-level AI needs a human-like cognitive system. 

We aim to build general-purpose AI systems that act like natural cognitive agents: humans. Such a quest demands building the structures for managing intelligent behavior in domain-generic, complex environments.

world model

The Titan world model architecture is a multi-layered knowledge graph comprising the natural world's basic metaphysical building blocks and relationships (signals, symbols, concepts, events, agents, time, space, and others). Titan cognitive agents auto-generate this knowledge graph directly from text.
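A multi-layered knowledge graph like the one described above can be sketched as a typed graph whose nodes belong to the layers of the information hierarchy. The layer names, node types, and example triples below are illustrative assumptions for a toy sentence, not the actual Titan schema:

```python
from dataclasses import dataclass, field

# Node layers mirroring the information hierarchy described above.
# These names are assumptions for illustration, not Titan's schema.
LAYERS = ("signal", "symbol", "concept", "event", "agent")

@dataclass
class Node:
    name: str
    layer: str  # one of LAYERS

@dataclass
class KnowledgeGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (subject, relation, object)

    def add_node(self, name, layer):
        assert layer in LAYERS
        self.nodes[name] = Node(name, layer)

    def relate(self, subj, rel, obj):
        self.edges.append((subj, rel, obj))

    def neighbors(self, name):
        return [(r, o) for s, r, o in self.edges if s == name]

# Encode "The horse is in the barn because it's raining."
kg = KnowledgeGraph()
kg.add_node("horse", "concept")
kg.add_node("barn", "concept")
kg.add_node("rain", "event")
kg.add_node("horse-in-barn", "event")
kg.relate("horse-in-barn", "participant", "horse")
kg.relate("horse-in-barn", "location", "barn")
kg.relate("horse-in-barn", "cause", "rain")
```

Note how the event node carries the causal and spatial relations, while the concept nodes stay reusable across sentences; that separation is what makes the graph a layered hierarchy rather than a flat triple store.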

world simulation

The Titan simulation engine manipulates instantiated mini-world models (focused attention). The simulation engine can depict, predict, and retrodict events through logical reasoning. It can handle both abstract and concrete representations. It works as follows: given a text stream, the simulation engine enables focused interactions in time and space (episodes of events).

world prediction

The Titan prediction engine provides the necessary algorithms and architecture to learn from (1) observed patterns that events generate in the simulation engine and (2) observed patterns that the cognitive agents generate through repeated interactions with users and other cognitive agents. These auto-predictions reinforce learning and create predictive shortcuts to uncover never-before-seen insights.

language model

The Titan language model maps directly to its world model. It uses a proprietary technique for representing a finite number of linguistic structures that can accurately stand in for an infinite number of sentences. The Titan language model works at the cognitive engine's input (language understanding) and output (language generation).


agent model

Titan is a multi-agent system. Agents orchestrate work destined for the cognitive engine through a series of interactions. Agents self-teach and make decisions to achieve their prescribed objectives. Each agent is the custodian of its knowledge graph and simulation.

Human-level AI needs a human-like cognitive process.

The Titan cognitive process is a repeatable, multi-step technique for learning from, understanding, acting on, and adapting to information.

STEP-1 :
(natural language interface)

Users or other cognitive agents send the Titan agent natural-language information, queries, or instructions, i.e., text.

STEP-2 :
(language model)

The Titan language model resolves the syntactic and semantic relationships in the text.

STEP-3, STEP-4, STEP-5 :
(world model and simulation)

The Titan cognitive agent uses inference and simulation to focus attention on a mini version of the world model. Missing or incomplete information could be augmented from other world models or a common knowledge graph.

STEP-6 :
(episodic memory)

Simulations generate a fingerprint in the episodic memory.

STEP-7, STEP-8 :
(world prediction)

Over time, these fingerprints can be used to learn new patterns and shortcuts and to inform the agent.
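The eight steps above can be sketched as a pipeline. Every function name, the toy world model, and the fingerprint scheme below are placeholders invented for illustration; they are not the Titan API:

```python
# Hypothetical end-to-end sketch of the eight-step cognitive process.
# All names and data structures here are assumptions for illustration.

episodic_memory = []
world_model = {"horse": "animal", "barn": "place", "rain": "weather"}

def parse(text):
    # STEP-2: resolve syntactic/semantic relations (toy: tokenize)
    return text.lower().rstrip(".").split()

def focus(tokens):
    # STEP-3..5: focus attention on a mini version of the world model
    return {t: world_model[t] for t in tokens if t in world_model}

def simulate(mini_world):
    # STEP-6: run the simulation; record a fingerprint in episodic memory
    fingerprint = tuple(sorted(mini_world))
    episodic_memory.append(fingerprint)
    return fingerprint

def predict(fingerprint):
    # STEP-7..8: repeated fingerprints become learned shortcuts
    return episodic_memory.count(fingerprint)

tokens = parse("The horse is in the barn.")  # STEP-1: text arrives
fp = simulate(focus(tokens))
print(predict(fp))  # 1 on first sighting; grows with repetition
```

The point of the sketch is the data flow: text narrows to a mini-world, simulation leaves a trace, and prediction is learned from the accumulated traces rather than from the raw text.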



Getting Started

Welcome to the Titan API demo.

You have access to a cognitive agent with a pre-configured session timeout of 60 minutes. If you wish to continue testing after your session expires, you need to reload your web browser tab to access a new agent.

After your session expires or your browser is reset, we will automatically discard your previously loaded content.

Getting started with the Titan API Demo

Your agent starts without any information; it does not know anything yet.

You can teach your agent by adding content (in the form of text) to its knowledge base.

Your agent will learn your content in real-time.

You can ask your agent questions regarding the content you provided.

  • If the answer is in the agent knowledge base, the agent will answer your question.
  • If the answer is not in the agent knowledge base, the agent will respond “I don’t know.”

The agent will continuously learn and answer more questions as you add content.

More Content → Deeper Inference.
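The teach-then-ask behavior described above can be mimicked with a toy stand-in. The class below is not the Titan API; it only reproduces the observable demo contract (starts empty, learns from text, answers from its own knowledge base, falls back to “I don’t know”) using naive keyword overlap:

```python
# Toy stand-in for the demo agent's observable behavior; the real
# Titan API and its internals are not shown here.
STOP = {"the", "is", "in", "a", "an", "it", "who", "what",
        "when", "where", "why", "today"}

def content_words(s):
    return set(s.lower().strip("?.").split()) - STOP

class DemoAgent:
    def __init__(self):
        self.facts = []                      # starts knowing nothing

    def teach(self, text):
        # Add content (in the form of text) to the knowledge base.
        self.facts.extend(s.strip() for s in text.split(".") if s.strip())

    def ask(self, question):
        # Answer only from taught content; otherwise "I don't know".
        q = content_words(question)
        best = max(self.facts, default=None,
                   key=lambda f: len(q & content_words(f)))
        if best and q & content_words(best):
            return best
        return "I don't know"

agent = DemoAgent()
agent.teach("The horse is in the barn. It is raining today.")
print(agent.ask("Where is the horse?"))   # The horse is in the barn
print(agent.ask("Who owns the farm?"))    # I don't know
```

Adding more sentences via `teach` widens what `ask` can match, which is the "more content, deeper inference" loop in miniature, minus the actual inference.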


  • PoS:

The Part-of-Speech (PoS) tagger labels each word with one of several categories to identify the word’s function in the given text.
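Titan's tagger implementation is not public; a minimal lexicon-based tagger shows what PoS labels look like in practice, using the Universal Dependencies tag names (DET, NOUN, VERB, ADP) as an assumed label set:

```python
# Minimal lexicon-based PoS tagger, for illustration only; Titan's
# own tagger is not shown here. Tags follow the Universal
# Dependencies convention (an assumption about the label set).
LEXICON = {
    "the": "DET", "a": "DET",
    "horse": "NOUN", "barn": "NOUN",
    "is": "VERB", "sleeps": "VERB",
    "in": "ADP", "because": "SCONJ",
}

def pos_tag(sentence):
    tokens = sentence.lower().rstrip(".").split()
    # Unknown words default to NOUN, a common fallback heuristic.
    return [(t, LEXICON.get(t, "NOUN")) for t in tokens]

print(pos_tag("The horse is in the barn."))
# [('the', 'DET'), ('horse', 'NOUN'), ('is', 'VERB'),
#  ('in', 'ADP'), ('the', 'DET'), ('barn', 'NOUN')]
```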

  • Actions:

The Titan Action System produces a table identifying the intents and actions in the given text.

  • Facts:

The Titan Fact System produces a table identifying factual information in the given text.

  • Relations:

The Titan Graph Engine automatically constructs semantic representations from textual data sources.

  • Search:

The Titan Cognitive Search Engine provides answers to inquiries about uploaded text. Questions should be in the form of Who, What, When, Where, and Why.

  • Instruct:

The Titan Instruction System allows you to set up rules to govern and manage your agent’s actions.

  • Insight:

The Titan Insight System produces a report identifying knowledge either stated directly in the provided text or inferred from a combination of different content sources.
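The Fact and Relations systems above both turn free text into structured records. A naive pattern-based extractor over simple "X is (in) Y" sentences gives the flavor; the regex and the output columns are invented for illustration and bear no relation to Titan's actual output format:

```python
import re

# Naive fact extractor for simple "X is (in the) Y" sentences.
# The pattern and the subject/relation/object columns are invented
# for illustration, not Titan's Fact System output.
PATTERN = re.compile(r"(?:the\s+)?(\w+)\s+is\s+(?:in\s+the\s+)?(\w+)", re.I)

def extract_facts(text):
    facts = []
    for sentence in text.split("."):
        m = PATTERN.search(sentence.strip())
        if m:
            facts.append({"subject": m.group(1).lower(),
                          "relation": "is",
                          "object": m.group(2).lower()})
    return facts

print(extract_facts("The horse is in the barn. The sky is blue."))
# [{'subject': 'horse', 'relation': 'is', 'object': 'barn'},
#  {'subject': 'sky', 'relation': 'is', 'object': 'blue'}]
```

A real fact system would of course handle far more than copula sentences; the point is only the shape of the result, a table of typed rows extracted from running text.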