The first ideas that would later become the Titan Cognitive Engine appeared in 2014. We were discussing the monumental success of virtual worlds in gaming, centering on three aspects. The worlds were modular, the experiences collaborative, and the possibilities infinite.
How might we apply these principles to simulating real life?
Originally, this seemed like an excellent idea for a game or virtual reality platform; many have attempted it, with varying success. The prospect of creating such an innovation was stirring enough that we discussed it again and again, trying to pin down what parameters would allow such complex virtual simulation.
We envisioned a vast collaboration network of developers and users filling in an interconnected virtual world, with fully compatible objects created on the fly, usable in any world. A network comparable to the Internet itself, in three dimensions. Such a simulation would require a unique 3D engine designed specifically for the task, so we set out to build that engine, which we named Gaia, after mother Earth.
We quickly realized pure modular functionality in the finished worlds would require a fully modular engine. The implications of this went deep: modular editor windows and tools, meaning modular parameters within the engine itself. Such modularity was unheard of in any game engine and is really a property of code itself.
The lack of a proper editing interface posed a significant challenge. We wanted users of all experience levels to fully immerse themselves in as large a degree of customizability as possible. Initially, we went to work on a Natural Language system to allow such interactions between users and the engine to take place.
This, in turn, created a new problem: to modify the world with language, a certain standardization of the objects in the world would be necessary, abstracting them from mere objects. We created the Concept Container and the Event Container, two wrappers allowing anything to be described and interacted with inside the virtual worlds.
By 2017, we had done it. The system for complex simulation with natural-language interaction was designed (though at this point it was far from fully built). But a new idea had slowly permeated the entire process. Language and simulation are deep-rooted cognitive processes that make up our everyday intelligence, are they not?
With two years of hard work behind us, we ended up with more questions than answers. We moved from designing and building a simulation engine to testing and pushing the boundaries of its possibilities. This engine would be smart, very smart. Smart enough to understand descriptions of how banks work and how molecules interact, and to communicate the results of simulations to the user. In thinking up applications for the engine well beyond games and virtual reality, we realized the full potential of the system we had devised.
We slogged on, chipping away at the bedrock, trying to find the diamond that was Artificial Intelligence. For many years, the completed VR engine sat ignored while all work was devoted to AI. We refined the language and refined the Containers to fit the language. We read for hundreds of hours, attempting to take in the full conversation surrounding AI. The work of pushing the technology forward strained the entire team and our families.
If we dug a little deeper into language, a little further into AI, maybe the product would reach its full potential. As 2021 ended, we completed a viable version.
The unit of AI in the system we built is the Agent: a piece of code tasked with searching or interacting with the simulated environment, moving through it and acquiring and discriminating information relevant to its goal.
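The idea can be sketched in a few lines. This is an illustrative toy only; the `Agent` class, the environment layout, and the `goal` parameter below are hypothetical stand-ins, not the actual Titan code.

```python
# Toy sketch: an agent that walks a simulated environment graph,
# collecting facts relevant to its goal. Names are hypothetical.

class Agent:
    """A minimal agent that explores an environment and keeps
    only the information relevant to its goal."""

    def __init__(self, goal: str):
        self.goal = goal
        self.findings = []

    def explore(self, environment: dict, start: str):
        # Depth-first walk over the environment, discriminating facts:
        # only those mentioning the agent's goal are retained.
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            facts, neighbors = environment[node]
            self.findings += [f for f in facts if self.goal in f]
            stack += neighbors
        return self.findings


# A tiny simulated environment: node -> (facts, connected nodes).
world = {
    "bank": (["a bank holds deposits", "a bank issues loans"], ["loan"]),
    "loan": (["a loan accrues interest"], []),
}

agent = Agent(goal="loan")
print(agent.explore(world, "bank"))
# ['a bank issues loans', 'a loan accrues interest']
```

The real system is of course far richer; the point is only the shape of the loop: move through the environment, acquire information, filter it against a goal.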
The Agents became the user interface to access, interact with, modify, and read data from the simulation. The two core features of the Gaia engine, simulation and language, remained the same, but the environment was new. Instead of Virtual Worlds, we discussed Contexts and Domains; instead of objects and actors, Concepts and Agents. The name Gaia was scrapped, and the engine was renamed Atlas, after the Titan holding up the world.
As the primary Agent, Atlas is a localized brain for your computer. It learns about concepts you describe to it and simulates their interaction. It has no data-driven intelligence, no cloud computing. It is fully compatible with any AI and any code. It’s swift and scales indefinitely.
It is intelligence for software.
Welcome to the Titan API demo.
You have access to a cognitive agent with a pre-configured session timeout of 60 minutes. If you wish to continue testing after your session expires, reload your browser tab to access a new agent.
After your session expires or your browser is reset, any content you previously loaded is automatically discarded.
Getting started with the Titan API Demo
Your agent does not have any information and does not know anything.
You can teach your agent by adding content (in the form of text) to its knowledge base.
Your agent will learn your content in real-time.
You can ask your agent questions regarding the content you provided.
The agent will continuously learn and answer more questions as you add content.
More Content → Deeper Inference.
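The teach-then-ask workflow above can be illustrated with a toy sketch. The `DemoAgent` class, its `teach`/`ask` methods, and the word-overlap matching below are hypothetical simplifications for illustration, not the real Titan API.

```python
# Toy sketch of the teach-then-ask workflow. All names are hypothetical.

class DemoAgent:
    def __init__(self):
        self.knowledge = []  # the agent starts out knowing nothing

    def teach(self, text: str):
        # Each sentence of the added content becomes a learned fact.
        self.knowledge += [s.strip() for s in text.split(".") if s.strip()]

    def ask(self, question: str) -> str:
        # Return the learned fact sharing the most words with the question.
        q_words = set(question.lower().strip("?").split())
        return max(self.knowledge,
                   key=lambda fact: len(q_words & set(fact.lower().split())),
                   default="I don't know")


agent = DemoAgent()
agent.teach("Atlas is a localized brain. It learns concepts you describe.")
print(agent.ask("What is Atlas?"))  # -> Atlas is a localized brain
```

Adding more content gives the matcher more facts to draw on, which is the sense in which more content yields deeper inference.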
The Part-of-Speech (PoS) tagger labels each word with one of several categories that identify the word's function in the given text.
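As a general illustration of what a PoS tag table looks like, here is a toy dictionary-based tagger. The lexicon and tag names are a made-up example of the technique in general, not Titan's tagger.

```python
# Toy dictionary-based PoS tagger. Real taggers are statistical;
# this lexicon is a made-up illustration.

LEXICON = {
    "the": "DET", "bank": "NOUN", "issues": "VERB",
    "a": "DET", "loan": "NOUN",
}

def pos_tag(sentence: str):
    """Return a (word, tag) table for the sentence."""
    return [(w, LEXICON.get(w, "UNK")) for w in sentence.lower().split()]

print(pos_tag("The bank issues a loan"))
# [('the', 'DET'), ('bank', 'NOUN'), ('issues', 'VERB'),
#  ('a', 'DET'), ('loan', 'NOUN')]
```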
The Titan Action System produces a table identifying the intents and actions in the given text.
The Titan Fact System produces a table identifying factual information in the given text.
The Titan Graph Engine automatically constructs semantic representations from textual data sources.
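One common form of semantic representation is a graph of subject-verb-object triples. The naive pattern-matching extractor below is a generic stand-in to show the idea, not the Titan Graph Engine itself.

```python
# Toy illustration: build subject-verb-object triples from simple
# three-word sentences. A generic stand-in, not Titan's engine.

import re

def extract_triples(text: str):
    triples = []
    for sentence in filter(None, (s.strip() for s in text.split("."))):
        m = re.match(r"(\w+) (\w+) (\w+)$", sentence)
        if m:
            triples.append(m.groups())  # (subject, verb, object)
    return triples

graph = extract_triples("Banks issue loans. Loans accrue interest.")
print(graph)
# [('Banks', 'issue', 'loans'), ('Loans', 'accrue', 'interest')]
```

The resulting edges can be stored and traversed like any graph, which is what makes downstream inference over the text possible.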
The Titan Cognitive Search Engine provides answers to inquiries about uploaded text. Questions should take the form of Who, What, When, Where, and Why.
The Titan Instruction System allows you to set up rules to govern and manage your Agent's actions.
The Titan Insight System produces a report identifying knowledge either stated directly in the provided text or inferred from a combination of different content sources.