That most famous characterization of the complexity of causality, a butterfly beating its wings and causing a hurricane on the other side of the world, is thought-provoking but ultimately not useful. What we really need is to look at a hurricane and figure out which butterfly caused it, or perhaps stop it before it takes flight in the first place. DARPA thinks AI should be able to do just that.
A new program at the research agency is aimed at creating a machine learning system that can sift through the innumerable events and pieces of media generated every day and identify any threads of connection or narrative in them. It's called KAIROS: Knowledge-directed Artificial Intelligence Reasoning Over Schemas.
“Schema” in this case has a specific meaning. It's the concept of a basic process humans use to understand the world around them by creating little stories of interlinked events. For example, when you buy something at a store, you know that you generally walk into the store, select an item, bring it to the cashier, who scans it, then you pay in some way, and then leave the store. This “buying something” process is a schema we all recognize, and could of course have schemas within it (selecting a product; payment process) or be part of another schema (gift giving; home cooking).
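To make the idea concrete, here is a minimal sketch (my own illustration, not anything DARPA has published) of how a schema like “buying something” might be encoded as a data structure, with sub-schemas nested inside it. All names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Schema:
    """A named, ordered list of steps; each step is either a plain
    event label or a nested sub-schema."""
    name: str
    steps: list = field(default_factory=list)

# Hypothetical encoding of the "buying something" schema from the text,
# with "payment process" as a schema within it.
payment = Schema("payment process", ["present item", "cashier scans", "pay"])
buying = Schema("buying something",
                ["enter store", "select item", payment, "leave store"])

def flatten(schema):
    """Expand nested sub-schemas into a flat sequence of primitive steps."""
    out = []
    for step in schema.steps:
        out.extend(flatten(step) if isinstance(step, Schema) else [step])
    return out

print(flatten(buying))
# ['enter store', 'select item', 'present item', 'cashier scans', 'pay', 'leave store']
```

The nesting mirrors the point above: “payment process” is a schema within “buying something,” and “buying something” could itself sit inside a larger schema like gift giving.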
Although these are easily imagined inside our heads, they're surprisingly difficult to define formally in such a way that a computer system would be able to understand them. They're familiar to us from long use and understanding, but they're not immediately obvious or rule-bound, like how an apple will fall from a tree at a constant acceleration.
And the more data there are, the more difficult they are to define. Buying something is comparatively simple, but how do you create a schema for recognizing a cold war, or a bear market? That's what DARPA wants to look into.
“The process of uncovering relevant connections across mountains of information and the static elements that they underlie requires temporal information and event patterns, which can be difficult to capture at scale with currently available tools and systems,” said DARPA program manager Boyan Onyshkevych in a news release.
KAIROS, the agency said, “aims to develop a semi-automated system capable of identifying and drawing correlations between seemingly unrelated events or data, helping to inform or create broad narratives about the world around us.”
How? Well, they have a general idea, but they're looking for expertise. The problem, they note, is that schemas currently have to be laboriously defined and checked by humans. At that point you might as well inspect the data yourself. So the KAIROS program aims to have the AI teach itself.
At first the system will be limited to ingesting data in massive quantities to build a library of basic schemas. By reading books, watching news reports, and so on, it should be able to create a laundry list of suspected schemas, like those mentioned above. It might even get a hint of larger, hazier schemas that it can't quite put its virtual finger on (love, racism, income disparity and so on) and how the others might fit into them and each other.
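In toy form, building a “library of basic schemas” could amount to mining recurring sub-sequences out of an event stream. The function below is purely an illustrative stand-in for that phase, not the program's actual method; the names and thresholds are invented.

```python
from collections import Counter

def candidate_schemas(event_stream, length=3, min_count=2):
    """Treat any event sub-sequence (n-gram) that recurs at least
    min_count times as a candidate schema."""
    grams = Counter(tuple(event_stream[i:i + length])
                    for i in range(len(event_stream) - length + 1))
    return [gram for gram, count in grams.items() if count >= min_count]

# A repeating pattern of shopping events yields its recurring sub-sequences.
stream = ["enter", "select", "pay", "leave", "enter", "select", "pay", "leave"]
print(candidate_schemas(stream))
# [('enter', 'select', 'pay'), ('select', 'pay', 'leave')]
```

A real system would of course need far more than frequency counts (events arrive from many sources, in many forms, and rarely in clean order), which is exactly the hard part DARPA is soliciting expertise for.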
Next it will be allowed to look at complex real-world data and attempt to extract events and narratives based on the schemas it has created.
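Extracting narratives from real-world data then becomes a matching problem: do a schema's steps occur, in order, somewhere within an observed stream of events? A toy version of that check (again, my own illustration under the same hypothetical event labels) is an in-order subsequence test:

```python
def matches(schema_steps, events):
    """True if the schema's steps all occur in order within the observed
    event stream, not necessarily adjacent to one another."""
    remaining = iter(events)
    # 'step in remaining' consumes the iterator up to each match,
    # so every step must be found after the previous one.
    return all(step in remaining for step in schema_steps)

observed = ["enter store", "browse", "select item", "present item",
            "cashier scans", "pay", "chat", "leave store"]
print(matches(["enter store", "select item", "pay", "leave store"], observed))
# True
```

Note that the extra events (“browse,” “chat”) don't break the match; real-world data would be full of such interleaved noise, which is why the ordering constraint rather than adjacency is the useful test here.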
The military and defense applications are fairly obvious: imagine a system that took in all news and social media posts and informed its administrators that it seemed likely there would be a run on banks, or a coup, or a new group emerging from a declining one. Intelligence officers do their best to perform this task now, and human involvement will almost certainly never cease, but they would likely appreciate a computer companion saying, “there are multiple reports of stockpiling, and these articles on chemical warfare are being shared widely; this could point to rumors of a terrorist attack,” or the like.
Of course at this point it is all purely theoretical, but that's why DARPA is looking into it: the agency's raison d'être is to turn the theoretical into the practical, or failing that, at least find out why it can't be done. Given the extreme simplicity of most AI systems these days, it's hard to imagine one as sophisticated as the agency clearly wants to create. Clearly we have a long way to go.