The Challenges and Limitations of Symbolic AI, and Overcoming Them
Douglas Lenat
President and CEO
Cycorp, Inc.

Tuesday, January 31, 2017
2:30 PM - 3:15 PM

Level: Technical - Intermediate


In his earlier talk at this meeting, Doug Lenat argued how useful it would be for an AI to be able to do "thinking slow": left-brain logical, causal, deductive, and inductive reasoning, in addition to modern machine learning. But there's a reason almost everyone else has abandoned that part of the research space: it's a hard problem. A really hard problem! How do we represent, and reason logically with, contradictions, contextualization, negation, ellipsis, nested modals (e.g., "In 2015, Israel believed that ISIS wanted the U.S. to worry that Israel would intervene if..."), and so on? And how can we possibly get an AI to deduce logical entailments automatically, fast enough to be useful? Doug has been working steadily for the past 32 years to develop and scale exactly such a system: Cyc.
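To make the nested-modal problem concrete, the sentence above can be sketched in a CycL-style nested-predicate notation. The predicate and constant names here (holdsIn, beliefs, desires, worriesAbout, intervenesIf) are illustrative assumptions, not the actual Cyc vocabulary; the point is that each modal operator wraps an entire proposition, so the representation must nest arbitrarily deeply:

```
;; Hypothetical CycL-style sketch (predicate names are assumptions):
;; "In 2015, Israel believed that ISIS wanted the U.S. to worry
;;  that Israel would intervene if..."
(holdsIn (YearFn 2015)
  (beliefs Israel
    (desires ISIS
      (worriesAbout UnitedStatesOfAmerica
        (intervenesIf Israel ...)))))
```

A reasoner handling such a formula cannot simply treat the innermost clause as true or false; its truth is relative to the stack of modal contexts (a belief inside a desire inside a worry), which is one reason first-order inference alone is insufficient here.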

He will begin by summarizing the current state of Cyc -- where the first million researcher-hours have gotten them. They've built its knowledge base by educating it: hand-axiomatizing 10 million general, default-true assertions about the world and maximizing their deductive closure. That led to making the CycL representation language increasingly expressive, to introducing argumentation and context mechanisms, and so on. At the same time, they've been trying to maximize the fraction of that deductive closure which can be reached efficiently. That led to building the Cyc inference engine as a community of agents - a hybrid of 1,100 specialized reasoners - and to overlaying that with dozens of meta-level and meta-meta-level control structures, techniques, and, yes, tricks. Along the way, there have been about 100 mini-breakthroughs in representation and reasoning - think of them as engineering breakthroughs more than scientific discoveries. That sounds hard to believe, but if you divide by 32 years it's, well, 32 times less impressive.

This talk will be one of the first times Doug has reported publicly on these mini-breakthroughs. Though he'll only have time to cover a few of the most significant ones, he will discuss how and why some cognitive tasks are easy for Cyc but difficult for neural systems, and vice versa. That's why many complex tasks will be best addressed by a hybrid approach - the one he advocates in his keynote talk - and he'll close by discussing a couple of early but promising results of taking that "dual-hemisphere" approach.


Dr. Doug Lenat, a prolific author and pioneer in artificial intelligence, focuses on applying large amounts of structured knowledge to information management tasks. As the head of Cycorp, Dr. Lenat leads groundbreaking research in software technologies, including the formalization of common sense, the semantic integration of - and efficient inference over - massive information sources, the use of explicit contexts to represent and reason with inconsistent knowledge, and the use of existing structured knowledge to guide and strengthen the results of automated information extraction from unstructured sources. Doug is applying these technologies commercially in the healthcare information and energy industries, and for the U.S. government in intelligence analysis and K-12 education. Previously, he was a professor in Stanford University's computer science department and the principal scientist at Microelectronics and Computer Technology Corporation. Doug was also one of the original fellows of the American Association for Artificial Intelligence. He serves on the Rule Interchange Format and OWL 1.1 working groups of the World Wide Web Consortium, and he is a recipient of the biennial International Joint Conference on Artificial Intelligence Computers and Thought Award.


   