Google’s Knowledge Graph has a long, long way to go. At least according to Google CEO Larry Page.
“We’re still in the early stages,” Page said on Google’s fourth-quarter 2012 earnings call. “We’re still at 1 percent of where we should be.”
Knowledge Graph is Google’s attempt to provide answers that go beyond simple keywords and queries: answers, for instance, that an intelligent person or entity might provide, ones that demonstrate some degree of understanding of the concepts behind the questions.
In order to do that, Google has assembled a massive and growing “semantic network” of at least 570 million objects and 18 billion facts about and relationships between them, from sources such as the CIA World Factbook, Wikipedia, and Freebase, an entity graph of “people, places, and things” that is built the same way Wikipedia is: by interested volunteers.
Those facts about things like “Lamborghini Countach,” “bike,” or “cat,” and the connections between them, are what Google hopes to use to improve search results and make searching more natural, similar in some ways to how Wolfram Alpha and Siri work, at least in their limited contexts. Ultimately, the Star Trek experience would be nice: simply asking computers questions in natural language.
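To make the idea of a “semantic network” concrete, here is a minimal, purely illustrative sketch of how entities and the typed relationships between them can be stored as subject-predicate-object triples. The class and field names are hypothetical and this is not Google’s actual Knowledge Graph implementation, just a toy model of the underlying data structure.

```python
# Illustrative only: a toy subject-predicate-object triple store,
# not Google's actual Knowledge Graph implementation.
from collections import defaultdict


class TripleStore:
    def __init__(self):
        # Index facts by subject for quick "tell me about X" lookups.
        self._by_subject = defaultdict(list)

    def add(self, subject, predicate, obj):
        """Record one fact, e.g. ("Lamborghini Countach", "is_a", "sports car")."""
        self._by_subject[subject].append((predicate, obj))

    def about(self, subject):
        """Return every (predicate, object) pair known for a subject."""
        return self._by_subject.get(subject, [])


store = TripleStore()
store.add("Lamborghini Countach", "is_a", "sports car")
store.add("Lamborghini Countach", "manufactured_by", "Lamborghini")
store.add("cat", "is_a", "domestic animal")

print(store.about("Lamborghini Countach"))
# [('is_a', 'sports car'), ('manufactured_by', 'Lamborghini')]
```

At Knowledge Graph’s scale, the same triples-plus-indexes idea is spread across hundreds of millions of entities and billions of facts, which is what lets a query about one entity pull in its related concepts.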
That’s critical for Google, as Page acknowledged in a rare well-duh moment, saying that “getting people correct answers is really important for our business.” But he also put his finger on one of Google’s biggest challenges with Knowledge Graph, internationalization, saying that it was “hard work.”
Still, if humanity ever develops something that approaches artificial intelligence, it will probably grow out of something like Google’s Knowledge Graph. Perhaps when Page and his team are at something like 75 percent.