As the tech industry rushes headlong into a future where "AI" seems to be bolted onto every new product release, Intel CEO Pat Gelsinger gave an interesting take on the future of AI at the World Economic Forum this week. When asked about AI research and development, he referenced Daniel Kahneman's book, "Thinking, Fast and Slow", and the merits of applying the book's foundational principle to AI.
Kahneman draws a distinction between two ways of thinking: fast and intuitive, and slow and rational. Gelsinger's point is that, as AI development currently stands, all of our AI systems are "thinking fast", and that bringing rational thinking and reasoning into AI, i.e. "thinking slow", is a huge area of current research.
"All of our AI systems today are thinking fast, we haven't brought reasoning into AI" said the CEO. "Today our systems hallucinate, tomorrow if we're going to use them broadly, they have to be right."
Given that current AI models can generate results at an astonishing pace, keeping track of the veracity of the data they create can seem like a gigantic task, although progress does seem to have been made in the field. However, creating models that can bring reasoning into the equation themselves, without needing to have their homework checked, may well be the next step in AI evolution.
Thinking both "fast, and right", as the Intel chief puts it, would open the door to AI adoptions that could be both much more useful in regards to the results created, and more trustworthy in terms of the level of tasks to which they are applied.
While there are current real-world implementations of AI that require a level of trust in the system's reasoning, such as autonomous vehicles, the results can vary, and on occasion prove truly catastrophic.
It does feel like AI in its current form is fast becoming more of a buzzword attached to product launches than a genuinely useful game-changer in the areas where it is currently applied.
As we march forward into the unknown, and as discussions continue over the morality and ethics of leaning into a technology we often struggle to define, it will be fascinating to see whether this research into the "correctness" of AI-generated results yields workable solutions, and much more useful implementations.
Only time will tell, but in the meantime, here's hoping that the great minds working on the next generation of AI bring us the sort of results science fiction has long dreamed of, and fewer AI lapel pins strapped to the front of our shirts.
One can only dream…