Trust, he said, is one thing that could slow the integration of AI. Why? Because as long as we humans are willing to share our personal data, AI can use it, connect it to other data, and become ‘intelligent’ enough to detect our emotions, read our gestures, and deliver personalized experiences.
Sigg’s research is focused on sensors and human behavior. His team uses transmitters and receivers to capture human movement and gestures, which could be used in a variety of AI technologies. For example, high-frequency signals can monitor breathing. This type of research could one day let machines detect human emotions. He cited studies of emotional states while driving: a futuristic car could talk to a frustrated driver, calm them, and prevent accidents.
"Without data AI is just tools. We need a lot of data, and good data, to make use of AI," he said.
With that message in his opening keynote address, Sigg unwittingly encapsulated what would become an underlying thread of the presentations and hallway conversations over the two days of Information Energy 2018.
Data Is the Key
"Information is capital that is more valuable than oil."
Cruce Saunders, an information engineer at Simple [A], a company he founded, was speaking about the rapidly evolving content landscape. His remarks focused on the sharing of information from a more process-oriented perspective, but the message was clear:
"Information is more precious than gold. Content is more powerful than nuclear energy." Like Sigg, Saunders is convinced that data is key to driving AI. The problem, according to Saunders, is that we are falling behind in our efforts to generate data in the delivery forms necessary for rapidly evolving channels—devices, chatbots, voice interfaces—in addition to traditional content-delivery outlets. "We have all of this content, and it’s currently hidden from most of these devices."
"We need a content API," he said.
We Need Google-Sized Data
France Baril (founder of Architextus) and I illustrated the need for large amounts of data to increase the effectiveness of chatbots. We’ve both been researching and designing chatbots and have come to the conclusion that one of the reasons, although not the only one, that today’s chatbot experience is so frustrating for users is that there’s just not enough data in the bots.
It’s equally frustrating for chatbot designers, hence the title of our presentation at Information Energy 2018: "Chatbots: like a cold shower in the middle of the Canadian Winter." (France is Canadian.)
If you design a chatbot limited to one subject, it’s easier to get bot responses appropriate to user questions, but you still need a ton of data. Oh, how I wish I could just plug into every database in the world, including Google, and have my bot look for a solution to every user problem.
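A single-subject bot of the kind described above can be reduced to a minimal sketch. The FAQ entries, the word-overlap matcher, and the threshold below are illustrative assumptions, not the design France and I actually presented; they simply show why a bot with too little data keeps falling back to "I don't know":

```python
# Minimal single-subject FAQ bot: matches a user question against a tiny
# knowledge base by word overlap. Anything outside the subject falls
# through to a fallback -- the "cold shower" effect.

FAQ = {
    "how do i reset my password": "Open Settings > Account > Reset password.",
    "how do i change my email": "Open Settings > Account > Update email.",
    "how do i delete my account": "Contact support to delete your account.",
}

def overlap(a: str, b: str) -> float:
    """Fraction of words the two questions share (Jaccard similarity)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def answer(question: str, threshold: float = 0.4) -> str:
    best_q = max(FAQ, key=lambda q: overlap(q, question))
    if overlap(best_q, question) >= threshold:
        return FAQ[best_q]
    return "Sorry, I don't know about that yet."  # not enough data

print(answer("How do I reset my password?"))
print(answer("What's the weather in Ottawa?"))
```

The first question lands inside the bot’s one subject and gets a sensible answer; the second, perfectly reasonable from the user’s point of view, hits the fallback because the data simply isn’t there.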
And that was Cruce’s point. We need to move toward a scenario where we structure all data in such a way that it can be shared everywhere and read by every system, including the ‘intelligent’ machines of the future.
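The structure-once, deliver-everywhere idea can be sketched in a few lines. The record fields and render functions here are invented for illustration (not Saunders’s actual model): one semantically tagged content record feeds both a web page and a chatbot.

```python
# Hypothetical channel-agnostic structured content: one tagged record,
# rendered differently per delivery channel. Field names are invented
# for this sketch.

record = {
    "id": "reset-password",
    "title": "Resetting your password",
    "short_answer": "Open Settings > Account > Reset password.",
    "steps": [
        "Open Settings.",
        "Choose Account.",
        "Select Reset password.",
    ],
}

def render_web(rec):
    """Full HTML for a browser: title plus an ordered list of steps."""
    steps = "".join(f"<li>{s}</li>" for s in rec["steps"])
    return f"<h1>{rec['title']}</h1><ol>{steps}</ol>"

def render_chatbot(rec):
    """Terse form for a conversational channel."""
    return rec["short_answer"]

print(render_web(record))
print(render_chatbot(record))
```

The point is that neither renderer needs to parse prose: because the content is structured at the source, every channel, including future "intelligent" ones, can read it.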
Waiting For Machines to Become Self-Aware
Imagine that we’ve arrived in the utopia where data is abundant and connected, and the machines have learned to decipher language perfectly.
According to Carlos Perez, author of Artificial Intuition and the Deep Learning Playbook and founder of Intuition Machine Inc., my chatbot still won’t respond like a human in such a utopia. Why? The bot is not self-aware.
Perez works with deep-learning technologies. He believes that true AI requires learning, not just data.
Perez explained to the audience at Information Energy that deep learning is about layers. Each layer, he said, acts as a filter that classifies data. As the layers grow and the amount of classified data grows, machines use deep-learning algorithms to increase their ‘intelligence’.
Once again, our intelligent machines need massive amounts of data…and intuition.
Perez’s research in AI is based on the idea that machines of the future will be intuitive.
"Humans are fundamentally intuition machines and our rational (and conscious) self are just a simulation layered on top of intuition-based machinery," he writes in his Medium article "AlphaZero: How Intuition Demolished Logic." In that article, he argues that the cognitive bias toward logic as the basis of intelligence was the reason early machine-learning systems failed.
"We create patterns because our brains are not fast enough or sufficient to recall everything," Perez told the audience at Information Energy. This subconscious pattern recognition is the basis for intuition. "Deep learning based on intuition bests logic machines."
Does It All Come Down to Trust?
So, if machines can already beat humans at logic games like Go, why can’t my chatbot respond to a human inquiry?
Perez had this response: Conversational intelligence is complicated because conversations change context rapidly. Humans have the ability to follow rapid changes in context. Our brains use intuition. We recognize facial expressions and body language. We sense emotions.
"We don’t create intelligence until we create content and connect it," Saunders said. "When the machine takes that information in context, it can create experiences for us."
'When' is the key word in that statement. Detection of context is still an evolving skill for cognitive machines. And Sigg’s words remind us that we need human permission to get the data needed to arrive at the contextual-intuitive goal.
"We need a lot of data to learn from and train AI technology," Sigg said. "Sensors are everywhere, and this produces large amounts of data. By 2025, IoT will generate over 2 zettabytes of data, mostly from consumer electronics devices." Good data is even more important than good algorithms, he added.
But Sigg insists that trust and privacy are critical. "If people don’t trust, they won’t share their private data."
"And we need the data to make AI work."