In-Depth
Can Machines Really Think?
The idea that machines will dominate human intelligence is mere science fiction today, but machines are getting better at complex decision making and active reasoning.
By Chetan Dube
Since the birth of computer science, we have been asking the same question: can machines really think? Some have argued that artificial intelligence is impossible (Dreyfus), immoral (Weizenbaum) and perhaps even incoherent (Searle). Yet long before those critics, Alan Mathison Turing, often dubbed the father of computer science, posed his famous challenge in 1950: is it possible to create a machine so intelligent that we cannot discern any difference between human and machine intelligence?
On these grounds, the world has wrestled with various cognitive models to mimic human intelligence, although none has yet been able to replicate it. In 1966, Weizenbaum created “Eliza,” a program that imitated the behavior of a Rogerian psychotherapist by applying rules that transformed users’ statements. For example, in response to a statement such as “I am feeling sad,” Eliza would ask “Why are you feeling sad?” Eliza could carry on an effective psychotherapeutic conversation, but the technology went no further than asking these simple questions. It was therefore unable to beat the Turing Challenge, despite convincing many of its patients.
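To make the mechanism concrete, an Eliza-style rule engine can be sketched in a few lines of Python. The rules below are illustrative stand-ins, not Weizenbaum’s originals: each one matches a pattern in the user’s statement and reflects the captured fragment back as a question.

    import re

    # Illustrative Eliza-style rules: each pattern captures part of the user's
    # statement, and the paired template reflects it back as a question.
    RULES = [
        (re.compile(r"i am feeling (.+)", re.IGNORECASE), "Why are you feeling {0}?"),
        (re.compile(r"i am (.+)", re.IGNORECASE), "How long have you been {0}?"),
        (re.compile(r"i need (.+)", re.IGNORECASE), "Why do you need {0}?"),
    ]

    def respond(statement):
        # Try each rule in order; fall back to a generic prompt if none matches.
        for pattern, template in RULES:
            match = pattern.search(statement)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please tell me more."

    print(respond("I am feeling sad"))   # -> Why are you feeling sad?
    print(respond("I need a vacation"))  # -> Why do you need a vacation?

The sketch makes plain how shallow the trick is: the program has no model of sadness or vacations, only string substitution, which is why Eliza’s apparent understanding evaporates under sustained questioning.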
Today, artificial intelligence has progressed far beyond restructuring sentences, and we have seen a grand spectrum of its capabilities in many different forms. IBM’s Watson can beat human champions on Jeopardy; cars developed by Google can drive themselves; and Apple’s Siri, now built into iPhone devices, can follow rudimentary spoken commands to retrieve information and perform basic actions. Although these tools are helpful and perhaps entertaining, the question remains: can machines graduate from domain-specific game playing and office management tools to truly emulate and rival human intelligence?
It is quite tempting to study a specific body of knowledge and distill copious amounts of information into supercomputers in order to replicate intelligence. By this method, a machine may house more information than any human could, demonstrating knowledge that rivals the human brain. However, this method ignores a pivotal suggestion from Turing: the way to make machines think is not to simulate the adult mind, but to simulate a child’s brain and let it rapidly learn about its own environment. Adaptive learning, then, is the key to making machines intelligent and enabling them to rival human intelligence.
Leveraging this proposal from Turing, as well as principles from theoretical computer science, we are tantalizingly close to genuinely answering the Turing Challenge. The goal is not to fake human intelligence but to build a genuine emulation of the human brain, one capable of learning adaptively just as a child does, rapidly gaining aptitude by interacting with humans.
It is clear at this point that we cannot fake it. As Nobel laureate Francis Crick, co-discoverer of the structure of DNA, argued, a fundamental framework of ideas about how the brain works is needed before machines can achieve real intelligence. Thus, while Watson and Siri demonstrate knowledge and comprehension that rival humans, we must study hierarchical temporal memory systems to gain insight into the neuroscience behind how the human brain works. To clone human intelligence, we must translate this neuroscience research into a genuine emulation of the brain’s ability to think, comprehend, and solve problems, rather than simply program a machine.
Some may ask: once machines have, in fact, achieved human intelligence, what impact will there be on society and on our individual lives? At this point, however, it is hard to foresee all the ramifications of machines that can learn and think comprehensively. When microprocessors were invented, they were developed predominantly for calculators and traffic-light controllers.
Today, the Internet and mobile communications have shrunk the world into a more efficient global village. That was an unpredictable outcome of the original innovation, and the same pattern will surely repeat with the widespread growth and evolution of intelligent machines. Any speculation at this point is just that: speculation. Still, it is likely that machine intelligence will liberate mankind from menial and mundane tasks, allowing us to engage in higher forms of creative expression across all fields.
This type of innovation has long been squelched by the fear that machines will steal jobs, especially in an economic climate wholly focused on job creation. Yet consider, as a case study of technology replacing humans, the fate of horse-and-carriage drivers once cars reached the masses: they were promoted from driving horses to driving cars. Since then, we have seen a similar evolution from factories run by human hands to factories based on computer-aided design, modeling, and automatic assembly. Rather than limiting job opportunities for humans, I predict that machines will goad mankind to move the power of our brains higher up the value chain. A human brain is too beautiful a thing to waste, and should be focused only on the most creative, innovative, and thought-provoking matters.
At this point, the idea of machines dominating human intelligence is mere science fiction, as complex decision making, active reasoning, and emotional intelligence remain exclusively human characteristics. It is on the horizon, though, coaxed along by our desire to solve the six-decade-old Turing Challenge through technologies such as expert systems and autonomics.
One thing we know for sure: these are the most exciting times yet for artificial intelligence and the transformation it will bring.
Chetan Dube is the president and CEO of IPsoft, a global provider of autonomic-based IT services. Dube founded the company in 1998 with the mission of powering the world with expert systems. During his tenure at IPsoft, Dube has led the company to create a radical shift in the way infrastructure is managed.
Prior to founding IPsoft, Dube served as an Assistant Professor at New York University. In conjunction with Distinguished Members of the Technical Staff at AT&T Bell Labs, he researched cognitive intelligence models that could facilitate cloning human intelligence. At IPsoft, Dube has been working to create an IT world in which machine intelligence takes care of most mundane chores, allowing mankind to concentrate on higher forms of creative thinking. Dube speaks frequently about autonomics and utility computing and has presented seminars on the environmental benefits of automation. You can contact the author at [email protected].