Strong and Weak Artificial Intelligence
John Searle didn’t just create the Chinese room argument. He also pointed out that you can think of AI in two different ways, which he called strong and weak AI (Figure 1.6).
With strong AI, a machine displays all the behavior you’d expect from a person. If you’re a Star Trek fan, this is Lieutenant Commander Data. If you prefer Star Wars, then this might be C-3PO or R2-D2. These artificial beings have emotions, a sense of purpose, and even a sense of humor. They may learn a new language just for the joy of learning it. Some computer scientists refer to strong AI as general AI—a broad intelligence that doesn’t apply only to one narrow task.
Weak (or narrow) AI is confined to a single task, such as generating product recommendations on Amazon or Google in response to the keywords a user enters. A weak AI program doesn’t engage in conversation, recognize emotion, or learn for the sake of learning; it merely does whatever job it was designed to do.
Most AI experts believe that we’re just starting down the path of weak AI—using AI to answer factual questions, provide directions, manage our schedules, make recommendations based on our past choices and reactions, help us do our taxes, prevent online fraud, and so on. Many organizations already use weak AI to help with narrow tasks such as these. Strong AI is still relegated to the world of science fiction.
You can witness weak AI at work in the latest generation of personal assistants, including Apple’s Siri and Microsoft’s Cortana. You can talk to them and even ask them questions. They convert spoken language into a machine-readable form and use pattern matching to answer your questions and respond to your requests. That’s not much different from traditional interactions with search engines such as Google and Bing. The difference is that Siri and Cortana behave more like human beings; they can talk. They can even book a reservation at your favorite restaurant and place calls for you.
These personal assistants don’t have general AI. If they did, they’d certainly get sick of listening to your daily requests. Instead, they focus on the narrow task of listening to your input and matching it against their database.
John Searle was quick to point out that any symbolic AI system should be considered weak AI. Even so, in the 1970s and 1980s, symbolic systems were used to create AI software that could make expert decisions. These were commonly called expert systems.
In an expert system, people who specialize in a given field input the patterns that the computer can match to arrive at a given conclusion. In medicine, for example, a doctor may input groupings of symptoms that match up with various diagnoses. A nurse then enters a patient’s symptoms into the computer, which searches its database for a match and presents the most likely diagnosis. If a patient has a cough, shortness of breath, and a slight fever, for instance, the computer may conclude that the patient probably has bronchitis. To the patient, the computer may seem to be as intelligent as a doctor, but in reality all the computer is doing is matching symptoms to possible diagnoses.
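The matching process described above can be sketched in a few lines of code. This is a toy illustration, not a real medical system; the rules, symptoms, and scoring scheme are all invented for the example:

```python
# Toy expert system: rank diagnoses by how many of an expert's
# rule symptoms match the patient's reported symptoms.
# All rules below are illustrative, not medical advice.
EXPERT_RULES = {
    "bronchitis": {"cough", "shortness of breath", "slight fever"},
    "common cold": {"cough", "runny nose", "sneezing"},
    "flu": {"high fever", "body aches", "fatigue", "cough"},
}

def diagnose(symptoms):
    """Return candidate diagnoses, best match first."""
    observed = set(symptoms)
    scored = []
    for diagnosis, required in EXPERT_RULES.items():
        overlap = len(observed & required)
        if overlap:
            # Score = fraction of the rule's symptoms that matched.
            scored.append((overlap / len(required), diagnosis))
    scored.sort(reverse=True)
    return [diagnosis for _, diagnosis in scored]

print(diagnose(["cough", "shortness of breath", "slight fever"]))
# "bronchitis" ranks first: all three of its rule symptoms matched
```

Despite the apparently intelligent output, the program is doing nothing more than counting overlaps between sets, which is exactly Searle’s point.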
Expert systems run into the same problems as other symbolic systems; they ultimately experience combinatorial explosions. There are simply too many symptoms, diagnoses, and variables to consider when trying to diagnose an illness. Just think about all the steps a doctor must take to arrive at an accurate diagnosis—conducting a physical exam, interviewing the patient, ordering lab tests, and sometimes ruling out a long list of other illnesses that have similar symptoms. Imagine all the possible ways a patient could answer each question the doctor asks and all the various combinations of lab results.
These early expert systems also had a serious limitation: given certain input, the system might simply fail to find any match at all. You have probably experienced this on various websites; you enter your search phrase, and the site informs you that it found nothing.
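This failure mode is easy to reproduce with any strict lookup. In the hypothetical sketch below, an exact-match rule table has an answer only for symptom combinations it has seen before:

```python
# A strict symbolic matcher: the rule table below is an invented
# example, keyed by a sorted tuple of symptoms.
KNOWN_RULES = {
    ("cough", "slight fever"): "bronchitis",
}

def exact_match(symptoms):
    """Return the stored diagnosis, or None when no rule matches."""
    key = tuple(sorted(symptoms))
    return KNOWN_RULES.get(key)

print(exact_match(["slight fever", "cough"]))  # known combination
print(exact_match(["itchy elbow"]))            # no rule applies: None
```

A human expert would improvise in the second case; the symbolic system can only report that it has no answer.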
Even with these drawbacks, the symbolic approach was a key starting point for AI and is still in use today, typically with some modifications (as you’ll see next).