Intro to AI: What is Artificial Intelligence?

by Luke Muehlhauser on March 8, 2011 in Intro to AI

(part 1 of my intro to artificial intelligence, following along with Russell & Norvig’s textbook)

While cognitive brain science hopes to understand how brains can produce mind and intelligence, the field of artificial intelligence aims to build minds and intelligence.

Much of the field focuses on building machines that can intelligently perform very specific tasks, such as diagnosing a disease or driving a car or playing chess. This is called “narrow” AI (or sometimes “applied” AI or “weak” AI).

More ambitious is the goal of Strong AI. A Strong AI could match or surpass the average human in any intellectual task, including reasoning, strategy, learning, planning, using natural language, having knowledge, self-awareness, and so on. A related goal is Artificial General Intelligence (AGI), a machine capable of general intelligence that may not be so closely modeled after one particular kind of accidental intelligence: human intelligence.

How would we know if we had succeeded in producing Strong AI? One candidate is the Turing Test, proposed by Alan Turing in 1950. A computer passes the Turing Test if a human interrogator, after posing some written questions, cannot tell whether the responses come from a human or from a computer (a toy sketch of the protocol follows the lists below). Such a computer would need the following capacities:

  • natural language processing for communicating in a human language like English,
  • knowledge representation for storing what it knows or learns,
  • automated reasoning for using knowledge to answer questions and draw new conclusions,
  • machine learning for extrapolating patterns and adapting to new circumstances.

If we modify the test so that the human interrogator can pass physical objects through a hatch and ask questions about them (this is called a “total” Turing test), the computer will also need:

  • computer vision for perceiving objects,
  • robotics for moving about and manipulating objects.
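
To make the basic protocol concrete, here is a toy sketch of the imitation game in Python. Everything in it is my own illustration: the judge, human_respond, and machine_respond callables are hypothetical stand-ins, not anything from Turing or from Russell & Norvig.

    import random

    def turing_test(judge, human_respond, machine_respond, questions):
        """One round of an imitation-game-style test (illustrative sketch only).

        human_respond and machine_respond map a question string to an answer
        string; judge maps the labeled transcripts to a guess ("A" or "B")
        about which respondent is the machine.
        """
        # Hide the machine behind a random label so the judge can't cheat.
        respondents = {"A": human_respond, "B": machine_respond}
        if random.random() < 0.5:
            respondents = {"A": machine_respond, "B": human_respond}

        transcripts = {label: [(q, respond(q)) for q in questions]
                       for label, respond in respondents.items()}

        machine_label = "A" if respondents["A"] is machine_respond else "B"
        # The machine passes this round if the judge guesses wrong.
        return judge(transcripts) != machine_label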

As it happens, these six fields compose most of artificial intelligence research today. But, Russell & Norvig explain:

AI researchers have devoted little effort to passing the Turing test, believing that it is more important to study the underlying principles of intelligence than to duplicate an exemplar. The quest for “artificial flight” succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics. Aeronautical engineering texts do not define the goal of their field as making “machines that fly so exactly like pigeons that they can fool even other pigeons.”

Of course, understanding how human intelligence works can shed light on how we might engineer an artificial agent. But much research has also focused on the “laws of thought”: logic, probability theory, and the rules of correct inference.
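
To give one concrete taste of those laws, here is Bayes’ rule applied to the disease-diagnosis task mentioned earlier. The numbers are invented for illustration; the point is only what a provably correct inference looks like.

    def posterior(prior, sensitivity, false_positive_rate):
        """Bayes' rule: P(hypothesis | positive test) for a binary hypothesis."""
        p_positive = prior * sensitivity + (1 - prior) * false_positive_rate
        return prior * sensitivity / p_positive

    # An illness with 1% prevalence and a test with 90% sensitivity and a
    # 5% false-positive rate: the correct posterior is only about 15%,
    # an inference that unaided intuition often gets badly wrong.
    print(posterior(0.01, 0.90, 0.05))  # ~0.154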

But correct belief is not all there is to being a rational agent. A rational agent also tries to achieve the best outcome, or at least the best expected outcome, given the available knowledge. And acting rationally does not always involve inference at all: pulling your hand away from a hot stove is rational, but it is a pre-programmed reflex rather than a calculated conclusion.
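
The “best expected outcome” idea can be made concrete with a one-line decision rule: choose the action with the highest expected utility. Below is a minimal sketch; the umbrella scenario and all its numbers are invented for illustration.

    def best_action(actions, outcomes, prob, utility):
        """Return the action a maximizing sum over outcomes o of P(o|a) * U(o, a)."""
        def expected_utility(a):
            return sum(prob(o, a) * utility(o, a) for o in outcomes)
        return max(actions, key=expected_utility)

    # Toy decision: carry an umbrella given a 30% chance of rain.
    actions = ["umbrella", "no umbrella"]
    outcomes = ["rain", "dry"]
    prob = lambda o, a: 0.3 if o == "rain" else 0.7  # weather ignores our choice
    payoff = {("rain", "umbrella"): -1, ("dry", "umbrella"): -1,
              ("rain", "no umbrella"): -10, ("dry", "no umbrella"): 0}
    utility = lambda o, a: payoff[(o, a)]

    print(best_action(actions, outcomes, prob, utility))  # -> "umbrella"

Here the umbrella’s expected utility is -1 versus -3 for going without, so the rational agent carries it even though rain is unlikely.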

In their book, Russell & Norvig concentrate on the general principles of rational agents and on components for constructing them, so this will be the focus of my blog post series as well.


Comments

MarkD March 8, 2011 at 11:45 pm

There may be a misunderstanding here: I’ve rarely encountered AI research that sought to maximize the correctness of the simulated thinking. The goals are generally quite different and focus on (1) increasing the coverage of knowledge bases in the hope that greater coverage yields greater intelligence (cf. CYC); (2) developing incremental approaches to simulating natural phenomena, from neural processes to evolutionary ones, in the hope that the simulations shed light on how to build a system that can emulate intelligent behavior; (3) developing effective processes, based on microtheories of language and statistical machine learning, that are sufficient to achieve near-term goals (Watson, Deep Blue, statistical machine translation); and (4) advancing the underlying theory without working toward a specific objective (MDL applied to grammar learning, etc.). Classic GOFAI has also been interested in script following and naive physics modelling, both of which aim at optimizing toward a stereotypical behavioral outcome rather than a systematic or provably global result.

The notion of the “laws of thought” is troubling to me. I think we idealize such laws in fields like philosophy, but observed thinking is less lawlike and more creative and expansive (hence all our instinctive biases, which conspire with attempts at rational justification and mostly result in near-random flashes).


Grady March 9, 2011 at 11:50 am

The AI is apparently being developed by a directed process.

