Early Foundations of Artificial Intelligence

by Luke Muehlhauser on March 21, 2011 in Intro to AI

(part 2 of my intro to artificial intelligence, following along with Russell & Norvig’s textbook)

Last time I explained what AI is. Now we briefly review the early foundations of AI in philosophy, mathematics, economics, neuroscience, psychology, computer engineering, control theory and cybernetics, and linguistics.

Philosophy

Thomas Hobbes (1588-1679) suggested that reasoning was a mathematical process, that “we add and subtract in our silent thoughts.” For him, an “artificial animal” was not implausible, for “what is the heart but a spring; and the nerves, but so many strings; and the joints, but so many wheels?”

Empiricism in philosophy eventually led to Carnap’s The Logical Structure of the World (1928), which defined a mathematical procedure for extracting beliefs from experience.

Concerning rational action, Aristotle described a kind of algorithm for deliberation:

We deliberate not about ends, but about means. For a doctor does not deliberate whether he shall heal, nor an orator whether he shall persuade… They assume the end and consider by what means it is attained… and if we come on an impossibility, we give up the search, for example if we need money and this cannot be got; but if a thing appears possible we try to do it.

Aristotle’s idea was written as a computer program, the General Problem Solver, by Newell and Simon in 1957.
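The core of their program, means-ends analysis, is easy to sketch: to attain an end, find a means that produces it, then recursively attain whatever that means requires, giving up on a branch when you hit an impossibility. Here is a minimal illustration in Python; the operator representation and the money-and-medicine example are my own, not Newell and Simon’s actual code:

```python
# A minimal sketch of means-ends analysis, the idea behind Newell and
# Simon's General Problem Solver. The Operator representation and the
# example domain are illustrative inventions, not GPS itself.
from collections import namedtuple

Operator = namedtuple("Operator", ["name", "preconditions", "adds"])

def achieve(goal, state, operators, depth=0):
    """Try to make `goal` true, working backwards from ends to means."""
    if goal in state:
        return state, []                        # the end is already attained
    if depth > 10:
        return None                             # "we give up the search"
    for op in operators:
        if goal in op.adds:                     # this means would attain the end
            plan = []
            for pre in op.preconditions:        # recursively achieve the means
                result = achieve(pre, state, operators, depth + 1)
                if result is None:
                    break                       # an impossibility: try another means
                state, subplan = result
                plan += subplan
            else:
                return state | op.adds, plan + [op.name]
    return None

ops = [
    Operator("earn money", frozenset({"have job"}), frozenset({"have money"})),
    Operator("buy medicine", frozenset({"have money"}), frozenset({"healed"})),
]
print(achieve("healed", frozenset({"have job"}), ops))
# -> (final state, ['earn money', 'buy medicine'])
```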

Mathematics

Aristotle also invented formal logic, which was much improved by George Boole (1815-1864) and especially Gottlob Frege (1848-1925). This logic can express propositions in a precise formal language that computers can manipulate.
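For example, the proposition “if it rains and I am outside, then I get wet” can be written as a formula whose truth a machine checks mechanically. A small illustration (the sentence and variable names are my own):

```python
# A proposition encoded in formal logic and checked mechanically over
# every truth assignment; the example sentence is an illustration.
from itertools import product

def implies(p, q):
    return (not p) or q

# "If it rains and I am outside, then I get wet."
def formula(rains, outside, wet):
    return implies(rains and outside, wet)

# Enumerate every truth assignment and report whether it satisfies the formula.
for rains, outside, wet in product([True, False], repeat=3):
    print(rains, outside, wet, "->", formula(rains, outside, wet))
```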

In 1931, Kurt Gödel’s incompleteness theorem showed that there are limits on what formal deduction can prove. In 1936, Alan Turing used his notion of a Turing machine to show precisely which functions can and cannot be computed.

Tractability is also important. Some problems are computable, but only in more time than the universe has so far existed. In the 1970s, Stephen Cook and Richard Karp developed the theory of NP-completeness, which gives us a method for recognizing an intractable problem.
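To see the difference between tractable and intractable, compare a polynomial-time algorithm with an exponential-time one. A back-of-the-envelope calculation (the machine speed is an assumed figure for illustration):

```python
# Back-of-the-envelope comparison of polynomial vs. exponential running time;
# the machine speed (10^9 steps/second) is an illustrative assumption.
n = 100
steps_per_second = 10**9
seconds_per_year = 3.15e7

polynomial = n**2        # 10,000 steps: done in microseconds
exponential = 2**n       # ~1.27e30 steps

years = exponential / steps_per_second / seconds_per_year
print(f"2^{n} steps would take about {years:.1e} years")   # ~4e13 years
# The universe is roughly 1.4e10 years old, so the problem is intractable
# in practice even though it is computable in principle.
```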

AI’s central tool for reasoning with probabilities is Bayes’ Rule, developed by Thomas Bayes, Pierre-Simon Laplace, and others.
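Bayes’ Rule says that P(H|E) = P(E|H) × P(H) / P(E): the probability of a hypothesis given the evidence depends on how well the hypothesis predicts the evidence, weighted by the hypothesis’s prior probability. A worked example (the disease-test numbers are invented for illustration):

```python
# Bayes' Rule: P(H | E) = P(E | H) * P(H) / P(E).
# The disease/test numbers below are illustrative assumptions.
p_disease = 0.01                 # prior: 1% of people have the disease
p_pos_given_disease = 0.95       # test sensitivity
p_pos_given_healthy = 0.05       # false positive rate

# Total probability of a positive test result.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))   # ~0.161, despite the "95% accurate" test
```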

Economics

John von Neumann and Oskar Morgenstern made advances in scientific economics with Theory of Games and Economic Behavior (1944). Their notion of “expected utility” and other concepts related to preferred outcomes grew into the fields of game theory and decision theory, which provide a framework for making decisions about almost anything under uncertainty.
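The basic recipe is simple: weight each outcome’s utility by its probability and choose the action with the highest sum. A toy example (the actions, probabilities, and utilities are invented):

```python
# Expected utility: weight each outcome's utility by its probability and
# prefer the action with the highest sum. All numbers are invented.
actions = {
    "take umbrella":  [(0.3, 70), (0.7, 80)],   # (P(outcome), utility)
    "leave umbrella": [(0.3, 0),  (0.7, 100)],  # rain ruins the day
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
for action, outcomes in actions.items():
    print(action, expected_utility(outcomes))
print("choose:", best)    # "take umbrella" (77 vs 70)
```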

Later developments showed how to make rational decisions when payoffs come only after a series of actions. That topic was taken up in operations research and led to the development of Markov decision processes and other advances.
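Here is a minimal sketch of value iteration on a toy Markov decision process; the states, actions, transition probabilities, and rewards are invented for illustration:

```python
# Value iteration on a tiny Markov decision process; everything here
# (states, actions, transitions, rewards) is an invented illustration.
# mdp[state][action] = list of (probability, next_state, reward) triples.
mdp = {
    "poor": {"work":  [(1.0, "rich", 10)],
             "relax": [(1.0, "poor", 1)]},
    "rich": {"work":  [(0.5, "rich", 10), (0.5, "poor", 0)],
             "relax": [(1.0, "rich", 5)]},
}
gamma = 0.9                                   # discount factor for future payoffs

V = {s: 0.0 for s in mdp}
for _ in range(100):                          # iterate until values stabilize
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in mdp[s].values())
         for s in mdp}

# The best action in each state, given the converged values.
policy = {s: max(mdp[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                           for p, s2, r in mdp[s][a]))
          for s in mdp}
print(V, policy)
```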

Neuroscience

Neuroscience discovered that thought, personality, emotion, and everything else we associate with the self arise from arrangements of brain cells. As John Searle put it: “Brains cause minds.”

Psychology

The rise of cognitive psychology over behaviorism made talk about the mind respectable again, and Donald Broadbent was one of the first to model psychological phenomena as information processing, in Perception and Communication (1958).

Computer engineering

AI requires both intelligence and (physical) architecture, and the computer has been the architecture of choice. The development of the computer from Turing to Zuse to Atanasoff and onward has perhaps done more to enable AI than anything else.

Control theory and cybernetics

Control theory developed as a way for engineers to get machines to operate under their own control, and some of its insights have been imported into AI.
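The central trick is feedback: measure the error between where the system is and where you want it to be, and push in proportion to that error. A minimal thermostat-style sketch (all numbers invented):

```python
# A minimal proportional (feedback) controller: a thermostat nudges the
# temperature toward a set point. All numbers are invented for illustration.
set_point = 20.0      # desired temperature (degrees C)
temperature = 5.0     # current temperature
gain = 0.3            # how aggressively the controller responds to error

for step in range(20):
    error = set_point - temperature
    heating = gain * error           # control signal proportional to the error
    temperature += heating           # simplified plant: heat raises temperature

print(round(temperature, 2))         # converges toward 20.0
```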

Linguistics

Chomsky’s Syntactic Structures (1957) introduced a computational way to think about language, leading to computational linguistics and natural language processing. Likewise, much of the early work on knowledge representation came from linguistics.
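For example, a small phrase-structure grammar in the spirit of Syntactic Structures can generate sentences mechanically; the toy rules and vocabulary below are my own illustration:

```python
# A toy phrase-structure grammar; the particular rules and vocabulary
# are an invented illustration, not Chomsky's own examples.
import random

grammar = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "N":   [["robot"], ["philosopher"]],
    "V":   [["sees"], ["thinks"]],
}

def generate(symbol="S"):
    """Expand a symbol by recursively choosing one of its rewrite rules."""
    if symbol not in grammar:          # terminal word
        return [symbol]
    rule = random.choice(grammar[symbol])
    return [word for part in rule for word in generate(part)]

print(" ".join(generate()))            # e.g. "the robot sees a philosopher"
```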


Comments

cl March 21, 2011 at 9:17 am

Luke,

AI requires both intelligence and (physical) architecture…

Interesting: a form of dualism.


MarkD March 21, 2011 at 11:08 pm

Any sense of dualism is an artifact of the metaphorical structure of the field, arising from computing technologies that supported loadable logical machines. Neuroscience and its dual in connectionism have never had a parallel to loadable machine states in the way that modern computing architectures do. Even when we simulate connectionist architectures on traditional computing frameworks, the intelligence is learned rather than programmed and resides in distributed networks of weights. Could the weight matrix be saved and restored? Sure, but it is still directly tied to the connectionist architecture in the same way that brain states are tied to neural networks and titers of neurotransmitters and glial conglomerations. The “intelligent” functioning can’t be run without also simulating the architecture, and the complete simulation is reducible in materialistic terms.


cl March 22, 2011 at 2:52 am

MarkD,

…and?


Alex Flint March 22, 2011 at 8:13 am

You forgot statistics!! :)

