Lecture

Description

Resources

1

This lecture gives an overview of how the course will be taught
& examined. Then, it explains what the discipline of
Artificial Intelligence is about.

 Handout: ps (43k);
pdf (18k)
 You might like to read the following article:
John McCarthy: What is Artificial Intelligence?
 Or maybe read & think about gorilla and chimpanzee
intelligence at
Koko.org (The Gorilla Foundation),
The Jane Goodall Institute or
The Language
Research Center.
 Among many places where you can begin to search for
information about AI, I recommend AAAI's
AI Topics Page.
Several of the topics pick up on ideas mentioned in this
first lecture, e.g.:
AI Overview,
Applications,
Ethical & Social Implications,
History,
Philosophy,
Science Fiction and
AI in the News.
 Another excellent general site is the
AI on the Web
page that is associated with the best AI textbook
available: Stuart Russell and Peter Norvig's
Artificial Intelligence:
A Modern Approach (2nd edn.), Prentice Hall, 2003.
 The case study covered in the lecture describes the work of
an Irish company: changingworlds.

2

This lecture explains what is meant in Artificial Intelligence
when we use the word agent and the phrase
intelligent agent. We look at different types of agent,
especially reactive agents. And we introduce one way of
implementing reactive agents (using
production systems).


3

This lecture compares declarative and
procedural knowledge
representations and then looks in more detail at the use of
production systems to implement reactive agents.
A Java implementation is discussed. (The handout also includes
revision notes on Propositional Logic.)

 Handout: ps (136k);
pdf (57k)
 Code: jar (137k)
(This is
the agent demonstrator system that you have been seeing
in lectures, in case you want to play with it.)
You may need to refer to these
instructions for downloading
jar files. The download contains a README
file that explains how to run the program.
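 If you would rather see the idea in miniature than download the jar, here is a sketch of a production-system interpreter in Java. The wall-following percepts, actions and rules are invented for the illustration; they are not taken from the demonstrator.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Minimal production system: an ordered set of condition-action rules.
// On each cycle, the first rule whose condition matches the current
// percepts fires. (Hypothetical wall-following rules, for illustration.)
public class ProductionSystem {
    private static final Map<String, String> RULES = new LinkedHashMap<>();
    static {
        RULES.put("obstacle-ahead", "turn-left");
        RULES.put("wall-on-right", "move-forward");
    }

    // Conflict resolution by rule order: earlier rules take priority.
    public static String act(Set<String> percepts) {
        for (Map.Entry<String, String> rule : RULES.entrySet()) {
            if (percepts.contains(rule.getKey())) return rule.getValue();
        }
        return "turn-right"; // default action when no rule fires
    }

    public static void main(String[] args) {
        System.out.println(act(Set.of("wall-on-right")));                // move-forward
        System.out.println(act(Set.of("obstacle-ahead", "wall-on-right"))); // turn-left
    }
}
```

Note that conflict resolution here is simply textual order of the rules; real production systems offer richer strategies.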

4

This lecture explains genetic algorithms and
genetic programming. In particular, it shows how
to use genetic programming to evolve a production system.
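To make the evolutionary loop concrete, here is a toy genetic algorithm in Java. It evolves plain bit strings on the classic OneMax problem (maximise the number of 1-bits) rather than a production system, but the loop of fitness-based selection, crossover and mutation is the same; all parameter values are arbitrary choices for the demo.

```java
import java.util.Random;

// Tiny genetic algorithm on bit strings (the OneMax problem): fitness is
// the number of 1-bits; tournament selection, one-point crossover and
// point mutation evolve the population towards the all-ones string.
public class SimpleGA {
    static final Random RNG = new Random(42);

    public static int fitness(boolean[] genome) {
        int count = 0;
        for (boolean bit : genome) if (bit) count++;
        return count;
    }

    // Pick two individuals at random; the fitter one wins.
    static boolean[] tournament(boolean[][] pop) {
        boolean[] a = pop[RNG.nextInt(pop.length)];
        boolean[] b = pop[RNG.nextInt(pop.length)];
        return fitness(a) >= fitness(b) ? a : b;
    }

    static boolean[] breed(boolean[] mum, boolean[] dad) {
        int cut = RNG.nextInt(mum.length);          // one-point crossover
        boolean[] child = new boolean[mum.length];
        for (int i = 0; i < child.length; i++) {
            child[i] = i < cut ? mum[i] : dad[i];
            if (RNG.nextDouble() < 0.01) child[i] = !child[i]; // mutation
        }
        return child;
    }

    public static boolean[] evolve(int genomeLen, int popSize, int generations) {
        boolean[][] pop = new boolean[popSize][genomeLen];
        for (boolean[] g : pop)
            for (int i = 0; i < genomeLen; i++) g[i] = RNG.nextBoolean();
        for (int gen = 0; gen < generations; gen++) {
            boolean[][] next = new boolean[popSize][];
            for (int i = 0; i < popSize; i++)
                next[i] = breed(tournament(pop), tournament(pop));
            pop = next;
        }
        boolean[] best = pop[0];
        for (boolean[] g : pop) if (fitness(g) > fitness(best)) best = g;
        return best;
    }

    public static void main(String[] args) {
        boolean[] best = evolve(20, 40, 60);
        System.out.println("best fitness: " + fitness(best) + "/20");
    }
}
```

Genetic programming differs in that the genome is a program (for us, a production system) rather than a fixed-length bit string, but the same loop applies.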


5

This lecture contrasts uncertainty and vagueness.
It then gives a brief introduction to fuzzy set theory
and fuzzy logic, which provide one way of handling
vagueness in AI.


6

This lecture discusses fuzzy control, which is
the application of fuzzy logic to control tasks. It gives
us our second way of building reactive agents.

 Handout: ps (85k);
pdf (27k)
 Optionally, read this more comprehensive treatment:
Jan Jantzen: Design of Fuzzy Controllers.
 If you want to read a paper that critiques
fuzzy logic, it's called The Paradoxical Success of Fuzzy
Logic and can be downloaded from
Charles Elkan's
web site.
 You'll undoubtedly want to play with
Edward Sazonov's superb applet that
demonstrates a
fuzzy controller for a crane. (You can manually control the
crane; you can use Edward's fuzzy rules; or you can edit and
try out your own fuzzy rules.)
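The fuzzify-infer-defuzzify cycle can be sketched in a few lines of Java. The fan-speed task, membership functions and rule consequents below are invented for the illustration; real controllers use many more rules.

```java
// Sketch of the fuzzy-control cycle on a made-up fan-speed task:
// fuzzify a crisp temperature, fire two rules, defuzzify by weighted average.
public class FuzzyFan {
    // Triangular membership function with feet at a and c, peak at b.
    static double tri(double x, double a, double b, double c) {
        if (x <= a || x >= c) return 0.0;
        return x < b ? (x - a) / (b - a) : (c - x) / (c - b);
    }

    // Crisp fan speed for a crisp temperature (degrees Celsius).
    public static double fanSpeed(double temp) {
        double warm = tri(temp, 15, 25, 35);     // fuzzify: "temp is warm"
        double hot  = tri(temp, 25, 40, 55);     // fuzzify: "temp is hot"
        // Rules: if warm then speed 40; if hot then speed 90
        // (singleton consequents, weighted-average defuzzification).
        double den = warm + hot;
        return den == 0 ? 0 : (warm * 40 + hot * 90) / den;
    }

    public static void main(String[] args) {
        System.out.println(fanSpeed(25));  // fully warm, not at all hot: 40.0
        System.out.println(fanSpeed(40));  // fully hot: 90.0
    }
}
```

Temperatures between the two peaks blend the two rule outputs smoothly, which is the point of fuzzy control.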

7

This lecture introduces artificial neural networks
and focuses on threshold logic units (TLUs),
which are the building blocks for these networks.

 Handout: ps (169k);
pdf (68k)
 Code: jar (2k) (This is the
implementation of a standalone TLU.)
 If you wish, you can play with this
applet that demonstrates the operation of a TLU.
 The most interested amongst you might accompany
this lecture and the next three by reading some chapters
from a neural nets book. The draft notes for one such book are
available online:
Kevin Gurney: An Introduction to Neural Networks.
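A TLU itself takes only a few lines of Java: a weighted sum of the inputs compared against a threshold. The weights and threshold below are hand-picked so that the unit computes logical AND; this is an illustration, not the course's jar code.

```java
// A threshold logic unit: output 1 if the weighted sum of the inputs
// reaches the threshold, else 0.
public class TLU {
    public static int fire(double[] inputs, double[] weights, double threshold) {
        double sum = 0.0;
        for (int i = 0; i < inputs.length; i++) sum += inputs[i] * weights[i];
        return sum >= threshold ? 1 : 0;
    }

    public static void main(String[] args) {
        double[] w = {1.0, 1.0};
        double theta = 1.5;  // fires only when both inputs are 1: logical AND
        System.out.println(fire(new double[]{1, 1}, w, theta)); // 1
        System.out.println(fire(new double[]{1, 0}, w, theta)); // 0
    }
}
```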

8

This lecture first discusses fully connected, layered,
feedforward nets and motivates the need to use machine
learning techniques to produce these nets. (The kind
of learning we look at is called supervised learning.)
It then
discusses learning in the case of TLUs.

 Handout: ps (138k);
pdf (74k)
 Code: jar (7k) (This is the
implementation of a fully connected, feedforward network having
one hidden layer.)
 You might wish to play with this
applet that demonstrates learning for a single TLU.
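Learning for a single TLU can be sketched with the perceptron training rule: after each training example, each weight is nudged by eta * (target - output) * input. The Java below trains a TLU to compute logical AND with a fixed threshold; the learning rate, epoch count and task are arbitrary choices for the demo.

```java
// Supervised learning for a single TLU using the perceptron training rule.
public class TrainTLU {
    public static int output(double[] x, double[] w, double theta) {
        double sum = 0;
        for (int i = 0; i < x.length; i++) sum += x[i] * w[i];
        return sum >= theta ? 1 : 0;
    }

    // Repeatedly present the training set, nudging each weight by
    // eta * (target - output) * input after every example.
    public static double[] train(double[][] xs, int[] targets, double theta) {
        double[] w = new double[xs[0].length];
        double eta = 0.1;
        for (int epoch = 0; epoch < 100; epoch++) {
            for (int p = 0; p < xs.length; p++) {
                int err = targets[p] - output(xs[p], w, theta);
                for (int i = 0; i < w.length; i++) w[i] += eta * err * xs[p][i];
            }
        }
        return w;
    }

    public static void main(String[] args) {
        double[][] xs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] w = train(xs, new int[]{0, 0, 0, 1}, 1.5); // learn logical AND
        for (double[] x : xs) System.out.println(output(x, w, 1.5)); // 0 0 0 1
    }
}
```

Because AND is linearly separable, this rule is guaranteed to converge; for functions like XOR it never will, which is one motivation for the multi-layer nets of the next lectures.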

9

This lecture discusses the backpropagation
learning algorithm for fully connected, layered,
feedforward neural nets.

 Handout: ps (129k);
pdf (62k)
 Code: jar (12k) (This is
the implementation of the backpropagation algorithm.)
 You might wish to play with these two applets
that demonstrate the backprop algorithm:
applet1 and
applet2.
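For a concrete, if bare-bones, picture of the algorithm, here is backpropagation for a 2-2-1 sigmoid network in Java, trained on logical OR. It is a sketch under simplifying assumptions (stochastic updates, fixed learning rate, fixed topology), not the implementation in the jar above.

```java
import java.util.Random;

// Backpropagation for a 2-2-1 sigmoid network, trained on logical OR.
public class BackProp {
    static final Random RNG = new Random(1);
    static double[][] wHid = new double[2][3]; // 2 hidden units: 2 weights + bias
    static double[] wOut = new double[3];      // output unit: 2 weights + bias

    static double sigmoid(double a) { return 1.0 / (1.0 + Math.exp(-a)); }

    static double forward(double[] x, double[] h) {
        for (int j = 0; j < 2; j++)
            h[j] = sigmoid(wHid[j][0] * x[0] + wHid[j][1] * x[1] + wHid[j][2]);
        return sigmoid(wOut[0] * h[0] + wOut[1] * h[1] + wOut[2]);
    }

    static void trainOne(double[] x, double target, double eta) {
        double[] h = new double[2];
        double y = forward(x, h);
        double dOut = (target - y) * y * (1 - y);             // output error term
        for (int j = 0; j < 2; j++) {
            double dHid = dOut * wOut[j] * h[j] * (1 - h[j]); // backpropagated error
            wHid[j][0] += eta * dHid * x[0];
            wHid[j][1] += eta * dHid * x[1];
            wHid[j][2] += eta * dHid;                         // bias input is 1
            wOut[j] += eta * dOut * h[j];
        }
        wOut[2] += eta * dOut;
    }

    static double totalError(double[][] xs, double[] ts) {
        double err = 0;
        for (int p = 0; p < xs.length; p++) {
            double y = forward(xs[p], new double[2]);
            err += (ts[p] - y) * (ts[p] - y);
        }
        return err;
    }

    // Returns {total squared error before training, error after training}.
    public static double[] run() {
        for (int j = 0; j < 2; j++)
            for (int i = 0; i < 3; i++) wHid[j][i] = RNG.nextDouble() - 0.5;
        for (int i = 0; i < 3; i++) wOut[i] = RNG.nextDouble() - 0.5;
        double[][] xs = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        double[] ts = {0, 1, 1, 1};                           // logical OR
        double before = totalError(xs, ts);
        for (int epoch = 0; epoch < 5000; epoch++)
            for (int p = 0; p < xs.length; p++) trainOne(xs[p], ts[p], 0.5);
        return new double[]{before, totalError(xs, ts)};
    }

    public static void main(String[] args) {
        double[] e = run();
        System.out.printf("squared error: %.3f before, %.4f after%n", e[0], e[1]);
    }
}
```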

10

In this lecture we look at how to use neural nets for
classification tasks. We apply the ideas to iris
classification. And we give a critique of neural nets.

 Handout: ps (1436k);
pdf (48k)
 If you downloaded the agent demonstrator earlier, you
can run the demo that trains a neural net for wall
following.
 You can read about ALVINN, the ANN-based perception system
for the autonomous vehicle mentioned in the lecture
at
The ALVINN Project Page. (ALVINN was developed by the
Carnegie Mellon Robotics
Institute; see their
projects page for all sorts of fascinating stuff.)
 Code: jar (2k) (These are the
files for using the backpropagation program for iris
classification. Put these files
into the
backProp directory from the previous
lecture.)
 The iris data comes from the
UCI Machine Learning Repository.
If you're really keen, you could download another dataset from
the UCI Repository and try to use my backprop
code to train a net that accurately classifies the data.
 You might like to
train and test this
character recognition applet.

11

In this lecture we look at societies of reactive agents.
We discuss emergent properties, communication between reactive
agents, Artificial Life and the Society of Mind.

 Handout: ps (84k);
pdf (36k)
 Floys
 Here's a local copy
of the Floys applet used in lectures
 This applet was developed by Ariel Dolan. His
web site is worth
visiting.
 Floys are based on Java Flies, credited to Alex Vulliamy and
Jeff Cragg. See Alex's
home page.
 Boids
 Here's a local copy
of the Boids applet used in lectures
 This applet was developed by Conrad Parker. The presentation
of boids used in the lecture is also based on his work.
Visit his
boids page.
 The man credited with originating boids is Craig Reynolds.
His web site
has all sorts of boids-related and ALife-related links.
 StarLogo
 The system I used to demonstrate the foraging ants is
StarLogo,
which is a programmable modelling environment.
 If you decide to download this environment, you'll
receive some sample projects, you'll be able to write
your own projects, and you may be able to run my
foraging ants project.
(My foraging ants code is a rehash of some code written
by
Mitchel Resnick, the creator of StarLogo.
I updated his code to suit version 1.1
of StarLogo. My own code may not work on more recent
versions of StarLogo!)
 Framsticks
are a fun example of ALife.
 The man most associated with the Society of Mind idea
(and author of a book of this title) is
Marvin Minsky.

12

In this lecture we see how adding a memory of past sensory
inputs can give us an agent that is more intelligent than
a reactive agent. We also
discuss the difference between iconic and logical
representations.

 Handout: ps (113k);
pdf (28k)

13

In this lecture we discuss deliberative agents.
In contrast to reactive agents, they think ahead: they simulate
the effects of actions 'in their heads' as a way of choosing
actions for execution. We introduce the idea of a
state space.

 Handout: ps (83k);
pdf (37k)

14

We look at a general search algorithm for finding paths in
state spaces. We discuss the difference between the state space
and the search tree.

 Handout: ps (59k);
pdf (32k)
 Code: search.jar (16k)
(This is a copy of my Java implementation of the
search algorithm, which you can use to explore
the ideas we cover in the next three lectures.)

15

We discuss several uninformed search strategies:
breadth-first, depth-first, depth-bounded,
iterative-deepening and least-cost.

 Handout: ps (86k);
pdf (50k)
 Try out some uninformed searches using the Java implementation
that I made available above.
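The point that these strategies share one algorithm and differ only in how the frontier is managed can be shown directly. In the Java sketch below (which is independent of my jar), inserting newly generated paths at the back of the agenda gives breadth-first search, while inserting them at the front gives depth-first search; the four-state space is made up for the demo.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// General search scheme: pop a path from the frontier, test its last
// state for the goal, otherwise extend it. Only the insertion point for
// new paths distinguishes breadth-first from depth-first search.
public class UninformedSearch {
    static final Map<String, List<String>> SPACE = Map.of(
        "A", List.of("B", "C"),
        "B", List.of("D"),
        "C", List.of("D"),
        "D", List.of());

    public static List<String> search(String start, String goal, boolean breadthFirst) {
        Deque<List<String>> frontier = new ArrayDeque<>();
        frontier.add(List.of(start));
        Set<String> visited = new HashSet<>();
        while (!frontier.isEmpty()) {
            List<String> path = frontier.removeFirst();
            String state = path.get(path.size() - 1);
            if (state.equals(goal)) return path;
            if (!visited.add(state)) continue;     // skip already-expanded states
            for (String next : SPACE.get(state)) {
                List<String> extended = new ArrayList<>(path);
                extended.add(next);
                if (breadthFirst) frontier.addLast(extended);  // queue: BFS
                else frontier.addFirst(extended);              // stack: DFS
            }
        }
        return null; // goal unreachable
    }

    public static void main(String[] args) {
        System.out.println(search("A", "D", true));   // breadth-first: [A, B, D]
        System.out.println(search("A", "D", false));  // depth-first:   [A, C, D]
    }
}
```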

16

We consider the completeness, optimality,
time efficiency and space efficiency of the
uninformed search strategies.

 Handout: ps (62k);
pdf (26k)

17

We discuss some informed search strategies, which use
heuristic functions to focus the search. In particular,
we discuss greedy search and A* search.

 Handout: ps (130k);
pdf (62k)
 Try out some informed searches using the Java implementation
that I made available above.
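A* can be sketched by ordering the frontier on f(n) = g(n) + h(n): the cost incurred so far plus a heuristic estimate of the cost remaining. The toy state space and (admissible) heuristic values below are invented for the illustration; this is not the jar's code.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

// A* search: the frontier is a priority queue ordered by f(n) = g(n) + h(n).
public class AStar {
    record Node(List<String> path, double g) {}

    // Toy state space with step costs, and heuristic estimates h(n)
    // chosen to be admissible (never overestimating the cost to G).
    static final Map<String, Map<String, Double>> COSTS = Map.of(
        "S", Map.of("A", 1.0, "B", 4.0),
        "A", Map.of("G", 5.0),
        "B", Map.of("G", 1.0),
        "G", Map.of());
    static final Map<String, Double> H = Map.of("S", 4.0, "A", 4.5, "B", 1.0, "G", 0.0);

    public static List<String> search(String start, String goal) {
        PriorityQueue<Node> frontier = new PriorityQueue<>(
            Comparator.comparingDouble(
                (Node n) -> n.g() + H.get(n.path().get(n.path().size() - 1))));
        frontier.add(new Node(List.of(start), 0.0));
        while (!frontier.isEmpty()) {
            Node node = frontier.poll();
            String state = node.path().get(node.path().size() - 1);
            if (state.equals(goal)) return node.path();
            for (Map.Entry<String, Double> step : COSTS.get(state).entrySet()) {
                List<String> extended = new ArrayList<>(node.path());
                extended.add(step.getKey());
                frontier.add(new Node(extended, node.g() + step.getValue()));
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(search("S", "G")); // [S, B, G], the cheapest route
    }
}
```

Because the heuristic is admissible, A* returns the least-cost path here; with h(n) = 0 everywhere, the same code degenerates to least-cost (uniform-cost) search.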

18

We discuss both how to design and how to learn heuristic
functions. This gives us an example of another kind of
learning, called reinforcement learning.

 Handout: ps (126k);
pdf (54k)

19

We discuss beam search, local search
and local beam search (which we relate to GAs).
All these forms of search sacrifice completeness and
optimality for the sake of efficiency.

 Handout: ps (124k);
pdf (39k)

20

This lecture contains a brief overview of case-based
reasoning. This is followed by a presentation on the
use of case-based reasoning in AI search.

 Handout: ps (910k);
pdf (47k)
 The paper we discuss in the lecture is
McGinty, L. and Smyth, B.:
'Personalised Route Planning:
A Case-Based Approach'. In E. Blanzieri and L. Portinale
(eds.),
Advances in Case-Based Reasoning (Procs. of the 5th European
Workshop), LNAI 1898, pp. 431-442, Springer, 2000.

21

In this lecture, we abandon iconic representations of states
and move on to using logical representations.
The lecture is a recap of the syntax of
first-order predicate logic.

 Handout: ps (72k);
pdf (32k)

22

This lecture explains the semantics (or
model-theory) of first-order predicate logic.

 Handout: ps (240k);
pdf (67k)

23

In this lecture, we look at the operation of
unification, applied to atoms of first-order predicate logic.
This is a pattern-matching operation that is fundamental
to most computation using logic.

 Handout: ps (55k);
pdf (24k)
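 A recursive unification procedure is short enough to sketch in Java. Terms are represented as a functor plus argument list, with a '?' prefix marking variables; the occurs check is omitted for brevity, and the knows/mother example is the standard textbook one.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Unification: find a substitution (binding map) that makes two terms
// identical. Occurs check omitted for brevity.
public class Unify {
    record Term(String name, List<Term> args) {
        static Term var(String n) { return new Term("?" + n, List.of()); }
        static Term of(String n, Term... a) { return new Term(n, List.of(a)); }
        boolean isVar() { return name.startsWith("?"); }
    }

    // Returns the extended binding map, or null if unification fails.
    public static Map<String, Term> unify(Term s, Term t, Map<String, Term> theta) {
        if (theta == null) return null;
        if (s.isVar()) return unifyVar(s, t, theta);
        if (t.isVar()) return unifyVar(t, s, theta);
        if (!s.name().equals(t.name()) || s.args().size() != t.args().size())
            return null;                       // functor or arity mismatch
        for (int i = 0; i < s.args().size(); i++)
            theta = unify(s.args().get(i), t.args().get(i), theta);
        return theta;
    }

    static Map<String, Term> unifyVar(Term v, Term t, Map<String, Term> theta) {
        if (theta.containsKey(v.name())) return unify(theta.get(v.name()), t, theta);
        if (t.isVar() && theta.containsKey(t.name()))
            return unify(v, theta.get(t.name()), theta);
        theta.put(v.name(), t);                // bind the variable
        return theta;
    }

    public static void main(String[] args) {
        // Unify knows(john, ?x) with knows(?y, mother(?y)).
        Term a = Term.of("knows", Term.of("john"), Term.var("x"));
        Term b = Term.of("knows", Term.var("y"), Term.of("mother", Term.var("y")));
        Map<String, Term> theta = unify(a, b, new HashMap<>());
        System.out.println(theta.get("?y").name()); // john
        System.out.println(theta.get("?x").name()); // mother
    }
}
```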

24

Problem class.


25

The lecture introduces AI planning: finding sequences of
actions when states have logical representations. Topics
mentioned include: The Blocks World;
the STRIPS representation;
progression planning and regression planning;
problem decomposition; The Sussman Anomaly;
state-space planning and plan-space planning;
total-order planning and partial-order
planning; and the principle of
least commitment.

 Handout: ps (109k);
pdf (41k)

26

We study a simplified planner called POP. We see that
POP is a regression planner that searches plan-space,
uses problem decomposition, builds partially-ordered plans
and operates by the principle of least commitment.

 Handout: ps (101k);
pdf (43k)

27

We look at hierarchical planning.
After a brief study of hierarchical approximation,
we focus on hierarchical decomposition.
We discuss how to modify POP in both cases.

 Handout: ps (102k);
pdf (41k)

28

We turn from classical planners to
non-classical planners, which can operate in
non-deterministic domains. We discuss how
bounded indeterminacy can be handled by
conditional planning (a.k.a. contingency
planning). And we look in more detail at how
to handle unbounded indeterminacy using
execution monitoring & replanning.
We end with an example of continuous planning.

 Handout: ps (129k);
pdf (46k)

29

We consider adding a knowledge base and its
inference engine to an agent. We see the different
roles the knowledge base can play in different types of agent.
And we discuss the
process of knowledge engineering.

 Handout: ps (106k);
pdf (46k)

30

In this lecture, we work through a knowledge engineering
case study.


31

We turn now to reasoning. In this lecture, we see what we
require of an inference engine. We look at logical
consequence, proof theories and soundness &
completeness.

 Handout: ps (97k);
pdf (44k)

32

We look at clausal form logic. This gives us a canonical
form for FOPL, which is useful in automated proof theories.

 Handout: ps (79k);
pdf (32k)

33

Many inference engines use just one rule of inference, which
we look at today. It is called resolution.

 Handout: ps (88k);
pdf (33k)
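 At the propositional level, a single resolution step is easy to state in code: given two clauses containing a complementary pair of literals, the resolvent is their union with that pair removed. The Java sketch below represents clauses as sets of string literals, with '-' marking negation (a representation chosen just for the demo).

```java
import java.util.HashSet;
import java.util.Set;

// One propositional resolution step: resolve two clauses on a
// complementary pair of literals, e.g. {p, q} and {-q, r} give {p, r}.
public class Resolution {
    public static Set<String> resolve(Set<String> c1, Set<String> c2, String literal) {
        String negated = literal.startsWith("-")
            ? literal.substring(1) : "-" + literal;   // the complementary literal
        Set<String> resolvent = new HashSet<>(c1);
        resolvent.remove(literal);
        resolvent.addAll(c2);
        resolvent.remove(negated);
        return resolvent;
    }

    public static void main(String[] args) {
        System.out.println(resolve(Set.of("p", "q"), Set.of("-q", "r"), "q")); // {p, r}
    }
}
```

An inference engine repeats this step, aiming (in a refutation proof) at deriving the empty clause; first-order resolution additionally unifies the complementary literals.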

34

Inference engines that use resolution tend to carry out
refutation proofs. In this lecture, we practise doing
such proofs.

 Handout: ps (61k);
pdf (25k)

35

We discuss some of the issues involved in automating
resolution refutation proof. We discuss search, and we discuss
a restriction in expressiveness to so-called positive Horn
clauses as a means of gaining greater efficiency.

 Handout: ps (71k);
pdf (33k)

36

We discuss a particular kind of 'standalone' agent:
interactive expert systems. And we discuss one way of
implementing these systems,
using rules. (Effectively, these are based on positive
Horn clauses.)


37

We look at an algorithm for rule induction, i.e.
for learning rules from training data.

 Handout: ps (74k);
pdf (36k)

38

We look in more detail at the kinds of rules used in
rule-based systems, and we consider alternative ways of
building expert systems.

 Handout: ps (74k);
pdf (39k)

39

We look again at Case-Based Reasoning, especially
from the point of view of building 'standalone' agents.

 Handout: ps (101k);
pdf (39k)
 To see one of the first ever e-commerce systems to use CBR,
visit Hooke
& MacDonald's web site. Click 'Let on the Net',
fill in the form
and submit your search for
your desired property.
 The two main suppliers of CBR shells are
Kaidara and
empolis.

40

We look at multiagent systems again.
An intelligent agent that
exists in a multiagent system needs (a) to model the other agents,
and (b) to communicate with the other agents. We give an overview
of the issues involved.


41

We begin a consideration of how we might get computers to
communicate in natural languages, such as English. We give a
rapid overview of the different types of knowledge needed,
including: syntax, semantics, pragmatics and
world knowledge.


42

We examine the operation of a simple parser.
And we look at a rich grammar
formalism (the Definite Clause Grammar formalism) that is
well-suited to encoding fine syntactic distinctions.

 Handout: ps (59k);
pdf (31k)

43

We look at semantic rules (based on lambda-expressions).
And we discuss disambiguation.

 Handout: ps (58k);
pdf (27k)

44

We end with a general discussion: What is intelligence?
Is machine intelligence possible in principle? Is artificial
intelligence possible in practice? Is it desirable?
How will we know if we have
succeeded? And where is AI going from here?

