Derek Bridge

Past Teaching

Topics in Artificial Intelligence

Teaching Materials

NOTE: This web page describes a 10-credit module that was taught between 2005 and 2007. You may instead be looking for CS4618 Artificial Intelligence I or CS4619 Artificial Intelligence II.

Lecture Description Resources
1 This lecture gives an overview of how the course will be taught & examined. Then, it explains what the discipline of Artificial Intelligence is about.
2 This lecture introduces the task of classification. It then reviews some concepts of probability theory and shows how to do inference in general and classification in particular using a full joint probability distribution.
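  • Purely for illustration (this sketch is not part of the original module materials), here is how inference from a full joint probability distribution can be done in a few lines of Python; the variables and the numbers are made up:
    # Inference from a full joint probability distribution (illustrative, invented numbers).
    # Each entry maps an assignment of (Toothache, Cavity) to its probability.
    joint = {
        (True,  True):  0.10,
        (True,  False): 0.05,
        (False, True):  0.08,
        (False, False): 0.77,
    }

    def prob(predicate):
        """Sum the probabilities of all worlds that satisfy the predicate."""
        return sum(p for world, p in joint.items() if predicate(world))

    # Marginal: P(Cavity), summing out Toothache.
    p_cavity = prob(lambda w: w[1])

    # Conditional: P(Cavity | Toothache) = P(Cavity and Toothache) / P(Toothache).
    p_cavity_given_toothache = prob(lambda w: w[0] and w[1]) / prob(lambda w: w[0])

    print(p_cavity, p_cavity_given_toothache)   # 0.18 and 0.666...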
3 This lecture explains the operation of the naive Bayes classifier. (An illustrative code sketch follows the handout below.)
  • Handout: ps (72k); pdf (31k)
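  • Purely for illustration (not part of the original module materials), a minimal naive Bayes classifier in Python; the tiny dataset and its attribute values are invented:
    from collections import defaultdict

    # Invented training set: each example is (attribute values, class label).
    training = [
        (("sunny", "hot"),  "stay_in"),
        (("sunny", "mild"), "go_out"),
        (("rainy", "mild"), "stay_in"),
        (("sunny", "hot"),  "go_out"),
    ]

    class_counts = defaultdict(int)    # how often each class occurs
    attr_counts = defaultdict(int)     # how often (class, attribute index, value) occurs
    for attrs, label in training:
        class_counts[label] += 1
        for i, v in enumerate(attrs):
            attr_counts[(label, i, v)] += 1

    def classify(attrs):
        """Pick the class maximising P(class) * product over attributes of P(value | class).
        Counts are smoothed by adding 1 (each attribute here has two possible values)."""
        best, best_score = None, -1.0
        for label, c in class_counts.items():
            score = c / len(training)
            for i, v in enumerate(attrs):
                score *= (attr_counts[(label, i, v)] + 1) / (c + 2)
            if score > best_score:
                best, best_score = label, score
        return best

    print(classify(("sunny", "hot")))   # go_out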
4 This lecture explains the operation of the kNN classifier. Regression and product recommendation are also mentioned. (An illustrative code sketch follows the handout below.)
  • Handout: ps (120k); pdf (54k)
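  • Purely for illustration (not part of the original module materials), a minimal k-nearest-neighbour classifier in Python, using Euclidean distance and majority voting; the data are invented:
    import math
    from collections import Counter

    # Invented training data: each example is (numeric feature vector, class label).
    training = [
        ((1.0, 1.0), "A"),
        ((1.2, 0.8), "A"),
        ((4.0, 4.2), "B"),
        ((3.8, 4.0), "B"),
    ]

    def knn_classify(query, k=3):
        """Take a majority vote among the k training examples nearest to the query."""
        nearest = sorted(training, key=lambda ex: math.dist(query, ex[0]))[:k]
        votes = Counter(label for _, label in nearest)
        return votes.most_common(1)[0][0]

    print(knn_classify((1.1, 0.9)))   # A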
5 This lecture explains how to evaluate a classifier. The focus is on accuracy, and we discuss different validation methods. (An illustrative cross-validation sketch follows the resources below.)
  • Handout: ps (63k); pdf (30k)
  • The demos in the lecture were run using the Weka system, which is available for download from The University of Waikato in New Zealand. The system comes with tutorials and other documentation, but is more fully described in I.H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques (2nd edn.), Morgan Kaufmann, 2005.
  • The drinking dataset was collected by Dónal Doyle, now of University College Dublin, and I am grateful to him for making it available to us.
  • The Spambase dataset is available from the UCI Machine Learning Repository, along with numerous other datasets. The datasets will need editing into ARFF format to prepare them for use with Weka. (Details come with the Weka system.)
  • If you're interested, you can read about the many techniques used for spam filtering by the Apache SpamAssassin Project.
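  • Purely for illustration (not part of the original module materials, and not tied to Weka), a sketch of k-fold cross-validation for estimating a classifier's accuracy; the train and classify callbacks are hypothetical and supplied by the caller:
    import random

    def cross_validated_accuracy(examples, train, classify, k=10):
        """Estimate accuracy by k-fold cross-validation.
        `examples` is a list of (features, label) pairs;
        `train(examples)` builds a model; `classify(model, features)` predicts a label."""
        examples = examples[:]                      # leave the caller's list alone
        random.shuffle(examples)
        folds = [examples[i::k] for i in range(k)]  # k roughly equal folds
        correct = total = 0
        for i in range(k):
            held_out = folds[i]
            training = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
            model = train(training)
            for features, label in held_out:
                correct += (classify(model, features) == label)
                total += 1
        return correct / total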
6 We look at rule-based classifiers. We show that they are based on a restricted form of propositional logic, which we refer to as propositional definite clauses. We look at forward- and backward-chaining inference engines.
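  • Purely for illustration (not part of the original module materials), a minimal forward-chaining inference engine for propositional definite clauses; the rules are invented:
    # Invented rules: each rule is (list of body atoms, head atom); a fact has an empty body.
    rules = [
        ([], "has_fur"),
        ([], "gives_milk"),
        (["has_fur", "gives_milk"], "mammal"),
        (["mammal", "eats_meat"], "carnivore"),
    ]

    def forward_chain(rules):
        """Keep applying rules whose bodies are already known until nothing new is derived."""
        known = set()
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if head not in known and all(atom in known for atom in body):
                    known.add(head)
                    changed = True
        return known

    print(forward_chain(rules))   # {'has_fur', 'gives_milk', 'mammal'}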
7 This lecture introduces artificial neural networks and focuses on threshold logic units (TLUs), which are the building blocks for these networks.
8 This lecture first discusses fully connected, layered, feedforward nets and motivates the need for machine learning techniques to produce such nets. It then discusses learning in the case of TLUs. (An illustrative code sketch follows the handout below.)
  • Handout: ps (105k); pdf (34k)
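  • Purely for illustration (not part of the original module materials), a threshold logic unit with a simple error-correction (perceptron-style) weight update, learning the Boolean AND function; the learning rate and fixed threshold are arbitrary choices:
    # A TLU fires (outputs 1) iff the weighted sum of its inputs reaches the threshold.
    def tlu(weights, threshold, inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # Boolean AND

    weights, threshold, rate = [0.0, 0.0], 0.5, 0.1
    for _ in range(20):                        # a few passes over the training set
        for inputs, target in examples:
            error = target - tlu(weights, threshold, inputs)
            # Nudge each weight in the direction that reduces the error.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]

    print([tlu(weights, threshold, xs) for xs, _ in examples])   # [0, 0, 0, 1]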
9 This lecture discusses the back-propagation learning algorithm for fully connected, layered, feedforward neural nets. (An illustrative code sketch follows the resources below.)
  • Handout: ps (876k); pdf (84k)
  • You might wish to play with these two applets that demonstrate the back-prop algorithm: applet1 and applet2
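  • Purely for illustration (not part of the original module materials), a minimal back-propagation sketch: a 2-3-1 fully connected, layered, feedforward net of sigmoid units learning XOR. The learning rate and number of epochs are arbitrary choices:
    import math, random

    random.seed(0)
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))

    # Hidden-layer weights: one row per hidden unit, [w1, w2, bias].
    w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    # Output-unit weights: one per hidden unit, plus a bias.
    w_out = [random.uniform(-1, 1) for _ in range(4)]

    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    rate = 0.5                                   # arbitrary learning rate

    for _ in range(10000):
        for (x1, x2), target in data:
            # Forward pass.
            hidden = [sig(w[0] * x1 + w[1] * x2 + w[2]) for w in w_hidden]
            out = sig(sum(w * h for w, h in zip(w_out, hidden)) + w_out[3])
            # Backward pass: error terms for the output unit and the hidden units.
            delta_out = (target - out) * out * (1 - out)
            delta_hidden = [delta_out * w_out[j] * hidden[j] * (1 - hidden[j]) for j in range(3)]
            # Weight updates.
            for j in range(3):
                w_out[j] += rate * delta_out * hidden[j]
                w_hidden[j][0] += rate * delta_hidden[j] * x1
                w_hidden[j][1] += rate * delta_hidden[j] * x2
                w_hidden[j][2] += rate * delta_hidden[j]
            w_out[3] += rate * delta_out

    # After training, the four outputs should be close to the XOR targets 0, 1, 1, 0.
    for (x1, x2), _ in data:
        hidden = [sig(w[0] * x1 + w[1] * x2 + w[2]) for w in w_hidden]
        print(round(sig(sum(w * h for w, h in zip(w_out, hidden)) + w_out[3]), 2))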
10 This lecture explains how to compare classifiers. Criteria include accuracy, ability to handle different types of data, efficiency, transparency, etc.
  • Handout: ps (170k); pdf (65k)
11 This lecture explains what is meant in Artificial Intelligence when we use the word agent. We discuss different kinds of agents and different kinds of environments. We look at how to build a table-driven reactive agent.
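  • Purely for illustration (not part of the original module materials), a table-driven reactive agent is little more than a lookup from percepts to actions; the percepts and actions here are invented:
    # Invented percepts and actions; the agent's behaviour is fixed entirely by this table.
    action_table = {
        "obstacle_ahead": "turn_left",
        "dirt_here":      "suck",
        "all_clear":      "move_forward",
    }

    def reactive_agent(percept):
        """Map the current percept straight to an action, with no memory and no lookahead."""
        return action_table.get(percept, "do_nothing")

    print(reactive_agent("dirt_here"))   # suck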
12 This lecture explains genetic algorithms (GAs). In particular, it shows how to use a GA to evolve a table-driven agent.
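  • Purely for illustration (not part of the original module materials), a minimal genetic algorithm on bit strings. The fitness function here simply counts 1s; in the lecture's setting, a bit string would instead encode a table-driven agent's action table, and fitness would come from running the agent in its environment:
    import random

    random.seed(1)
    LENGTH, POP_SIZE, GENERATIONS, MUTATION = 20, 30, 40, 0.02   # arbitrary parameters

    fitness = lambda bits: sum(bits)                 # illustrative fitness: count the 1s

    def random_individual():
        return [random.randint(0, 1) for _ in range(LENGTH)]

    def select(population):
        """Tournament selection: the fitter of two randomly chosen individuals."""
        a, b = random.choice(population), random.choice(population)
        return a if fitness(a) >= fitness(b) else b

    def crossover(mum, dad):
        """Single-point crossover."""
        point = random.randrange(1, LENGTH)
        return mum[:point] + dad[point:]

    def mutate(bits):
        return [1 - b if random.random() < MUTATION else b for b in bits]

    population = [random_individual() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POP_SIZE)]

    print(max(fitness(ind) for ind in population))   # close to LENGTH, with luck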
13 This lecture looks at how to learn the table used by a table-driven agent. The learning algorithm we look at is called Q-Learning. It implements a form of reinforcement learning, which we contrast with supervised learning.
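  • Purely for illustration (not part of the original module materials), a minimal Q-learning sketch. The environment is an invented five-cell corridor in which the agent is rewarded for reaching the rightmost cell; all the parameters are arbitrary choices:
    import random

    random.seed(0)
    N_STATES, ACTIONS = 5, ("left", "right")
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1     # learning rate, discount factor, exploration rate

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def step(state, action):
        """Deterministic move along the corridor; reward 1 for reaching the last cell."""
        next_state = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
        return next_state, (1.0 if next_state == N_STATES - 1 else 0.0)

    for _ in range(500):                              # episodes
        state = 0
        while state != N_STATES - 1:
            # Epsilon-greedy choice: mostly exploit, occasionally explore.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            next_state, reward = step(state, action)
            # The Q-learning update rule.
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = next_state

    # The learned policy: move right from every cell.
    print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])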
14 This lecture looks at how to build production systems to implement reactive agents.
  • Handout: ps (94k); pdf (41k)
  • If you want to read more, then take a look at SOAR. SOAR is a general cognitive architecture which, at its lowest level, is implemented as a production system.
15 This lecture contrasts uncertainty and vagueness. It then gives a brief introduction to fuzzy set theory and fuzzy logic, which provide one way of handling vagueness in AI.
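  • Purely for illustration (not part of the original module materials), a fuzzy set represented by its membership function, together with the usual min/max connectives; the fuzzy set 'tall' and its boundaries are invented:
    def tall(height_cm):
        """Degree (between 0 and 1) to which a person of this height counts as tall (invented boundaries)."""
        if height_cm <= 160:
            return 0.0
        if height_cm >= 190:
            return 1.0
        return (height_cm - 160) / 30.0

    # Standard fuzzy connectives: negation, conjunction (min) and disjunction (max).
    def fuzzy_not(d):      return 1.0 - d
    def fuzzy_and(d1, d2): return min(d1, d2)
    def fuzzy_or(d1, d2):  return max(d1, d2)

    h = 175
    print(tall(h))                                   # 0.5
    print(fuzzy_and(tall(h), fuzzy_not(tall(h))))    # 0.5: 'tall and not tall' need not be 0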
16 This lecture discusses fuzzy control, which is the application of fuzzy logic to control tasks. It gives us another way of building reactive agents.
  • Handout: ps (95k); pdf (31k)
  • Optionally, read this more comprehensive treatment:
    Jan Jantzen: Design of Fuzzy Controllers
  • If you want to read a paper that critiques fuzzy logic, it's called The Paradoxical Success of Fuzzy Logic and can be downloaded from Charles Elkan's web site
  • You'll undoubtedly want to play with Edward Sazonov's superb applet that demonstrates a fuzzy controller for a crane. (You can manually control the crane; you can use Edward's fuzzy rules; or you can edit and try out your own fuzzy rules.)
17 In this lecture we look at societies of reactive agents. We discuss emergent properties, communication between reactive agents, Artificial Life and the Society of Mind.
  • Handout: ps (83k); pdf (39k)
  • Floys
    • Here's a local copy of the Floys applet used in lectures
    • This applet was developed by Ariel Dolan. His web site is worth visiting
    • Floys are based on Java Flies, credited to Alex Vulliamy and Jeff Cragg. See Alex's home page
  • Boids
    • Here's a local copy of the Boids applet used in lectures
    • This applet was developed by Conrad Parker. The presentation of boids used in the lecture is also based on his work. Visit his boids page
    • The man credited with originating boids is Craig Reynolds. His web site has all sorts of boids-related and ALife-related links
  • StarLogo
    • The system I used to demonstrate the foraging ants is StarLogo, which is a programmable modelling environment
    • If you decide to download this environment, you'll receive some sample projects, you'll be able to write your own projects, and you may be able to run my foraging ants project. (My foraging ants code is a rehash of some code written by Mitchel Resnick, the creator of StarLogo. I updated his code to suit version 1.1 of StarLogo. My own code may not work on more recent versions of StarLogo!)
  • Framsticks are a fun example of ALife.
  • The man most associated with the Society of Mind idea (and author of a book of this title) is Marvin Minsky
18 We conclude our discussion of swarm intelligence by describing an ant algorithm for solving travelling salesperson problems.
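  • Purely for illustration (not part of the original module materials), a much-simplified ant algorithm for a tiny travelling salesperson problem; the city coordinates and all the parameters are invented:
    import random

    random.seed(0)
    CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]      # invented cities
    N = len(CITIES)
    dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for (x2, y2) in CITIES]
            for (x1, y1) in CITIES]
    pheromone = [[1.0] * N for _ in range(N)]
    EVAPORATION, ANTS, ITERATIONS = 0.5, 10, 50

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % N]] for i in range(N))

    def build_tour():
        """One ant builds a tour, choosing each next city with probability
        proportional to pheromone divided by distance."""
        tour, unvisited = [0], set(range(1, N))
        while unvisited:
            here, candidates = tour[-1], list(unvisited)
            weights = [pheromone[here][c] / dist[here][c] for c in candidates]
            tour.append(random.choices(candidates, weights)[0])
            unvisited.remove(tour[-1])
        return tour

    best = min((build_tour() for _ in range(ANTS)), key=tour_length)
    for _ in range(ITERATIONS):
        tours = [build_tour() for _ in range(ANTS)]
        best = min(tours + [best], key=tour_length)
        # Evaporate, then let each ant lay pheromone on the edges of its tour.
        pheromone = [[p * EVAPORATION for p in row] for row in pheromone]
        for t in tours:
            for i in range(N):
                a, b = t[i], t[(i + 1) % N]
                pheromone[a][b] += 1.0 / tour_length(t)
                pheromone[b][a] += 1.0 / tour_length(t)

    print(best, round(tour_length(best), 2))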
19 In this lecture we see how adding a memory of past sensory inputs can give us an agent that is more intelligent than a reactive agent. Our discussion touches on the notions of belief states and fully observable and partially observable environments.
  • Handout: ps (97k); pdf (30k)
20 We discuss world models in more detail. In particular, we distinguish between what we will call analogical representations and logical representations.
  • Handout: ps (104k); pdf (25k)
21 In this lecture, we abandon analogical representations of states and move on to using logical representations. The lecture reviews the syntax of first-order predicate logic (FOPL).
  • Handout: ps (60k); pdf (26k)
22 This lecture explains the semantics (or model-theory) of first-order predicate logic.
  • Handout: ps (513k); pdf (48k)
23 We introduce the idea of an agent's knowledge base and inference engine. We discuss the process of knowledge engineering.
  • Handout: ps (53k); pdf (22k)
24 In this lecture, we work through a knowledge engineering case study.
25 We turn now to reasoning. In this lecture, we see what we require of an inference engine. We look at logical consequence, proof theories and soundness & completeness.
  • Handout: ps (88k); pdf (43k)
26 We look at clausal form logic. This gives us a canonical form for FOPL, which is useful in automated proof theories.
  • Handout: ps (72k); pdf (30k)
27 In this lecture, we look at the operation of unification, applied to atoms of first-order predicate logic. This is a pattern-matching operation that is fundamental to most computation using logic. (An illustrative code sketch follows the handout below.)
  • Handout: ps (48k); pdf (20k)
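  • Purely for illustration (not part of the original module materials), a minimal unification sketch. Atoms and compound terms are tuples whose first element is the predicate or functor symbol, variables are strings beginning with '?', and the occurs check is omitted for brevity; the example atoms are invented:
    def unify(x, y, subst=None):
        """Return a most general unifier of x and y (a dict of bindings), or None."""
        if subst is None:
            subst = {}
        if x == y:
            return subst
        if isinstance(x, str) and x.startswith("?"):
            return unify_var(x, y, subst)
        if isinstance(y, str) and y.startswith("?"):
            return unify_var(y, x, subst)
        if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
            for xi, yi in zip(x, y):
                subst = unify(xi, yi, subst)
                if subst is None:
                    return None
            return subst
        return None

    def unify_var(var, term, subst):
        if var in subst:
            return unify(subst[var], term, subst)
        if isinstance(term, str) and term.startswith("?") and term in subst:
            return unify(var, subst[term], subst)
        return {**subst, var: term}

    # loves(?x, mary) and loves(john, ?y) unify under {?x: john, ?y: mary}.
    print(unify(("loves", "?x", "mary"), ("loves", "john", "?y")))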
28 Many inference engines use just one rule of inference, which we look at today. It is called resolution.
  • Handout: ps (68k); pdf (24k)
29 Inference engines that use resolution tend to carry out refutation proofs. In this lecture, we practise doing such resolution refutation proofs. (An illustrative code sketch follows the handout below.)
  • Handout: ps (63k); pdf (29k)
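  • Purely for illustration (not part of the original module materials), a resolution refutation sketch for the propositional case (the lectures' first-order case also needs unification). A literal is a string, with negation written as a leading '~', and a clause is a frozenset of literals; the example knowledge base is invented:
    from itertools import combinations

    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolve(c1, c2):
        """Yield every resolvent of the two clauses."""
        for lit in c1:
            if negate(lit) in c2:
                yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

    def refutes(clauses):
        """True iff the empty clause can be derived, i.e. the clauses are unsatisfiable."""
        clauses = set(clauses)
        while True:
            new = set()
            for c1, c2 in combinations(clauses, 2):
                for resolvent in resolve(c1, c2):
                    if not resolvent:             # the empty clause: a contradiction
                        return True
                    new.add(resolvent)
            if new <= clauses:                    # nothing new can be derived
                return False
            clauses |= new

    # Invented KB: p, and p -> q (clausal form: ~p v q).  To prove q, refute the KB plus ~q.
    kb = [frozenset({"p"}), frozenset({"~p", "q"})]
    print(refutes(kb + [frozenset({"~q"})]))      # True: q follows from the KB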
30 In this lecture we discuss deliberative agents. In contrast to reactive agents, they think ahead: they simulate the effects of actions 'in their heads' as a way of choosing actions for execution. We introduce the idea of a state space.
  • Handout: ps (75k); pdf (33k)
31 We look at a general search algorithm for finding paths in state spaces. We discuss the difference between the state space and the search tree. Finally, we look at two uninformed search strategies: breadth-first and depth-first. (An illustrative code sketch follows the handout below.)
  • Handout: ps (69k); pdf (31k)
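  • Purely for illustration (not part of the original module materials), a sketch of the general search algorithm in which the frontier's discipline determines the strategy: a FIFO queue gives breadth-first search, a LIFO stack gives depth-first; the state space is an invented graph:
    from collections import deque

    def search(start, is_goal, successors, breadth_first=True):
        """Return a path (list of states) from start to a goal state, or None."""
        frontier = deque([[start]])               # each frontier entry is a path
        visited = {start}
        while frontier:
            path = frontier.popleft() if breadth_first else frontier.pop()
            state = path[-1]
            if is_goal(state):
                return path
            for next_state in successors(state):
                if next_state not in visited:     # avoid revisiting states
                    visited.add(next_state)
                    frontier.append(path + [next_state])
        return None

    # Invented state space.
    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"], "F": []}
    print(search("A", lambda s: s == "F", lambda s: graph[s]))   # ['A', 'B', 'D', 'F']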
32 We discuss one more uninformed search strategy: least-cost search. Then, we look at some informed search strategies, which use heuristic functions to focus the search. In particular, we discuss greedy search and A* search. (An illustrative code sketch follows the handout below.)
  • Handout: ps (114k); pdf (41k)
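  • Purely for illustration (not part of the original module materials), a minimal A* search sketch: g is the cost of the path so far, h is a heuristic estimate of the remaining cost, and nodes are expanded in increasing order of f = g + h. The weighted graph is invented, and the zero heuristic used in the example makes A* behave like least-cost search:
    import heapq

    def a_star(start, is_goal, successors, h):
        """successors(state) yields (next_state, step_cost) pairs; returns (path, cost) or None."""
        frontier = [(h(start), 0, [start])]       # entries are (f, g, path)
        best_g = {start: 0}
        while frontier:
            f, g, path = heapq.heappop(frontier)
            state = path[-1]
            if is_goal(state):
                return path, g
            for next_state, cost in successors(state):
                new_g = g + cost
                if new_g < best_g.get(next_state, float("inf")):
                    best_g[next_state] = new_g
                    heapq.heappush(frontier, (new_g + h(next_state), new_g, path + [next_state]))
        return None

    # Invented weighted graph.
    graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
    print(a_star("A", lambda s: s == "D", lambda s: graph[s], lambda s: 0))   # (['A', 'B', 'C', 'D'], 3)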
33 This lecture contains a brief overview of case-based reasoning. This is followed by a presentation on the use of case-based reasoning in AI search.
  • Handout: ps (915k); pdf (25k)
  • The paper we discuss in the lecture is
    McGinty, L. and Smyth, B.: 'Personalised Route Planning: A Case-Based Approach'. In E. Blanzieri and L. Portinale (eds.), Advances in Case-Based Reasoning (Procs. of the 5th European Workshop), LNAI 1898, pp. 431-442, Springer, 2000
34 The lecture introduces AI planning: finding sequences of actions when states have logical representations. Topics mentioned include: The Blocks World; STRIPS representation; progression planning and regression planning; problem decomposition; The Sussman Anomaly; state-space planning and plan-space planning; total-order planning and partial-order planning; and the principle of least commitment.
35 We study a simplified planner called POP. We see that POP is a regression planner that searches plan-space, uses problem decomposition, builds partially-ordered plans and operates by the principle of least commitment.
  • Handout: ps (95k); pdf (36k)
36 We look at hierarchical planning. After a brief study of hierarchical approximation, we focus on hierarchical decomposition. We discuss how to modify POP in both cases.
  • Handout: ps (119k); pdf (48k)
37 We turn from classical planners to non-classical planners, which can operate in non-deterministic domains. We discuss how bounded indeterminacy can be handled by conditional planning (a.k.a. contingency planning). And we look in more detail at how to handle unbounded indeterminacy using execution monitoring & replanning. We end with an example of continuous planning.
  • Handout: ps (123k); pdf (41k)
38 We look at multi-agent systems again. An intelligent agent that exists in a multi-agent system needs (a) to model the other agents, and (b) to communicate with the other agents. We give an overview of the issues involved.
39 We begin a consideration of how we might get computers to communicate in natural languages, such as English. We give a rapid overview of the different types of knowledge needed, including: syntax, semantics, pragmatics and world knowledge.
40 We examine the operation of a simple parser. And we look at a rich grammar formalism (the Definite Clause Grammar formalism) that is well-suited to encoding fine syntactic distinctions.
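  • Purely for illustration (not part of the original module materials), a minimal parser for a tiny context-free grammar, written in Python rather than as a Definite Clause Grammar; the grammar and lexicon are invented:
    # Invented toy grammar and lexicon.
    grammar = {
        "S":  [["NP", "VP"]],
        "NP": [["Det", "N"]],
        "VP": [["V", "NP"], ["V"]],
    }
    lexicon = {"Det": {"the", "a"}, "N": {"dog", "cat"}, "V": {"chased", "slept"}}

    def parse(symbol, words, start):
        """Yield (parse_tree, next_position) for every way of parsing `symbol`
        beginning at position `start` in `words`."""
        if symbol in lexicon:
            if start < len(words) and words[start] in lexicon[symbol]:
                yield (symbol, words[start]), start + 1
        else:
            for rhs in grammar[symbol]:
                yield from parse_sequence(rhs, words, start, (symbol,))

    def parse_sequence(symbols, words, start, tree_so_far):
        if not symbols:
            yield tree_so_far, start
            return
        for subtree, pos in parse(symbols[0], words, start):
            yield from parse_sequence(symbols[1:], words, pos, tree_so_far + (subtree,))

    sentence = "the dog chased a cat".split()
    for tree, pos in parse("S", sentence, 0):
        if pos == len(sentence):                  # only keep parses of the whole sentence
            print(tree)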
41 We look at semantic rules (based on lambda-expressions). And we discuss pragmatics.
  • Handout: ps (49k); pdf (23k)
42 We look briefly at some applications of natural language processing including spell-checking, machine translation and question answering. In doing so, we discuss deep and shallow approaches, and we discuss part-of-speech tagging and word sense disambiguation.
43 We end with a general discussion: what is intelligence; is machine intelligence possible in principle; is artificial intelligence possible in practice; is it desirable; how will we know if we have succeeded; where is AI going from here?