NOTE: This web page describes a 10-credit module that was taught between 2005 and 2007. Perhaps you are looking instead for CS4618 Artificial Intelligence I or CS4619 Artificial Intelligence II.
Lecture | Description | Resources |
---|---|---|
1 | This lecture gives an overview of how the course will be taught & examined. Then, it explains what the discipline of Artificial Intelligence is about. | |
2 | This lecture introduces the task of classification. It then reviews some concepts of probability theory and shows how to do inference in general and classification in particular using a full joint probability distribution. | |
3 | This lecture explains the operation of the naive Bayes classifier. | |
4 | This lecture explains the operation of the kNN classifier. Regression and product recommendation are also mentioned. | |
5 | This lecture explains how to evaluate a classifier. The focus is on accuracy, and we discuss different validation methods. | |
6 | We look at rule-based classifiers. We show they are based on a restricted form of propositional logic, which we refer to as propositional definite clauses. We look at forward- and backward-chaining inference engines. | |
7 | This lecture introduces artificial neural networks and focuses on threshold logic units (TLUs), which are the building blocks for these networks. | |
8 | This lecture firstly discusses fully connected, layered, feedforward nets and motivates the need to use machine learning techniques to produce these nets. It secondly discusses learning in the case of TLUs. | |
9 | This lecture discusses the back-propagation learning algorithm for fully connected, layered, feedforward neural nets. | |
10 | This lecture explains how to compare classifiers. Criteria include accuracy, ability to handle different types of data, efficiency, transparency, etc. | |
11 | This lecture explains what is meant in Artificial Intelligence when we use the word agent. We discuss different kinds of agents and different kinds of environments. We look at how to build a table-driven reactive agent. | |
12 | This lecture explains genetic algorithms (GAs). In particular, it shows how to use a GA to evolve a table-driven agent. | |
13 | This lecture looks at how to learn the table used by a table-driven agent. The learning algorithm we look at is called Q-Learning. It implements a form of reinforcement learning, which we contrast with supervised learning. | |
14 | This lecture looks at how to build production systems to implement reactive agents. | |
15 | This lecture contrasts uncertainty and vagueness. It then gives a brief introduction to fuzzy set theory and fuzzy logic, which provide one way of handling vagueness in AI. | |
16 | This lecture discusses fuzzy control, which is the application of fuzzy logic to control tasks. It gives us another way of building reactive agents. | |
17 | In this lecture we look at societies of reactive agents. We discuss emergent properties, communication between reactive agents, Artificial Life and the Society of Mind. | |
18 | We conclude our discussion of swarm intelligence by describing an ant algorithm for solving travelling salesperson problems. | |
19 | In this lecture we see how adding a memory of past sensory inputs can give us an agent that is more intelligent than a reactive agent. Our discussion touches on the notions of belief states and fully observable and partially observable environments. | |
20 | We discuss world models in more detail. In particular, we distinguish between what we will call analogical representations and logical representations. | |
21 | In this lecture, we abandon analogical representations of states and move on to using logical representations. The lecture reviews the syntax of first-order predicate logic. | |
22 | This lecture explains the semantics (or model-theory) of first-order predicate logic. | |
23 | We introduce the idea of an agent's knowledge base and inference engine. We discuss the process of knowledge engineering. | |
24 | In this lecture, we work through a knowledge engineering case study. | |
25 | We turn now to reasoning. In this lecture, we see what we require of an inference engine. We look at logical consequence, proof theories and soundness & completeness. | |
26 | We look at clausal form logic. This gives us a canonical form for first-order predicate logic, which is useful in automated theorem proving. | |
27 | In this lecture, we look at the operation of unification, applied to atoms of first-order predicate logic. This is a pattern-matching operation that is fundamental to most computation using logic. | |
28 | Many inference engines use just one rule of inference, which we look at today. It is called resolution. | |
29 | Inference engines that use resolution tend to carry out refutation proofs. In this lecture, we practice doing such resolution refutation proofs. | |
30 | In this lecture we discuss deliberative agents. In contrast to reactive agents, they think ahead: they simulate the effects of actions 'in their heads' as a way of choosing actions for execution. We introduce the idea of a state space. | |
31 | We look at a general search algorithm for finding paths in state spaces. We discuss the difference between the state space and the search tree. Finally, we look at two uninformed search strategies: breadth-first and depth-first. | |
32 | We discuss one more uninformed search strategy: least-cost search. Then, we look at some informed search strategies, which use heuristic functions to focus the search. In particular, we discuss greedy search and A* search. | |
33 | This lecture contains a brief overview of case-based reasoning. This is followed by a presentation on the use of case-based reasoning in AI search. | |
34 | The lecture introduces AI planning: finding sequences of actions when states have logical representations. Topics mentioned include: the Blocks World; the STRIPS representation; progression planning and regression planning; problem decomposition; the Sussman Anomaly; state-space planning and plan-space planning; total-order planning and partial-order planning; and the principle of least commitment. | |
35 | We study a simplified planner called POP. We see that POP is a regression planner that searches plan-space, uses problem decomposition, builds partially-ordered plans and operates by the principle of least commitment. | |
36 | We look at hierarchical planning. After a brief study of hierarchical approximation, we focus on hierarchical decomposition. We discuss how to modify POP in both cases. | |
37 | We turn from classical planners to non-classical planners, which can operate in non-deterministic domains. We discuss how bounded indeterminacy can be handled by conditional planning (a.k.a. contingency planning). And we look in more detail at how to handle unbounded indeterminacy using execution monitoring & replanning. We end with an example of continuous planning. | |
38 | We look at multi-agent systems again. An intelligent agent that exists in a multi-agent system needs (a) to model the other agents, and (b) to communicate with the other agents. We give an overview of the issues involved. | |
39 | We begin a consideration of how we might get computers to communicate in natural languages, such as English. We give a rapid overview of the different types of knowledge needed, including: syntax, semantics, pragmatics and world knowledge. | |
40 | We examine the operation of a simple parser. And we look at a rich grammar formalism (the Definite Clause Grammar formalism) that is well-suited to encoding fine syntactic distinctions. | |
41 | We look at semantic rules (based on lambda-expressions). And we discuss pragmatics. | |
42 | We look briefly at some applications of natural language processing, including spell-checking, machine translation and question answering. In doing so, we discuss deep and shallow approaches, and we discuss part-of-speech tagging and word sense disambiguation. | |
43 | We end with a general discussion: what is intelligence; is machine intelligence possible in principle; is artificial intelligence possible in practice; is it desirable; how will we know if we have succeeded; and where is AI going from here? | |
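As a flavour of the material in lecture 3, here is a minimal sketch of a naive Bayes classifier over discrete features. The weather data set, feature names and helper functions below are invented for illustration (and no smoothing is applied); the lecture's own notation and examples may differ.

```python
from collections import Counter, defaultdict

def train_naive_bayes(examples):
    """Estimate P(class) and the per-class feature-value counts
    from a list of (feature_dict, class_label) pairs."""
    class_counts = Counter(label for _, label in examples)
    value_counts = defaultdict(Counter)  # (feature, class) -> Counter of values
    for features, label in examples:
        for f, v in features.items():
            value_counts[(f, label)][v] += 1
    n = len(examples)
    priors = {c: class_counts[c] / n for c in class_counts}
    return priors, value_counts, class_counts

def classify(features, priors, value_counts, class_counts):
    """Pick the class c maximising P(c) * prod_f P(f = v | c)."""
    best, best_score = None, -1.0
    for c, prior in priors.items():
        score = prior
        for f, v in features.items():
            score *= value_counts[(f, c)][v] / class_counts[c]
        if score > best_score:
            best, best_score = c, score
    return best

model = train_naive_bayes([
    ({"outlook": "sunny"}, "no"),
    ({"outlook": "sunny"}, "no"),
    ({"outlook": "rain"}, "yes"),
    ({"outlook": "rain"}, "yes"),
])
```

With this toy training set, `classify({"outlook": "sunny"}, *model)` returns `"no"`.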
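The Q-Learning of lecture 13 can be sketched as a few lines of tabular code. The tiny corridor world in the usage note, the state/action names and the parameter values are illustrative assumptions, not the lecture's own example.

```python
import random

def q_learn(n_states, actions, step, start, episodes=500,
            alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning. step(s, a) -> (next_state, reward, done).
    Returns a dict mapping (state, action) to an estimated long-run value."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s, done = start, False
        while not done:
            if rng.random() < epsilon:                     # explore
                a = rng.choice(actions)
            else:                                          # exploit current estimates
                a = max(actions, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(q[(s2, a2)] for a2 in actions)
            q[(s, a)] += alpha * (target - q[(s, a)])      # move estimate toward target
            s = s2
    return q
```

For example, in a three-state corridor where moving right from state 1 reaches the goal (state 2) with reward 1, the learned table prefers 'R' in every state, and the agent never needs to be told the correct table in advance, which is the contrast with supervised learning drawn in the lecture.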
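The informed search of lecture 32 can also be sketched briefly. The graph and the zero heuristic in the usage note are illustrative assumptions; with a zero heuristic, A* behaves like the least-cost (uniform-cost) search discussed in the same lecture.

```python
import heapq

def a_star(start, goal, neighbours, h):
    """A* search: expand nodes in increasing order of f(n) = g(n) + h(n).
    neighbours(n) yields (next_node, step_cost); h is an admissible heuristic.
    Returns a least-cost path as a list of nodes, or None if the goal is unreachable."""
    frontier = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}                      # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbours(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None
```

On the graph `{'A': [('B', 1.0), ('C', 4.0)], 'B': [('C', 1.0)], 'C': []}` with `h = lambda n: 0.0`, the search returns `['A', 'B', 'C']` (cost 2) rather than the direct but dearer edge A-C.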