Uncertainty in Recommender Systems
In Victor Coscrato's Ph.D. work, completed under my supervision, we have developed techniques to estimate, evaluate and utilise uncertainty in recommender systems (RSs). We have covered three cases: prediction uncertainty in explicit-feedback recommender systems; prediction uncertainty in implicit-feedback recommender systems; and label uncertainty in implicit-feedback recommender systems.
Explicit-feedback recommender systems are ones that ask users for their opinions of recommended items — usually the opinions come in the form of numeric ratings. From these ratings, these systems learn to predict ratings for items the user has not consumed, and the predictions can determine which items to recommend. The predictions are uncertain. In the literature, we found several ways of estimating this uncertainty, and we devised some of our own. We adapted metrics for comparing these uncertainty estimates, and developed new methods. And we explored ways of using the estimates in determining the items to recommend to a user [Coscrato & Bridge 2023].
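One common way to obtain such estimates, drawn from the general machine learning toolbox rather than necessarily one of the estimators studied in the thesis, is to train a small ensemble of models on the same ratings and use the spread of their predictions as the uncertainty. A minimal sketch, with an illustrative matrix-factorisation model and toy data:

    # Minimal sketch: ensemble spread as a prediction-uncertainty estimate for an
    # explicit-feedback recommender. Illustrative only; the models, hyperparameters
    # and data are placeholders, not those from the thesis.
    import numpy as np

    def train_mf(ratings, n_users, n_items, k=8, epochs=30, lr=0.01, reg=0.05, seed=0):
        """Train a plain matrix-factorisation model with SGD on (user, item, rating) triples."""
        rng = np.random.default_rng(seed)
        P = 0.1 * rng.standard_normal((n_users, k))
        Q = 0.1 * rng.standard_normal((n_items, k))
        for _ in range(epochs):
            rng.shuffle(ratings)
            for u, i, r in ratings:
                err = r - P[u] @ Q[i]
                P[u], Q[i] = P[u] + lr * (err * Q[i] - reg * P[u]), Q[i] + lr * (err * P[u] - reg * Q[i])
        return P, Q

    def ensemble_predict(models, u, i):
        """Return the mean prediction and its spread (std. dev.) across the ensemble."""
        preds = np.array([P[u] @ Q[i] for P, Q in models])
        return preds.mean(), preds.std()

    # Toy data: (user, item, rating) triples.
    ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 2.0), (2, 1, 1.0), (2, 2, 5.0)]
    models = [train_mf(list(ratings), n_users=3, n_items=3, seed=s) for s in range(5)]
    mean, spread = ensemble_predict(models, u=0, i=2)
    print(f"predicted rating {mean:.2f}, uncertainty (ensemble spread) {spread:.2f}")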
Implicit-feedback recommender systems do not ask users for opinions. Instead, they infer user preferences from user behaviour, e.g. from clicks and purchases. From these implicit signals, these systems learn to predict the relevance of items the user has not interacted with, and the predictions can determine which items to recommend. Again, the predictions are uncertain. We adapted methods from the general machine learning literature so that they could be used to estimate this uncertainty. Using metrics and methods that we adapted from our work on explicit-feedback RSs, we compared these uncertainty estimation methods. And we explored ways of using them to determine which items to recommend [Coscrato & Bridge 2022].
There is uncertainty too in the interaction data that implicit-feedback RSs learn from. For example, a click is taken to be a sign of interest, but it might have been a mistake. The absence of a click is taken to be a sign of lack of interest (at least for those items that are sampled during the learning process). But the absence of a click may just be due to lack of exposure, or it may be a sign of consumption on some other platform. We refer to this as label uncertainty. We developed several novel methods to estimate label uncertainty, and ways of utilising it in determining which items to recommend.
iSee: Intelligent Sharing of Explanation Experience by Users for Users
Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requires employing a combination of these explainers. We refer to such combinations as explanation strategies. In this project, we developed an interactive platform that facilitates the reuse of explanation strategies and promotes best practices in XAI. It employs Case-based Reasoning (CBR) to capture and reuse explanation strategy experience [Wijekoon et al. 2023].
The project was funded through the CHIST-ERA Programme. Alongside Preeja Pradeep and me from UCC, the project partners were Complutense University of Madrid, Robert Gordon University, and BT France.
Interpretability of Music Playlists
In Giovanni Gabbolini's Ph.D. work, completed under my supervision, we have worked on enhancing the intelligibility of music playlists. We started with the idea of segues — natural language connections between consecutive songs. We introduced Dave, a system for finding segues: Dave finds paths between songs in a music knowledge graph, scores them for interestingness, and converts the most interesting into English. Our novel definition of interestingness along with results from a user trial are in [Gabbolini & Bridge 2021a]. From there, we devised algorithms to create tours, where a tour is a playlist but with segues between each pair of consecutive songs [Gabbolini & Bridge 2021b]. We evaluated tours through a thematic analysis of semi-structured interviews [Gabbolini & Bridge 2022]. Our work on segues also led us to develop and evaluate IPSim, a new similarity measure for nodes in a knowledge graph [Gabbolini & Bridge 2021c].
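To convey the general shape of segue generation, here is a minimal sketch: find short paths between two songs in a toy music knowledge graph, score them for interestingness, and report the best one. The graph, the rarity-based score and the path rendering are illustrative assumptions, not Dave's actual knowledge graph, scoring function or templates.

    # Minimal sketch of segue generation between two songs: find short paths in a
    # toy music knowledge graph, score them, and report the best one.
    from collections import defaultdict

    # Edges: (subject, relation, object).
    edges = [
        ("Hey Jude", "performed_by", "The Beatles"),
        ("Let It Be", "performed_by", "The Beatles"),
        ("Imagine", "performed_by", "John Lennon"),
        ("John Lennon", "member_of", "The Beatles"),
        ("Hey Jude", "written_by", "Paul McCartney"),
        ("Paul McCartney", "member_of", "The Beatles"),
    ]

    graph = defaultdict(list)
    for s, rel, o in edges:
        graph[s].append((rel, o))
        graph[o].append((rel + "^-1", s))          # inverse edge, so paths can go both ways

    def paths(src, dst, max_len=2):
        """Enumerate simple paths (as lists of (relation, node) hops) up to max_len edges."""
        stack = [(src, [])]
        while stack:
            node, hops = stack.pop()
            if node == dst and hops:
                yield hops
                continue
            if len(hops) == max_len:
                continue
            for rel, nxt in graph[node]:
                if nxt not in {src} | {n for _, n in hops}:
                    stack.append((nxt, hops + [(rel, nxt)]))

    def interestingness(hops):
        """Toy score: prefer paths through rarer (low-degree) intermediate nodes."""
        inner = [n for _, n in hops[:-1]]
        return sum(1.0 / len(graph[n]) for n in inner) if inner else 0.0

    best = max(paths("Imagine", "Let It Be", max_len=3), key=interestingness)
    print("Both songs are connected via:", " -> ".join(f"{rel} {node}" for rel, node in best))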
Another way to improve the intelligibility of playlists is to assign semantic information to the whole playlist. In [Gabbolini & Bridge 2023], for example, we designed and compared various classifiers that predict the listening contexts of playlists. We found that using knowledge graphs gave us the best results. Finally, Giovanni also did some work with research scientists at the Deezer music streaming service to automatically caption playlists.
Evaluation of Active Learning in Recommender Systems
We have explored the role of Active Learning in recommender systems, where the system proactively asks users for their opinions of query items. The query items are not recommendations (which would typically be items that the user has not previously consumed); rather, they are items that the system predicts are known to the user, and whose opinions it predicts will be useful for improving the quality of its subsequent recommendations.
In Diego Carraro's Ph.D. work, completed under my supervision, we
focused on the offline evaluation of these Active Learning strategies. In particular, we devised a new method, which we call 'intervention', for debiasing a recommender dataset so that it gives more reliable estimates of real-world performance [Carraro & Bridge 2019; Carraro & Bridge 2020a; Carraro & Bridge 2020b; Carraro & Bridge 2021]. We also proposed a much more systematic way of evaluating Active Learning strategies that considers their effects on a much wider range of system behaviours [Carraro & Bridge 2018]. And we proposed some new Active Learning strategies.
Chain-Based Recommendation
We have developed a form of content-based recommender system that builds chains of items (movies, for example). There is a local constraint that
consecutive items be related; and there is a global constraint that ensures that items are related either to a candidate for recommendation or to
items in the user's profile. We have found this novel formulation to be useful so far in achieving two different recommendation goals.
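To make the chain idea concrete, here is a hedged sketch of one way such chains might be grown greedily: each new item must be related to its predecessor (the local constraint) and to the candidate-or-profile set (the global constraint). The feature-overlap relation, the greedy choice and the data are illustrative, not the published construction.

    # Minimal sketch of growing an item chain under a local constraint (consecutive
    # items share features) and a global constraint (every item relates to the
    # candidate or the user's profile). Not the published r-by-e construction.
    def related(a, b, items, min_shared=1):
        """Local relation: two items are related if they share enough features."""
        return len(items[a] & items[b]) >= min_shared

    def build_chain(candidate, profile, items, max_len=4):
        chain = [candidate]
        allowed = set(profile) | {candidate}               # global constraint anchor set
        pool = set(items) - set(chain)
        while len(chain) < max_len:
            extensions = [x for x in pool
                          if related(chain[-1], x, items)                 # local constraint
                          and any(related(x, a, items) for a in allowed)] # global constraint
            if not extensions:
                break
            # Greedily pick the extension sharing most features with the profile.
            best = max(extensions, key=lambda x: sum(len(items[x] & items[p]) for p in profile))
            chain.append(best)
            pool.discard(best)
        return chain

    # Toy movie catalogue: item -> set of content features.
    items = {
        "Alien":        {"sci-fi", "horror", "space"},
        "Aliens":       {"sci-fi", "action", "space"},
        "Blade Runner": {"sci-fi", "noir"},
        "Heat":         {"crime", "action"},
    }
    profile = ["Alien", "Heat"]                 # items the user has liked
    print(build_chain("Blade Runner", profile, items))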
In one piece of work, we have used the chains to explain recommendations. We call this Recommendation-by-Explanation (r-by-e). Here, the system constructs
a reason, or explanation, for recommending each candidate item; then it recommends those candidate items that have the best explanations. By unifying recommendation and explanation, r-by-e finds relevant recommendations with explanations that have a high degree of fidelity
[Rana et al. 2022, Rana & Bridge 2018,
Rana & Bridge 2017].
In another piece of work, we have used chains to navigate through a space of recommendations, guided by user feedback. We call this
Navigation-by-Preference (n-by-p). We show that by combining a user’s feedback over several cycles and using the user’s long-term preferences,
Navigation-by-Preference can take the user to a recommendation that is in tune with her long-term interests but which also takes her short-term
interests (revealed by her feedback) into account.
Subprofile-Aware Diversification of Recommendations
In Mesut Kaya's PhD, completed under my supervision, we investigated the diversity of recommendation lists again. We built on work on
intent-aware diversification, where diversity is formulated in terms of coverage and relevance of so-called aspects. The aspects are most commonly defined in terms of item features. By trying to ensure that the aspects of a set of recommended items cover the aspects of the items in the user's profile, the level of diversity is more personalized. In our work, we developed a new form of intent-aware diversification, which we call SPAD (Subprofile-Aware Diversification). In SPAD, the aspects are not item features; they are subprofiles of the user's profile (a sketch of this style of greedy re-ranking follows the references below). We developed and compared a number of different ways of extracting subprofiles from a user's profile. None of them is defined in terms of item features. Therefore, SPAD is useful even in domains where item features are not available or are of low quality. We found SPAD to give very accurate recommendations, but also to often defy the relevance/diversity trade-off
[Kaya & Bridge 2019a,
Kaya & Bridge 2019b,
Bridge et al. 2019,
Kaya & Bridge 2018a,
Kaya & Bridge 2018b,
Kaya & Bridge 2017].
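Here is the sketch referred to above: a hedged sketch of intent-aware greedy re-ranking with subprofiles playing the role of aspects, in the spirit of SPAD. The subprofiles, the coverage and relevance estimates and the trade-off weight are illustrative placeholders, not the published method.

    # Hedged sketch of intent-aware greedy re-ranking with subprofiles as aspects.
    def greedy_rerank(candidates, rel, subprofiles, item_covers, n=3, lam=0.5):
        """Re-rank candidates, trading relevance against coverage of the user's subprofiles."""
        selected = []
        # Probability mass of each subprofile, proportional to its size.
        total = sum(len(sp) for sp in subprofiles)
        p_aspect = [len(sp) / total for sp in subprofiles]
        while candidates and len(selected) < n:
            def score(i):
                base = (1 - lam) * rel[i]
                gain = 0.0
                for a, sp in enumerate(subprofiles):
                    covers = item_covers(i, sp)
                    not_yet = 1.0
                    for j in selected:                       # discount aspects already covered
                        not_yet *= 1.0 - item_covers(j, sp)
                    gain += p_aspect[a] * covers * not_yet
                return base + lam * gain
            best = max(candidates, key=score)
            selected.append(best)
            candidates = [c for c in candidates if c != best]
        return selected

    # Toy example: two subprofiles (e.g. the user's thriller side and comedy side).
    subprofiles = [{"Se7en", "Memento"}, {"Airplane!", "Superbad"}]
    rel = {"Zodiac": 0.9, "Insomnia": 0.85, "The Naked Gun": 0.6, "Step Brothers": 0.55}
    related_to = {                                   # which subprofile items each candidate resembles
        "Zodiac": {"Se7en"}, "Insomnia": {"Memento"},
        "The Naked Gun": {"Airplane!"}, "Step Brothers": {"Superbad"},
    }
    item_covers = lambda i, sp: 1.0 if related_to[i] & sp else 0.0
    print(greedy_rerank(list(rel), rel, subprofiles, item_covers))

The greedy loop is what makes the re-ranking rank-sensitive: a subprofile that is already covered early in the list contributes little to items chosen for later positions.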
Context-Driven Recommender Systems based on User Reviews
In Francisco Peña's PhD, completed under my supervision, we worked on recommendations that are contextualized, as well as personalized.
This requires a representation of context. Our view, however, was that other work takes a narrow view of context. By looking at user reviews,
we showed that there is a much richer, more open-ended notion of context. Reviews tend to describe experiences — the experience of consuming an item. And there is no such thing as a context-less experience. Therefore, reviews will tend to describe contexts too!
We worked on methods to mine reviews to automatically extract their contextual information. Then we learned a recommendation model from these reviews.
We compared our model to other context-aware recommender systems, with very good results
[Peña & Bridge 2017].
A Linked Data Browser with Recommendations
In an effort to improve the human exploitation of Linked Data, Fred Durão and I developed a Linked Data browser that is enhanced with recommendation
functionality. Based on a user profile, also represented as Linked Data, we proposed a technique, which we call LDRec, that chooses in a
personalized way which of the resources lying within a certain neighbourhood of a Linked Data graph to recommend to the
user. The recommendation technique, which is novel, is inspired by a collective classifier known as the Iterative Classification
Algorithm. We evaluated LDRec using both an off-line experiment and a user trial, with good results
[Durão & Bridge 2018].
Product Recommendation for Small-Scale Retailers
The small-scale retail setting poses additional challenges
for product recommendation. Users of small-scale e-commerce websites often do
not have extensive shopping history records, many customers being one-time
visitors. Consequently, traditional rating-based personalization techniques (i.e.,
user-based or item-based collaborative filtering) are inapplicable in such settings
Working with Nitrosell, Marius Kaminskas and I
devised some ways of making recommendations based on browsing history. We evaluated
the idea using A/B experiments on the web sites of two web retailers. We doubled and, in
one case, more than doubled the clickthrough compared with manually-defined recommendations
[Kaminskas et al. 2015,
Kaminskas et al. 2016].
Explaining User-Based Collaborative Recommenders
The recommendations of user-based collaborative filtering systems are notoriously difficult to explain to end-users.
These recommenders work by finding similar users to the active user. Displaying the identities of the active user’s
neighbours is unlikely to be effective, since the user will in general not know the neighbours; displaying their profiles
is unlikely to be effective, since even the parts of their profiles they have in common with the active user
will be too large to be readily comprehended. Displaying their profiles also raises privacy issues.
With Kevin Dunleavy and later with Marius Kaminskas and Fred Durão, I have been working on a way to usefully explain
this kind of
recommendation. Our approach is to use rule-mining techniques around items that the active user and one or more of their
neighbours have in common in their profiles. This produces highly interpretable explanations that are item-based, also
known as influence-based: the kind of explanations that are familiar to users of Amazon.
User trials produced some encouraging results [Bridge &
Dunleavy 2014, Kaminskas et al. 2017].
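To give a flavour of the approach, the sketch below mines simple association rules from the neighbours' profiles, keeping only rules whose antecedent items the active user already has and whose consequent is the recommended item. The rule form, thresholds and data are illustrative, not those of the published work.

    # Hedged sketch: explain a user-based CF recommendation with association rules
    # mined from the neighbours' profiles.
    from itertools import combinations

    def explain(recommendation, active_profile, neighbour_profiles,
                min_support=0.4, min_confidence=0.6, max_antecedent=2):
        """Return rules (antecedent -> recommendation) that justify the recommendation."""
        n = len(neighbour_profiles)
        shared = set(active_profile)                      # candidate antecedent items
        rules = []
        for size in range(1, max_antecedent + 1):
            for antecedent in combinations(sorted(shared), size):
                have_ante = [p for p in neighbour_profiles if set(antecedent) <= p]
                if len(have_ante) / n < min_support:
                    continue
                conf = sum(recommendation in p for p in have_ante) / len(have_ante)
                if conf >= min_confidence:
                    rules.append((antecedent, conf))
        return sorted(rules, key=lambda r: -r[1])

    neighbour_profiles = [
        {"Alien", "Blade Runner", "Dune"},
        {"Alien", "Dune", "Arrival"},
        {"Blade Runner", "Dune", "Arrival"},
        {"Alien", "Blade Runner", "Heat"},
    ]
    active_profile = {"Alien", "Blade Runner"}
    for antecedent, conf in explain("Dune", active_profile, neighbour_profiles):
        print(f"Because you liked {', '.join(antecedent)}: "
              f"{conf:.0%} of similar users also liked Dune")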
Beyond-Accuracy in Recommender Systems
Marius Kaminskas and I conducted a thorough survey of ways of measuring the quality of the recommendations made by
recommender systems. Specifically, we were looking at measures other than accuracy, such as their serendipity,
novelty, diversity and coverage. In doing so, we published two ways of measuring the surprise of a recommendation (for use
in quantifying serendipity) [Kaminskas & Bridge 2014].
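As an indication of the kind of measure involved, the sketch below scores a recommendation as surprising to the extent that it is dissimilar from the closest item in the user's profile, using a co-occurrence-based similarity. This is one plausible instantiation; the two published measures may differ in their details.

    # Hedged sketch of a surprise measure: a recommendation is surprising to the
    # extent that it is dissimilar from everything in the user's profile.
    def cooccurrence_sim(i, j, histories):
        """Fraction of users who consumed both i and j, among those who consumed either."""
        both = sum(1 for h in histories if i in h and j in h)
        either = sum(1 for h in histories if i in h or j in h)
        return both / either if either else 0.0

    def surprise(recommendation, profile, histories):
        """Surprise = 1 - similarity to the *closest* profile item (min-distance style)."""
        return 1.0 - max(cooccurrence_sim(recommendation, p, histories) for p in profile)

    histories = [
        {"Alien", "Aliens", "Blade Runner"},
        {"Alien", "Aliens"},
        {"Blade Runner", "Solaris"},
        {"Solaris", "Stalker"},
    ]
    profile = {"Alien", "Aliens"}
    for rec in ("Blade Runner", "Stalker"):
        print(rec, round(surprise(rec, profile, histories), 2))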
Subsequently, we considered systems that explicitly seek to increase either the serendipity, novelty or diversity of their
recommendations. We built a framework that allowed us to conduct experiments to investigate the relationships between these measures —
for example, when a system increases serendipity does this also increase novelty? Our results, along with a comprehensive
survey, are published in a best paper prize-winning article in ACM TiiS [Kaminskas & Bridge 2017].
Collective Classification of Posts to Internet Forums
Pádraig Ó Duinn and I are working on labelling posts to Internet forums, e.g. to say whether they are questions, answers, requests for
clarification, spam and so on. We are using collective classification, where an object is classified based both on its own attributes and
on the labels of its neighbours (in this case, previous posts in the thread). Collective classification works iteratively:
an object is classified on the basis of its neighbours, and the neighbours are then re-classified taking into account the label of the just-classified
object [Ó Duinn & Bridge 2014].
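A minimal sketch of this iterative scheme for forum posts follows, with toy content features, toy relational scores and toy labels; the real classifiers and features differ.

    # Minimal sketch of collective classification (ICA-style) for forum posts: each
    # post is first labelled from its own text, then repeatedly re-labelled using its
    # own text plus the current label of the previous post in the thread.
    LABELS = ["question", "answer", "clarification"]

    def content_score(text, label):
        """Crude per-label evidence from the post's own words."""
        text = text.lower()
        return {
            "question": 2.0 * text.count("?") + ("how" in text) + ("why" in text),
            "answer": ("try" in text) + ("you can" in text) + ("works" in text),
            "clarification": ("mean" in text) + ("which version" in text),
        }[label]

    def relational_score(prev_label, label):
        """Crude evidence from the neighbouring (previous) post's current label."""
        follows = {("question", "answer"): 1.5, ("question", "clarification"): 1.0,
                   ("answer", "clarification"): 1.0, ("answer", "question"): 0.5}
        return follows.get((prev_label, label), 0.0)

    def classify_thread(posts, iterations=3):
        # Bootstrap: content-only labels.
        labels = [max(LABELS, key=lambda l: content_score(p, l)) for p in posts]
        for _ in range(iterations):                       # iterative re-classification
            for i, post in enumerate(posts):
                prev = labels[i - 1] if i > 0 else None
                labels[i] = max(LABELS, key=lambda l, p=post, q=prev:
                                content_score(p, l) + (relational_score(q, l) if q else 0.0))
        return labels

    thread = ["How do I parse JSON in Python?",
              "You can try the json module; json.loads works on strings.",
              "Which version of Python do you mean?"]
    print(list(zip(thread, classify_thread(thread))))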
Group Recommender Systems
I worked with Lara Quijano-Sánchez, Belén Díaz-Agudo and Juan A. Recio-García on systems that make recommendations to groups of people. In particular, we looked at how Case-Based Reasoning can play a role in such systems. Our work shows how CBR can help with the cold-start problem (where a user has rated too few items to receive useful recommendations) and with the aggregation of user preferences
[Quijano-Sánchez et al. 2012a,
Quijano-Sánchez et al. 2012b,
Quijano-Sánchez et al. 2013,
Quijano-Sánchez & Bridge 2013].
More recently, I have collaborated with Thuy Ngoc Nguyen, Francesco Ricci and Amra Delic on the role that people's conflict resolution strategies play in
group recommender systems [Nguyen et al. 2019].
More recently still, I have collaborated with Mesut Kaya and Nava Tintarev on trying to improve fairness in group recommender systems. The fairness that we are talking about here is a property of the top-n recommendations as a set,
and not of any single recommendation within the top-n. In other words, we are talking about aiming for more equal treatment of group members when looking at the top-n as a whole. Our approach involves a rank-sensitive balancing of relevance in ever larger prefixes of the top-n [Kaya et al. 2020].
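The sketch below is a simplified stand-in for this idea, not the published algorithm: it builds the top-n greedily so that, at every prefix, the accumulated relevance of the least-served group member is as high as possible. The scores and items are illustrative.

    # Hedged sketch of rank-sensitive balancing for group recommendation.
    def fair_group_topn(candidates, member_scores, n=3):
        """member_scores: dict member -> dict item -> predicted relevance."""
        selected, served = [], {m: 0.0 for m in member_scores}
        pool = list(candidates)
        for _ in range(n):
            def worst_off_after(item):
                # Fairness objective for the prefix that would end with `item`:
                # the accumulated relevance of the member served least so far.
                return min(served[m] + member_scores[m][item] for m in member_scores)
            best = max(pool, key=worst_off_after)
            selected.append(best)
            pool.remove(best)
            for m in member_scores:
                served[m] += member_scores[m][best]
        return selected

    member_scores = {
        "ana":  {"thriller": 0.9, "comedy": 0.2, "drama": 0.6, "sci-fi": 0.8},
        "bob":  {"thriller": 0.1, "comedy": 0.9, "drama": 0.5, "sci-fi": 0.3},
        "cara": {"thriller": 0.4, "comedy": 0.3, "drama": 0.9, "sci-fi": 0.2},
    }
    print(fair_group_topn(["thriller", "comedy", "drama", "sci-fi"], member_scores))

Because the objective is evaluated prefix by prefix, the early positions cannot be monopolised by the favourites of a single group member.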
Using Recommendation Technology to Enhance Waste Exchange Services
I won funding from the Irish
Environmental Protection Agency
for this STRIVE Programme Project.
The project employed Aidan Waugh.
Our partner was
Macroom-E.
A waste exchange service connects people who produce waste with
people who can reuse or recycle the waste. We applied
recommender systems ideas to Macroom-E's web-based waste
exchange service, with the goal of making more and better
matches. A major research challenge is that people are reluctant
to describe the waste they have or want in sufficient detail
to enable good matches to be made.
We proposed the ideas of smart search and smart suggestions as ways of
helping users to find items and helping users to author descriptions of
items respectively. Aidan wrote up this work for his Research Masters,
with the smart suggestions work published in
[Bridge & Waugh 2009,
Waugh & Bridge 2010].
Subsequently, Macroom-E won an
Enterprise Ireland
Innovation Voucher, which they spent with us to bring these
ideas into their WasteMatchers service (later called
GiveOrGet).
The smart suggestions idea is implemented in a family of systems, which
we call GhostWriter systems. For example, following on from Aidan's GhostWriter
system for waste exchange services, Paul Healy and I developed the
GhostWriter-2.0 system to enable
amazon.com users to write more useful
product reviews
[Healy & Bridge 2010,
Bridge & Healy 2010].
Information Recommendation
I worked with
Francesco Ricci
on advice-giving in conversational recommender systems.
We have shown how such a system can infer some of its user's
preferences by observing the user's queries, and then
give advice about next possible actions based on what it has
inferred. We showed that a user can find the
item s/he wants more quickly if advice is given, and that
this advice is shorter if it is based
on what has been inferred about the user
[Bridge & Ricci 2007]. Both the Free University of Bozen-Bolzano
and the Center for Scientific and Technological Research
(ITC-irst) in Trento have supported me with travel funding.
I worked with Nic Wilson and
Walid Trabelsi on using different formalisms for representing and reasoning with
the user's inferred preferences
[Trabelsi et al. 2010a,
Trabelsi et al. 2010b,
Trabelsi et al. 2011,
Trabelsi et al. 2013].
Work with Francesco Ricci and Henry Blanco Lores continued, with an investigation of the case where the system has only a finite set of possible user profiles and must infer which subset of these is consistent with the user's actions. This results in shorter dialogs, faster processing and hardly any reduction in advice quality
[Blanco et al. 2012a,
Blanco et al. 2012b].
Explaining Case Base Maintenance
I won funding from
Science Foundation Ireland
for this Research Frontiers Project.
The project employed Lisa Cummins.
Generous support came from Mehmet H. Göker of
PricewaterhouseCoopers, and we enjoyed some
collaboration with
Belén Díaz-Agudo
and
Juan Antonio Recio García
of the Universidad Complutense de Madrid.
We investigated algorithms for maintenance of case bases.
These algorithms propose the deletion of noisy or redundant cases,
with a view to increasing retrieval efficiency while doing as little
damage as possible or even increasing reasoning accuracy.
In particular, we developed a 'committee of experts' approach to case-base
maintenance, which we call the MACE approach
[Cummins and Bridge 2009],
looked at the complexity of datasets before and after case base maintenance
[Cummins and Bridge 2011a],
and evaluated a meta-CBR approach to choosing a maintenance algorithm
[Cummins and Bridge 2011b].
Case Based Reasoning in Forest Management
Conor Nugent and I developed Case-Based Reasoning methods for predicting the product yield of the trees
in a managed forest
[Nugent et al. 2009].
TalkingShop: Conversational Recommenders for Configurable Products
James Little and I won funding from
Enterprise Ireland's
Commercialisation Fund for this Technology Development Project.
For some of the time, the project
employed Ross Nicholson.
Nic Wilson,
Oskar Manzano and
Peter MacHale
have also been involved.
In the project, we developed a compact graphical data structure
into which case bases or product catalogs can be compiled. The data
structure is flexible enough to support a variety of forms of retrieval
(e.g. exact-matching and inexact-matching), and does so in a way that is
generally faster than conventional retrieval algorithms
[Nicholson et al. 2006].
Feature-Based and Feature-Free Textual CBR for Spam Filtering
Sarah Jane Delany
and I investigated textual Case-Based Reasoning (CBR) for
spam filtering. We compared a feature-based approach that Sarah Jane had
developed in her Ph.D. research with a feature-free approach that used
text compression in its similarity measure. Accuracy results were very
encouraging
[Delany & Bridge 2006a,
Delany & Bridge 2006b].
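A standard way to realise a compression-based, feature-free similarity is the normalised compression distance, sketched below with zlib inside a k-NN classifier over a toy case base; the exact compressor and formula used in the published work may differ.

    # Sketch of a feature-free, compression-based text similarity: the normalised
    # compression distance (NCD), used here inside a simple k-NN spam classifier.
    import zlib

    def c(text: str) -> int:
        """Compressed length of a text in bytes."""
        return len(zlib.compress(text.encode("utf-8"), 9))

    def ncd(x: str, y: str) -> float:
        """Normalised compression distance: ~0 for near-identical texts, ~1 for unrelated ones."""
        cx, cy, cxy = c(x), c(y), c(x + y)
        return (cxy - min(cx, cy)) / max(cx, cy)

    def classify(message, case_base, k=3):
        """k-NN over NCD against a labelled case base of (text, label) pairs."""
        neighbours = sorted(case_base, key=lambda case: ncd(message, case[0]))[:k]
        labels = [label for _, label in neighbours]
        return max(set(labels), key=labels.count)

    case_base = [
        ("Cheap meds available now, click here for a great deal", "spam"),
        ("You have won a prize, claim your free reward today", "spam"),
        ("Meeting moved to 3pm, see the agenda attached", "ham"),
        ("Can you review my draft of the similarity paper?", "ham"),
        ("Final reminder: claim your free prize reward now", "spam"),
    ]
    print(classify("Click here to claim your free prize now", case_base, k=3))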
We went on to test the robustness of both approaches, in particular in
dealing with the fact that spam is a moving target, i.e. we are faced with
what machine learning
researchers would call 'concept drift'. Again, results with the feature-free
approach are extremely encouraging: we can preserve accuracy over time,
with low-effort maintenance of the case base
[Delany & Bridge 2007].
Diversity-Enhanced Conversational Collaborative Recommendations
John Paul Kelly and I investigated ways of building conversational
collaborative recommenders, in which user feedback over the course of
a multi-step interaction guides the recommender so that it makes
recommendations that satisfy the user's short-term, ephemeral interests.
We also showed how to enhance the diversity of recommendations made by
a collaborative recommender, which is useful in the conversational
scenario described above. Our major contribution was to show that diversity
can be enhanced using only the knowledge already contained in the ratings matrix
[Bridge & Kelly 2005,
Bridge & Kelly 2006,
Kelly & Bridge 2006].
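A hedged sketch of the general idea of deriving diversity from the ratings matrix alone: item-item distances are computed from the items' rating columns, and a greedy selection trades predicted relevance against distance from the items already chosen. The weighting and data are illustrative, not the published procedure.

    # Hedged sketch of enhancing diversity using only the ratings matrix.
    import numpy as np

    def item_distance(R, i, j):
        """1 - cosine similarity between the rating columns of items i and j."""
        a, b = R[:, i], R[:, j]
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return 1.0 - (a @ b / denom if denom else 0.0)

    def diverse_topn(R, relevance, candidates, n=2, alpha=0.5):
        selected = []
        pool = list(candidates)
        while pool and len(selected) < n:
            def score(i):
                div = (sum(item_distance(R, i, j) for j in selected) / len(selected)
                       if selected else 0.0)
                return (1 - alpha) * relevance[i] + alpha * div
            best = max(pool, key=score)
            selected.append(best)
            pool.remove(best)
        return selected

    # Toy ratings matrix: rows = users, columns = items 0..3 (0 = unrated).
    R = np.array([[5, 4, 0, 1],
                  [4, 5, 0, 0],
                  [0, 0, 5, 4],
                  [1, 0, 4, 5]], dtype=float)
    relevance = {0: 0.9, 1: 0.85, 2: 0.6, 3: 0.55}     # predicted relevance, rescaled
    print(diverse_topn(R, relevance, candidates=[0, 1, 2, 3], n=2))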
John Paul received some support from the Adaptive Information
Cluster, funded by Science Foundation Ireland.
Efficient Collaborative Recommendations using Formal Concept Analysis
Patrick du Boucher-Ryan and I applied Formal Concept Analysis to the
ratings matrix used by a collaborative recommender. We showed how the
concept lattice that this produces can be used as an index to rapidly find
candidate neighbours for making predictions and recommendations.
Although the concept lattice is large, we saved in excess of 90% of the
time taken by the conventional user-based nearest neighbours algorithm
[du Boucher-Ryan & Bridge 2005,
du Boucher-Ryan & Bridge 2006].
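To illustrate the flavour of the approach, the sketch below enumerates the formal concepts of a tiny binary context by brute force and uses concepts whose intents lie inside the active user's profile as an index to candidate neighbours. A real lattice would be built and traversed far more efficiently; the data and ranking are illustrative.

    # Hedged sketch of using Formal Concept Analysis over a binarised ratings matrix
    # as an index to candidate neighbours. Brute-force enumeration is for toy contexts only.
    from itertools import combinations

    users = {
        "u1": {"Alien", "Aliens", "Dune"},
        "u2": {"Alien", "Dune", "Arrival"},
        "u3": {"Blade Runner", "Arrival"},
        "u4": {"Alien", "Aliens", "Blade Runner"},
    }
    items = sorted(set().union(*users.values()))

    def extent(item_set):
        """Users who have rated every item in the set (derivation operator ')."""
        return {u for u, profile in users.items() if item_set <= profile}

    def intent(user_set):
        """Items rated by every user in the set (derivation operator ')."""
        return set.intersection(*(users[u] for u in user_set)) if user_set else set(items)

    # Enumerate concepts as (extent, closed intent) pairs.
    concepts = set()
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            ext = extent(set(combo))
            concepts.add((frozenset(ext), frozenset(intent(ext))))

    def candidate_neighbours(active):
        """Index lookup: extents of concepts whose intent is inside the active profile."""
        profile = users[active]
        hits = [(ext, inte) for ext, inte in concepts if inte and inte <= profile]
        hits.sort(key=lambda c: -len(c[1]))               # prefer concepts sharing more items
        ranked = []
        for ext, _ in hits:
            for u in ext:
                if u != active and u not in ranked:
                    ranked.append(u)
        return ranked

    print(candidate_neighbours("u1"))

Concepts with larger intents correspond to users who share more items with the active user, which is what makes the lattice usable as an index to candidate neighbours.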
Pat wrote up this work for his Research Masters. He was supported by an award made by
the Boole Centre for Research in Informatics,
which was funded by the Higher Education Authority’s Programme for Research in Third Level Institutions Cycle 3.
Knowledge-Lite Explanation-Oriented Retrieval
The conventional wisdom in case-based classification is that the
system can explain its conclusion by showing the user the
case or cases that are closest to the new problem (i.e. the nearest
neighbours). Dónal Doyle challenged this in his Ph.D. research, and in
[Doyle et al 2004],
he,
Pádraig Cunningham,
Yusof Rahman and I showed how a case-based
system could retrieve 'marginal' cases that better explain why the
new problem has been classified the way that it has.
Lisa Cummins and I later generalised this work and made it more knowledge-lite,
i.e. so that it requires less effort on the part of the knowledge engineer and
domain expert
[Bridge & Cummins 2005,
Cummins and Bridge 2006].
Ant Algorithms for Subset Selection Problems
Finbarr Tarrant and I experimented with
ant algorithms for solving constraint satisfaction problems
[Tarrant & Bridge 2004,
Tarrant & Bridge 2005].
Later
Dr. Christine Solnon
and I generalised the work: we used ant algorithms for
solving a whole class of problems that we refer to as 'subset selection
problems'
[Solnon & Bridge 2006].
This class includes constraint satisfaction problems but also
maximum clique problems and multi-dimensional knapsack problems.
Using CBR to Select Solution Strategies in Constraint Programming
Cormac Gebruers and I (with
Eugene Freuder,
Brahim Hnich and
James Little)
experimented with case-based classification
techniques for two of the phases of constraint programming. The first
phase is how best to model a problem using constraints; the second is
choosing the best solution strategy (search algorithm, heuristics, etc.)
for finding solutions
[Little et al. 2002,
Gebruers et al. 2005].
Using Clustering to Build Scalable Collaborative Filtering Systems
Jerome Kelleher and I applied clustering techniques to the ratings matrices
used by collaborative recommenders. We then predicted the
active user's rating for a product that s/he has not yet rated
from the clusters. We found that using neighbourhood-based
methods on the clustered data gave disappointing
performance
[Bridge & Kelleher 2002].
But, if we make less personalised predictions
from summary information about the clusters, we obtain
surprisingly good accuracy at tremendous speed
[Kelleher & Bridge 2003,
Kelleher & Bridge 2004].
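A minimal sketch of this cluster-summary style of prediction follows, with an illustrative k-means clustering of the rating rows and a toy ratings matrix; the clustering and the summary statistics used in the published work may differ.

    # Hedged sketch of cluster-based collaborative filtering: cluster the users'
    # rating vectors, then predict a missing rating from summary information about
    # the active user's cluster (here, the cluster's mean rating for the item).
    import numpy as np

    def kmeans(X, k=2, iters=20, seed=0):
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
            for c in range(k):
                if np.any(labels == c):
                    centroids[c] = X[labels == c].mean(axis=0)
        return labels

    def predict(R, labels, user, item):
        """Cluster-summary prediction: mean observed rating for `item` in the user's cluster."""
        members = np.where(labels == labels[user])[0]
        observed = [R[m, item] for m in members if R[m, item] > 0]
        return float(np.mean(observed)) if observed else float(np.mean(R[R > 0]))

    # Toy ratings matrix (0 = unrated): two taste groups are visible by eye.
    R = np.array([[5, 4, 1, 0],
                  [4, 5, 0, 1],
                  [5, 5, 2, 1],
                  [1, 0, 5, 4],
                  [2, 1, 4, 5],
                  [0, 1, 5, 5]], dtype=float)
    labels = kmeans(R, k=2)
    print(predict(R, labels, user=1, item=2))   # user 1's missing rating for item 2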
Supporting Reuse in the Software Development Cycle
I won funding from
Enterprise Ireland
for this Strategic Research Project.
The project employed Markus Grabert.
We built a Case-Based Reasoning system for improving software
reuse. In particular, the case base contains what we called Java
'examplets', i.e. snippets that demonstrate Java usage of the kind
you might find in a how-to FAQ list. We defined two retrieval
mechanisms, one based on text retrieval, the other based on
spreading activation over a graph-based representation of the
examplet token stream. We showed that combining both types of
retrieval gave the best results
[Grabert & Bridge 2003c].
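A hedged sketch of the spreading-activation side of this: query tokens inject activation into a token-to-examplet graph, the activation spreads to connected snippets and on to their other tokens, and the most activated snippets are retrieved. The graph construction, decay factor and examplets are illustrative, not the published system.

    # Hedged sketch of spreading activation over a token graph for snippet retrieval.
    from collections import defaultdict

    examplets = {
        "read-file":  "BufferedReader reader = new BufferedReader(new FileReader(path));",
        "write-file": "BufferedWriter writer = new BufferedWriter(new FileWriter(path));",
        "sort-list":  "Collections.sort(list, comparator);",
    }

    # Bipartite graph: token <-> examplet.
    token_to_ex, ex_to_token = defaultdict(set), defaultdict(set)
    for name, code in examplets.items():
        for token in code.replace("(", " ").replace(")", " ").replace(";", " ").split():
            token_to_ex[token].add(name)
            ex_to_token[name].add(token)

    def retrieve(query_tokens, steps=2, decay=0.5):
        activation = defaultdict(float)
        frontier = {t: 1.0 for t in query_tokens if t in token_to_ex}
        for _ in range(steps):
            next_frontier = defaultdict(float)
            for token, energy in frontier.items():
                for ex in token_to_ex[token]:
                    activation[ex] += energy                      # activate connected examplets
                    for other in ex_to_token[ex]:                 # and pass decayed energy onward
                        if other != token:
                            next_frontier[other] += decay * energy / len(ex_to_token[ex])
            frontier = next_frontier
        return sorted(activation.items(), key=lambda kv: -kv[1])

    print(retrieve({"BufferedReader", "FileReader"}))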
Order-Based Retrieval
With Alex Ferguson,
I devised a query language that builds on the best
aspects of similarity-based retrieval, while offering
a number of new operators that give greater
expressiveness. We called the approach
Order-Based Retrieval (OBR). We showed that, not only is OBR expressive,
but it gives
a natural way of obtaining a diverse set of retrieved items.
We applied the ideas to case-based recommender systems
[Bridge 2001,
Bridge & Ferguson 2002a,
Bridge & Ferguson 2002b].
WWW Shopping using Refinable Similarity Measures
I won funding from
Enterprise Ireland
for this Strategic Research Project, with industrial support from
Interactive Multimedia Systems (IMS).
Alex Ferguson and I defined a new way of combining similarity measures
[Ferguson & Bridge 1999b]
and we investigated the range of ways in which users
can interact with case-retrieval systems
[Ferguson & Bridge 2000c].
This work relies on a notion
of indifference between degrees of similarity. We have
argued that, for on-line shopping, our framework,
deploying this notion of indifference between degrees
of similarity, avoids a level of spurious precision
which plagues systems that are based on conventional
numeric-valued similarity measures
[Ferguson & Bridge 2000a].
We have also looked more closely at
reconciling our ways of combining similarity metrics
with more conventional
numeric weighted averages
[Ferguson & Bridge 2000b,
Ferguson & Bridge 2001].
In an interesting spin-off of this work, we have applied
the idea of using a pair of weight vectors to define
similarity measures for case-based classification systems
that can report their classification confidence
[Bridge & Ferguson 2001].
Concurrent Architectures for Heterogeneous Knowledge Manipulation Systems
Alan Wood
and I won funding under the U.K. Department of Trade and Industry/Engineering and
Physical Sciences Research Council's
Architectures for Integrated Knowledge-Manipulation Systems (AIKMS)
programme. The industrial partner was
Transtech Parallel Systems.
The project employed Hugh Osborne,
Duncan Campbell and Andy Johnson.
The most important outcome from this project was a
generalised framework for measuring similarity in
CBR systems.
In our new framework, similarity is measured by functions
whose result types are
partial orders. This both subsumes and extends many existing
numeric-valued and non-numeric-valued ways of measuring
similarity.
We use the phrase 'similarity metric' to refer to our
generalised similarity measures.
Within our new framework, we defined operators for
combining similarity metrics. These
operators make use of ways of combining partial orders.
Strikingly, this means that in our framework different
similarity metrics (e.g. set-valued and numeric-valued) can
be combined, without inter-conversion
[Osborne & Bridge 1997a,
Osborne & Bridge 1997b,
Osborne & Bridge 1997c].
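The sketch below conveys the flavour of this framework: one metric returns a number (ordered by <=), another returns the set of shared features (ordered by inclusion), and the two are combined under the product order, so the set-valued result never has to be converted into a number. The metrics and data are illustrative, not the published operators.

    # Hedged sketch of partial-order-valued similarity metrics combined without inter-conversion.
    def numeric_sim(a, b):
        """Numeric-valued metric: closeness of prices (higher is more similar)."""
        return -abs(a["price"] - b["price"])

    def set_sim(a, b):
        """Set-valued metric: the shared features themselves, ordered by inclusion."""
        return a["features"] & b["features"]

    def leq_numeric(x, y):
        return x <= y

    def leq_set(x, y):
        return x <= y            # subset inclusion is the partial order on sets

    def product_leq(pair_x, pair_y, leqs=(leq_numeric, leq_set)):
        """Product order: (x1, x2) <= (y1, y2) iff each component is <= in its own order."""
        return all(leq(x, y) for leq, x, y in zip(leqs, pair_x, pair_y))

    query   = {"price": 100, "features": {"wifi", "pool", "parking"}}
    hotel_a = {"price": 110, "features": {"wifi", "pool"}}
    hotel_b = {"price": 150, "features": {"wifi"}}
    hotel_c = {"price": 95,  "features": {"parking"}}

    sim_a = (numeric_sim(query, hotel_a), set_sim(query, hotel_a))
    sim_b = (numeric_sim(query, hotel_b), set_sim(query, hotel_b))
    sim_c = (numeric_sim(query, hotel_c), set_sim(query, hotel_c))
    print(product_leq(sim_b, sim_a))   # True: hotel_a is at least as similar as hotel_b
    print(product_leq(sim_a, sim_c), product_leq(sim_c, sim_a))   # False, False: incomparable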
Within the framework, we also investigated
the axiomatisation of similarity
metrics
[Osborne & Bridge 1997a,
Bridge 1998]
and algebraic equivalences. (The latter might allow
metrics to
be transformed to forms that are best suited to whatever
evaluation mechanism a particular CBR system provides,
including parallel evaluation strategies
[Osborne & Bridge 1996a,
Osborne & Bridge 1996b,
Osborne & Bridge 1997d,
Campbell et al. 1997,
Campbell et al. 1998].)
Inductive Generalisation in Case-Based Reasoning Systems
Tony Griffiths' prize-winning D.Phil. was successfully completed under my supervision.
Tony provides a careful analysis of case-based learning.
In particular, using worst-case (PAC), average-case and
empirical analyses, he shows the effect of inductive bias
on case-based learning accuracy. Bias is incorporated
through the similarity measure and/or the 'semantics'
ascribed to a case-based system
[Griffiths & Bridge 1995a,
Griffiths & Bridge 1995b,
Griffiths & Bridge 1995c,
Griffiths & Bridge 1996,
Griffiths & Bridge 1997a,
Griffiths & Bridge 1997b,
Griffiths & Bridge 1997c].
Tony was funded by an award from the UK Science and Engineering Research Council.
Help-Desk Support
Andy Dearden,
Michael Harrison
and I received funding from British Telecom
to investigate the applicability of knowledge-based
systems, especially those that use Case-Based Reasoning,
to British Telecom help desk functions
[Bridge & Dearden 1992a,
Dearden & Bridge 1993].
Assessment of Knowledge-Based Systems Applications in Social Policy
Funded by the University of York, with support from the UK Department
of Social Security, Michael Hirst and I
applied Case-Based Reasoning (CBR) techniques to Department of Social Security
Disability Living Allowance data. Early results showed that CBR
predicted mobility awards better than care awards.
Our report [Hirst & Bridge 1996]
offers possible explanations for this.
Formal Justification in Requirements Engineering
Simon Smith successfully completed a D.Phil., partly under
my supervision, in which he
developed a way of structuring formal specifications to
allow incremental re-proof of previously established
properties following changes to the specification.
Learning Unification-Based Natural Language Grammars
Miles Osborne
successfully completed his D.Phil. under my supervision.
Miles developed one of the first systems capable of
learning unification-based grammars. The inductive search
space was constrained by a set of linguistic
universals (and, in the implemented system, some
language-specific linguistic constraints)
[Osborne & Bridge 1993a,
Osborne & Bridge 1993b,
Osborne & Bridge 1994b,
Osborne & Bridge 1997e].
Miles was funded by an award from the UK Science and Engineering Research Council.
Logic and Databases: An Object-Oriented Approach
Chris Higgins successfully completed a D.Phil., partly under
my supervision, in which he
formalised object-oriented databases. He used a logic
programming formalisation: objects are theories, and
inter-object communication operates at a metalogical
level.
A Process-Oriented Approach to Configuration Management for Communications Software
Shem Ochuodho successfully completed a D.Phil., partly
under my supervision, in which he
developed both a new data model and a new process model for
supporting configuration management, with particular
applications to telecoms software.
Computing Presuppositions in an Incremental Natural Language Processing System
My Ph.D. was completed under the supervision of the late Karen
Spärck Jones, Computer Laboratory, University of Cambridge.
The Ph.D. gives a partial solution to the presupposition projection
problem. The solution distinguishes two types of
presupposition failure: where a presupposition is not
satisfied by the discourse model, and where a presupposition
is false. An incremental natural language processing system
was shown to be especially well-suited to computing
presuppositions of compound sentences.