WYSIWYD Project
Overview
WYSIWYD is a research project funded by the European Commission's 7th Framework Programme. The project started on 1 January 2014.
The What You Say Is What You Did project (WYSIWYD) will create a new transparency in human-robot interaction (HRI) by allowing robots to understand both their own actions and those of humans, and to interpret and communicate these in human-compatible intentional terms, expressed through a language-like communication channel we call WYSIWYD Robotese (WR). WYSIWYD will advance this critical communication channel from a biologically and psychologically grounded developmental perspective, allowing the robot to acquire, retain and express WR dependent on its individual interaction history.

To achieve transparency and communication in HRI, a number of elements must be put in place:

- a well-defined experimental paradigm;
- an integrated architecture for perception, cognition, action and intrinsic motivation that, among other things, provides the backbone for the acquisition of an autonomous communication structure;
- a mechanism of robot self that, together with mirroring mechanisms, allows for mind reading;
- an autobiographical memory that compresses data streams and develops a personal narrative of the interaction history;
- a conceptual space that provides an interface from memory to linguistic structures and their expression in speech and communicative actions.

WYSIWYD will deliver these components as elements of an integrated architecture, WR-DAC, advancing all of them by building on the strong track record of the project partners in robotics, cognitive science, psychology and computational neuroscience. WYSIWYD will contribute to a qualitative change in HRI and cooperation, unlocking new capabilities and application areas together with enhanced safety, robustness and monitoring. It is only through this step that humans will be able to trust robots: when they say what they do and do what they say.

The project is sponsored by EU FP7-ICT Project Ref 612139 and is a collaboration with Tony Prescott (University of Sheffield), Mat Evans (University of Sheffield), Paul Verschure (Universitat Pompeu Fabra), Peter Ford Dominey (Institut National de la Santé et de la Recherche Médicale, INSERM), Giorgio Metta (Fondazione Istituto Italiano di Tecnologia), Peter Gärdenfors (Lund University) and Yiannis Demiris (Imperial College London).
Personnel from ML@SITraN
- Andreas Damianou, post-doctoral research assistant
Publications
The following publications have provided background to our work in this project.
J. Hensman, N. Fusi and N. D. Lawrence (2013) “Gaussian processes for big data” in A. Nicholson and P. Smyth (eds) Uncertainty in Artificial Intelligence, AUAI Press. [PDF][Google Scholar Search]
A. C. Damianou and N. D. Lawrence (2013) “Deep Gaussian processes” in C. Carvalho and P. Ravikumar (eds) Proceedings of the Sixteenth International Workshop on Artificial Intelligence and Statistics, JMLR W&CP 31, AZ, USA. [Software][PDF][Google Scholar Search]
Abstract
In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. Model selection by our variational bound shows that a five layer hierarchy is justified even when modelling a digit data set containing only 150 examples.
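To make the layered generative structure concrete, the sketch below (our own illustration, not code from the paper) samples data from a two-layer deep GP: one GP maps the observed inputs to a hidden layer, and a second GP maps that hidden layer to the outputs. It uses plain numpy with an RBF kernel; the paper's variational inference and model selection machinery is not reproduced, and all names and dimensions here are our own choices.

```python
import numpy as np

def rbf_kernel(X, Z, variance=1.0, lengthscale=1.0):
    """Exponentiated quadratic (RBF) covariance between two point sets."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

def sample_gp(X, n_outputs, jitter=1e-8, rng=None):
    """Draw n_outputs independent zero-mean GP function values at inputs X."""
    rng = np.random.default_rng() if rng is None else rng
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))  # jitter for numerical stability
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal((len(X), n_outputs))

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 100)[:, None]    # observed inputs
H = sample_gp(X, n_outputs=2, rng=rng)  # hidden layer: a GP mapping of X
Y = sample_gp(H, n_outputs=5, rng=rng)  # outputs: a GP mapping of the hidden layer
print(Y.shape)                          # (100, 5): data drawn from a 2-layer deep GP
```

Stacking further sample_gp calls gives deeper hierarchies; inference then requires marginalising the hidden layers, which is what the paper's variational bound provides.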
N. D. Lawrence (2005) “Probabilistic non-linear principal component analysis with Gaussian process latent variable models” in Journal of Machine Learning Research 6, pp 1783–1816. [Errata][C++ Software][MATLAB Software][JMLR PDF][JMLR Abstract][Google Scholar Search]
Abstract
Summarising a high dimensional data set with a low dimensional embedding is a standard approach for exploring its structure. In this paper we provide an overview of some existing techniques for discovering such embeddings. We then introduce a novel probabilistic interpretation of principal component analysis (PCA) that we term dual probabilistic PCA (DPPCA). The DPPCA model has the additional advantage that the linear mappings from the embedded space can easily be non-linearised through Gaussian processes. We refer to this model as a Gaussian process latent variable model (GP-LVM). Through analysis of the GP-LVM objective function, we relate the model to popular spectral techniques such as kernel PCA and multidimensional scaling. We then review a practical algorithm for GP-LVMs in the context of large data sets and develop it to also handle discrete valued data and missing attributes. We demonstrate the model on a range of real-world and artificially generated data sets.
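The GP-LVM objective is compact enough to sketch directly: the latent points X are free parameters, chosen to maximise the marginal likelihood of the data under a GP whose inputs are those latents. The following is a minimal numpy/scipy illustration under our own naming, with fixed kernel hyperparameters and constant terms dropped; it is not the scalable algorithm reviewed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def gplvm_neg_log_likelihood(x_flat, Y, Q, lengthscale=1.0, noise=0.1):
    """Negative GP-LVM log likelihood (constants dropped): each column of Y
    is an independent GP draw whose inputs are the latent points X."""
    N, D = Y.shape
    X = x_flat.reshape(N, Q)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * sq / lengthscale ** 2) + noise * np.eye(N)
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * D * logdet + 0.5 * np.trace(Y.T @ np.linalg.solve(K, Y))

# Embed 20 five-dimensional points into a Q = 2 latent space.
rng = np.random.default_rng(0)
Y = rng.standard_normal((20, 5))
Y -= Y.mean(axis=0)                      # the model assumes zero-mean data
Q = 2
x0 = 0.1 * rng.standard_normal(20 * Q)   # small random start (PCA is typical)
res = minimize(gplvm_neg_log_likelihood, x0, args=(Y, Q))
X_latent = res.x.reshape(20, Q)          # the learned embedding
```

In practice the latent points are initialised with PCA rather than at random, and the kernel hyperparameters are optimised jointly with X.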
N. D. Lawrence and J. C. Platt (2004) “Learning to learn with the informative vector machine” in R. Greiner and D. Schuurmans (eds) Proceedings of the International Conference in Machine Learning, Omnipress, pp 512–519. [Software][Gzipped Postscript][PDF][DOI][Google Scholar Search]
Abstract
This paper describes an efficient method for learning the parameters of a Gaussian process (GP). The parameters are learned from multiple tasks which are assumed to have been drawn independently from the same GP prior. An efficient algorithm is obtained by extending the informative vector machine (IVM) algorithm to handle the multi-task learning case. The multi-task IVM (MT-IVM) saves computation by greedily selecting the most informative examples from the separate tasks. The MT-IVM is also shown to be more efficient than sub-sampling on an artificial data-set and more effective than the traditional IVM in a speaker dependent phoneme recognition task.
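The core multi-task assumption, that every task is an independent draw from one shared GP prior, can be illustrated without the IVM machinery: the shared kernel hyperparameters are learned by summing the GP log marginal likelihood over tasks. The sketch below is our own toy version under that assumption (names and data are ours, not the MT-IVM algorithm itself); the greedy informative-example selection that makes MT-IVM efficient is sketched after the next abstract.

```python
import numpy as np
from scipy.optimize import minimize

def gp_neg_log_marginal(log_params, X, y):
    """Negative GP regression log marginal likelihood (constants dropped)."""
    lengthscale, noise = np.exp(log_params)   # log-scale keeps both positive
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * sq / lengthscale ** 2) + noise * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    return 0.5 * logdet + 0.5 * y @ np.linalg.solve(K, y)

def multitask_objective(log_params, tasks):
    """Independent draws from one shared GP prior: the per-task negative
    log likelihoods simply add under the shared hyperparameters."""
    return sum(gp_neg_log_marginal(log_params, X, y) for X, y in tasks)

# Three toy tasks that share the same underlying smoothness.
rng = np.random.default_rng(1)
tasks = []
for _ in range(3):
    X = rng.uniform(-3, 3, (15, 1))
    tasks.append((X, np.sin(X[:, 0]) + 0.1 * rng.standard_normal(15)))

res = minimize(multitask_objective, np.log([1.0, 0.1]), args=(tasks,))
print("shared lengthscale and noise:", np.exp(res.x))
```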
N. D. Lawrence, M. Seeger and R. Herbrich (2003) “Fast sparse Gaussian process methods: the informative vector machine” in S. Becker, S. Thrun and K. Obermayer (eds) Advances in Neural Information Processing Systems, MIT Press, Cambridge, MA, pp 625–632. [Software][Gzipped Postscript][Google Scholar Search]
Abstract
We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on information-theoretical principles, previously suggested for active learning. In contrast to most previous work on sparse GPs, our goal is not only to learn sparse predictors (which can be evaluated in O(d) rather than O(n), d << n, n the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most O(nd^2), and in large real-world classification experiments we show that it can match prediction performance of the popular support vector machine (SVM), yet it requires only a fraction of the training time. In contrast to the SVM, our approximation produces estimates of predictive probabilities (‘error bars’), allows for Bayesian model selection and is less complex in implementation.
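As a rough illustration of the forward-selection idea, the sketch below greedily grows an active set for GP regression: with a fixed Gaussian noise level, the entropy reduction from including a point is monotone in its current posterior variance, so each step simply selects the highest-variance candidate, and low-rank updates keep each inclusion at O(nd) cost, i.e. O(nd^2) overall as quoted above. This is our own simplified regression variant, not the paper's full IVM with its assumed-density-filtering site updates for classification.

```python
import numpy as np

def rbf_kernel(X, variance=1.0, lengthscale=1.0):
    """Exponentiated quadratic (RBF) covariance matrix for inputs X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * sq / lengthscale ** 2)

def ivm_select(K, d, noise=0.1):
    """Greedily pick d points. The low-rank factor M maintains the
    posterior covariance implicitly as K - M.T @ M, so each inclusion
    touches only one n-vector rather than the full n x n matrix."""
    n = K.shape[0]
    diag_var = np.diag(K).copy()           # current posterior variances
    M = np.zeros((d, n))
    active = []
    for k in range(d):
        scores = diag_var.copy()
        scores[active] = -np.inf           # never re-select a point
        j = int(np.argmax(scores))         # most informative remaining point
        s_j = K[:, j] - M[:k].T @ M[:k, j] # posterior covariance column for j
        M[k] = s_j / np.sqrt(noise + diag_var[j])
        diag_var -= M[k] ** 2              # rank-one downdate of all variances
        active.append(j)
    return active

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
selected = ivm_select(rbf_kernel(X), d=10)
print(selected)  # indices of the 10 selected points, spread across the input space
```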