# GPmat

Repository: https://github.com/SheffieldML/GPmat (MATLAB, 93.0%)

The GPmat toolbox is the 'one stop shop' on github for a number of dependent toolboxes, each of which used to be released independently. Since May 2015 each toolbox is a sub-directory within GPmat, included as a subtree from the relevant repository. A summary of the demos from each toolbox is given below. It is advisable that you also include netlab (https://github.com/sods/netlab) as a dependency.

The first release of the full GPmat toolbox is version 1.0.0, released on 28th May 2015 to coincide with the reformatting of the toolbox as sub-trees.

## GP

- **Version 0.136**: Changes to gpReadFromFID for compatibility with the C++ code.
- **Version 0.135**: Modifications by Carl Henrik Ek for compatibility with the SGPLVM toolbox.
- **Version 0.134**: Updates to allow deconstruction of model files when writing to disk (gpWriteResult, gpLoadResult, gpDeconstruct, gpReconstruct).
- **Version 0.133**: Updates for running a GPLVM/GP using the data's inner product matrix, for the Interspeech synthesis demos.
- **Version 0.132**: Examples transferred from the Oxford toolbox; the variational approximation from Titsias added as an option with 'dtcvar'.
- **Version 0.131**: Changes to allow compatibility with the SGPLVM and NCCA toolboxes.
- **Version 0.13**: Changes to allow more flexibility in the optimisation of beta.
- **Version 0.12**: Various minor changes for enabling back constraints in hierarchical GP-LVM models.
- **Version 0.11**: The function constraining beta to be positive is now obtained through optimiDefaultConstraint('positive'), which now returns 'exp' rather than 'negLogLogit' (the previous default). Similarly, the default optimiser is now given by a command in optimiDefaultOptimiser.
- **Version 0.1**: The first version, spun out of the FGPLVM toolbox. The corresponding FGPLVM toolbox is 0.15.
Release 0.1 splits away the Gaussian process section of the FGPLVM toolbox into this separate toolbox.

## Other GP related software

The GP-LVM C++ software is available from here. The IVM C++ software is available from here. The MATLAB IVM toolbox is available here. The original MATLAB GP-LVM toolbox is available here.

## Examples

### Functions from Gaussians

This example shows how points which look like they come from a function can be sampled from a Gaussian distribution. The sample is 25 dimensional and is from a Gaussian with a particular covariance.

```matlab
>> demGpSample
```

*Left*: a single, 25 dimensional, sample from a Gaussian distribution. *Right*: the covariance matrix of the Gaussian distribution.

### Joint Distribution over two Variables

Gaussian processes are about conditioning a Gaussian distribution on the training data to make the test predictions. To illustrate this process, we can look at the joint distribution over two variables.

```matlab
>> demGpCov2D([1 2])
```

gives the joint distribution for f1 and f2. The plots show the joint distributions as well as the conditional for f2 given f1.

*Left*: the blue line is a contour of the joint distribution over the variables f1 and f2. The green line indicates an observation of f1. The red line is the conditional distribution of f2 given f1. *Right*: similar, but for f1 and f5.

### Different Samples from Gaussian Processes

A script is provided which samples from a Gaussian process with the provided covariance function.

```matlab
>> gpSample('rbf', 10, [1 1], [-3 3], 1e5)
```

will give 10 samples from an RBF covariance function with a parameter vector given by [1 1] (inverse width 1, variance 1) across the range -3 to 3 on the x-axis. The random seed will be set to 1e5.

```matlab
>> gpSample('rbf', 10, [16 1], [-3 3], 1e5)
```

is similar, but the inverse width is now set to 16 (length scale 0.25).

*Left*: samples from an RBF style covariance function with length scale 1. *Right*: samples from an RBF style covariance function with length scale 0.25.
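The demo scripts themselves are MATLAB, but the idea behind gpSample is simple enough to sketch. The following Python/NumPy sketch (an illustration under my own naming, not GPmat code) draws sample functions from a zero-mean Gaussian with an RBF covariance parameterised, as in the text, by an inverse width and a variance:

```python
import numpy as np

def rbf_cov(x, inverse_width=1.0, variance=1.0):
    # k(x, x') = variance * exp(-0.5 * inverse_width * (x - x')^2)
    sq_dist = (x[:, None] - x[None, :]) ** 2
    return variance * np.exp(-0.5 * inverse_width * sq_dist)

def gp_sample(num_samples, inverse_width, x_range, seed):
    rng = np.random.default_rng(seed)
    x = np.linspace(x_range[0], x_range[1], 200)
    # A small jitter keeps the Cholesky factorisation numerically stable.
    K = rbf_cov(x, inverse_width) + 1e-6 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    # Each column of z yields one sample function f = L @ z ~ N(0, K).
    z = rng.standard_normal((len(x), num_samples))
    return x, L @ z

# Inverse width 16 corresponds to length scale 1/sqrt(16) = 0.25.
x, samples = gp_sample(10, inverse_width=16.0, x_range=(-3, 3), seed=100000)
```

Plotting each column of `samples` against `x` reproduces the kind of figure the `gpSample` demo produces: larger inverse widths give more rapidly varying functions.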
Other covariance functions can be sampled. An interesting one is the MLP covariance, which is non-stationary and can produce point symmetric functions.

```matlab
>> gpSample('mlp', 10, [100 100 1], [-1 1], 1e5)
```

gives 10 samples from the MLP covariance function where the "bias variance" is 100 (basis functions are centered around the origin with a standard deviation of 10) and the "weight variance" is 100.

```matlab
>> gpSample('mlp', 10, [100 1e-16 1], [-1 1], 1e5)
```

gives 10 samples from the MLP covariance function where the "bias variance" is approximately zero (basis functions are placed on the origin) and the "weight variance" is 100.

*Left*: samples from an MLP style covariance function with bias and weight variances set to 100. *Right*: samples from an MLP style covariance function with weight variance 100 and bias variance approximately zero.

### Posterior Samples

Gaussian processes are non-parametric models. They are specified by their covariance function and a mean function. When combined with data observations a posterior Gaussian process is induced. The demos below show samples from that posterior.

```matlab
>> gpPosteriorSample('rbf', 5, [1 1], [-3 3], 1e5)
```

and

```matlab
>> gpPosteriorSample('rbf', 5, [16 1], [-3 3], 1e5)
```

*Left*: samples from the posterior induced by an RBF style covariance function with length scale 1 and 5 "training" data points taken from a sine wave. *Right*: similar, but for a length scale of 0.25.

### Simple Interpolation Demo

This simple demonstration plots, consecutively, an increasing number of data points, followed by an interpolated fit through the data points using a Gaussian process. This is a noiseless system, and the data is sampled from a GP with a known covariance function. The curve is then recovered with minimal uncertainty after only nine data points are included. The code is run with

```matlab
>> demInterpolation
```

*Left*: Gaussian process prediction after two points, with a new data point sampled. *Right*: after the new data point is included in the prediction.

*Left*: Gaussian process prediction after five points, with four new data points sampled. *Right*: after all nine data points are included.

### Simple Regression Demo

The regression demo very much follows the format of the interpolation demo. The difference here is that the data is sampled with noise. Fitting a model with noise means that the regression will not necessarily pass right through each data point. The code is run with

```matlab
>> demRegression
```

*Left*: Gaussian process prediction after two points, with a new data point sampled. *Right*: after the new data point is included in the prediction.

*Left*: Gaussian process prediction after five points, with four new data points sampled. *Right*: after all nine data points are included.

### Optimizing Hyper Parameters

One of the advantages of Gaussian processes over pure kernel interpretations of regression is the ability to select the hyper parameters of the kernel automatically. The demo

```matlab
>> demOptimiseGp
```

shows a series of plots of a Gaussian process with different length scales fitted to six data points. For each plot there is a corresponding plot of the log likelihood. The log likelihood peaks for a length scale equal to 1, which was the length scale used to generate the data.

From top left to bottom right: Gaussian process regression applied to the data with an increasing length scale. The length scales used were 0.05, 0.1, 0.25, 0.5, 1, 2, 4, 8 and 16.

Log-log plot of the log likelihood of the data against the length scales. The log likelihood is shown as a solid line. It is made up of a data fit term (the quadratic form), shown by a dashed line, and a complexity term (the log determinant), shown by a dotted line. The data fit is larger for short length scales, the complexity is larger for long length scales; the combination leads to a maximum around the true length scale value of 1.

### Regression over Motion Capture Markers

As a simple example of regression for real data we consider a motion capture data set. The data is from Ohio State University. In the example script we perform Gaussian process regression with time as the input and the x, y, z position of the marker attached to the left ankle as the output. To demonstrate the behavior of the model when the marker is lost, we remove a section of the data. The code can be run with

```matlab
>> demStickGp1
```

The code will optimize the hyper parameters and show plots of the posterior process through the training data and the missing test points. The result of the script is given in the plot below.

Gaussian process regression through the x (*left*), y (*middle*) and z (*right*) position of the left ankle.
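The data fit/complexity decomposition of the log likelihood described under "Optimizing Hyper Parameters" can be made concrete. The sketch below (illustrative Python/NumPy, not the MATLAB demOptimiseGp code; the noise variance and grid are my own choices) evaluates both terms of the log marginal likelihood over the same range of length scales used in the demo:

```python
import numpy as np

def rbf(x, length_scale, variance=1.0):
    sq = (x[:, None] - x[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / length_scale**2)

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 6)
# Six targets sampled from a GP whose true length scale is 1.
K_true = rbf(x, length_scale=1.0) + 1e-6 * np.eye(6)
y = np.linalg.cholesky(K_true) @ rng.standard_normal(6)

def log_likelihood_terms(length_scale, noise_var=0.01):
    K = rbf(x, length_scale) + noise_var * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    data_fit = -0.5 * y @ alpha               # the quadratic form term
    complexity = -np.sum(np.log(np.diag(L)))  # equals -0.5 * log det K
    const = -0.5 * len(x) * np.log(2 * np.pi)
    return data_fit, complexity, data_fit + complexity + const

for ell in [0.05, 0.1, 0.25, 0.5, 1, 2, 4, 8, 16]:
    fit, cplx, ll = log_likelihood_terms(ell)
    print(f"length scale {ell:5.2f}: data fit {fit:8.2f}, "
          f"complexity {cplx:8.2f}, log likelihood {ll:8.2f}")
```

Scanning the printed values shows the trade-off the demo plots: the quadratic form favours short length scales, the log determinant pulls the other way, and the sum is maximised near the generating length scale.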
Training data is shown as black spots, test points removed to simulate a lost marker are shown as circles, the posterior mean prediction is shown as a black line and two standard deviations are given as grey shading. Notice how the error bars are tight except in the region where the training data is missing and in the region where the training data ends.

### Sparse Pseudo-input Gaussian Processes

The sparse approximation used in this toolbox is based on the Sparse Pseudo-input Gaussian Process model described by Snelson and Ghahramani. Also provided are the extensions suggested by Quiñonero-Candela and Rasmussen, who provide a unifying terminology for describing these approximations which we shall use in what follows.

There are three demos provided for Gaussian process regression in 1-D. They each use a different form of likelihood approximation. The first demonstration uses the "projected latent variable" approach first described by Csato and Opper and later used by Seeger et al. In the terminology of Quiñonero-Candela and Rasmussen (QR-terminology) this is known as the "deterministic training conditional" (DTC) approximation. To use this approximation the following script can be run.

```matlab
>> demSpgp1dGp1
```

The result of the script is given in the plot below.

Gaussian process using the DTC approximation with nine inducing variables. Data is shown as black spots, the posterior mean prediction is shown as a black line and two standard deviations are given as grey shading.

The improved approximation suggested by Snelson and Ghahramani is known in QR-terminology as the "fully independent training conditional" (FITC). To try this approximation run the following script.

```matlab
>> demSpgp1dGp2
```

The result of the script is given on the left of the plot below.

*Left*: Gaussian process using the FITC approximation with nine inducing variables. Data is shown as black spots, the posterior mean prediction is shown as a black line and two standard deviations are given as grey shading.
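The defining property of the DTC approximation is that the predictive mean depends on the training data only through the inducing variables. The following Python/NumPy sketch (an illustration following Quiñonero-Candela and Rasmussen's review, not the toolbox's MATLAB code; the kernel, noise variance and point placements are my own choices) computes the DTC predictive mean next to the exact GP mean:

```python
import numpy as np

def rbf(a, b, length_scale=1.0, variance=1.0):
    sq = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * sq / length_scale**2)

def dtc_mean(x_train, y, x_test, x_inducing, noise_var=0.1):
    """DTC predictive mean: the training data enter only via the
    cross-covariance with the inducing inputs."""
    Kuu = rbf(x_inducing, x_inducing) + 1e-10 * np.eye(len(x_inducing))
    Kuf = rbf(x_inducing, x_train)
    Ksu = rbf(x_test, x_inducing)
    # Sigma = (Kuu + noise_var^{-1} Kuf Kfu)^{-1}
    Sigma = np.linalg.inv(Kuu + Kuf @ Kuf.T / noise_var)
    return Ksu @ (Sigma @ (Kuf @ y)) / noise_var

def full_gp_mean(x_train, y, x_test, noise_var=0.1):
    K = rbf(x_train, x_train) + noise_var * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y)

x = np.linspace(-3, 3, 9)        # nine training points from a sine wave
y = np.sin(x)
x_star = np.linspace(-3, 3, 5)
m_sparse = dtc_mean(x, y, x_star, x[::4])   # only three inducing points
m_exact = full_gp_mean(x, y, x_star)
```

A useful sanity check on the algebra: when the inducing set equals the full training set, the DTC mean reduces to the exact GP predictive mean.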
*Right*: similar, but for the PITC approximation, again with nine inducing variables.

At the Sheffield Gaussian Process Round Table, Lehel Csato pointed out that the Bayesian Committee Machine of Schwaighofer and Tresp can also be viewed within the same framework. This idea is formalised in Quiñonero-Candela and Rasmussen's review. The approximation is known as the "partially independent training conditional" (PITC) in QR-terminology. To try this approximation run the following script.

```matlab
>> demSpgp1dGp3
```

The result of the script is given on the right of the plot above.

Finally we can compare these results to the result from the full Gaussian process on the data with the correct hyper-parameters. To do this the following script can be run.

```matlab
>> demSpgp1dGp4
```

The result of the script is given in the plot below.

Full Gaussian process on the toy data with the correct hyper-parameters. Data is shown as black spots, the posterior mean prediction is shown as a black line and two standard deviations are given as a grey shaded area.

## GP-LVM

Changes for compatibility with the new SGPLVM toolbox by Carl Henrik Ek.

- **Version 0.162**: Added new files fgplvmWriteResults and fgplvmLoadResults for saving smaller model files.
- **Version 0.161**: Updates for running a GPLVM when the inner product matrix is used (i.e. the dimensionality is much greater than the number of data points). Minor changes to fix reading of GPLVM files from the latest C++ code.
- **Version 0.16**: Incorporated the variational approximation from Michalis in the code.
- **Version 0.153**: Changes to allow compatibility with the SGPLVM and NCCA toolboxes.
- **Version 0.152**: Bug fix in fgplvmReadFromFID where the values of model.m weren't being computed correctly.
- **Version 0.151**: This version includes results for the CMU Mocap data set from Taylor et al. of subject 35 running and walking, as well as some minor changes to allow hierarchical GP-LVMs to be used.
- **Version 0.15**: This version splits the Gaussian process portion into a new GP toolbox; the corresponding GP toolbox version is 0.1.
  - Fixed a bug in gpDynamicsExpandParam, gpDynamicsExtractParam and gpDynamicsLogLikeGradient where the 'fixInducing' option was not being dealt with.
  - Fixed a bug in fgplvmCreate.m where the back constraints were set up, but the latent positions were not being set according to the back constraints in the returned model.
- **Version 0.141**: Changed the GP-LVM default optimiser to scg rather than conjgrad. Added fgplvmOptimiseSequence and dependent files, for optimising a test sequence in the latent space in the case where there are dynamics on the model.
- **Version 0.14**: Carl Ek implemented multiple sequences in the gpDynamics model used for dynamics in the GPLVM; this was refined and integrated by Neil. Fixed two bugs in gpPosteriorGradMeanVar which appeared if fitc was used or the scales on the outputs were non-zero; this in turn affected fgplvmOptimisePoint. The default under back constraints was switched to not optimise towards a PCA initialisation. Fixed a bug in fgplvmReadFromFID where the old form of fgplvmCreate was being called.
- **Version 0.132**: Release 0.132 includes two speed improvements for the pitc approximation. Thanks to Ed Snelson for pointing out that it was unusually slow! New versions of the NDLUTIL and KERN toolboxes are also required.
- **Version 0.131**: Release 0.131 adds the ability to handle missing data and a new reversible dynamics model.
- **Version 0.13**: Release 0.13 is a (hopefully) fairly stable base release with which several results in forthcoming papers will be created. Additional features are better decompartmentalisation of the dynamics models, regularisation of the inducing variables' inputs and the introduction of fgplvmOptions and gpOptions for setting default options for the models.
- **Version 0.11**: Release 0.11 is the first release that contains the fully independent training conditional approximation (Snelson and Ghahramani; Quinonero Candela and Rasmussen).
- **Version 0.1**: Release 0.1 is a pre-release to make some of the model functionality available.
Some of the different approximations (such as the fully independent training conditional and the partially independent training conditional) are not yet implemented, and the dynamics currently have no sparse approximations associated with them.

This toolbox also implements back constraints (joint work with Joaquin Quinonero Candela). The mappings that can be used as back constraints are those described in the MLTOOLS toolbox.

Alternative GP-LVM implementations from this site: the GP-LVM C++ software is available from here. The original MATLAB version of the toolbox is available here.

### Examples

#### GP-LVM

The three approximations outlined above can be used to speed up learning in the GP-LVM. They have the advantage over the IVM approach taken in the original GP-LVM toolbox that the algorithm is fully convergent, and the final mapping from latent space to data space takes into account all of the data (not just the points in the active set).

As well as the new sparse approximations, the new toolbox allows the GP-LVM to be run with dynamics, as suggested by Wang et al.

Finally, the new toolbox allows the incorporation of 'back constraints' in learning. Back constraints force the latent points to be a smooth function of the data points. This means that points that are close in data space are constrained to be close in latent space. For the standard GP-LVM, points close in latent space are constrained to be close in data space, but the converse is not true. Various combinations of back constraints and different approximations are used in the examples below.

#### Oil Data

The 'oil data' is commonly used as a benchmark for visualisation algorithms. For more details on the data see this page. The C++ implementation of the GP-LVM has details on training the full GP-LVM with this data set. Here we will consider the three different approximations outlined above.

#### FITC Approximation

In all the examples we give there will be 100 points in the active set. We first considered the FITC approximation.
The results below can be recreated with the corresponding demo script.

*Left*: GP-LVM on the oil data using the FITC approximation without back constraints. The phases of flow are shown as green circles, red crosses and blue plusses. One hundred inducing variables are used. *Right*: similar, but for a back-constrained GP-LVM; the back constraint is provided by a multi-layer perceptron with 15 hidden nodes.

Back constraints can be added to each of these approximations. In the example on the right we used a back constraint given by a multi-layer perceptron with 15 hidden nodes.

#### DTC Approximation

The other approximations can also be used; in the figures below we give results from the DTC approximation.

*Left*: GP-LVM on the oil data using the DTC approximation without back constraints. The phases of flow are shown as green circles, red crosses and blue plusses. One hundred inducing variables are used. *Right*: similar, but for a back-constrained GP-LVM; the back constraint is provided by a multi-layer perceptron with 15 hidden nodes.

#### PITC Approximation

We also show results using the PITC approximation.

*Left*: GP-LVM on the oil data using the PITC approximation without back constraints. The phases of flow are shown as green circles, red crosses and blue plusses. One hundred inducing variables are used. *Right*: similar, but for a back-constrained GP-LVM; the back constraint is provided by a multi-layer perceptron with 15 hidden nodes.

#### Variational DTC Approximation

Finally, we also show results using the variational DTC approximation of Titsias.

*Left*: GP-LVM on the oil data using the variational DTC approximation without back constraints. The phases of flow are shown as green circles, red crosses and blue plusses. One hundred inducing variables are used. *Right*: similar, but for a back-constrained GP-LVM; the back constraint is provided by a multi-layer perceptron with 15 hidden nodes.
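The back constraints used in the examples above replace free latent coordinates with a smooth map from data space to latent space. A minimal Python/NumPy sketch of the idea (illustrative only, with made-up dimensions and a tanh map standing in for GPmat's MLP back constraint):

```python
import numpy as np

rng = np.random.default_rng(2)
Y = rng.standard_normal((20, 12))      # 20 data points in a 12-D data space

def back_constraint(Y, W, b):
    # A smooth MLP-style map from data space to a 2-D latent space. In a
    # back-constrained GP-LVM the weights W, b are optimised in place of
    # free latent coordinates.
    return np.tanh(Y @ W + b)

W = 0.1 * rng.standard_normal((12, 2))
b = np.zeros(2)
X = back_constraint(Y, W, b)           # latent positions, one row per point

# Because tanh is 1-Lipschitz, ||g(y) - g(y')|| <= ||W||_2 ||y - y'||:
# points close in data space are forced to be close in latent space.
lip = np.linalg.norm(W, 2)             # spectral norm of the linear part
d_data = np.linalg.norm(Y[0] - Y[1])
d_latent = np.linalg.norm(X[0] - X[1])
```

The Lipschitz bound is the whole point: the smooth map guarantees the "close in data space implies close in latent space" property that free latent coordinates do not provide.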
#### Back Constraints and Dynamics

First we wi