pyGPGO.surrogates.GaussianProcess module
class pyGPGO.surrogates.GaussianProcess.GaussianProcess(covfunc, optimize=False, usegrads=False, mprior=0)[source]
Bases: object
Gaussian Process regressor class. Based on Rasmussen & Williams [1] algorithm 2.1.
Parameters: - covfunc (instance of a class from the covfunc module) – Covariance function.
- optimize (bool) – Whether to perform covariance function hyperparameter optimization.
- usegrads (bool) – Whether to use gradient information in hyperparameter optimization. Only used if optimize=True.
- mprior (float) – Explicit value for the mean function of the prior Gaussian Process.
Notes
[1] Rasmussen, C. E., & Williams, C. K. I. (2004). Gaussian processes for machine learning. International journal of neural systems (Vol. 14). http://doi.org/10.1142/S0129065704001899
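The class implements algorithm 2.1 of Rasmussen & Williams: factorize the noisy training covariance with a Cholesky decomposition, solve for the weight vector, then form the posterior mean and covariance. The following is a minimal NumPy sketch of that algorithm, not the pyGPGO implementation itself; the `sqexp` kernel and the `jitter` noise term are illustrative assumptions.

```python
import numpy as np

def gp_posterior(X, y, Xstar, k, jitter=1e-6):
    """Exact GP regression with a zero-mean prior (R&W algorithm 2.1)."""
    K = k(X, X) + jitter * np.eye(len(X))    # training covariance + noise/jitter
    L = np.linalg.cholesky(K)                # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # alpha = K^{-1} y
    Ks = k(X, Xstar)                         # cross-covariance train/test
    mean = Ks.T @ alpha                      # posterior mean
    v = np.linalg.solve(L, Ks)
    cov = k(Xstar, Xstar) - v.T @ v          # posterior covariance
    return mean, cov

# hypothetical squared-exponential kernel for the demo
def sqexp(A, B, l=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / l ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X).ravel()
Xstar = np.linspace(-3, 3, 50)[:, None]
mu, cov = gp_posterior(X, y, Xstar, sqexp)   # mu: (50,), cov: (50, 50)
```

With near-zero noise the posterior mean interpolates the training targets, which is a quick sanity check on any implementation of this algorithm.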
__init__(covfunc, optimize=False, usegrads=False, mprior=0)[source]
Gaussian Process regressor class. Based on Rasmussen & Williams [1] algorithm 2.1.
Parameters: - covfunc (instance of a class from the covfunc module) – Covariance function.
- optimize (bool) – Whether to perform covariance function hyperparameter optimization.
- usegrads (bool) – Whether to use gradient information in hyperparameter optimization. Only used if optimize=True.
- mprior (float) – Explicit value for the mean function of the prior Gaussian Process.
-
covfunc
Internal covariance function.
Type: object
-
optimize
User chosen optimization configuration.
Type: bool
-
usegrads
Whether gradient information is used in hyperparameter optimization.
Type: bool
-
mprior
Explicit value for the mean function of the prior Gaussian Process.
Type: float
_grad(param_vector, param_key)[source]
Returns the gradient for each hyperparameter, evaluated at a given point.
Returns: Gradient for each evaluated hyperparameter.
Return type: np.ndarray
_lmlik(param_vector, param_key)[source]
Returns the negative marginal log-likelihood for the given covariance hyperparameters.
Returns: Negative log-marginal likelihood for chosen hyperparameters.
Return type: float
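The negative log marginal likelihood of a zero-mean GP can be computed stably through the Cholesky factor, as in R&W eq. 2.30; the sketch below is illustrative and not pyGPGO's `_lmlik` itself.

```python
import numpy as np

def neg_log_marglik(K, y):
    """NLL = 0.5 y^T K^{-1} y + 0.5 log|K| + (n/2) log(2*pi),
    computed via the Cholesky factor L (log|K| = 2 * sum(log(diag(L))))."""
    n = len(y)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # K^{-1} y
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * n * np.log(2 * np.pi)

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
K = A @ A.T + 5 * np.eye(5)   # random SPD covariance matrix
y = rng.normal(size=5)
nll = neg_log_marglik(K, y)
```

The Cholesky route avoids explicitly inverting K or computing its determinant, both of which are numerically fragile for ill-conditioned covariance matrices.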
fit(X, y)[source]
Fits a Gaussian Process regressor.
Parameters: - X (np.ndarray, shape=(nsamples, nfeatures)) – Training instances to fit the GP.
- y (np.ndarray, shape=(nsamples,)) – Corresponding continuous target values to X.
getcovparams()[source]
Returns the current covariance function hyperparameters.
Returns: Dictionary containing covariance function hyperparameters.
Return type: dict
optHyp(param_key, param_bounds, grads=None, n_trials=5)[source]
Optimizes the negative marginal log-likelihood for given hyperparameters and bounds. This is an empirical Bayes approach (or Type II maximum likelihood).
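Type II maximum likelihood amounts to minimizing the negative marginal log-likelihood over the kernel hyperparameters, restarting the optimizer from several random initial points within the bounds and keeping the best optimum. A gradient-free sketch of that loop (analogous to the `grads=None` case) is shown below; the `sqexp` kernel, bounds, and restart count are illustrative assumptions, and SciPy's `minimize` stands in for whatever optimizer the library uses internally.

```python
import numpy as np
from scipy.optimize import minimize

def sqexp(A, B, l):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / l ** 2)

def nll(log_l, X, y, jitter=1e-5):
    """Negative marginal log-likelihood as a function of log-lengthscale."""
    K = sqexp(X, X, np.exp(log_l[0])) + jitter * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(y) * np.log(2 * np.pi)

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, (15, 1))
y = np.sin(2 * X).ravel()

# n_trials random restarts within the bounds; keep the best optimum
best = None
for _ in range(5):
    x0 = rng.uniform(np.log(0.1), np.log(2.0), size=1)
    res = minimize(nll, x0, args=(X, y), method='L-BFGS-B',
                   bounds=[(np.log(0.05), np.log(3.0))])
    if best is None or res.fun < best.fun:
        best = res
l_hat = np.exp(best.x[0])   # lengthscale chosen by Type II ML
```

Optimizing in log-space keeps strictly positive hyperparameters unconstrained, and the random restarts guard against the multimodality of the marginal likelihood surface.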
param_grad(k_param)[source]
Returns the gradient over hyperparameters. It is recommended to use self._grad instead.
Parameters: k_param (dict) – Dictionary with keys being hyperparameters and values their queried values.
Returns: Gradient corresponding to each hyperparameter, in the order given by k_param.keys().
Return type: np.ndarray
predict(Xstar, return_std=False)[source]
Returns the mean and covariance of the posterior Gaussian Process.
Parameters: - Xstar (np.ndarray, shape=(nsamples, nfeatures)) – Testing instances to predict.
- return_std (bool) – Whether to return the standard deviation of the posterior process. If False, the whole covariance matrix of the posterior process is returned instead.
Returns: - np.ndarray – Mean of the posterior process for testing instances.
- np.ndarray – Covariance of the posterior process for testing instances.