GP with Zero Mean
Zero Mean Gaussian Process
class zerogp.GP(theta: numpy.ndarray, y: numpy.ndarray, var: float = 1e-05, x_trans: bool = False, y_trans: bool = False, use_mean: bool = False)

Bases: object

Module to perform a zero-mean Gaussian Process regression. One can also specify whether to apply the pre-whitening step at the input level and the logarithm transformation at the output level.
- Param theta (np.ndarray) : matrix of size ntrain x ndim
- Param y (np.ndarray) : output/target
- Param var (float or np.ndarray) : noise (co)variance; a float is promoted to a diagonal covariance matrix of size ntrain x ntrain
- Param x_trans (bool) : if True, pre-whitening is applied to the inputs
- Param y_trans (bool) : if True, the logarithm of the output is used
- Param use_mean (bool) : if True, the outputs are centred on zero
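The two optional transformations can be illustrated with a short numpy sketch (the data and variable names below are hypothetical, not part of zerogp): pre-whitening rotates and scales the inputs so their sample covariance becomes the identity, and the log transform with centring is applied to the outputs.

```python
import numpy as np

# Hypothetical training data: ntrain = 6 points in ndim = 2.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 1.0, size=(6, 2))
y = np.exp(theta.sum(axis=1))

# x_trans=True: pre-whitening makes the sample covariance of the
# transformed inputs the identity matrix.
mu = theta.mean(axis=0)
L = np.linalg.cholesky(np.cov(theta, rowvar=False))
theta_white = np.linalg.solve(L, (theta - mu).T).T

# y_trans=True and use_mean=True: work with the (base-10) log of
# the outputs, centred on zero.
y_log = np.log10(y)
y_centred = y_log - y_log.mean()
```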
derivatives(test_point: numpy.ndarray, order: int = 1) → Tuple[numpy.ndarray, numpy.ndarray]

If a transformation was applied to the outputs, this function is needed to calculate the 'exact' gradient in the original space.

- Param test_point (np.ndarray) : array of the test point
- Param order (int) : 1 or 2, referring to first and second derivatives respectively
- Returns grad (np.ndarray) : first derivative with respect to the input parameters
- Returns gradient_sec (np.ndarray) : second derivatives with respect to the input parameters, if specified
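When the output transformation is a base-10 logarithm, the GP models g(θ) = log10 f(θ), so the 'exact' gradient in the original space follows from the chain rule: ∂f/∂θ = ln(10) · 10^g · ∂g/∂θ. A minimal sketch of this correction (the function name below is illustrative, not the zerogp API):

```python
import numpy as np

def original_space_gradient(g_pred, g_grad):
    """Chain rule for a log10-transformed target: if the GP predicts
    g = log10(f), then f = 10**g and df/dtheta = ln(10) * 10**g * dg/dtheta."""
    return np.log(10.0) * 10.0 ** g_pred * g_grad

# Check against f(t) = 10**(2t), i.e. g(t) = 2t and dg/dt = 2.
t = 0.3
grad = original_space_gradient(2 * t, 2.0)
```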
evidence(params: numpy.ndarray) → Tuple[numpy.ndarray, numpy.ndarray]

Calculate the log-evidence of the GP and the gradient with respect to the kernel hyperparameters.

- Param params (np.ndarray) : kernel hyperparameters
- Returns neg_log_evidence (np.ndarray) : the negative log-marginal likelihood
- Returns -gradient (np.ndarray) : the gradient with respect to the kernel hyperparameters
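The returned quantity is the standard negative log-marginal likelihood of a zero-mean GP, −log p(y) = ½ yᵀ(K + σ²I)⁻¹y + ½ log|K + σ²I| + (n/2) log 2π. A self-contained numpy sketch using a Cholesky factorisation (not the zerogp implementation, which also returns the gradient):

```python
import numpy as np

def neg_log_evidence(y, K, var):
    """Negative log-marginal likelihood of a zero-mean GP with
    kernel matrix K and scalar noise variance var."""
    n = y.size
    Ky = K + var * np.eye(n)
    L = np.linalg.cholesky(Ky)                           # Ky = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # Ky^{-1} y
    # log|Ky| = 2 * sum(log(diag(L)))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * n * np.log(2 * np.pi)

# Small worked example with an RBF kernel on 4 one-dimensional points.
x = np.array([0.0, 0.3, 0.7, 1.0])
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.25)
y = np.sin(x)
nle = neg_log_evidence(y, K, 1e-5)
```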
fit(method: str = 'CG', bounds: numpy.ndarray = None, options: dict = {'ftol': 1e-05}, n_restart: int = 2) → numpy.ndarray

The kernel hyperparameters are learnt in this function.

- Param method (str) : the choice of optimizer (see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html); the L-BFGS-B algorithm is recommended
- Param bounds (np.ndarray) : the prior bounds on the hyperparameters
- Param options (dict) : options for the L-BFGS-B optimizer, for example:
  options={'disp': None, 'maxcor': 10, 'ftol': 2.220446049250313e-09, 'gtol': 1e-05, 'eps': 1e-08, 'maxfun': 15000, 'maxiter': 15000, 'iprint': -1, 'maxls': 20, 'finite_diff_rel_step': None}
- Param n_restart (int) : number of times the optimizer is restarted
- Returns opt_params (np.ndarray) : array of the optimised kernel hyperparameters
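The training loop amounts to minimising the negative log-evidence with scipy.optimize.minimize, restarting n_restart times from different starting points and keeping the best optimum found. A self-contained sketch under assumed details (an RBF kernel with log-amplitude and log-lengthscale as the hyperparameters; zerogp's actual kernel and parametrisation may differ):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=(8, 1))
y = np.sin(3.0 * x[:, 0])

def neg_log_evidence(log_params, var=1e-5):
    """Negative log-marginal likelihood for an assumed RBF kernel."""
    amp, ls = np.exp(log_params)
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    K = amp * np.exp(-0.5 * d2 / ls ** 2) + var * np.eye(y.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * y.size * np.log(2 * np.pi)

best = None
for _ in range(2):  # n_restart = 2
    res = minimize(neg_log_evidence, rng.normal(size=2),
                   method='L-BFGS-B', options={'ftol': 1e-5})
    if best is None or res.fun < best.fun:
        best = res
opt_params = np.exp(best.x)  # optimised (amplitude, lengthscale)
```

Restarting matters because the log-evidence surface is generally multi-modal in the hyperparameters.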
grad_pre_computations(test_point: numpy.ndarray, order: int = 1) → Tuple[numpy.ndarray, numpy.ndarray]

Pre-compute some quantities prior to calculating the gradients.

- Param test_point (np.ndarray) : test point in parameter space
- Param order (int) : order of differentiation (default: 1) - not to be confused with the order of the polynomial
- Returns gradients (tuple) : first and second derivatives (if order = 2)
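For an RBF kernel, the quantities worth caching are α = (K + σ²I)⁻¹y, the test–train kernel vector k*, and the input differences θᵢ − θ*; the predictive-mean gradient is then ∂μ/∂θ* = Σᵢ αᵢ k*ᵢ (θᵢ − θ*)/ℓ². A sketch under these assumptions (not the zerogp internals, whose cached quantities may differ):

```python
import numpy as np

def grad_pre_computations(theta, y, test_point, amp=1.0, ls=0.5, var=1e-5):
    """Cache alpha = (K + var*I)^{-1} y, the test-train kernel vector
    k_star and the input differences for later gradient use."""
    d2 = ((theta[:, None, :] - theta[None, :, :]) ** 2).sum(-1)
    K = amp * np.exp(-0.5 * d2 / ls ** 2) + var * np.eye(y.size)
    diff = theta - test_point
    k_star = amp * np.exp(-0.5 * (diff ** 2).sum(-1) / ls ** 2)
    alpha = np.linalg.solve(K, y)
    return alpha, k_star, diff

theta = np.array([[0.0], [0.4], [1.0]])
y = np.array([0.0, 1.0, 0.5])
tp = np.array([0.5])
alpha, k_star, diff = grad_pre_computations(theta, y, tp)
# First derivative of the predictive mean at tp (ls = 0.5).
grad = (alpha * k_star) @ (diff / 0.5 ** 2)
```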
noise_covariance() → numpy.ndarray

Build the noise covariance matrix.

- Returns the pre-defined (co-)variance in its appropriate form
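Since var may be a scalar or an array, building the noise covariance is a matter of promoting it to an ntrain × ntrain matrix. One plausible sketch of that promotion (the actual zerogp logic may differ):

```python
import numpy as np

def noise_covariance(var, ntrain):
    """Promote the stored noise (co)variance to an ntrain x ntrain matrix:
    scalar -> var * I, vector -> diag(vector), matrix -> unchanged."""
    var = np.asarray(var, dtype=float)
    if var.ndim == 0:
        return float(var) * np.eye(ntrain)
    if var.ndim == 1:
        return np.diag(var)
    return var
```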
pred_original_function(test_point: numpy.ndarray, n_samples: int = None) → numpy.ndarray

Calculates the original function if the log10 transformation was used on the target.

- Param test_point (np.ndarray) : the test point in parameter space
- Param n_samples (int) : number of samples of the function to generate, assuming the Cholesky factor has been stored
- Returns y_samples (np.ndarray) : if n_samples is specified, samples are returned
- Returns y_original (np.ndarray) : otherwise, the predicted function in the linear scale (original space) is returned
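Undoing the log10 transform works as follows: the mean prediction maps back as f = 10^g, and samples can be drawn from the Gaussian predictive distribution in log space before mapping back. An illustrative sketch (argument names are hypothetical, and zerogp draws correlated samples via the stored Cholesky factor rather than the univariate case shown here):

```python
import numpy as np

def pred_original_function(mean_log, var_log, n_samples=None, seed=0):
    """Map a GP prediction in log10 space back to the original space.
    If n_samples is given, draw Gaussian samples in log space first."""
    if n_samples is None:
        return 10.0 ** mean_log           # y_original
    rng = np.random.default_rng(seed)
    g = mean_log + np.sqrt(var_log) * rng.standard_normal(n_samples)
    return 10.0 ** g                      # y_samples

y_original = pred_original_function(2.0, 0.01)               # -> 100.0
y_samples = pred_original_function(2.0, 0.01, n_samples=5)
```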
prediction(test_point: numpy.ndarray, return_var: bool = False) → Tuple[numpy.ndarray, numpy.ndarray]

Predicts the function at a test point in parameter space.

- Param test_point (np.ndarray) : test point in parameter space
- Param return_var (bool) : if True, the predicted variance will be computed
- Returns mean_pred (np.ndarray) : the mean of the GP
- Returns var_pred (np.ndarray) : the variance of the GP (optional)
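For a zero-mean GP these are the standard predictive equations: μ = k*ᵀ(K + σ²I)⁻¹y and σ*² = k(θ*, θ*) − k*ᵀ(K + σ²I)⁻¹k*. A self-contained sketch with an assumed RBF kernel (hyperparameter values are illustrative, not zerogp defaults):

```python
import numpy as np

def gp_prediction(theta, y, test_point, amp=1.0, ls=0.5, var=1e-5):
    """Zero-mean GP predictive mean and variance at one test point."""
    def kern(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return amp * np.exp(-0.5 * d2 / ls ** 2)
    K = kern(theta, theta) + var * np.eye(y.size)
    k_star = kern(theta, test_point[None, :])[:, 0]
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, k_star)
    mean_pred = k_star @ alpha
    var_pred = amp - v @ v        # k(theta*, theta*) = amp for this kernel
    return mean_pred, var_pred

theta = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, 1.0, 0.0])
mean_pred, var_pred = gp_prediction(theta, y, np.array([0.5]))
```

At a training input the GP nearly interpolates the data and the predicted variance shrinks towards the noise level.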