beebo package

Submodules

beebo.acquisition module

class beebo.acquisition.AugmentedPosteriorMethod(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

Used to specify the method for augmenting the GP model with new points and for computing the posterior covariance after augmentation in the BatchedEnergyEntropyBO acquisition function.

NAIVE

We keep a copy of the original model and augment it with the new points using the set_train_data method each time we evaluate the acquisition function. This is memory safe but slow.

CHOLESKY

We perform a low-rank update to the Cholesky decomposition of the training covariance, adding the new points. This is fast, but circumvents the default GPyTorch inference in favor of Cholesky-based predictions. Uses GPPosteriorPredictor to compute the augmented covariance.

GET_FANTASY_MODEL

We use the get_fantasy_model method of the GP model to get a new model with the new points. This is not memory safe when running with gradients enabled.

CHOLESKY = 'cholesky'[source]
GET_FANTASY_MODEL = 'get_fantasy_model'[source]
NAIVE = 'naive'[source]
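The idea behind the CHOLESKY option can be illustrated with a block Cholesky update. The sketch below is a minimal numpy analogue, not beebo's implementation (which works on the GP's torch covariance via GPPosteriorPredictor): given the factor of the original training covariance, the factor of the covariance augmented with q new points is assembled from a triangular solve and a small q x q factorization instead of re-factorizing the whole matrix.

```python
import numpy as np

# Illustrative sketch (not beebo's code) of a low-rank Cholesky update:
# append q new points to an n-point training covariance without
# re-factorizing the full (n+q) x (n+q) matrix from scratch.
rng = np.random.default_rng(0)

def rand_spd(n):
    # Random symmetric positive-definite matrix.
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

n, q = 5, 2                        # n old points, q new points
K = rand_spd(n + q)                # full joint covariance
K11, K12, K22 = K[:n, :n], K[:n, n:], K[n:, n:]

L11 = np.linalg.cholesky(K11)      # factor of the original training covariance
B = np.linalg.solve(L11, K12).T    # cross block: B = K21 L11^{-T}
S = K22 - B @ B.T                  # Schur complement of K11
L22 = np.linalg.cholesky(S)        # only a q x q factorization is new work

# Assemble the updated factor [[L11, 0], [B, L22]].
L_new = np.block([[L11, np.zeros((n, q))], [B, L22]])

# Same result as factorizing the augmented matrix from scratch.
L_full = np.linalg.cholesky(K)
print(np.allclose(L_new, L_full))  # → True
```

Because only the q x q Schur complement is factorized per acquisition evaluation, this route avoids both the repeated set_train_data calls of NAIVE and the memory overhead of GET_FANTASY_MODEL.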
class beebo.acquisition.BatchedEnergyEntropyBO(model: Model, temperature: float, kernel_amplitude: float = 1.0, posterior_transform: PosteriorTransform | None = None, X_pending: Tensor | None = None, maximize: bool = True, logdet_method: str | LogDetMethod = 'svd', augment_method: str | AugmentedPosteriorMethod = 'naive', energy_function: str | EnergyFunction = 'sum', **kwargs)[source]

Bases: AnalyticAcquisitionFunction

The BEEBO batch acquisition function. Jointly optimizes a batch of points by minimizing the free energy of the batch.

Parameters:
  • model (GPyTorch Model) – A fitted single-outcome GP model. Must be in batch mode if candidate sets X will be.

  • temperature (float) – A scalar representing the temperature. Higher temperature leads to more exploration.

  • kernel_amplitude (float, optional) – The amplitude of the kernel. Defaults to 1.0. This is used to bring the temperature to a scale that is comparable to UCB’s hyperparameter beta.

  • posterior_transform (PosteriorTransform, optional) – A PosteriorTransform. If using a multi-output model, a PosteriorTransform that transforms the multi-output posterior into a single-output posterior is required.

  • maximize (bool, optional) – If True, consider the problem a maximization problem. Defaults to True.

  • logdet_method (str, optional) – The method to use to compute the log determinant of the covariance matrix. Should be one of the members of the LogDetMethod enum: "svd", "cholesky", or "torch". Defaults to "svd".

  • augment_method (str, optional) – The method to use to augment the model with the new points and compute the posterior covariance. Should be one of the members of the AugmentedPosteriorMethod enum: "naive", "cholesky", or "get_fantasy_model". Defaults to "naive".

  • energy_function (str, optional) – The energy function to use in the BEEBO acquisition function. Should be a string representing one of the members of the EnergyFunction enum: "softmax" or "sum". "softmax" implements maxBEEBO and "sum" implements meanBEEBO. Defaults to "sum".

  • **kwargs – Additional arguments to be passed to the energy function.

compute_energy(X: Tensor) → Tensor[source]

Evaluate the energy of the candidate set X.

Parameters:

X – A (b1 x … bk) x q x d-dim batched tensor of d-dim design points.

Returns:

A (b1 x … bk)-dim tensor of energy values at the given design points X.

compute_entropy(X: Tensor) → Tensor[source]

Evaluate the entropy of the candidate set X.

Parameters:

X – A (b1 x … bk) x q x d-dim batched tensor of d-dim design points.

Returns:

A (b1 x … bk)-dim tensor of information gain values at the given design points X.

forward(X: Tensor) → Tensor[source]

Evaluate the free energy of the candidate set X.

Parameters:

X – A (b1 x … bk) x q x d-dim batched tensor of d-dim design points.

Returns:

A (b1 x … bk)-dim tensor of free energy values at the given design points X.
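The free energy that forward evaluates trades off an exploitation term (the batch energy, driven by the posterior means) against an exploration term (the information gain, a posterior-covariance log-determinant) scaled by the temperature. The following numpy sketch is purely illustrative of that trade-off; the variable names and the exact formulas are assumptions, not beebo's internals.

```python
import numpy as np

# Hedged sketch of the energy/entropy trade-off behind the free energy.
# All quantities here are illustrative stand-ins, not beebo's code.
rng = np.random.default_rng(1)

q = 4                                    # batch size
mean = rng.standard_normal(q)            # posterior mean at the q candidates
A = rng.standard_normal((q, q))
cov = A @ A.T + np.eye(q)                # posterior covariance (SPD)

energy = -mean.sum()                     # "sum"-style energy for maximization
sign, logdet = np.linalg.slogdet(cov)    # log-determinant of the covariance
entropy = 0.5 * logdet                   # differential entropy up to constants

temperature = 1.0                        # higher T → more weight on exploration
free_energy = energy - temperature * entropy   # quantity to be minimized

print(free_energy)
```

Raising the temperature makes the entropy term dominate, pushing the optimizer toward batches with larger joint posterior covariance, i.e. more exploratory batches.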

set_X_pending(X_pending: Tensor | None = None) → None[source]

Informs the acquisition function about pending design points.

Parameters:

X_pending – n x d Tensor with n d-dim design points that have been submitted for evaluation but have not yet been evaluated.

class beebo.acquisition.EnergyFunction(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

Used to specify the energy function to be used in the BatchedEnergyEntropyBO acquisition function.

SOFTMAX

Implements maxBEEBO. This option will lead the acquisition function to focus more on the point with the highest expected improvement in the batch. If this is chosen, BatchedEnergyEntropyBO accepts the additional arguments softmax_beta and f_max. softmax_beta is a scalar representing the inverse temperature of the softmax function. f_max is a scalar representing the maximum value of the function to be optimized.

SUM

Implements meanBEEBO. This option will lead the acquisition function to focus on improving the overall batch.

SOFTMAX = 'softmax'[source]
SUM = 'sum'[source]
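The contrast between the two aggregations can be seen in a small sketch. The helper below is hypothetical (not beebo's implementation, which works on the posterior distribution): "sum" lets every batch point contribute equally, while "softmax" with inverse temperature softmax_beta concentrates weight on the best point, using the standard subtract-the-max trick for numerical stability.

```python
import math

# Hypothetical helper contrasting the "sum" and "softmax" aggregations;
# not beebo's implementation.
def batch_energy(values, kind="sum", softmax_beta=10.0):
    if kind == "sum":
        # meanBEEBO-style: all batch points contribute equally.
        return sum(values)
    # maxBEEBO-style: softmax-weighted average, dominated by the best point.
    # Subtract the max before exponentiating for numerical stability.
    m = max(values)
    w = [math.exp(softmax_beta * (v - m)) for v in values]
    z = sum(w)
    return sum(wi / z * v for wi, v in zip(w, values))

vals = [0.1, 0.2, 2.0]
print(batch_energy(vals, "sum"))       # all three points contribute: ~2.3
print(batch_energy(vals, "softmax"))   # dominated by the best point: ~2.0
```

With a large softmax_beta the weighted value approaches the batch maximum, which is why SOFTMAX focuses the acquisition on the single most promising point in the batch.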
class beebo.acquisition.LogDetMethod(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]

Bases: Enum

Used to specify the method for computing the log determinant of the covariance matrices in the BatchedEnergyEntropyBO acquisition function.

SVD

Computes the log determinant using singular value decomposition.

CHOLESKY

Computes the log determinant using the Cholesky decomposition, taking advantage of the fact that the covariance matrix is positive definite. This is not always numerically stable.

TORCH

Computes the log determinant using the default torch function torch.logdet. This is not always numerically stable.

CHOLESKY = 'cholesky'[source]
SVD = 'svd'[source]
TORCH = 'torch'[source]
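All three options compute the same quantity by different routes. A minimal numpy analogue (beebo itself operates on torch tensors) of each method, for a symmetric positive-definite covariance:

```python
import numpy as np

# numpy analogue of the three LogDetMethod routes (beebo uses torch).
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
cov = A @ A.T + 6 * np.eye(6)            # SPD covariance matrix

# "svd": sum of log singular values (equal to log eigenvalues for SPD).
s = np.linalg.svd(cov, compute_uv=False)
logdet_svd = np.log(s).sum()

# "cholesky": 2 * sum of the log-diagonal of the Cholesky factor.
L = np.linalg.cholesky(cov)
logdet_chol = 2.0 * np.log(np.diag(L)).sum()

# "torch" analogue: the library's direct log-determinant routine.
sign, logdet_direct = np.linalg.slogdet(cov)

print(logdet_svd, logdet_chol, logdet_direct)  # all three agree
```

The SVD route is the most robust to near-singular covariances (hence the default), while the Cholesky and direct routes are faster but can fail when the matrix is numerically indefinite.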
beebo.acquisition.softmax_expectation(mvn: MultivariateNormal, a: Tensor, softmax_beta: float, f_max: float = None)[source]
beebo.acquisition.softmax_expectation_a_is_mean(mvn, softmax_beta, f_max=None)[source]
beebo.acquisition.stable_softmax(x: Tensor, beta: float, f_max: float = None, eps=1e-06, alpha=0.05)[source]

Module contents