torchmin.ScipyMinimizer

class torchmin.ScipyMinimizer(params, method='bfgs', bounds=None, constraints=(), tol=None, options=None)[source]

A PyTorch optimizer for constrained & unconstrained function minimization.

Note

This optimizer is a wrapper for scipy.optimize.minimize(). It uses autograd behind the scenes to build Jacobian & Hessian callables before invoking scipy. Inputs and objectives should use PyTorch tensors, as with the other routines. CUDA is supported; however, data will be transferred back and forth between GPU and CPU.
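
A minimal usage sketch, assuming the usual closure-based API (the objective and tensor values are illustrative, not part of the library):

    import torch
    from torchmin import ScipyMinimizer

    # A free parameter tensor to optimize.
    x = torch.tensor([1.5, -0.5], requires_grad=True)

    optimizer = ScipyMinimizer([x], method='bfgs')

    def closure():
        # Clear stale gradients, then return the scalar objective.
        # Per the note above, derivatives are built via autograd,
        # so no explicit backward() call appears here.
        optimizer.zero_grad()
        return ((x - torch.tensor([2.0, 3.0])) ** 2).sum()

    optimizer.step(closure)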

Warning

This optimizer doesn’t support per-parameter options and parameter groups (there can be only one).

Warning

Right now, all parameters must be on a single device. This will be improved in the future.

Parameters
  • params (iterable) – An iterable of torch.Tensor objects. Specifies which tensors should be optimized.

  • method (str) – One of the optimization methods accepted by scipy.optimize.minimize(). Defaults to ‘bfgs’.

  • bounds (iterable, optional) – An iterable of torch.Tensor objects or floats with the same length as params. Specifies bounds for each parameter; see the sketch after this list.

  • constraints (dict, optional) – Constraint definitions for constrained minimization, following the format accepted by scipy.optimize.minimize(). Defaults to () (unconstrained).

  • tol (float, optional) – Tolerance for termination, forwarded to scipy.optimize.minimize().

  • options (dict, optional) – A dict of solver-specific options, forwarded to scipy.optimize.minimize().
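
For the bounds argument, a hedged sketch, assuming each entry is a (lower, upper) pair for the corresponding parameter in the style of scipy's convention (the exact accepted format is only loosely specified above):

    import torch
    from torchmin import ScipyMinimizer

    x = torch.full((2,), 0.5, requires_grad=True)

    optimizer = ScipyMinimizer(
        [x],
        method='l-bfgs-b',   # a scipy method that supports bound constraints
        # Assumed format: one (lower, upper) pair per parameter.
        bounds=[(torch.zeros(2), torch.ones(2))],
        tol=1e-6,            # termination tolerance forwarded to scipy
    )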

__init__(params, method='bfgs', bounds=None, constraints=(), tol=None, options=None)[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(params[, method, bounds, …])

Initialize self.

add_param_group(param_group)

Add a param group to the Optimizer's param_groups.

load_state_dict(state_dict)

Loads the optimizer state.

profile_hook_step(func)

register_step_post_hook(hook)

Register an optimizer step post hook which will be called after optimizer step. It should have the following signature: hook(optimizer, args, kwargs) -> None.

register_step_pre_hook(hook)

Register an optimizer step pre hook which will be called before optimizer step. It should have the following signature: hook(optimizer, args, kwargs) -> None or modified args and kwargs.

state_dict()

Returns the state of the optimizer as a dict.
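
Saving and restoring state follow the standard torch.optim checkpointing pattern, e.g. (continuing from the sketches above; 'opt.pt' is an illustrative path):

    import torch

    # Checkpoint the optimizer state and restore it later.
    torch.save(optimizer.state_dict(), 'opt.pt')
    optimizer.load_state_dict(torch.load('opt.pt'))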

step(closure)

Perform an optimization step. The closure argument is required for this optimizer.
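
Because the class wraps scipy.optimize.minimize(), a single call to step(closure) presumably runs a complete scipy solve rather than one iteration of a training loop. A sketch of fitting a small model (the model and data are illustrative):

    import torch
    from torchmin import ScipyMinimizer

    model = torch.nn.Linear(3, 1)
    X, y = torch.randn(32, 3), torch.randn(32, 1)

    optimizer = ScipyMinimizer(model.parameters(), method='l-bfgs-b')

    def closure():
        # Re-evaluate the model and return the loss; the wrapper
        # differentiates it via autograd (see the note above).
        optimizer.zero_grad()
        return torch.nn.functional.mse_loss(model(X), y)

    # One call performs the full minimization.
    optimizer.step(closure)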

zero_grad([set_to_none])

Sets the gradients of all optimized torch.Tensor objects to zero.