torchmin.Minimizer

class torchmin.Minimizer(params, method='bfgs', **minimize_kwargs)[source]

A general-purpose PyTorch optimizer for unconstrained function minimization.

Warning

This optimizer doesn’t support per-parameter options and parameter groups (there can be only one).

Warning

Right now all parameters have to be on a single device. This will be improved in the future.

Parameters
  • params (iterable) – An iterable of torch.Tensor objects specifying which tensors should be optimized.

  • method (str) – Minimization method (algorithm) to use. Must be one of the methods offered in torchmin.minimize(). Defaults to ‘bfgs’.

  • **minimize_kwargs (dict) – Additional keyword arguments that will be passed to torchmin.minimize().

__init__(params, method='bfgs', **minimize_kwargs)[source]

Initialize self. See help(type(self)) for accurate signature.
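
A minimal usage sketch (illustrative only; tol and max_iter are examples of **minimize_kwargs forwarded to torchmin.minimize(), and the quadratic objective is made up for demonstration):

    import torch
    from torchmin import Minimizer

    # Tensors to optimize; per the warning above, they must share one device.
    x = torch.zeros(5, requires_grad=True)
    y = torch.zeros(3, requires_grad=True)

    # tol and max_iter are forwarded to torchmin.minimize() via **minimize_kwargs.
    optimizer = Minimizer([x, y], method='bfgs', tol=1e-6, max_iter=100)

    def closure():
        optimizer.zero_grad()
        # Unlike torch.optim optimizers, the closure only returns the loss;
        # gradients are handled internally, so do not call loss.backward().
        return (x - 1.0).pow(2).sum() + (y + 2.0).pow(2).sum()

    final_loss = optimizer.step(closure)

Note that, unlike stochastic optimizers, each step(closure) call runs the chosen minimization routine until convergence or the iteration limit, so a single call is often sufficient for a deterministic objective.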

Methods

__init__(params[, method])

Initialize self.

add_param_group(param_group)

Add a param group to the Optimizer's param_groups.

closure(x)

dir_evaluate(x, t, d)

load_state_dict(state_dict)

Loads the optimizer state.

profile_hook_step(func)

register_step_post_hook(hook)

Register an optimizer step post hook which will be called after the optimizer step.

register_step_pre_hook(hook)

Register an optimizer step pre hook which will be called before the optimizer step.

state_dict()

Returns the state of the optimizer as a dict.

step(closure)

Perform an optimization step.

zero_grad([set_to_none])

Sets the gradients of all optimized torch.Tensor objects to zero.

Attributes

nfev
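
The inherited state_dict() / load_state_dict() methods and the nfev attribute listed above can be exercised as in the sketch below. This is illustrative only: it assumes nfev reports the cumulative number of objective (closure) evaluations, and that the optimizer state round-trips through the standard torch.optim.Optimizer checkpoint protocol.

    import torch
    from torchmin import Minimizer

    x = torch.randn(4, requires_grad=True)
    optimizer = Minimizer([x], method='l-bfgs', max_iter=25)

    def closure():
        optimizer.zero_grad()
        # Simple smooth objective: squared distance of ||x||^2 from 1.
        return (x.pow(2).sum() - 1.0).pow(2)

    optimizer.step(closure)

    # Assumption: nfev counts how many times the objective has been evaluated.
    print('function evaluations:', optimizer.nfev)

    # Standard torch.optim.Optimizer checkpointing; assumed to round-trip here.
    checkpoint = optimizer.state_dict()
    optimizer.load_state_dict(checkpoint)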