minimize(method='newton-exact')

torchmin.newton._minimize_newton_exact(fun, x0, lr=1.0, max_iter=None, line_search='strong-wolfe', xtol=1e-05, normp=1, tikhonov=0.0, handle_npd='grad', callback=None, disp=0, return_all=False)

Minimize a scalar function of one or more variables using the Newton-Raphson method.

This variant uses an "exact" Newton routine based on Cholesky factorization of the explicit Hessian matrix.
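For intuition, the following is a minimal sketch of a single step of this procedure, using torch.autograd to build the explicit Hessian. The helper name newton_step is hypothetical; the actual routine additionally applies line search, convergence checks, and the non-positive-definite handling described under handle_npd below:

    import torch

    def newton_step(fun, x, lr=1.0):
        # Gradient and explicit Hessian of the scalar objective at x.
        grad = torch.autograd.functional.jacobian(fun, x)
        hess = torch.autograd.functional.hessian(fun, x)
        # Cholesky factorization; raises if the Hessian is not positive definite.
        L = torch.linalg.cholesky(hess)
        # Solve H d = grad, then take a damped Newton step.
        d = torch.cholesky_solve(grad.unsqueeze(1), L).squeeze(1)
        return x - lr * d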

Parameters
  • fun (callable) – Scalar objective function to minimize.

  • x0 (Tensor) – Initialization point.

  • lr (float) – Step size for parameter updates. If using line search, this will be used as the initial step size for the search.

  • max_iter (int, optional) – Maximum number of iterations to perform. Defaults to 200 * x0.numel().

  • line_search (str) – Line search specifier. Currently the available options are {'none', 'strong-wolfe'}.

  • xtol (float) – Average relative error in solution xopt acceptable for convergence.

  • normp (Number or str) – The norm type to use for termination conditions. Can be any value supported by torch.norm().

  • tikhonov (float) – Optional diagonal (Tikhonov) regularization parameter for the Hessian, illustrated in the sketch following this parameter list.

  • handle_npd (str) –

    Mode for handling non-positive definite Hessian matrices (see the sketch following this parameter list). Can be one of the following:

    • 'grad' : use the steepest descent direction (the negative gradient)

    • 'lu' : solve the Newton system with an LU factorization, which does not require positive definiteness

    • 'eig' : use a symmetric eigendecomposition to determine a diagonal regularization parameter

  • callback (callable, optional) – Function to call after each iteration with the current parameter state, e.g. callback(x).

  • disp (int or bool) – Display (verbosity) level. Set to >0 to print status messages.

  • return_all (bool) – Set to True to return a list of the best solution at each iteration.
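The sketch below is a hedged, simplified illustration of how the tikhonov and handle_npd options could interact when computing a single search direction; the function name newton_direction and the 1e-4 eigenvalue shift are illustrative assumptions, not the library's actual implementation:

    import torch

    def newton_direction(grad, hess, tikhonov=0.0, handle_npd='grad'):
        n = hess.shape[0]
        eye = torch.eye(n, dtype=hess.dtype, device=hess.device)
        if tikhonov > 0:
            # Tikhonov damping: add a scaled identity to the Hessian.
            hess = hess + tikhonov * eye
        # Attempt a Cholesky factorization; info != 0 signals a
        # non-positive definite matrix instead of raising an error.
        L, info = torch.linalg.cholesky_ex(hess)
        if info == 0:
            return torch.cholesky_solve(-grad.unsqueeze(1), L).squeeze(1)
        if handle_npd == 'grad':
            # Fall back to the steepest descent direction.
            return -grad
        elif handle_npd == 'lu':
            # Solve the Newton system with LU, which tolerates indefiniteness.
            return torch.linalg.solve(hess, -grad)
        elif handle_npd == 'eig':
            # Shift the spectrum so the smallest eigenvalue becomes positive.
            shift = -torch.linalg.eigvalsh(hess)[0] + 1e-4  # hypothetical margin
            return torch.linalg.solve(hess + shift * eye, -grad)
        raise ValueError(f"unknown handle_npd mode: {handle_npd}")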

Returns

result – Result of the optimization routine.

Return type

OptimizeResult
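
Example

A short end-to-end run through the top-level interface, assuming the standard torchmin import path and that the returned OptimizeResult exposes scipy-style x and fun fields:

    import torch
    from torchmin import minimize

    # Classic 2-D Rosenbrock function; the minimum is at (1, 1).
    def rosen(x):
        return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

    x0 = torch.tensor([-1.2, 1.0])
    result = minimize(rosen, x0, method='newton-exact')
    print(result.x)    # solution tensor, close to [1., 1.]
    print(result.fun)  # objective value at the solution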