Question or problem about Python programming:
I use scipy.optimize to minimize a function of 12 arguments.
I started the optimization a while ago and am still waiting for results.
Is there a way to force scipy.optimize to display its progress (for example, how much is already done and what the current best point is)?
How to solve the problem:
Solution 1:
As mg007 suggested, some of the scipy.optimize routines allow for a callback function (unfortunately leastsq does not permit this at the moment). Below is an example using the “fmin_bfgs” routine where I use a callback function to display the current value of the arguments and the value of the objective function at each iteration.
import numpy as np
from scipy.optimize import fmin_bfgs

Nfeval = 1

def rosen(X):  # Rosenbrock function
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

def callbackF(Xi):
    # called once per iteration with the current parameter vector
    global Nfeval
    print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(Nfeval, Xi[0], Xi[1], Xi[2], rosen(Xi)))
    Nfeval += 1

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen, x0, callback=callbackF, maxiter=2000, full_output=True, retall=False)
The output looks like this:
Iter    X1          X2          X3         f(X)
   1    1.031582    1.062553    1.130971    0.005550
   2    1.031100    1.063194    1.130732    0.004973
   3    1.027805    1.055917    1.114717    0.003927
   4    1.020343    1.040319    1.081299    0.002193
   5    1.005098    1.009236    1.016252    0.000739
   6    1.004867    1.009274    1.017836    0.000197
   7    1.001201    1.002372    1.004708    0.000007
   8    1.000124    1.000249    1.000483    0.000000
   9    0.999999    0.999999    0.999998    0.000000
  10    0.999997    0.999995    0.999989    0.000000
  11    0.999997    0.999995    0.999989    0.000000
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 11
         Function evaluations: 85
         Gradient evaluations: 17
At least this way you can watch as the optimizer tracks the minimum.
Solution 2:
Following @joel's example, there is a neat and efficient way to do a similar thing. The following example shows how to get rid of global variables, callback functions, and re-evaluating the target function multiple times.
import numpy as np
from scipy.optimize import fmin_bfgs

def rosen(X, info):  # Rosenbrock function
    res = (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
          (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

    # display information every 100 function evaluations
    if info['Nfeval'] % 100 == 0:
        print('{0:4d}   {1: 3.6f}   {2: 3.6f}   {3: 3.6f}   {4: 3.6f}'.format(info['Nfeval'], X[0], X[1], X[2], res))
    info['Nfeval'] += 1
    return res

print('{0:4s}   {1:9s}   {2:9s}   {3:9s}   {4:9s}'.format('Iter', ' X1', ' X2', ' X3', 'f(X)'))
x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
[xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflg] = \
    fmin_bfgs(rosen, x0, args=({'Nfeval': 0},), maxiter=1000, full_output=True, retall=False)
This will generate output like:
Iter    X1          X2          X3         f(X)
   0    1.100000    1.100000    1.100000    2.440000
 100    1.000000    0.999999    0.999998    0.000000
 200    1.000000    0.999999    0.999998    0.000000
 300    1.000000    0.999999    0.999998    0.000000
 400    1.000000    0.999999    0.999998    0.000000
 500    1.000000    0.999999    0.999998    0.000000
Warning: Desired error not necessarily achieved due to precision loss.
         Current function value: 0.000000
         Iterations: 12
         Function evaluations: 502
         Gradient evaluations: 98
However, there is no free lunch: here I used the number of function evaluations as a counter rather than the number of algorithmic iterations. Some algorithms may evaluate the target function multiple times within a single iteration.
Solution 3:
Try using:
options={'disp': True}
to force scipy.optimize.minimize
to print intermediate results.
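For example, a minimal sketch (the quadratic objective below is just a placeholder; note that how much disp actually prints depends on the chosen method, and for some solvers it is only a convergence summary):

from scipy.optimize import minimize

# hypothetical toy objective, used only to illustrate the option
def objective(x):
    return (x[0] - 1.0)**2 + (x[1] + 2.0)**2

# disp=True asks the solver to print convergence information
res = minimize(objective, [0.0, 0.0], method='BFGS', options={'disp': True})
print(res.x)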
Solution 4:
Which minimization function are you using exactly?
Most of the functions have progress reporting built in, including multiple levels of reports showing exactly the data you want, via the disp flag (see, for example, scipy.optimize.fmin_l_bfgs_b).
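As a rough sketch with fmin_l_bfgs_b (the toy objective is made up for illustration; see the scipy documentation for the exact meaning of the disp/iprint verbosity levels in your version):

import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# toy objective, chosen only to have something to minimize
def objective(x):
    return np.sum((x - 3.0)**2)

# disp controls the verbosity of the built-in progress report;
# higher values print more per-iteration information
x_opt, f_opt, info = fmin_l_bfgs_b(objective, x0=np.zeros(5),
                                   approx_grad=True, disp=1)
print(x_opt, f_opt)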
Solution 5:
Many of the optimizers in scipy indeed lack verbose output (the 'trust-constr' method of scipy.optimize.minimize being an exception). I faced a similar issue and solved it by creating a wrapper around the objective function and using the callback function. No additional function evaluations are performed here, so this should be an efficient solution.
import numpy as np

class Simulator:
    def __init__(self, function):
        self.f = function                    # actual objective function
        self.num_calls = 0                   # how many times f has been called
        self.callback_count = 0              # number of times callback has been called, also measures iteration count
        self.list_calls_inp = []             # input of all calls
        self.list_calls_res = []             # result of all calls
        self.decreasing_list_calls_inp = []  # input of calls that resulted in decrease
        self.decreasing_list_calls_res = []  # result of calls that resulted in decrease
        self.list_callback_inp = []          # only appends inputs on callback, as such they correspond to the iterations
        self.list_callback_res = []          # only appends results on callback, as such they correspond to the iterations

    def simulate(self, x, *args):
        """Executes the actual simulation and returns the result, while
        updating the lists too. Pass to optimizer without arguments or
        parentheses."""
        result = self.f(x, *args)  # the actual evaluation of the function
        if not self.num_calls:  # first call is stored in all lists
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
            self.list_callback_inp.append(x)
            self.list_callback_res.append(result)
        elif result < self.decreasing_list_calls_res[-1]:
            self.decreasing_list_calls_inp.append(x)
            self.decreasing_list_calls_res.append(result)
        self.list_calls_inp.append(x)
        self.list_calls_res.append(result)
        self.num_calls += 1
        return result

    def callback(self, xk, *_):
        """Callback function that can be used by optimizers of scipy.optimize.
        The third argument "*_" makes sure that it still works when the
        optimizer calls the callback function with more than one argument.
        Pass to optimizer without arguments or parentheses."""
        s1 = ""
        xk = np.atleast_1d(xk)
        # search backwards in input list for input corresponding to xk
        for i, x in reversed(list(enumerate(self.list_calls_inp))):
            x = np.atleast_1d(x)
            if np.allclose(x, xk):
                break

        for comp in xk:
            s1 += f"{comp:10.5e}\t"
        s1 += f"{self.list_calls_res[i]:10.5e}"

        self.list_callback_inp.append(xk)
        self.list_callback_res.append(self.list_calls_res[i])

        if not self.callback_count:
            s0 = ""
            for j, _ in enumerate(xk):
                tmp = f"Comp-{j+1}"
                s0 += f"{tmp:10s}\t"
            s0 += "Objective"
            print(s0)
        print(s1)
        self.callback_count += 1
A simple test can be defined:
from scipy.optimize import minimize, rosen

ros_sim = Simulator(rosen)
minimize(ros_sim.simulate, [0, 0], method='BFGS', callback=ros_sim.callback, options={"disp": True})

print(f"Number of calls to Simulator instance {ros_sim.num_calls}")
resulting in:
Comp-1    	Comp-2    	Objective
1.76348e-01	-1.31390e-07	7.75116e-01
2.85778e-01	4.49433e-02	6.44992e-01
3.14130e-01	9.14198e-02	4.75685e-01
4.26061e-01	1.66413e-01	3.52251e-01
5.47657e-01	2.69948e-01	2.94496e-01
5.59299e-01	3.00400e-01	2.09631e-01
6.49988e-01	4.12880e-01	1.31733e-01
7.29661e-01	5.21348e-01	8.53096e-02
7.97441e-01	6.39950e-01	4.26607e-02
8.43948e-01	7.08872e-01	2.54921e-02
8.73649e-01	7.56823e-01	2.01121e-02
9.05079e-01	8.12892e-01	1.29502e-02
9.38085e-01	8.78276e-01	4.13206e-03
9.73116e-01	9.44072e-01	1.55308e-03
9.86552e-01	9.73498e-01	1.85366e-04
9.99529e-01	9.98598e-01	2.14298e-05
9.99114e-01	9.98178e-01	1.04837e-06
9.99913e-01	9.99825e-01	7.61051e-09
9.99995e-01	9.99989e-01	2.83979e-11
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 19
         Function evaluations: 96
         Gradient evaluations: 24
Number of calls to Simulator instance 96
Of course, this is just a template; it can be adjusted to your needs. It does not provide all information about the status of the optimizer (as, for example, the Optimization Toolbox of MATLAB does), but at least you have some idea of the progress of the optimization.
A similar approach can be found here, without using the callback function. In my approach, the callback function is used to print output exactly when the optimizer has finished an iteration, and not on every single function call.
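For completeness, the built-in verbose output of the 'trust-constr' method mentioned at the start of this solution can be switched on through its verbose option. A minimal sketch (assuming a reasonably recent scipy; the exact layout of the printed table may differ between versions):

from scipy.optimize import minimize, rosen

# verbose=2 (or 3) makes 'trust-constr' print a per-iteration progress table
res = minimize(rosen, [0, 0], method='trust-constr', options={'verbose': 2})
print(res.x)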
Solution 6:
It is also possible to include a simple print() statement in the function to be minimized. If you import the function, you can create a wrapper.
import numpy as np
from scipy.optimize import minimize

def rosen(X):  # Rosenbrock function
    print(X)
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
minimize(rosen, x0)
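If the function is imported from another module and you would rather not edit it, a thin wrapper along these lines does the same job (a sketch; wrapped_rosen is just an illustrative name, and scipy's built-in rosen stands in for the imported objective):

import numpy as np
from scipy.optimize import minimize, rosen  # rosen stands in for an imported objective

def wrapped_rosen(X):
    # print the current point before delegating to the imported function
    print(X)
    return rosen(X)

x0 = np.array([1.1, 1.1, 1.1], dtype=np.double)
minimize(wrapped_rosen, x0)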
Solution 7:
Below is a solution that works for me:
from scipy import optimize

def f_(x):  # the Rosenbrock function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def conjugate_gradient(x0, f):
    all_x_i = [x0[0]]
    all_y_i = [x0[1]]
    all_f_i = [f(x0)]

    def store(X):  # callback: record each iterate and its objective value
        x, y = X
        all_x_i.append(x)
        all_y_i.append(y)
        all_f_i.append(f(X))

    optimize.minimize(f, x0, method="CG", callback=store, options={"gtol": 1e-12})
    return all_x_i, all_y_i, all_f_i
and, for example:
conjugate_gradient([2, -1], f_)
Source