Dual and Primal/Dual algorithms¶
Distributed dual decomposition¶
- class disropt.algorithms.dual_decomp.DualDecomposition(agent, initial_condition, enable_log=False)[source]¶
Bases: disropt.algorithms.algorithm.Algorithm
Distributed dual decomposition.
From the perspective of agent \(i\) the algorithm works as follows.
Initialization: \(\lambda_{ij}^0\) for all \(j \in \mathcal{N}_i\)
For \(k=0,1,\dots\)
Gather \(\lambda_{ji}^k\) from neighbors \(j \in \mathcal{N}_i\)
Compute \(x_i^k\) as an optimal solution of
\[\min_{x_i \in X_i} \: f_i(x_i) + x_i^\top \sum_{j \in \mathcal{N}_i} (\lambda_{ij}^k - \lambda_{ji}^k)\]
Gather \(x_j^k\) from neighbors \(j \in \mathcal{N}_i\)
Update for all \(j \in \mathcal{N}_i\)
\[\lambda_{ij}^{k+1} = \lambda_{ij}^{k} + \alpha^k (x_i^k - x_j^k)\]
where \(x_i\in\mathbb{R}^{n}\), \(\lambda_{ij}\in\mathbb{R}^n\) for all \(j \in \mathcal{N}_i\), \(X_i\subseteq\mathbb{R}^{n}\) for all \(i\) and \(\alpha^k\) is a positive stepsize.
The algorithm has been presented in ????.
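As a concrete illustration, the iteration above can be sketched in plain NumPy, independently of the disropt classes. The instance below is hypothetical: two agents with quadratic costs \(f_i(x_i) = \frac{1}{2}\|x_i - c_i\|^2\) and \(X_i = \mathbb{R}^n\), so the local minimization has the closed form \(x_i = c_i - \sum_{j \in \mathcal{N}_i} (\lambda_{ij} - \lambda_{ji})\); the data, stepsize and iteration count are illustrative assumptions.

```python
import numpy as np

# Toy instance: two agents, f_i(x) = 0.5 * ||x - c_i||^2 with X_i = R^n,
# so the local minimization step has the closed form
#   x_i = c_i - (lam[i, j] - lam[j, i]).
n = 2
c = [np.array([1.0, 0.0]), np.array([3.0, 2.0])]
lam = np.zeros((2, 2, n))  # lam[i, j]: multiplier of agent i toward neighbor j

alpha = 0.1  # constant stepsize (a diminishing rule such as 1/(k+1) also works)
for k in range(200):
    # local minimization (closed form for the quadratic cost)
    x = [c[i] - (lam[i, j] - lam[j, i]) for i, j in ((0, 1), (1, 0))]
    # multiplier update on each directed edge
    new = lam.copy()
    for i, j in ((0, 1), (1, 0)):
        new[i, j] = lam[i, j] + alpha * (x[i] - x[j])
    lam = new

print(x[0], x[1])
```

On this instance the edge variables satisfy a linear recursion with contraction factor \(|1 - 4\alpha|\), so for any constant stepsize below \(1/2\) both local estimates converge geometrically to the common minimizer \((c_1 + c_2)/2\).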
- get_result()[source]¶
Return the current value of the primal solution and the dual variables
- Returns
value of primal solution (np.ndarray), dual variables (dictionary of np.ndarray)
- Return type
tuple (primal, dual)
- run(iterations=1000, stepsize=0.1, verbose=False, **kwargs)[source]¶
Run the algorithm for a given number of iterations
- Parameters
  - iterations (int) – Number of iterations. Defaults to 1000.
  - stepsize (Union[float, Callable]) – If a float is given as input, the stepsize is constant. If a function is given, it must take an iteration k as input and output the corresponding stepsize. Defaults to 0.1.
  - verbose (bool) – If True, print some information during the evolution of the algorithm. Defaults to False.
- Returns
return a tuple (x, lambda) with the sequence of primal solutions and dual variables if enable_log=True.
Distributed ADMM¶
- class disropt.algorithms.admm.ADMM(agent, initial_lambda, initial_z, enable_log=False)[source]¶
Bases: disropt.algorithms.algorithm.Algorithm
Distributed ADMM.
From the perspective of agent \(i\) the algorithm works as follows.
Initialization: \(\lambda_{ij}^0\) for all \(j \in \mathcal{N}_i\), \(\lambda_{ii}^0\) and \(z_i^0\)
For \(k=0,1,\dots\)
Compute \(x_i^k\) as the optimal solution of
\[\min_{x_i \in X_i} \: f_i(x_i) + x_i^\top \sum_{j \in \mathcal{N}_i \cup \{i\}} \lambda_{ij}^k + \frac{\rho}{2} \sum_{j \in \mathcal{N}_i \cup \{i\}} \| x_i - z_j^k \|^2\]
Gather \(x_j^k\) and \(\lambda_{ji}^k\) from neighbors \(j \in \mathcal{N}_i\)
Update \(z_i^{k+1}\)
\[z_i^{k+1} = \frac{\sum_{j \in \mathcal{N}_i \cup \{i\}} x_j^k}{|\mathcal{N}_i| + 1} + \frac{\sum_{j \in \mathcal{N}_i \cup \{i\}} \lambda_{ji}^k}{\rho (|\mathcal{N}_i| + 1)}\]
Gather \(z_j^{k+1}\) from neighbors \(j \in \mathcal{N}_i\)
Update for all \(j \in \mathcal{N}_i \cup \{i\}\)
\[\lambda_{ij}^{k+1} = \lambda_{ij}^{k} + \rho (x_i^k - z_j^{k+1})\]
where \(x_i, z_i\in\mathbb{R}^{n}\), \(\lambda_{ij}\in\mathbb{R}^n\) for all \(j \in \mathcal{N}_i \cup \{i\}\), \(X_i\subseteq\mathbb{R}^{n}\) for all \(i\) and \(\rho\) is a positive penalty parameter.
The algorithm has been presented in ????.
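These updates can likewise be sketched in plain NumPy on a hypothetical instance, without the disropt classes: two agents on a complete graph with quadratic costs \(f_i(x_i) = \frac{1}{2}\|x_i - c_i\|^2\) and \(X_i = \mathbb{R}^n\), for which the x-update has a closed form; the data and penalty below are illustrative assumptions.

```python
import numpy as np

# Toy instance: two agents on a complete graph, f_i(x) = 0.5 * ||x - c_i||^2,
# X_i = R^n.  With the neighbor set {0, 1} for both agents, setting the
# gradient of the x-update cost to zero gives the closed form
#   x_i = (c_i - (lam[i,0] + lam[i,1]) + rho * (z[0] + z[1])) / (1 + 2 * rho).
n = 2
c = [np.array([0.0, 1.0]), np.array([2.0, 3.0])]
lam = np.zeros((2, 2, n))   # lam[i, j]: multiplier of agent i toward j
z = np.zeros((2, n))        # auxiliary variables z_i
rho = 1.0                   # positive penalty parameter

for k in range(100):
    # x-update (closed form for the quadratic cost)
    x = [(c[i] - lam[i].sum(axis=0) + rho * z.sum(axis=0)) / (1 + 2 * rho)
         for i in range(2)]
    # z-update: average of x_j plus scaled multipliers lam_{ji}
    for i in range(2):
        z[i] = (x[0] + x[1]) / 2 + (lam[0, i] + lam[1, i]) / (2 * rho)
    # multiplier update, for all j in the closed neighborhood
    for i in range(2):
        for j in range(2):
            lam[i, j] += rho * (x[i] - z[j])

print(x[0], x[1])  # both close to (c_0 + c_1) / 2
```

At a fixed point of these updates \(x_1 = x_2 = z_1 = z_2\) and the multipliers incident to each \(z_j\) sum to zero, which forces the common value to be the minimizer \((c_1 + c_2)/2\) of \(f_1 + f_2\).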
- get_result()[source]¶
Return the current value of the primal solution, the dual variables and the auxiliary primal variable
- Returns
value of primal solution (np.ndarray), dual variables (dictionary of np.ndarray), auxiliary primal variable (np.ndarray)
- Return type
tuple (primal, dual, auxiliary)
- run(iterations=1000, penalty=0.1, verbose=False, **kwargs)[source]¶
Run the algorithm for a given number of iterations
- Parameters
  - iterations (int) – Number of iterations. Defaults to 1000.
  - penalty (float) – Value of the penalty parameter \(\rho\). Defaults to 0.1.
  - verbose (bool) – If True, print some information during the evolution of the algorithm. Defaults to False.
- Raises
TypeError – The number of iterations must be an int
- Returns
return a tuple (x, lambda, z) with the sequence of primal solutions, dual variables and auxiliary primal variables if enable_log=True.
Distributed dual subgradient method¶
- class disropt.algorithms.dual_subgradient.DualSubgradientMethod(agent, initial_condition, initial_runavg=None, enable_log=False)[source]¶
Bases: disropt.algorithms.consensus.Consensus
Distributed dual subgradient method.
From the perspective of agent \(i\) the algorithm works as follows. For \(k=0,1,\dots\)
\[ \begin{align}\begin{aligned}y_i^{k} &= \sum_{j=1}^N w_{ij} \lambda_j^k\\x_i^{k+1} &\in \text{arg} \min_{x_i \in X_i} f_i(x_i) + g_i(x_i)^\top y_i^k\\\lambda_i^{k+1} &= \Pi_{\lambda \ge 0}[y_i^k + \alpha^k g_i(x_i^{k+1})]\\\hat{x}_i^{k+1} &= \hat{x}_i^{k} + \frac{\alpha^k}{\sum_{r=0}^{k} \alpha^r} (x_i^{k+1} - \hat{x}_i^{k})\end{aligned}\end{align} \]
where \(x_i, \hat{x}_i\in\mathbb{R}^{n_i}\), \(\lambda_i,y_i\in\mathbb{R}^S\), \(X_i\subseteq\mathbb{R}^{n_i}\) for all \(i\), \(\alpha^k\) is a positive stepsize and \(\Pi_{\lambda \ge 0}[\cdot]\) denotes the projection operator onto the nonnegative orthant.
The algorithm has been presented in [FaMa17].
Warning
This algorithm is still under development
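A self-contained NumPy sketch of these updates, not using the disropt classes, on a hypothetical instance with a single coupling constraint \(\sum_i g_i(x_i) \le 0\): two agents with \(X_i = [0, 1]\), linear costs \(f_i(x_i) = -a_i x_i\), \(g_i(x_i) = x_i - 0.5\) and uniform consensus weights \(w_{ij} = 1/2\) on a complete graph; all data are illustrative assumptions.

```python
import numpy as np

# Toy instance: two agents, X_i = [0, 1], f_i(x_i) = -a_i * x_i,
# g_i(x_i) = x_i - 0.5, doubly stochastic weights w_ij = 1/2.
# The local minimization min_{x in [0,1]} (y - a_i) * x has the closed
# form x_i = 1 if y < a_i else 0.
a = np.array([3.0, 1.0])
lam = np.zeros(2)       # local dual variables lambda_i
x_hat = np.zeros(2)     # running averages of the primal iterates
alpha_sum = 0.0

for k in range(20000):
    alpha = 1.0 / np.sqrt(k + 1)           # diminishing stepsize
    y = np.full(2, lam.mean())             # consensus step (w_ij = 1/2)
    x = (y < a).astype(float)              # local primal minimization
    g = x - 0.5                            # local constraint values
    lam = np.maximum(0.0, y + alpha * g)   # projected dual subgradient step
    alpha_sum += alpha
    x_hat += (alpha / alpha_sum) * (x - x_hat)

print(x_hat)  # approaches the primal optimum (1, 0)
```

The raw primal iterates \(x_i^k\) need not converge (they jump between the vertices of \([0,1]\)); it is the running averages \(\hat{x}_i^k\) that recover a primal solution, here the allocation \((1, 0)\) that gives the whole shared resource to the agent with the larger gain \(a_i\).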
- get_result()[source]¶
Return the current value of the dual variable and the primal running average
- Returns
value of dual variable, value of primal running average
- Return type
tuple (dual, primal) of numpy.ndarray
- run(iterations=1000, stepsize=0.1, verbose=False, callback_iter=None)[source]¶
Run the algorithm for a given number of iterations
- Parameters
  - iterations (int) – Number of iterations. Defaults to 1000.
  - stepsize (Union[float, Callable]) – If a float is given as input, the stepsize is constant. If a function is given, it must take an iteration k as input and output the corresponding stepsize. Defaults to 0.1.
  - verbose (bool) – If True, print some information during the evolution of the algorithm. Defaults to False.
  - callback_iter (Optional[Callable]) – Callback function to be called at the end of each iteration. Must take an iteration k as input. Defaults to None.
- Returns
return a tuple (lambda, x_hat) with the sequence of dual and primal estimates if enable_log=True.
Distributed Primal Decomposition¶
- class disropt.algorithms.primal_decomp.PrimalDecomposition(agent, initial_condition, enable_log=False)[source]¶
Bases: disropt.algorithms.algorithm.Algorithm
Distributed primal decomposition.
From the perspective of agent \(i\) the algorithm works as follows.
Initialization: \(y_i^0\) such that \(\sum_{i=1}^N y_i^0 = 0\)
For \(k=0,1,\dots\)
Compute \(((x_i^k, \rho_i^k), \mu_i^k)\) as a primal-dual optimal solution of
\[\begin{split} \min_{x_i, \rho_i} \hspace{1.1cm} & \: f_i(x_i) + M \rho_i \\ \mathrm{subj. to} \: \mu_i: \: & \: g_i(x_i) \le y_i^k + \rho_i \boldsymbol{1} \\ & \: x_i \in X_i, \rho_i \ge 0 \end{split}\]
Gather \(\mu_j^k\) from \(j \in \mathcal{N}_i\) and update
\[y_i^{k+1} = y_i^{k} + \alpha^k \sum_{j \in \mathcal{N}_i} (\mu_i^k - \mu_j^k)\]
where \(x_i\in\mathbb{R}^{n_i}\), \(\mu_i,y_i\in\mathbb{R}^S\), \(X_i\subseteq\mathbb{R}^{n_i}\) for all \(i\) and \(\alpha^k\) is a positive stepsize.
The algorithm has been presented in ????.
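The scheme can be sketched in plain NumPy on a hypothetical resource-allocation instance with one coupling constraint \(\sum_i (x_i - b_i) \le 0\): two agents, \(f_i(x_i) = c_i x_i\), \(X_i = [0, 1]\), \(g_i(x_i) = x_i - b_i\). For this toy local problem the primal-dual solution has a closed form, so no inner solver is needed; all data, including the value of \(M\), are illustrative assumptions. Note that the allocation update preserves \(\sum_i y_i^k = 0\) at every iteration, since each edge contributes opposite terms to the two endpoints.

```python
import numpy as np

# Toy instance: two agents, f_i(x_i) = c_i * x_i, X_i = [0, 1],
# coupling constraint sum_i (x_i - b_i) <= 0.  For M larger than |c_i|,
# the local primal-dual solution is: with t = b_i + y_i,
#   t <  0: x_i = 0, rho_i > 0 active, mu_i = M
#   t in [0, 1): coupling constraint active, x_i = t, mu_i = -c_i
#   t >= 1: coupling constraint slack, x_i = 1, mu_i = 0.
c = np.array([-3.0, -1.0])
b = np.array([0.5, 0.5])
M = 10.0
y = np.zeros(2)                # allocation estimates, sum_i y_i = 0

def local_solve(ci, bi, yi):
    t = bi + yi
    if t < 0.0:
        return 0.0, M          # x_i at lower bound, rho_i > 0
    if t < 1.0:
        return t, -ci          # coupling constraint active
    return 1.0, 0.0            # coupling constraint slack

for k in range(5000):
    alpha = 0.25 / (k + 1)     # diminishing stepsize
    x, mu = zip(*(local_solve(c[i], b[i], y[i]) for i in range(2)))
    # allocation update: y_i += alpha * sum_j (mu_i - mu_j)
    y = y + alpha * np.array([mu[0] - mu[1], mu[1] - mu[0]])

print(x, y)  # x approaches (1, 0); sum(y) stays 0 throughout
```

The allocations \(y_i\) drift resource toward the agent with the larger multiplier \(\mu_i\) until the multipliers balance; here the agent with the steeper cost ends up with the whole budget, \(x \approx (1, 0)\).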
- get_result(return_runavg=False)[source]¶
Return the current value of the primal solution, allocation and cost
- Parameters
  - return_runavg (bool) – Whether or not to also return the running average of the allocation. Defaults to False.
- Returns
  value of primal solution, allocation and cost if return_runavg=False; tuple (primal, allocation, allocation_avg, cost) if return_runavg=True
- Return type
  tuple (primal, allocation, cost) of numpy.ndarray
- iterate_run(stepsize, M, update_runavg, event, **kwargs)[source]¶
Run a single iteration of the algorithm
- run(iterations=1000, stepsize=0.1, M=1000.0, verbose=False, callback_iter=None, compute_runavg=False, runavg_start_iter=0, event=None, **kwargs)[source]¶
Run the algorithm for a given number of iterations
- Parameters
  - iterations (int) – Number of iterations. Defaults to 1000.
  - stepsize (Union[float, Callable]) – If a float is given as input, the stepsize is constant. If a function is given, it must take an iteration k as input and output the corresponding stepsize. Defaults to 0.1.
  - M (float) – Value of the parameter \(M\). Defaults to 1000.
  - verbose (bool) – If True, print some information during the evolution of the algorithm. Defaults to False.
  - callback_iter (Optional[Callable]) – Callback function to be called at the end of each iteration. Must take an iteration k as input. Defaults to None.
  - compute_runavg (bool) – Whether or not to also compute the running average of the allocation. Defaults to False.
  - runavg_start_iter (int) – Iteration at which to start computing the running average (applies only if compute_runavg=True). Defaults to 0.
- Returns
return a tuple (x, y, J) with the sequence of primal solutions, allocation estimates and cost if enable_log=True. If compute_runavg=True, then return (x, y, y_avg, J)
Distributed Primal Decomposition for MILPs¶
- class disropt.algorithms.primal_decomp_milp.PrimalDecompositionMILP(agent, initial_condition, enable_log=False, restriction=None, finite_time_add_restriction=0)[source]¶
Bases: disropt.algorithms.primal_decomp.PrimalDecomposition
Distributed primal decomposition for MILPs.
- iterate_run(stepsize, M, update_runavg, event, **kwargs)[source]¶
Run a single iteration of the algorithm
- run(n_agents, iterations=1000, stepsize=0.1, M=1000.0, verbose=False, callback_iter=None, fast_mode=True, max_cutting_planes=1000, milp_solver=None, extra_allocation=None, use_runavg=False, runavg_start_iter=0, event=None, max_consensus_iterations=None, max_consensus_graph_diam=None, **kwargs)[source]¶
Run the algorithm for a given number of iterations
- Parameters
  - iterations (int) – Number of iterations. Defaults to 1000.
  - stepsize (Union[float, Callable]) – If a float is given as input, the stepsize is constant. If a function is given, it must take an iteration k as input and output the corresponding stepsize. Defaults to 0.1.
  - M (float) – Value of the parameter \(M\). Defaults to 1000.
  - verbose (bool) – If True, print some information during the evolution of the algorithm. Defaults to False.
  - callback_iter (Optional[Callable]) – Callback function to be called at the end of each iteration. Must take an iteration k as input. Defaults to None.
  - fast_mode (bool) – If True, a mixed-integer solution is computed at each iteration, otherwise only at the last iteration. Defaults to True.
  - max_cutting_planes (int) – Maximum number of cutting planes for the local solver. Defaults to 1000.
  - milp_solver (Optional[str]) – MILP solver to use. Defaults to None (use the default solver).
- Returns
return a tuple (x, y) with the sequence of primal solutions and allocation estimates if enable_log=True. If fast_mode=True, the primal solutions are those of the convexified problem, otherwise they are the mixed-integer estimates.
ASYMM (beta)¶
- class disropt.algorithms.asymm.ASYMM(agent, graph_diameter, initial_condition, enable_log=False, **kwargs)[source]¶
Bases:
disropt.algorithms.misc.AsynchronousLogicAnd
Asynchronous Distributed Method of Multipliers [FaGa19b]
See [FaGa19b] for the details.
Warning
This algorithm is currently under development
References
- FaGa19b
Farina, F., Garulli, A., Giannitrapani, A., & Notarstefano, G. (2019). A distributed asynchronous method of multipliers for constrained nonconvex optimization. Automatica, 103, 243-253.
- FaMa17
Falsone, A., Margellos, K., Garatti, S., & Prandini, M. (2017). Dual decomposition for multi-agent distributed optimization with coupling constraints. Automatica, 84, 149-158.