# Decomposing a Large Problem

This chapter briefly presents methods of decomposing a large problem into a part requiring no more than the number of qubits available on the QPU and, optionally for hybrid approaches, parts suited to conventional compute platforms. If a method seems applicable to the given problem, see the Problem Decomposition section for fuller descriptions and some background on conditioning. Detailed information is available in the literature and referenced papers to guide actual implementation.

Ocean software’s D-Wave Hybrid provides more information as well as tools.

## Cutset Conditioning

Cutset conditioning conditions on a subset of variables (the “cutset”) of a graph, leaving the remaining network as a tree with complexity that is exponential in the cardinality (size) of the cutset.

Choose a subset $\setminus A$ of the variables to fix throughout the algorithm so that the graph of $E(\vc{s}_A,\vc{s}_{\setminus A})$, where $\argmin_{\vc{s}_A} E(\vc{s}_A,\vc{s}_{\setminus A})$ gives the optimal setting of the free variables $\vc{s}_A$ for a given $\vc{s}_{\setminus A}$, breaks into separate components that can be solved independently:

$\mathcal{E}(\vc{s}_{\setminus A}) = E_{\setminus A}(\vc{s}_{\setminus A}) + \sum_\alpha \mathcal{E}_\alpha(\vc{s}_{\setminus A})$

where $\vc{s}_A = \bigcup_\alpha \vc{s}_{A_\alpha}$ and $\vc{s}_{A_\alpha} \cap \vc{s}_{A_{\alpha'}}=\emptyset$ for $\alpha \not = \alpha'$.

Choose a cutset such that each of the remaining $\min_{\vc{s}_{A_\alpha}}$ problems is small enough to be solved on the D-Wave QPU.

To simplify outer optimization, make the number of conditioned variables as small as possible.

Software Tools

See the Open-Source Software Tools section for a list of D-Wave software tools.

Further Information

• The Cutset Conditioning section provides a more detailed description of this method.
• The Conditioning section gives background information on conditioning.
• [Dec1987] discusses this method for improving search performance in AI applications.

## Branch-and-Bound Algorithms

Branch-and-bound algorithms progressively condition more variables to either $s_i=-1$ or $s_i=1$, for spins, defining a split at node $s_i$. Further splits define a branching binary tree with leaves defining the $2^N$ configurations where all variables are assigned values. At each node a branch is pruned if no leaf node below it can contain the global optimum.

Software Tools

See the Open-Source Software Tools section for a list of D-Wave software tools.

Further Information

• The Branch-and-Bound Algorithms section provides a more detailed description of this method.
• The Conditioning section gives background information on conditioning.
• [Ros2016] discusses branch-and-bound heuristics in the context of the D-Wave Chimera architecture.
• [Bru1994] describes applying the branch-and-bound algorithm to the job shop scheduling problem.
• [Mar2007] considers ways to explore the search tree, including dynamic variable orderings and best-first orderings.

### Best Completion Estimate

Branch-and-bound can benefit from using the D-Wave system to terminate searches higher in the tree.

Condition sufficient variables so that the remaining variables can be optimized by the QPU. Instead of exploring deeper, call the QPU to estimate the best completion from that node. As the upper bound is minimized through subsequent QPU completions, this may in turn allow further pruning.

Note

Since the QPU solution does not come with a proof of optimality, this algorithm may not return a global minimum.
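One way to sketch the completion estimate in Python, with a randomized heuristic standing in for the QPU call; `estimate_completion` and its parameters are illustrative names, not an Ocean API:

```python
import random

def estimate_completion(fixed, variables, energy, tries=50, seed=0):
    """Randomized completion heuristic standing in for a QPU call.
    Returns (energy, assignment) for the best completion found of the
    variables not in `fixed`. No optimality proof is attached, so the
    returned energy is only an upper bound on the node's best leaf."""
    rng = random.Random(seed)
    free = [v for v in variables if v not in fixed]
    best_e, best_s = float("inf"), None
    for _ in range(tries):
        s = dict(fixed)
        for v in free:
            s[v] = rng.choice((-1, 1))
        e = energy(s)
        if e < best_e:
            best_e, best_s = e, s
    return best_e, best_s
```

In a branch-and-bound loop, any completion energy below the incumbent replaces it, tightening the upper bound used for pruning.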

### Lower Bounds

The D-Wave system can also provide tight lower-bound functions at any node in the search tree.

Lagrangian relaxation finds these lower bounds by first dividing a node in the graph representing variable $s_i$ in two ($s_i^{(1)}$ and $s_i^{(2)}$) with the constraint $s_i^{(1)} = s_i^{(2)}$. The original objective $E(s_i,\vc{s}_{\setminus i})$ becomes $E'(s_i^{(1)}, s_i^{(2)},\vc{s}_{\setminus i})$, leaving the problem unchanged. With sufficiently many divided variables to decompose $E'$ into smaller independent problems, the equality constraints are softened and treated approximately.

The Lagrangian for the constrained problem is

$L(s_i^{(1)},s_i^{(2)},\vc{s}_{\setminus i}; \lambda_i) = E'(s_i^{(1)},s_i^{(2)},\vc{s}_{\setminus i}) + \lambda_i(s_i^{(1)} - s_i^{(2)}),$

where $\lambda_i$ is a Lagrange multiplier for the equality constraint. Maximizing the dual function with respect to $\lambda_i$,

$g(\lambda_i) = \min_{s_i^{(1)},s_i^{(2)},\vc{s}_{\setminus i}} L(s_i^{(1)},s_i^{(2)},\vc{s}_{\setminus i}; \lambda_i),$

provides the tightest possible lower bound.

Introduce enough divided variables to generate subproblems small enough to solve on the QPU and then optimize each subproblem’s dual function using a subgradient method to provide the tightest possible lower bounds.
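A worked sketch in Python on a three-spin chain: splitting $s_1$ into $s_1^{(1)}$ and $s_1^{(2)}$ decouples the objective into two two-spin subproblems, and subgradient ascent on $\lambda$ maximizes the dual. The couplings and the diminishing step-size rule are illustrative assumptions:

```python
from itertools import product

# Tiny chain E(s0, s1, s2) = J01*s0*s1 + J12*s1*s2; split s1 into s1a, s1b.
J01, J12 = -1.0, -1.0

def sub_a(lam):
    """min over (s0, s1a) of J01*s0*s1a + lam*s1a; returns (value, s1a)."""
    return min((J01 * s0 * s1a + lam * s1a, s1a)
               for s0, s1a in product((-1, 1), repeat=2))

def sub_b(lam):
    """min over (s1b, s2) of J12*s1b*s2 - lam*s1b; returns (value, s1b)."""
    return min((J12 * s1b * s2 - lam * s1b, s1b)
               for s1b, s2 in product((-1, 1), repeat=2))

# Subgradient ascent on the dual g(lam) = sub_a(lam) + sub_b(lam);
# a subgradient of g at lam is (s1a* - s1b*).
lam, best_g = 1.0, float("-inf")
for k in range(100):
    ga, s1a = sub_a(lam)
    gb, s1b = sub_b(lam)
    best_g = max(best_g, ga + gb)       # best lower bound seen so far
    lam += (s1a - s1b) / (k + 1)        # diminishing step size
```

Each dual evaluation is just two independent minimizations, which is where the QPU would be called on larger subproblems; on this toy chain the bound closes to the true minimum.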

Further Information

• [Boy2007] gives a concise introduction to subgradient methods.
• [Joh2007] gives an alternative to subgradient optimization, which examines a smooth approximation to the dual function.

## Large-Neighborhood Local Search Algorithms

Local search algorithms improve upon a candidate solution, $\vc{s}^t$, available at iteration $t$ by searching for better solutions within some local neighborhood of $\vc{s}^t$.

Quantum annealing can be very simply combined with local search to allow the local search algorithm to explore much larger neighborhoods than the standard 1-bit-flip Hamming neighborhood.

For a problem of $N$ variables and a neighborhood around configuration $\vc{s}^t$ consisting of all states within Hamming distance $d$ of $\vc{s}^t$, choose one of the $\binom{N}{d}$ subsets $A$ of $d$ variables, and determine the best setting for these $\vc{s}_A$ variables given the fixed context of the conditioned variables $\vc{s}^{t}_{\setminus A}$. Select $d$ small enough that the subproblem can be solved on the QPU. If no improvement is found within the chosen subset, select another.
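A minimal sketch of the search loop in Python, with brute-force enumeration of each size-$d$ subproblem standing in for the QPU; `lnls` and the problem values in the test are illustrative:

```python
import random
from itertools import product

def lnls(h, J, d=3, iters=30, seed=0):
    """Large-neighborhood local search on an Ising objective: repeatedly
    pick d variables and optimize them exactly (QPU stand-in) with the
    remaining variables held fixed."""
    rng = random.Random(seed)
    varlist = sorted(h)
    s = {i: rng.choice((-1, 1)) for i in varlist}  # random start

    def energy(s):
        return (sum(h[i] * s[i] for i in h)
                + sum(J[(i, j)] * s[i] * s[j] for (i, j) in J))

    for _ in range(iters):
        A = rng.sample(varlist, d)               # one of the C(N, d) subsets
        best_e, best_vals = energy(s), tuple(s[i] for i in A)
        for vals in product((-1, 1), repeat=d):  # exhaust the subproblem
            s.update(zip(A, vals))
            e = energy(s)
            if e < best_e:
                best_e, best_vals = e, vals
        s.update(zip(A, best_vals))              # keep the best completion
    return s, energy(s)
```

Because the current setting of $A$ is always among the candidates, the energy never increases from one iteration to the next.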

Software Tools

See the Open-Source Software Tools section for a list of D-Wave software tools.

Further Information

• The Large-Neighborhood Local Search Algorithms section provides a more detailed description of this method.
• The Conditioning section gives background information on conditioning.
• [Ahu2000] describes the cyclic exchange neighborhood, a generalization of the two-exchange neighborhood algorithm.
• [Glo1990] is a tutorial on the tabu search algorithm.
• [Liu2005] presents promising results for even small neighborhoods of size $d\le 4$.

## Belief Propagation

The belief propagation algorithm passes messages between regions and variables that represent beliefs about the minimal energy conditional on each possible value of a variable. It can be used, for example, to calculate approximate, and in some cases exact, marginal probabilities in Bayes nets.

Further Information

• [Pea2008] describes the belief propagation algorithm.
• [Cou2009] is a tutorial on the subject.
• [Bia2014] discusses belief propagation in the context of decomposing CSPs into subproblems small enough to be embedded onto the QPU.

## Pruning Reformulation/Decomposition Ancillary Variables

Ancillary variables introduced during reformulation and decomposition limit the size of problem you can run on the D-Wave system.

To minimize the number of ancillary variables introduced during a reduction-to-pairwise reformulation, select the pair of variables that occurs most often among the terms with more than pairwise interactions and represent the product of that pair with an ancillary variable $z$. Repeat for any remaining terms of three or more variables. Note that subsequent iterations may involve products of ancillary variables if those pairs become the most common among the remaining terms.
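The greedy selection can be sketched in Python; the `reduce_to_pairwise` helper is illustrative and omits the penalty terms that enforce $z = s_i s_j$:

```python
from collections import Counter
from itertools import combinations

def reduce_to_pairwise(terms):
    """Greedy reduction to pairwise. `terms` maps a frozenset of variable
    names to a coefficient. Repeatedly substitutes the most frequent pair
    among higher-order terms with a fresh ancillary variable z_k."""
    terms = dict(terms)
    substitutions = []          # records (ancilla, pair) chosen each round
    k = 0
    while True:
        high = [t for t in terms if len(t) > 2]
        if not high:
            break
        # Count every pair occurring in some higher-order term.
        counts = Counter(p for t in high for p in combinations(sorted(t), 2))
        pair, _ = counts.most_common(1)[0]
        z = f"z{k}"
        k += 1
        substitutions.append((z, pair))
        new_terms = {}
        for t, c in terms.items():
            if len(t) > 2 and set(pair) <= t:
                t = (t - set(pair)) | {z}   # substitute z for the pair
            new_terms[t] = new_terms.get(t, 0) + c
        terms = new_terms
    return terms, substitutions
```

For example, the cubic terms $abc$ and $abd$ share the pair $(a,b)$, so one ancillary variable $z_0$ reduces both to pairwise terms $z_0c$ and $z_0d$.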

Software Tools

See the Open-Source Software Tools section for code to accomplish this reduction.

Further Information

• [Bor2002] on general pseudo-Boolean optimization gives a reduction-to-pairwise algorithm, which can be modified to minimize the number of ancillary variables.