# Reformulating a Problem¶

This chapter briefly presents several techniques for mapping a given problem to a formulation that the D-Wave system supports, Ising or QUBO, known as its native formulations. If a technique seems applicable to the given problem, see the Problem Reformulation section in the Derivations and Further Details chapter for more theoretical background and details, and the Open-Source Software Tools section in the Software Environment and Tools chapter for available software tools to automate reformulation. Detailed information is available in the literature and referenced papers to guide actual implementation.

Further Information

• [Luc2013] describes Ising formulations of many NP problems.
• [Bru1967] provides a history of the Lenz-Ising model and its uses.
• [Bar1982] shows that Ising problems are hard.
• [Ber1999] discusses a broad tractable subclass of Ising problems in the context of rounding algorithms.
• [Kol2004] characterizes binary-variables energy functions that can be minimized by graph cuts.
• [Cou2009] and [Sch2009] discuss particular cases of tractable Ising problems.

## Native Formulations: Ising and QUBO (and MIS)¶

The D-Wave QPU can be viewed as a heuristic that minimizes Ising objective functions using a physically realized version of quantum annealing. In Ising formulation, given a problem of $N$ variables corresponding to physical Ising spins, $\vc s=[s_1,...,s_N]$, and a configuration of $h_i$ local fields and $J_{i,j}$ couplings between spins, the QPU (in its solution spin values) minimizes the objective function

$\text{Ising:} \qquad E(\vc{s}|\vc{h},\vc{J}) = \left\{ \sum_{i=1}^N h_i s_i + \sum_{i<j}^N J_{i,j} s_i s_j \right\} \qquad\qquad s_i\in\{-1,+1\}$

It is straightforward to translate between Ising and QUBO representations. Similarly, it is simple to translate to these representations a problem formulated as either a maximum independent set (MIS), which seeks the largest subset of vertices of graph $G=(V,E)$ such that no two vertices in the subset share an edge, or a Max Cut, which seeks a subset of vertices that maximizes the number of edges between the subset and the remaining vertices.

In QUBO formulation, $N$ binary variables are represented as an upper-diagonal matrix $\vc{Q}$, where diagonal terms are the linear coefficients and the nonzero off-diagonal terms are the quadratic coefficients. The objective function to minimize is

$\text{QUBO:} \qquad E(\vc{x}| \vc{Q}) = \sum_{i\le j}^N x_i Q_{i,j} x_j = \ip{\vc{x}}{\vc{Q}\vc{x}} \qquad\qquad x_i\in \{0,1\}$

Use the simple arithmetic mapping, $\vc{s}=2\vc{x}-\vc{1}$, where $\vc{1}$ is the vector all of whose components are 1, to translate between these native formulations:

$\ip{\vc{x}}{\vc{Q}\vc{x}} = \frac14 \ip{\vc{1}}{\vc{Q}\vc{1}} + \ip{\overbrace{\vc{Q}\vc{1}/2}^{\vc{h}}}{\vc{s}} + \ip{\vc{s}}{\overbrace{(\vc{Q}/4)}^{\vc{J}}\vc{s}}$

Thus,

$E(\vc{x}|\vc{Q}) = \frac14 \ip{\vc{1}}{\vc{Q}\vc{1}} + E(\vc{s}|\vc{Q}\vc{1}/2, \vc{Q}/4).$
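The translation can be sketched in plain Python (this is not D-Wave's Ocean API; `qubo_to_ising` is a hypothetical helper that applies the substitution $x_i=(s_i+1)/2$ term by term to an upper-triangular QUBO dict and checks the result by enumeration):

```python
from itertools import product

def qubo_to_ising(Q, n):
    """Substitute x_i = (s_i + 1)/2 term by term in an
    upper-triangular QUBO dict {(i, j): Q_ij with i <= j},
    returning Ising fields h, couplings J, and a constant offset."""
    h = {i: 0.0 for i in range(n)}
    J = {}
    offset = 0.0
    for (i, j), q in Q.items():
        if i == j:          # linear term: Q_ii x_i = Q_ii (s_i + 1)/2
            h[i] += q / 2
            offset += q / 2
        else:               # Q_ij x_i x_j = Q_ij (s_i s_j + s_i + s_j + 1)/4
            J[(i, j)] = q / 4
            h[i] += q / 4
            h[j] += q / 4
            offset += q / 4
    return h, J, offset

def qubo_energy(Q, x):
    return sum(q * x[i] * x[j] for (i, j), q in Q.items())

def ising_energy(h, J, s):
    return (sum(h[i] * s[i] for i in h)
            + sum(Jij * s[i] * s[j] for (i, j), Jij in J.items()))

# The two-color constraint used later in this chapter: E = -x0 - x1 + 2 x0 x1
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 2}
h, J, offset = qubo_to_ising(Q, 2)

# Energies agree for every assignment under the mapping s = 2x - 1.
for x in product((0, 1), repeat=2):
    s = tuple(2 * xi - 1 for xi in x)
    assert qubo_energy(Q, x) == ising_energy(h, J, s) + offset
```

Brute-force checking all $2^N$ assignments, as above, is only practical for tiny instances but is a useful sanity check when deriving a conversion by hand.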

In MIS formulation, the objective function is:

$\text{MIS:} \qquad \text{E}(\vc x) = - \sum_{i\in V} x_i + \sum_{(i,j)\in E} M_{i,j} x_i x_{j}.$

Clearly this formulation is similar to a QUBO with linear coefficients of $-1$ and only positive quadratic coefficients (penalties $M_{i,j}>1$).

### Illustrative Example: Map-Coloring Objective in Ising Formulation¶

For an introduction to the technique, consider the following small example.

The Simple Map Coloring Problem section of the Background chapter develops an objective function, in QUBO formulation, for a simple two-color, single-region constraint,

$\text{E}(a_i, b_{i,j}; q_i) = -q_B -q_G + 2q_B q_G,$

where $q_B$ is a qubit representing blue and $q_G$ green.

In solving this objective function, the D-Wave system minimizes the equivalent Ising function. You can view this Ising formulation using the inverse of the $\vc{s}=2\vc{x}-\vc{1}$ Ising-to-QUBO conversion. For this objective function, the variables are

$\begin{split}\vc{x} = \begin{bmatrix} q_B \\ q_G \end{bmatrix} \qquad \vc{s} = \begin{bmatrix} s_B \\ s_G \end{bmatrix}\end{split}$

The inverse conversion in scalar form, $q_i=\frac{s_i+1}{2}$, is a simple mathematical manipulation of $s_i=2x_i-1$, where the QUBO variables in scalar form are $x_1,x_2=q_B,q_G$. Substituting in the objective function for the simple map-coloring constraint yields

\begin{align}\begin{aligned} \text{E}(h_i, j_{i,j}; s_i) = -\frac{s_B+1}{2} -\frac{s_G+1}{2} + 2 \frac{s_B+1}{2} \frac{s_G+1}{2}\\= \frac{1}{2}(s_Bs_G-1).\end{aligned}\end{align}

Table 17 shows the two formulations’ identical energy functions, with minimum levels for the valid states of a single selected color, either blue ($0,1$ in a QUBO or $-1,1$ in an Ising model) or green ($1,0$ or $1,-1$).

Table 17 Ising Equivalent of QUBO

| $\mathbf{q_B,q_G}$ | $\mathbf{s_B,s_G}$ | $\mathbf{\text{E}(a_i, b_{i,j}; q_i)=\text{E}(h_i, j_{i,j}; s_i)}$ |
|---|---|---|
| $0,0$ | $-1,-1$ | $0$ |
| $0,1$ | $-1,1$ | $-1$ |
| $1,0$ | $1,-1$ | $-1$ |
| $1,1$ | $1,1$ | $0$ |

For example, the state $q_B,q_G=0,1$ of the second row is represented by the QUBO penalty function,

\begin{align}\begin{aligned}\text{E}(a_i, b_{i,j}; q_i) &= -q_B -q_G + 2q_B q_G,\\&= -0 - 1 +2 \times 0 \times 1\\&= -1\end{aligned}\end{align}

while the equivalent state $s_B,s_G=-1,1$ of that same row is represented by the Ising penalty function,

\begin{align}\begin{aligned}\text{E}(h_i, j_{i,j}; s_i) &= \frac{1}{2}(s_Bs_G-1)\\&= \frac{1}{2}(-1 \times 1 -1) = \frac{-2}{2}\\&= -1\end{aligned}\end{align}

with the same resulting penalty.
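The equivalence of the two formulations can be reproduced by enumeration; a minimal plain-Python check:

```python
from itertools import product

def e_qubo(qB, qG):
    # QUBO form of the two-color, single-region constraint
    return -qB - qG + 2 * qB * qG

def e_ising(sB, sG):
    # Equivalent Ising form (constant -1/2 included)
    return (sB * sG - 1) / 2

# The two energy functions agree on all four states under s = 2x - 1.
for qB, qG in product((0, 1), repeat=2):
    sB, sG = 2 * qB - 1, 2 * qG - 1
    assert e_qubo(qB, qG) == e_ising(sB, sG)
```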

Note that the constant $-\frac{1}{2}$ in the Ising formulation above, which does not affect the relative energies, does not map into the formulation used to configure the D-Wave system, and is discarded when the logical qubits $S_B,S_G$ are embedded as physical qubits (e.g., $q_0,q_4$ for blue and $q_1,q_5$ for green in the embedding used in the Simple Map Coloring Problem section).

Software Tools

See the Open-Source Software Tools section for a list of D-Wave software tools that automate processes such as translating between the native formulations.

## Weighted MAX-2-SAT to Ising/QUBO¶

Satisfiability (SAT) uses the following terminology:

• A literal is a Boolean variable or its negation, such as $x_i$ or $\overline{x_i}$.
• A clause is a disjunction of literals, such as $x_i \vee \overline{x_j}$.
• Conjunctive normal form (CNF) conjoins clauses with the AND operator; i.e., (clause 1) $\wedge$ (clause 2).

The SAT problem is to decide whether the variables can be assigned values that satisfy all the clauses; i.e., produce a value of $1$. In CNF, a SAT instance is satisfied only if all its clauses are satisfied.

A 2-SAT has $m$ clauses of 2 literals each. A MAX-2-SAT is the problem of assigning values that maximize the number of satisfied clauses (an optimization problem; a second version is to decide whether some specified number of clauses can be satisfied). Weighted MAX-SAT assigns each clause a positive weight, so that violating clause $c$ incurs a cost $w_c$; the problem is then to maximize the total weight of satisfied clauses:

$\text{MAX-2-SAT:} \qquad \underbrace{(\ell_{c1,1} \vee \ell_{c1,2} ; w_{c1})}_{\text{clause 1}} \wedge \underbrace{(\ell_{c2,1} \vee \ell_{c2,2} ; w_{c2})}_{\text{clause 2}} \wedge ... \wedge \underbrace{(\ell_{cm,1} \vee \ell_{cm,2} ; w_{cm})}_{\text{clause m}}$

where $\ell_{ci,1}$ and $\ell_{ci,2}$ are the two literals of clause $ci$, and $w_{ci}$ is its weight.

To reformulate a MAX-2-SAT problem as a QUBO, note that (1) maximizing the weight of satisfied clauses is equivalent to minimizing the weight of unsatisfied clauses, and (2) by De Morgan's laws, $\overline{x_i \vee x_j} = \overline{x_i} \wedge \overline{x_j}$. This means that the general weighted MAX-2-SAT problem can be written as a posiform (a polynomial that sums products of literals with positive coefficients) that minimizes the weight of unsatisfied clauses:

$\min_{\vc{x}} \sum_c w_c \overline{\ell}_{c,1} \overline{\ell}_{c,2}.$

### Illustrative Example: Three-Variable MAX-2-SAT¶

For an introduction to the technique, consider the following small example.

A weighted MAX-2-SAT on three variables, $x_1,x_2,x_3$, with weights $w_{c1}=3, w_{c2}=1, w_{c3}=4$, maximizes the weight of satisfied clauses for the following example problem:

$\underbrace{(x_1\vee \overline{x}_2; 3)}_{\text{clause 1}} \wedge \underbrace{(x_3;1)}_{\text{clause 2}} \wedge \underbrace{(\overline{x}_3 \vee x_2;4)}_{\text{clause 3}}.$

Obtain the QUBO representation using the following steps:

1. Formulate as a posiform (using $\overline{x_i \vee x_j} = \overline{x_i} \wedge \overline{x_j}$ and minimizing the weight of unsatisfied clauses as explained above):
$\min_{\vc{x}} \left\{ \underbrace{3(\overline{x_1} \wedge x_2)}_{\text{clause 1}} + \underbrace{1(\overline{x_3})}_{\text{clause 2}} + \underbrace{4(x_3 \wedge \overline{x_2})}_{\text{clause 3}} \right\} = \min_{\vc{x}} \left\{3\overline{x_1} x_2 +\overline{x_3}+4x_3 \overline{x_2}\right\}.$
2. Write negated literals $\overline{x_i}$ as $(1-x_i)$:
\begin{align}\begin{aligned}\min_{\vc{x}} \left\{3(1-x_1) x_2+(1-x_3)+4x_3 (1-x_2)\right\}\\= \min_{\vc{x}} \left\{3x_2 -3x_1x_2 +1 - x_3 + 4x_3 - 4x_2x_3 \right\}\\= \min_{\vc{x}} \left\{1 + 3x_2 + 3x_3 -3x_1x_2 - 4x_2x_3 \right\}\end{aligned}\end{align}

In this last expression, the example weighted MAX-2-SAT has been reformulated as a QUBO in scalar notation,

$\text{E}_{qubo}(a_i, b_{i,j}; x_i) = \sum_{i} a_i x_i + \sum_{i<j} b_{i,j} x_i x_j,$

where $a_1=0, a_2=3, a_3=3$ are the linear coefficients and $b_{1,2}=-3, b_{2,3}=-4$ the quadratic coefficients.
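A brute-force check that this QUBO reproduces the clause weights (plain Python; the three hard-coded clause tests mirror this example only, not a general converter):

```python
from itertools import product

# QUBO from the posiform: E = 1 + 3 x2 + 3 x3 - 3 x1 x2 - 4 x2 x3
def e_qubo(x1, x2, x3):
    return 1 + 3 * x2 + 3 * x3 - 3 * x1 * x2 - 4 * x2 * x3

# Weight of unsatisfied clauses in the original weighted MAX-2-SAT
def unsat_weight(x1, x2, x3):
    w = 0
    if not (x1 or not x2): w += 3   # clause 1: x1 v ~x2, weight 3
    if not x3:             w += 1   # clause 2: x3, weight 1
    if not (not x3 or x2): w += 4   # clause 3: ~x3 v x2, weight 4
    return w

# The QUBO energy equals the unsatisfied weight on every assignment.
for x in product((0, 1), repeat=3):
    assert e_qubo(*x) == unsat_weight(*x)

# Minimizing the QUBO satisfies all clauses: x1 = x2 = x3 = 1.
best = min(product((0, 1), repeat=3), key=lambda x: e_qubo(*x))
```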

QUBOs may be written without any negative coefficients by using negated literals, expressing contributions such as

$-3 x_1 x_2$

as either

$-3+3\overline{x}_2+3\overline{x}_1 x_2$

or

$-3+3\overline{x}_1+3x_1\overline{x}_2,$

clearly showing the lower bound of $-3$.

Software Tools

See the Open-Source Software Tools section for a list of D-Wave software tools.

Further Information

• SAT and Weighted MAX-2-SAT expands on the satisfiability problem and derives this weighted MAX-2-SAT formulation.
• [Bon2007] generalizes the resolution rule for satisfiability to weighted MAX-SAT, which enables the simplification of some weighted MAX-2-SAT problems.

## Non-Quadratic (Higher-Degree) Polynomials to Ising/QUBO¶

To reduce problems with interactions between more than pairs of variables to a QUBO, introduce and minimize over ancillary variables.

### Illustrative Example: Polynomial Reductions¶

For an introduction to the technique, consider the following small examples.

#### Reduction by Minimum Selection¶

The technique of reduction by minimum selection exploits the identity

$xyz = \max_w \left\{ w ( x+y+z -2) \right\}$

where $x,y,z$ are binary variables in some pseudo-Boolean function being minimized and $w$ is the ancillary binary variable introduced to substitute quadratic terms for cubic terms.

Table 18 shows that the cubic and quadratic formulations are identical. In this table, column $\mathbf{x,y,z}$ lists all possible states of the variables; column $\mathbf{xyz}$ gives the corresponding values of the cubic term; column $\mathbf{x+y+z-2}$ is an intermediate calculation (the right side of the identity before introducing the ancillary variable and maximizing), shown for clarity; and column $\mathbf{\max_w\left\{w(x+y+z-2)\right\}}$ is the final result of calculating the right side of the identity.

Table 18 Minimum Selection Reduction.

| $\mathbf{x,y,z}$ | $\mathbf{xyz}$ | $\mathbf{x+y+z-2}$ | $\mathbf{\max_w\left\{w(x+y+z-2)\right\}}$ |
|---|---|---|---|
| $0,0,0$ | $0$ | $-2$ | $0\vert_{w=0}$ |
| $0,0,1$ | $0$ | $-1$ | $0\vert_{w=0}$ |
| $0,1,0$ | $0$ | $-1$ | $0\vert_{w=0}$ |
| $0,1,1$ | $0$ | $0$ | $0\vert_{w=0,1}$ |
| $1,0,0$ | $0$ | $-1$ | $0\vert_{w=0}$ |
| $1,0,1$ | $0$ | $0$ | $0\vert_{w=0,1}$ |
| $1,1,0$ | $0$ | $0$ | $0\vert_{w=0,1}$ |
| $1,1,1$ | $1$ | $1$ | $1\vert_{w=1}$ |

To reformulate a non-quadratic (higher-degree) polynomial to Ising/QUBO, substitute terms in the form of $axyz$, where $a$ is a real number, with one of the following quadratic terms:

\begin{align}\begin{aligned}axyz = \min_w \left\{ aw ( x+y+z -2) \right\} \quad a<0\\axyz = \min_w \left\{ a \left( w ( x+y+z -1) + (xy+yz+zx) - (x+y+z) +1 \right) \right\} \quad a>0\end{aligned}\end{align}
  It is easy to verify, for example, that the equality for a negative coefficient, $a<0$, holds. The term $(x+y+z-2)$ is positive and nonzero only for $x,y,z=1,1,1$, so multiplying by negative $a$ yields the only negative value of $aw(x+y+z-2)$, which is attained at $w=1$. All other states of $x,y,z$ give $(x+y+z-2) \le 0$, so multiplying by $a<0$ yields zero or positive values, which are minimized to zero by $w=0$.
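Both substitutions can be checked by enumerating all states and minimizing over the ancillary variable $w$ (a plain-Python sketch):

```python
from itertools import product

def cubic(a, x, y, z):
    return a * x * y * z

def reduced(a, x, y, z):
    """Quadratic replacement for a*x*y*z, minimized over the
    ancillary binary variable w (reduction by minimum selection)."""
    if a < 0:
        forms = [a * w * (x + y + z - 2) for w in (0, 1)]
    else:
        forms = [a * (w * (x + y + z - 1)
                      + (x * y + y * z + z * x)
                      - (x + y + z) + 1) for w in (0, 1)]
    return min(forms)

# Both substitutions reproduce the cubic term on every assignment.
for a in (-5, 3):
    for x, y, z in product((0, 1), repeat=3):
        assert reduced(a, x, y, z) == cubic(a, x, y, z)
```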

#### Reduction by Substitution¶

The technique of reduction by substitution represents Boolean constraint $z\Leftrightarrow x_1\wedge x_2$ as a quadratic penalty function,

$P(x_1,x_2;z) = x_1 x_2 - 2(x_1+x_2)z + 3z.$

Table 19 shows that this function penalizes (adds a positive non-zero energy cost to) any proposed solutions that do not set the value of the ancillary variable, $z$, identical to that of the constraint it replaces. In this table, column $\mathbf{x_1,x_2}$ is all possible states of $x_1,x_2$ variables and column $\mathbf{x_1x_2}$ is the corresponding values of the $x_1\wedge x_2$ constraint; column $\mathbf{P(z=x_1x_2)}$ is the penalty for solutions that set the value of the ancillary variable, $z$, identical to that of the constraint it replaces while column $\mathbf{P(z \ne x_1x_2)}$ is the penalty for incorrect solutions.

Table 19 Substitution Reduction.

| $\mathbf{x_1,x_2}$ | $\mathbf{x_1x_2}$ | $\mathbf{P(z=x_1x_2)}$ | $\mathbf{P(z \ne x_1x_2)}$ |
|---|---|---|---|
| $0,0$ | $0$ | $0$ | $3$ |
| $0,1$ | $0$ | $0$ | $1$ |
| $1,0$ | $0$ | $0$ | $1$ |
| $1,1$ | $1$ | $0$ | $1$ |

To reformulate a non-quadratic polynomial to Ising/QUBO, use this penalty function to reduce a triplet interaction to pairwise,

$x_1 x_2 x_3 = \min_z \bigl\{ z x_3 + M P(x_1,x_2;z) \bigr\},$

where $M>1$ is a penalty weight.

In the same manner, an interaction involving 4 or more variables can be reduced to pairwise by sequentially introducing new variables and reducing the degree of the interaction by 1 at each step.
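This reduction can be checked by enumeration (plain Python; $M=2$ is one valid choice of penalty weight):

```python
from itertools import product

def P(x1, x2, z):
    # Penalty enforcing z <=> x1 AND x2
    return x1 * x2 - 2 * (x1 + x2) * z + 3 * z

M = 2  # any penalty weight M > 1 works for this reduction

# Minimizing over the ancillary variable z recovers the cubic term
# x1*x2*x3 on every assignment.
for x1, x2, x3 in product((0, 1), repeat=3):
    reduced = min(z * x3 + M * P(x1, x2, z) for z in (0, 1))
    assert reduced == x1 * x2 * x3
```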

Software Tools

See the Open-Source Software Tools section for a list of D-Wave software tools.

Further Information

• Posiforms and Higher-Order Polynomials discusses posiforms and reducing higher-order polynomials to QUBOs.
• Non-Quadratic (Higher-Order) Polynomials discusses this technique in greater detail.
• [Ish2011] discusses reduction by minimum selection and by substitution and introduces an extension of the reduction by minimum selection.
• [Bor2002] discusses posiforms as a useful representation for pseudo-Boolean functions and shows that reformulating a problem of $N$ terms requires at most $N$ ancillary variables.
• [Dat2014] and [Tan2015] discuss tricks to quickly reduce higher-order polynomials to QUBOs in the context of factoring.

## CSP to MIS¶

Given a set of constraints $\mathcal{C} = \{C_\alpha(\vc{x}_\alpha)\}_{\alpha=1}^{|\mathcal{C}|}$, this reformulation seeks a solution $\vc{x}$ that satisfies all constraints; i.e., $\bigwedge_{C_\alpha\in\mathcal{C}} C_\alpha(\vc{x})$ is true.

One standard approach among several for converting a CSP into a QUBO defines the dual problem as follows:

Introduce a variable $y_\alpha$ for each constraint $C_\alpha$. The domain of $y_\alpha$ is the set of feasible solutions to $C_\alpha$. Coupling between dual variables arises from constraints $C_\alpha$ and $C_\beta$ having common variables in their scopes. For example, if the scope of $C_\alpha$ is $X_1,X_2$, the scope of $C_\beta$ is $X_2,X_3$, and the feasible sets are $\{(1,2),(3,1)\}$ and $\{(1,4),(2,1)\}$ respectively, then $y_\alpha\in\{(1,2),(3,1)\}$ and $y_\beta\in\{(1,4),(2,1)\}$. The two constraints share variable $X_2$, so whatever value $X_2$ is assigned must be the same in both constraints. This means that of the 4 possible $y_\alpha,y_\beta$ combinations, only $(1,2),(2,1)$ and $(3,1),(1,4)$ are allowed, because the combinations $(1,2),(1,4)$ and $(3,1),(2,1)$ disagree on the setting of $X_2$.

This dual constraint formulation defines a conflict graph. The nodes of the conflict graph are the possible feasible settings for each $y_\alpha$, and we form an edge between nodes $y_\alpha,y_\beta$ if there is any conflicting setting of a shared variable. An example conflict graph is shown in Figure 49.

Fig. 49 Dual graph of a small CSP consisting of three variables $X_1$, $X_2$, and $X_3$, each having domain $\{1,2,3,4\}$, and two constraints $C_1(X_1,X_2)$, $C_2(X_2,X_3)$ having scopes $\{X_1, X_2\}$ and $\{X_2, X_3\}$, respectively. (a) The feasible sets for $C_1$ and $C_2$. (b) The domains of the dual variables $y_1$ and $y_2$. (c) The conflict graph defining feasible joint settings for dual variables.

If the conflict graph has an independent set of size $|\mathcal{C}|$, the problem is solved: selecting each $y_\alpha$ in the independent set defines values for all the $X$ variables in the scope of $C_\alpha$, and the edge constraints ensure that no two conflicting settings appear in the independent set. Because MIS is easily represented as a QUBO, as discussed in Native Formulations: Ising and QUBO (and MIS), this reduces the CSP to a QUBO.
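A minimal sketch of this construction in plain Python, using the feasible sets from the example above (the dict encoding of partial assignments is illustrative, and brute-force minimization stands in for the annealer):

```python
from itertools import product

# Conflict-graph nodes: one per feasible setting of each dual variable,
# stored as partial assignments of the original X variables.
nodes = [{"X1": 1, "X2": 2}, {"X1": 3, "X2": 1},   # domain of y_alpha
         {"X2": 1, "X3": 4}, {"X2": 2, "X3": 1}]   # domain of y_beta

# Edge between two nodes iff they disagree on any shared variable.
edges = [(i, j)
         for i in range(len(nodes)) for j in range(i + 1, len(nodes))
         if any(nodes[i][v] != nodes[j][v]
                for v in nodes[i].keys() & nodes[j].keys())]

# MIS as a QUBO: minimize -sum_v x_v + M * sum_{(u,v) in E} x_u x_v.
M = 2
def energy(x):
    return -sum(x) + M * sum(x[i] * x[j] for i, j in edges)

# An independent set of size 2 (= number of constraints) solves the CSP.
best = min(product((0, 1), repeat=len(nodes)), key=energy)
```

Note that two settings of the same dual variable (e.g., $(1,2)$ and $(3,1)$ for $y_\alpha$) share their whole scope and disagree, so the edge rule also prevents a dual variable from taking two values at once.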

Software Tools

See the Open-Source Software Tools section for a list of D-Wave software tools.

Further Information

• The Constraints to Penalties section gives a background on CSP.
• [Bac2002] discusses advantages and disadvantages of dual variables in modeling CSPs.
• [Fra1989] addresses the equivalence of binary and non-binary CSPs and discusses two methods of converting between these formulations through dual and hidden transformations.

## Soft CSP to MIS¶

Many CSPs used in applications are formulated with “hard” constraints—meaning constraints that cannot be broken in a viable solution—and “soft” constraints that incur a penalty if broken.

Hard and soft constraints can easily be represented by choosing two weights, $w_H$ (large magnitude) for hard constraints and $w_S$ (small magnitude) for soft constraints, to assign to the respective conflict-graph vertices. The resulting problem is to find an independent set of maximum weight.

The QUBO representation of weighted MIS for a graph $G=(V,E)$ is simply $\min_{\vc{x}} \bigl\{ -\sum_v w_v x_v + M\sum_{(v,v')\in E} x_v x_{v'} \bigr\},$ where $w_v$ is the weight of vertex $v$ and $M$ is a sufficiently large penalty weight.

Software Tools

See the Open-Source Software Tools section for a list of D-Wave software tools.

Further Information

The Constraints to Penalties section gives a background on CSP.

## CSP Reformulation with Penalty Functions¶

The simplest constraints are linear equalities and inequalities, and much of the mathematical programming literature is based upon such constraints. Satisfiability is an example of such a CSP. These constraints are easily mapped to penalties suitable for QUBO optimization.

Even if a penalty can be derived through conversion to a linear equality or inequality, it may still be useful to apply the penalty-deriving machinery discussed in the Nonlinear Constraints section to construct a penalty with fewer ancillary variables.

For example, the constraint $x_1\vee x_2\vee x_3$ (which is useful for solving 3-SAT) can always be expressed as the inequality $x_1+x_2+x_3\ge 1$, and converted to the penalty function

$(x_1+x_2+x_3 - 1 -a_1-2a_2)^2.$

The two ancillary variables $a_1$ and $a_2$ are necessary to represent the possible slack values 0, 1, and 2. However, the penalty function

$-3+ (\overline{x}_1+\overline{x}_2+\overline{x}_3)(1+a) + \overline{a} + x_1 x_2 + x_2 x_3 + x_3 x_1$

represents the same constraint using a single ancillary variable.

  In fact, this representation is commonly used to reduce 3-SAT to MAX-2-SAT.
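Both penalty constructions can be checked by enumeration; a plain-Python sketch:

```python
from itertools import product

def slack_penalty(x1, x2, x3):
    """Penalty from the inequality x1 + x2 + x3 >= 1 using two
    binary slack variables a1, a2 (slack values 0, 1, 2)."""
    return min((x1 + x2 + x3 - 1 - a1 - 2 * a2) ** 2
               for a1, a2 in product((0, 1), repeat=2))

def compact_penalty(x1, x2, x3):
    """Equivalent penalty using a single ancillary variable a."""
    n1, n2, n3 = 1 - x1, 1 - x2, 1 - x3   # negated literals
    return min(-3 + (n1 + n2 + n3) * (1 + a) + (1 - a)
               + x1 * x2 + x2 * x3 + x3 * x1
               for a in (0, 1))

# Both penalties vanish exactly when the clause x1 v x2 v x3 holds.
for x in product((0, 1), repeat=3):
    satisfied = bool(x[0] or x[1] or x[2])
    assert (slack_penalty(*x) == 0) == satisfied
    assert compact_penalty(*x) == (0 if satisfied else 1)
```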

Software Tools

See the Open-Source Software Tools section for a list of D-Wave software tools.

Further Information

• The CSP Conversion by Penalty Functions section gives a background on using penalty functions to convert CSPs.
• The Elementary Boolean Operations to QUBO section provides a table of penalty functions for several Boolean operations.
• [Dec2003] discusses constraint programming as a declarative framework for modeling problems that require solutions to satisfy a number of constraints.
• [Ven2015b] discusses the use of penalties versus rewards for constraints.

### Linear Equality Constraints¶

You can express $m$ equality constraints on $n$ Boolean variables, $\vc{x}$, as $\vc{C}\vc{x}=\vc{c}$, where $\vc{C}$ is an $m\times n$ matrix and $\vc{c}$ an $m\times 1$ vector. Mapping these constraints to QUBO formulation, the penalty function enforcing $\vc{C}\vc{x}-\vc{c}=0$ is simply $M \|\vc{C}\vc{x}-\vc{c}\|^2$. For independently weighted constraints, indexed by $k$, the penalty function in QUBO formulation is

$P_=(\vc{x}) = \sum_k m^=_k \bigl(\ip{\vc{C}_k}{\vc{x}}-c_k \bigr)^2,$

where $m_k^=>0$ is the weight of the equality constraint $k$, and $\vc{C}_k$ is row $k$ of $\vc{C}$. This function is nonnegative and equal to zero for feasible $\vc{x}$.

### Illustrative Example: Map-Coloring with Linear Equality Constraints¶

For an introduction to the technique, consider the following small example.

The map-coloring problem is to assign a color to each region of a map, subject to the constraint that any two regions sharing a border must have different colors. The simple problem in this section is a snippet of the Simple Map Coloring Problem section for two regions, British Columbia, BC, and Alberta, AB, and two colors, blue and green, as shown in Figure 50.

Fig. 50 A snippet of the Canadian map-coloring problem: two regions must select two different colors. Regions are represented by vertices and logical qubits, later to be mapped to physical qubits on a Chimera graph.

Table 20 shows the translation to binary, described previously in the Simple Map Coloring Problem section, for this simplified problem of $C=2$ possible colors, where $q_B$ is a qubit representing blue and $q_G$ represents green.

Table 20 Translating Two Colors to Binary.

| Color | Naturals | Unary Encoding |
|---|---|---|
| Blue | 1 | $q_B,q_G = 1,0$ |
| Green | 2 | $q_B,q_G = 0,1$ |

For two regions, the logical qubits are denoted $q_{\{B,G\}}^{BC}$ for British Columbia and $q_{\{B,G\}}^{AB}$ for Alberta. To formulate a CSP with penalty functions, this example uses the following linear equality constraints:

(1) In a region, only one color is selected.

\begin{align}\begin{aligned}q_B^{BC} +q_G^{BC} =1\\q_B^{AB} +q_G^{AB} =1\end{aligned}\end{align}

For the simplified problem of two regions and two colors, the constraint that adjacent regions not select the same color reduces to constraint (2) that each of the two colors be selected once.

(2) Each color is selected once.

\begin{align}\begin{aligned}q_B^{BC} +q_B^{AB} =1\\q_G^{BC} +q_G^{AB} =1\end{aligned}\end{align}

This example’s $m=4$ constraints (1) and (2) can be expressed in $\vc{C}\vc{x}=\vc{c}$ formulation as

$\begin{split}\begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} q_B^{BC} \\ q_G^{BC} \\ q_B^{AB} \\ q_G^{AB} \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}\end{split}$

Mapping to $M \|\vc{C}\vc{x}-\vc{c}\|^2$ with all the weights set to $m_k^= =1$ gives the penalty function as

\begin{align}\begin{aligned}P_=(\vc{x}) &= \sum_k m^=_k \bigl(\ip{\vc{C}_k}{\vc{x}}-c_k \bigr)^2\\&= (q_B^{BC} +q_G^{BC} -1)^2 + (q_B^{AB} +q_G^{AB} -1)^2 + (q_B^{BC} +q_B^{AB} -1)^2 + (q_G^{BC} +q_G^{AB} -1)^2\end{aligned}\end{align}

Using the equality $x^2=x$ for binary variables on the square terms, the penalty function can be rearranged to the standard QUBO formulation as

$P_=(\vc{x}) = -2q_B^{BC} -2q_G^{BC} -2q_B^{AB} -2q_G^{AB} +2q_B^{BC}q_G^{BC} +2q_B^{AB}q_G^{AB} +2q_B^{BC}q_B^{AB} +2q_G^{BC}q_G^{AB}+4$

Figure 51 shows the penalty function for all 16 possible states of ($q_B^{BC}, q_G^{BC}, q_B^{AB}, q_G^{AB}$) from ($0, 0, 0, 0$) to ($1, 1, 1, 1$).

Fig. 51 Penalty function $P_=(\vc{x})$ showing minima when ($q_B^{BC}, q_G^{BC}, q_B^{AB}, q_G^{AB}$) is equal to ($0, 1, 1, 0$) and ($1, 0, 0, 1$), the two states that meet all 4 constraints.

The function adds a positive penalty to all possible states of the four qubits with the exception of two, where it has minima of zero: $q_B^{BC}, q_G^{BC}, q_B^{AB}, q_G^{AB}=0,1,1,0$, which selects green for BC and blue for Alberta, and $q_B^{BC}, q_G^{BC}, q_B^{AB}, q_G^{AB}=1,0,0,1$, which selects blue for BC and green for Alberta.
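The penalty function and its expanded QUBO form can be checked by enumeration (plain Python):

```python
from itertools import product

def penalty(qB_BC, qG_BC, qB_AB, qG_AB):
    # Sum of the four squared equality-constraint violations
    return ((qB_BC + qG_BC - 1) ** 2 + (qB_AB + qG_AB - 1) ** 2
            + (qB_BC + qB_AB - 1) ** 2 + (qG_BC + qG_AB - 1) ** 2)

def qubo_form(a, b, c, d):
    # Expanded QUBO form of the same penalty (using x^2 = x)
    return (-2 * a - 2 * b - 2 * c - 2 * d
            + 2 * a * b + 2 * c * d + 2 * a * c + 2 * b * d + 4)

# The two forms agree on all 16 states.
for x in product((0, 1), repeat=4):
    assert penalty(*x) == qubo_form(*x)

# Zero-penalty minima occur only at the two valid colorings.
minima = [x for x in product((0, 1), repeat=4) if penalty(*x) == 0]
```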

### Linear Inequality Constraints¶

For the slightly more difficult inequality constraints of the form $\vc{D}\vc{x}\le \vc{d}$, you can reduce the inequalities to equalities by introducing slack variables.

For example, the inequality constraint $\ip{\vc{D}_i}{\vc{x}}-d_i \le 0$, where $\vc{D}_i$ is row $i$ of $\vc{D}$, can be rewritten as the equality $\ip{\vc{D}_i}{\vc{x}}-d_i +\xi_i = 0$ by introducing a nonnegative slack variable $\xi_i\ge 0$.

The slack variable may need to take a value as large as $d_i-\min_{\vc{x}} \ip{\vc{D}_i}{\vc{x}} = d_i-\sum_j \min(D_{i,j},0)$. This slack quantity may be represented with $\lceil \log_2 \bigl( 1 + d_i-\sum_j \min(D_{i,j},0) \bigr) \rceil$ bits. We write $\xi_i = \ip{\vc{2}}{\vc{a}_i}$, where $\vc{a}_i$ is the binary representation of $\xi_i$ and $\vc{2}$ is the vector whose elements are the powers of 2.

This binary representation enforces the requirement that $\xi_i\ge 0$. Inequality $i$ may be enforced with the penalty $\bigl(\ip{\vc{D}_i}{\vc{x}}-d_i + \ip{\vc{2}}{\vc{a}_i}\bigr)^2$ and all constraints can be represented by summing individual inequality penalties with independent weights $m_i^\le$ as

$P_\le(\vc{x},\vc{a}) = \sum_i m_i^\le \bigl(\ip{\vc{D}_i}{\vc{x}}-d_i + \ip{\vc{2}}{\vc{a}_i}\bigr)^2,$

where $\vc{a}$ is the vector collecting all binary slack variables.
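A single inequality can be sketched in plain Python (the example constraint $x_1+x_2+x_3 \le 2$ is an assumption chosen for illustration):

```python
from itertools import product
from math import ceil, log2

# Example inequality: x1 + x2 + x3 <= 2, i.e., D = [1, 1, 1], d = 2.
D, d = [1, 1, 1], 2

# Bits needed for the largest slack value, d - sum_j min(D_j, 0).
n_bits = ceil(log2(1 + d - sum(min(Dj, 0) for Dj in D)))

def penalty(x):
    """Minimum over the binary slack bits of (<D,x> - d + slack)^2."""
    lhs = sum(Dj * xj for Dj, xj in zip(D, x)) - d
    return min((lhs + sum(2 ** k * ak for k, ak in enumerate(bits))) ** 2
               for bits in product((0, 1), repeat=n_bits))

# The penalty vanishes exactly on assignments satisfying the inequality.
for x in product((0, 1), repeat=3):
    assert (penalty(x) == 0) == (sum(x) <= d)
```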

### Nonlinear Constraints¶

The CSP Reformulation with Penalty Functions section shows how penalty functions for equality and inequality constraints may be constructed, with inequality constraints requiring ancillary variables.

Given sets of feasible $F$ and infeasible $\overline{F}$ configurations, and a requirement that $\min_{\vc{a}} P(\vc{x},\vc{a}) = o$ for $\vc{x}\in F$ and $\min_{\vc{a}} P(\vc{x},\vc{a}) \ge o+1$ for $\vc{x}\in \overline{F}$, for some constant $o$ representing the objective value of feasible configurations, you can formulate a quadratic penalty as

$\begin{split}P(\vc{x},\vc{a}) = \begin{bmatrix} \vc{x} & \vc{a} \end{bmatrix} \begin{bmatrix} \vc{Q}^{x,x} & \vc{Q}^{x,a} \\ 0 & \vc{Q}^{a,a} \end{bmatrix} \begin{bmatrix} \vc{x} \\ \vc{a} \end{bmatrix}\end{split}$

with

$\vc{Q}^{x,x} = \sum_i w_i^{x,x} \vc{M}_i^{x,x}, \qquad \vc{Q}^{x,a} = \sum_i w_i^{x,a} \vc{M}_i^{x,a}, \qquad \vc{Q}^{a,a} = \sum_i w_i^{a,a} \vc{M}_i^{a,a}.$

Introducing Boolean indicator variables $\{\alpha_k(j)\}$ for each $1 \le j \le |F|$ and for each allowed configuration $\vc{a}_k$, you then have

$\begin{split}\langle \vc{1}, \vc{\alpha}(j)\rangle = 1 \quad \forall j\in F \\ \sum_{k'} \bigl\{ \langle \alpha_{k'}(j) \vc{w}^{x,a}, \vc{c}^{x,a}_{k'}(j) \rangle + \langle \alpha_{k'}(j) \vc{w}^{a,a}, \vc{c}^{a,a}_{k'} \rangle \bigr\} \le \langle \vc{w}^{x,a}, \vc{c}^{x,a}_k(j) \rangle + \langle \vc{w}^{a,a}, \vc{c}^{a,a}_k\rangle \quad \forall k,j.\end{split}$

To break the coupling of $\alpha_k(j)$ to $\vc{w}^{x,a}$ and $\vc{w}^{a,a}$ and obtain a linear problem, introduce $\vc{v}_k^{x,a}(j) = \alpha_k(j) \vc{w}^{x,a}$ and $\vc{v}_k^{a,a}(j) = \alpha_k(j) \vc{w}^{a,a}$. This requirement is enforced with

$\begin{split}\vc{v}_k^{x,a}(j) &\le \alpha_k(j) M \\ \vc{v}_k^{a,a}(j) &\le \alpha_k(j) M \\ -\vc{v}_k^{x,a}(j) &\le \alpha_k(j) M \\ -\vc{v}_k^{a,a}(j) &\le \alpha_k(j) M \\ \sum_k \vc{v}_k^{x,a}(j) &= \vc{w}^{x,a} \\ \sum_k \vc{v}_k^{a,a}(j) &= \vc{w}^{a,a}.\end{split}$

The resulting mixed integer program can then be solved with any MIP solver. If there is no feasible solution for a given number of ancillary variables or specified connectivity, then these requirements must be relaxed by introducing additional ancillary variables or additional connectivity.

The Formulating an Integer CSP as a QUBO Problem section gives an example of using these techniques on a problem.

## CSP Reformulation with Inverse Verification¶

One approach to solving a constraint satisfaction problem, $C(\vc{x}) \equiv \bigwedge_{\alpha=1}^{|\mathcal{C}|} C_\alpha(\vc{x}_\alpha)$, exploits the fact that NP-hard decision problems can be verified in polynomial time. If the CSP corresponding to $C(\vc{x})$ is in NP, you can write the output $z \Leftrightarrow C(\vc{x})$ as a Boolean circuit whose size is polynomial in the number of Boolean input variables. The Boolean circuit $z\Leftrightarrow C(\vc{x})$ can be converted to an optimization objective using Boolean operations formulated as QUBO penalty functions, with each gate contributing the corresponding penalty function—see Elementary Boolean Operations to QUBO.

The resulting optimization objective, which also has a polynomial number of variables and evaluates to 0 at a feasible solution $\vc{x}^\star$ with $z^\star=1$, may be run in reverse to infer inputs from the desired true output.

To accomplish this, clamp the output of the circuit to $z=1$ by adding $-Mz$ (for sufficiently large $M$) to the objective, and then minimize with respect to all variables. If you obtain a solution for which the objective evaluates to 0, you have a feasible solution to the CSP.
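As a toy instance of this technique (plain Python; a one-gate "circuit" $z \Leftrightarrow x_1 \wedge x_2$ stands in for a full polynomial-size verifier, and brute-force minimization stands in for the annealer):

```python
from itertools import product

def and_penalty(x1, x2, z):
    # QUBO penalty for z <=> x1 AND x2 (from the Boolean-operations table)
    return x1 * x2 - 2 * (x1 + x2) * z + 3 * z

M = 10  # clamp weight, sufficiently large

def objective(x1, x2, z):
    # Circuit penalty plus the clamp term -M*z forcing the output true
    return and_penalty(x1, x2, z) - M * z

# Minimizing over all variables runs the circuit "in reverse":
# the optimum infers the inputs x1 = x2 = 1 that make the output true.
best = min(product((0, 1), repeat=3), key=lambda v: objective(*v))
```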

The Factoring section shows an example of using this technique to solve factoring problems as QUBOs. It shows that this formulation yields low requirements on both connectivity and precision constraints, making it suitable for the D-Wave system.

Fault Diagnosis may also benefit from this technique.

  Alternatively, substitute $z=1$ into the objective and eliminate $z$ from the problem.

## Elementary Boolean Operations to QUBO¶

As described in the CSP Reformulation with Penalty Functions section, constraints can be reformulated as penalties.

Table 21 shows several constraints useful in building Boolean circuits.

Table 21 QUBO Penalty Functions for Elementary Boolean Operations

| Constraint | Penalty |
|---|---|
| $z \Leftrightarrow \neg x$ | $2xz-x-z+1$ |
| $z \Leftrightarrow x_1 \vee x_2$ | $x_1 x_2 + (x_1+x_2)(1-2z) + z$ |
| $z \Leftrightarrow x_1 \wedge x_2$ | $x_1 x_2 - 2(x_1+x_2)z +3z$ |
| $z \Leftrightarrow x_1 \oplus x_2$ | $2x_1x_2 -2(x_1+x_2)z - 4(x_1+x_2)a +4az +x_1+x_2+z + 4a$ |
  The XOR operator, $\oplus$, requires the introduction of a single ancillary variable $a$.
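All four penalties in the table can be verified by enumeration; a plain-Python sketch (the XOR penalty is minimized over its ancillary variable $a$):

```python
from itertools import product

def p_not(x, z):
    return 2 * x * z - x - z + 1

def p_or(x1, x2, z):
    return x1 * x2 + (x1 + x2) * (1 - 2 * z) + z

def p_and(x1, x2, z):
    return x1 * x2 - 2 * (x1 + x2) * z + 3 * z

def p_xor(x1, x2, z):
    # Minimized over the single ancillary variable a
    return min(2 * x1 * x2 - 2 * (x1 + x2) * z - 4 * (x1 + x2) * a
               + 4 * a * z + x1 + x2 + z + 4 * a for a in (0, 1))

# Each penalty is zero exactly when z equals the gate's output.
for x1, x2, z in product((0, 1), repeat=3):
    assert (p_not(x1, z) == 0) == (z == 1 - x1)
    assert (p_or(x1, x2, z) == 0) == (z == (x1 | x2))
    assert (p_and(x1, x2, z) == 0) == (z == x1 * x2)
    assert (p_xor(x1, x2, z) == 0) == (z == x1 ^ x2)
```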

### Illustrative Example: Boolean OR as a Penalty¶

For an introduction to the technique, consider the following small example.

Table 22 shows how a Boolean OR constraint,

$z \Leftrightarrow x_1 \vee x_2,$

can be reformulated using a penalty function. States that violate the constraint that $z$ be equivalent to $x_1 \vee x_2$ are penalized (a positive, nonzero energy cost is added) through function

$x_1 x_2 + (x_1+x_2)(1-2z) + z.$

In Table 22, column $\mathbf{x_1,x_2}$ is all possible states of the gate’s inputs; column $\mathbf{x_1 \vee x_2}$ is the corresponding output values of a functioning gate; column $\mathbf{P(z=x_1 \vee x_2)}$ is the value the penalty function adds to the energy of the objective function when the gate is functioning (zero, a functioning gate must not be penalized) while column $\mathbf{P(z \ne x_1 \vee x_2)}$ is the value the penalty function adds when the gate is malfunctioning (nonzero, the objective function must be penalized with a higher energy).

Table 22 Boolean OR Constraint as a Penalty.

| $\mathbf{x_1,x_2}$ | $\mathbf{x_1 \vee x_2}$ | $\mathbf{P(z=x_1 \vee x_2)}$ | $\mathbf{P(z \ne x_1 \vee x_2)}$ |
|---|---|---|---|
| $0,0$ | $0$ | $0$ | $1$ |
| $0,1$ | $1$ | $0$ | $1$ |
| $1,0$ | $1$ | $0$ | $1$ |
| $1,1$ | $1$ | $0$ | $3$ |

For example, for the state $x_1, x_2=1,1$ of the fourth row, when $z=x_1 \vee x_2 = 1$,

$x_1 x_2 + (x_1+x_2)(1-2z) + z = 1+(1+1)(1-2)+1=1-2+1=0,$

not penalizing the valid solution, whereas for the violating assignment $z = 0 \ne x_1 \vee x_2$,

$x_1 x_2 + (x_1+x_2)(1-2z) + z = 1+(1+1)(1-0)+0=1+2+0=3,$

adding an energy cost of $3$ to the solution that violates the constraint.

Further Information