# Usage Guidelines

This chapter introduces some approaches for maximizing the performance of the D-Wave
QPU. For more details on some of the techniques mentioned here, see *D-Wave Problem-Solving Handbook*.

Note

For instructions on programming the system using D-Wave’s open-source Ocean tools, see the Ocean documentation.

## Decomposing Large Problems to Fit the QPU

To solve a problem with more variables than the available number of qubits, break the problem into subproblems, solve the subproblems, and then reconstruct an answer to the original problem from the subproblem solutions. Divide-and-conquer and dynamic programming algorithms have a rich history in computer science for problems of this type and can be adapted to map problems onto the D-Wave QPU.
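As a toy illustration of one such approach, the sketch below implements a large-neighborhood local search in plain Python: all but a few variables are fixed, the small subproblem is solved exactly (a brute-force solver stands in for the QPU), and the process repeats on random subsets. The problem values, sizes, and function names here are illustrative assumptions, not Ocean API calls.

```python
# Pure-Python sketch of large-neighborhood local search, one of the
# decomposition approaches discussed in this section. A brute-force
# solver stands in for the QPU on each small subproblem.
import itertools
import random

def ising_energy(h, J, s):
    """Classical Ising energy of spin assignment s (values +/-1)."""
    return (sum(h[i] * s[i] for i in h)
            + sum(J[i, j] * s[i] * s[j] for (i, j) in J))

def solve_subproblem(h, J, s, subset):
    """Exhaustively optimize the spins in `subset`, holding all others fixed."""
    best = dict(s)
    for assignment in itertools.product([-1, 1], repeat=len(subset)):
        trial = dict(s)
        trial.update(zip(subset, assignment))
        if ising_energy(h, J, trial) < ising_energy(h, J, best):
            best = trial
    return best

def large_neighborhood_search(h, J, subset_size=3, rounds=20, seed=0):
    rng = random.Random(seed)
    variables = sorted(h)
    s = {i: rng.choice([-1, 1]) for i in variables}   # random initial state
    for _ in range(rounds):
        subset = rng.sample(variables, subset_size)   # subproblem small enough to "fit"
        s = solve_subproblem(h, J, s, subset)
    return s

# Toy 6-variable ferromagnetic chain (values are arbitrary illustrations).
h = {i: 0.1 for i in range(6)}
J = {(i, i + 1): -1.0 for i in range(5)}
solution = large_neighborhood_search(h, J)
```

Each round can only lower (or keep) the energy, since the subproblem is solved exactly with the remaining spins fixed; the quality of the final answer depends on the subset size and the number of rounds.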

Several approaches to problem decomposition, including cut-set conditioning,
branch and bound, and large-neighborhood local search, are discussed in *D-Wave Problem-Solving Handbook*.

## Mapping General Graphs to Chimera

The current generation of D-Wave QPU minimizes the energy of an Ising spin configuration
whose pairwise interactions lie on the edges of a Chimera graph \(C_{M,N,L}\). To solve an
Ising spin problem with arbitrary pairwise interaction structure, the corresponding graph
must be *minor embedded* into a Chimera working graph; see *D-Wave Problem-Solving Handbook* for details.
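As a minimal illustration of minor embedding, the pure-Python sketch below embeds a triangle (\(K_3\)), which has no native representation on the bipartite Chimera unit cell, by representing one logical variable with a two-qubit *chain* bound by a strong ferromagnetic coupler. The qubit labels and the chain strength of \(-2.0\) are illustrative assumptions; in practice a tool such as minorminer finds embeddings automatically.

```python
# A triangle (K3) is not a subgraph of the bipartite K4,4 Chimera unit
# cell, so logical variable c is represented by the chain {1, 5}, whose
# two qubits are bound by a strong ferromagnetic coupler (-2.0, an
# illustrative choice). Qubits 0-3 sit on one side of the cell, 4-7 on
# the other; couplers only exist between sides.
import itertools

# Logical problem: antiferromagnetic triangle on variables a, b, c.
logical_J = {('a', 'b'): 1.0, ('b', 'c'): 1.0, ('a', 'c'): 1.0}

# Embedding: a -> {0}, b -> {4}, c -> {1, 5}.
embedded_J = {
    (0, 4): 1.0,   # a-b
    (4, 1): 1.0,   # b-c (via c's qubit 1)
    (0, 5): 1.0,   # a-c (via c's qubit 5)
    (1, 5): -2.0,  # chain coupler binding the two halves of c
}

def energy(J, s):
    return sum(c * s[i] * s[j] for (i, j), c in J.items())

# Brute-force the embedded problem and check that every ground state has
# an intact chain (qubits 1 and 5 agree), so it maps back to a logical state.
qubits = [0, 4, 1, 5]
states = [dict(zip(qubits, vals))
          for vals in itertools.product([-1, 1], repeat=4)]
ground = min(energy(embedded_J, s) for s in states)
ground_states = [s for s in states if energy(embedded_J, s) == ground]
assert all(s[1] == s[5] for s in ground_states)  # chains intact

# Un-embed and confirm these match the logical ground states (energy -1).
for s in ground_states:
    logical = {'a': s[0], 'b': s[4], 'c': s[1]}
    print(logical, energy(logical_J, logical))
```

The frustrated triangle has six degenerate ground states at energy \(-1\); each corresponds to exactly one embedded ground state with its chain intact.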

## Circumventing Precision Limits on \(h_i\) and \(J_{i,j}\)

Ising problems with high-precision parameters (\(h\), \(J\)) present special challenges for quantum annealers. As described in the ICE: Dynamic Ranges in h and J Values chapter, the problem Hamiltonian delivered to the QPU is slightly different from the one specified by the user. The quality of solutions returned by the QPU depends on the magnitude of this difference at low-energy regions of the solution space. This difference is most noticeable when the application requires optimal solutions.

For problems scaled to lie within the range available on the QPU, you can limit the parameter ranges
through embedding or by simplifying the problem; see *D-Wave Problem-Solving Handbook* for details.
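As a sketch of such scaling, the function below divides all parameters by a single common factor so that they fit within assumed hardware ranges of \(h \in [-2, 2]\) and \(J \in [-1, 1]\); these values are illustrative, so consult your QPU's actual range properties. A single common factor preserves the problem's energy spectrum up to scale.

```python
# Scale a problem into assumed hardware ranges h in [-2, 2] and
# J in [-1, 1] (illustrative values; check your QPU's actual ranges).
def scale_to_range(h, J, h_max=2.0, j_max=1.0):
    """Return (h, J) divided by one common factor so both fit their ranges."""
    factor = max(
        max((abs(v) / h_max for v in h.values()), default=0.0),
        max((abs(v) / j_max for v in J.values()), default=0.0),
    )
    if factor <= 1.0:          # already within range; leave untouched
        return dict(h), dict(J)
    return ({k: v / factor for k, v in h.items()},
            {k: v / factor for k, v in J.items()})

h = {0: 3.0, 1: -1.5}
J = {(0, 1): 2.5}
h_s, J_s = scale_to_range(h, J)
# The common factor here is 2.5, driven by the out-of-range coupler.
```

Note that dividing by one factor shrinks every parameter equally, so a problem with one large coupler compresses all its small \(h\) values toward zero, which is exactly how precision limits bite.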

## Avoiding I/O Errors

With a readout fidelity of 99%, we recommend at least two read-anneal cycles per problem to expose possible outliers. The most cost-effective use of QPU time (that is, the best amortization of the fixed cost of programming the QPU against the total cost) is to take enough reads that the total read time at least equals the programming time.

To guard against occasional programming errors, we recommend submitting a given problem as two separate QPU requests; the second request triggers a reprogramming cycle, reducing the probability of a programming error to less than 1%.
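The back-of-the-envelope arithmetic behind these recommendations can be sketched as follows; the 10% per-programming error rate is a hypothetical figure chosen purely for illustration, not a published QPU property.

```python
# Rough arithmetic for the read and reprogramming recommendations above.
readout_fidelity = 0.99
# Probability the same bit is misread in both of two independent reads:
p_misread_twice = (1 - readout_fidelity) ** 2       # about 1e-4

# Hypothetical per-programming error rate (illustration only):
p_program_error = 0.10
# Probability both of two independent programmings go wrong:
p_both_programs_bad = p_program_error ** 2          # about 0.01

print(p_misread_twice)
print(p_both_programs_bad)
```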

Contact D-Wave for specific QPU properties, which include a readout fidelity value.

## Using Spin-Reversal Transforms

Applying a *spin-reversal transform* can improve results by reducing the impact of analog errors,
described in the ICE: Dynamic Ranges in h and J Values chapter, that may exist on the QPU. A spin-reversal
transform does not alter the Ising problem, in the sense that there is a one-to-one mapping
between solutions of the original problem and solutions of the transformed problem that
preserves their energies. Rather, the transform reverses the meanings of a chosen subset \(S\)
of individual spins \(s_p\) as follows:

\[s_p \rightarrow -s_p, \qquad h_p \rightarrow -h_p \quad \text{for } p \in S, \qquad J_{i,j} \rightarrow -J_{i,j} \quad \text{for couplers with exactly one endpoint in } S.\]

This mapping ensures that the classical Ising spin energy given by Eqn. 2.1 in the Ising Model section is unchanged; the transform simply amounts to reinterpreting spin up as spin down, and vice versa, for a particular spin. Any number of spins may be transformed in this way; the Ising problem remains the same, and therefore we expect the same answers. A spin-reversal transform is thus defined by the subset of spins to which it is applied.

As discussed in the ICE: Dynamic Ranges in h and J Values chapter, some sources of ICE, such as ICE 3 and ICE 5, depend on the magnitude of the (\(h\), \(J\)) problem parameters. Applying random spin-reversal transforms when submitting the same problem multiple times "breaks up" these dependencies and so avoids the associated systematic errors.

From a single problem defined by a set of parameters \(\{ h_i, J_{i,j} \}\), a sequence of spin-reversal transforms \(T', T'', \ldots\), each applied to a random subset of spins, defines a sequence of equivalent problems \((h', J')\), \((h'', J'')\), \(\ldots\) All have the same energy spectrum, but their implementations on the QPU may invoke different ICE effects (some better and some worse than the original).

When studying the behavior of the QPU in solving a problem, include in the analysis a number of spin-reversal transforms of the problem. The transform samples randomly over sets of possible systematic errors. The correlations between these error sets, at least as they relate to certain error sources, can be reduced with the following considerations:

- Changing too few spins leaves most errors unchanged, and therefore has little effect.
- Changing too many spins means that most couplers connect spins that are both transformed, thus \(J_{i,j}\) does not change sign. As a result, some systematic errors associated with the couplers are unaffected.

To sample over a wide range of systematic errors, consider the number of couplers and spins involved in a particular transform.
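A spin-reversal transform is straightforward to implement and verify classically. The pure-Python sketch below negates \(h_p\) for each spin in the chosen subset and negates \(J_{i,j}\) for couplers with exactly one endpoint in the subset, then confirms by brute force that the energy spectrum is unchanged; the problem values are arbitrary illustrations.

```python
# Spin-reversal transform: negate h_p for each flipped spin p, and
# negate J_ij for couplers with exactly one flipped endpoint.
import itertools

def spin_reversal_transform(h, J, flips):
    h2 = {i: (-v if i in flips else v) for i, v in h.items()}
    J2 = {(i, j): (-v if (i in flips) != (j in flips) else v)
          for (i, j), v in J.items()}
    return h2, J2

def spectrum(h, J):
    """Sorted energies of all 2^n spin states, by brute force."""
    variables = sorted(h)
    energies = []
    for vals in itertools.product([-1, 1], repeat=len(variables)):
        s = dict(zip(variables, vals))
        energies.append(sum(h[i] * s[i] for i in h)
                        + sum(J[i, j] * s[i] * s[j] for (i, j) in J))
    return sorted(energies)

# Arbitrary 3-spin problem; flip spin 1 only.
h = {0: 0.5, 1: -1.0, 2: 0.25}
J = {(0, 1): -1.0, (1, 2): 0.75}
h2, J2 = spin_reversal_transform(h, J, flips={1})
assert spectrum(h, J) == spectrum(h2, J2)   # identical energy spectrum
```

Flipping spin 1 changes the sign of both of its couplers, which is why, as noted above, flipping too many spins leaves most couplers with both endpoints transformed and hence unchanged.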

Note

Spin-reversal transforms do not necessarily remove errors due to ICE; rather, they randomize the effects of certain errors that depend on (\(h\), \(J\)). This creates a spectrum of error magnitudes from which the best results can be selected. Some types of systematic error—ICE 1, for example—are not affected by spin-reversal transforms.

Specify the number of spin-reversal transforms using the *num_spin_reversal_transforms* parameter
when submitting a problem.

## Adjusting the Problem Scale

In general, use the full range of \(h\) and \(J\) values available
for the QPU when submitting a problem. To avoid having to explicitly deal
with these ranges, encode the \(h\) and \(J\) settings as arbitrary
floating-point numbers and set the *auto_scale* parameter to true (1,
the default). This setting automatically scales the problem to make maximum
use of the available ranges.

That said, using \(J < -0.8\) may reduce the benefit of quantum annealing
for some problem types. While no definitive model captures this effect, it might
be useful to scale the overall problem energy down with a prefactor of \(0.8\).
If you do so, disable the *auto_scale* parameter when submitting the problem.
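Such a prefactor might be applied as in the sketch below. The commented-out `sample_ising` call is an Ocean-style submission shown only for context, and the problem values are illustrative.

```python
# Scale a whole problem down by a 0.8 prefactor before submission,
# with auto_scale disabled so the solver does not rescale it back up.
PREFACTOR = 0.8

h = {0: 1.0, 1: -0.5}        # illustrative problem values
J = {(0, 1): -1.0}

h_scaled = {k: PREFACTOR * v for k, v in h.items()}
J_scaled = {k: PREFACTOR * v for k, v in J.items()}

# Ocean-style submission (shown for context only):
# response = sampler.sample_ising(h_scaled, J_scaled, auto_scale=False)
```

Because every parameter is multiplied by the same constant, the scaled problem has the same energy spectrum as the original up to that constant, so its solutions are unchanged.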