next up previous contents
Next: Chaff-like Up: Reference - Search heuristics Previous: State Machines Used to   Contents


Locally Skewed, Globally Balanced

Memoized information is currently tailored for the primary search heuristic called Locally Skewed, Globally Balanced or LSGB. The weight of a SMURF transition counts the number of literals forced on the transition, plus the expected number of literals forced below that state, where a forced literal after $m$ additional choices is weighted $1/K^m$. ($K$, set experimentally, is currently 3 by default.) In Figure [*], the transition out of the start state on $\neg x_1$ has weight

\begin{displaymath}1+(\frac{1}{K}+\frac{1}{K}+\frac{1}{K}+\frac{1}{K})/4;\end{displaymath}

the transition out on $x_4$,

\begin{displaymath}0+(\frac{1}{K^2}+\frac{2}{K} + \frac{1}{K} + \frac{2}{K} + \frac{2}{K} +
\frac{1}{K})/6.\end{displaymath}
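With the default $K=3$, these two expressions evaluate to concrete numbers (a worked evaluation, taking the figure's transition weights as given):

\begin{displaymath}1+\frac{1}{4}\cdot\frac{4}{K}=\frac{4}{3}\approx 1.33
\qquad\mbox{and}\qquad
\frac{1}{6}\left(\frac{1}{K^2}+\frac{8}{K}\right)=\frac{25}{54}\approx 0.46,\end{displaymath}

so the transition on $\neg x_1$ carries the larger weight.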

Computing these weights is expensive, but they are memoized in SMURFs during preprocessing; during search they are looked up in a table rather than recomputed each time they are needed.

For the special data structures defined above, the calculation above is simulated. If a disjunction $\lambda_1\vee\cdots\vee\lambda_m$ with $k$ still unassigned variables were represented as a SMURF, the weight of the transition on $\lambda_i$ would be 0 (since the clause immediately becomes satisfied, nothing more can be forced), and the weight of the transition on $\neg\lambda_i$ would be $1/(2K)^{k-1}$. This is directly coded in the simulated SMURF. Exclusive disjunctions are handled similarly. Assignments are similar but break into cases; one recurrence relation is hard to solve, so weights are precomputed as a function of the number of unassigned $\lambda_i$'s and are looked up during search.
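The disjunction rule above can be expressed directly in code. The following is an illustrative sketch only (the function name and interface are invented for this example; SBSAT's actual implementation is in C and operates on its internal structures):

```python
def disjunction_weights(k, K=3.0):
    """Heuristic weights for a plain disjunction with k unassigned literals.

    Setting a literal true satisfies the clause immediately, so nothing
    further is forced: weight 0.  Setting a literal false leaves a
    (k-1)-literal clause, and per the rule above the weight is 1/(2K)^(k-1).

    Returns (weight_of_literal, weight_of_negated_literal).
    """
    weight_true = 0.0
    weight_false = 1.0 / (2.0 * K) ** (k - 1)
    return weight_true, weight_false
```

For example, with the default $K=3$ and a clause with two unassigned literals, falsifying a literal gets weight $1/6$, since the single remaining literal is then forced immediately.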

The LSGB search heuristic is similar to the ``Johnson heuristic'' on CNF formulas, with $K=2$. The intuition is to branch toward forced inferences as quickly as possible, either narrowing the search space or generating lemmas fast. To pick the next variable to branch on: for each variable $x_i$, compute (i) the sum $S_i^+$ of the weights of transitions on $x_i$ out of all current ${\mbox{\sc Smurf}}$ states and (ii) the sum $S_i^-$ of the weights of transitions on $\neg x_i$. A high sum represents a high ``payoff.'' An ideal branching variable $x_i$ forces many literals under both $x_i$ and $\neg x_i$, so we branch on the variable $x_i$ for which $S_i^+\cdot S_i^-$ is maximum, and for that variable we branch first toward the larger of $S_i^+$ and $S_i^-$.
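The selection rule can be sketched as follows, assuming the per-variable sums have already been accumulated from the current SMURF states (names and the dictionary interface are illustrative, not SBSAT's API):

```python
def choose_branch(sums):
    """Pick the branch variable per the LSGB rule.

    `sums` maps each free variable to (S_plus, S_minus): the summed
    transition weights for asserting the variable positively and
    negatively over all current SMURF states.

    Returns (variable, positive_first): the variable maximizing the
    product S_plus * S_minus, and a flag that is True when the positive
    literal should be tried first (i.e. when S_plus >= S_minus).
    """
    var = max(sums, key=lambda v: sums[v][0] * sums[v][1])
    s_plus, s_minus = sums[var]
    return var, s_plus >= s_minus
```

Maximizing the product (rather than, say, the sum) rewards variables whose both branches force many literals, matching the stated goal of reaching forced inferences on either branch.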

There are circumstances in which other search heuristics are known to work well. LSGB was intended for applications where little is known, or easily determined, about the given problem. If the problem is known to have much exploitable structure, it may be better to specify a different heuristic, and we allow the experienced user some choice (see Sections [*] and [*] below for more information). The SMURF structure supports such heuristics as well; for a simple heuristic it may not be needed, but (apart from preprocessing time) it does not hinder one either.

In Section [*], we present benchmark problems comparing SBSAT with LSGB to other solvers such as zChaff.


John Franco 2011-09-15