

Branch Pruning

Branch pruning is an operation on two BDDs. The intention is to remove paths from one BDD that are made irrelevant by the other. The following specifies how this is done, in pseudo-C++ style:


BDD Prune (BDD f, BDD c) {
  if (c == T || f == F || f == T) return f;
  if (c == $\neg$f) return F;
  if (c == f) return T;
  // f and c have a non-trivial relationship
  let $x_f$ = root(f); // $x_f$ is a variable
  let $x_c$ = root(c); // $x_c$ is a variable
  // $x_c$ precedes $x_f$, so f does not depend on $x_c$: quantify it out of c
  if (index($x_f$) > index($x_c$)) return Prune(f, ExQuant(c, $x_c$));
  // c forces $x_f$ = T: the false branch of f is irrelevant
  if (Reduce$_F$($x_f$, c) == F) return Prune(Reduce$_T$($x_f$, f), Reduce$_T$($x_f$, c));
  // c forces $x_f$ = F: the true branch of f is irrelevant
  if (Reduce$_T$($x_f$, c) == F) return Prune(Reduce$_F$($x_f$, f), Reduce$_F$($x_f$, c));
  let $h_{f_T}$ = Prune(Reduce$_T$($x_f$, f), Reduce$_T$($x_f$, c)); // $h_{f_T}$ is a BDD
  let $h_{f_F}$ = Prune(Reduce$_F$($x_f$, f), Reduce$_F$($x_f$, c)); // $h_{f_F}$ is a BDD
  if ($h_{f_T}$ == $h_{f_F}$) return $h_{f_T}$;
  return find_or_add_node($x_f$, $h_{f_T}$, $h_{f_F}$);
}


The procedure Prune takes as input two BDDs representing top-level functions and returns a BDD that can replace the BDD of argument f at the top level. Figure [*] shows an example.
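To make the pseudocode concrete, below is a minimal, compilable C++ sketch of Prune over a hash-consed BDD representation. This is not the SBSAT implementation: the Node structure, the unique-table map, and the neg and bddOr helpers are assumptions made for the sketch, while find_or_add_node, Reduce$_T$, Reduce$_F$, and ExQuant mirror the pseudocode above. Hash-consing makes pointer equality a valid test for $c = f$ and, via neg, for $c = \neg f$; it also lets find_or_add_node absorb the $h_{f_T} = h_{f_F}$ check.

#include <algorithm>
#include <map>
#include <tuple>

// Hash-consed BDD nodes; a smaller var index means nearer the root.
struct Node;
using BDD = const Node*;
struct Node { int var; BDD hi, lo; };   // hi/lo: true/false children

static const Node T_node{-1, nullptr, nullptr};
static const Node F_node{-1, nullptr, nullptr};
static const BDD T = &T_node, F = &F_node;

static bool is_term(BDD f) { return f == T || f == F; }

// find_or_add_node from the pseudocode: one canonical node per triple.
static std::map<std::tuple<int, BDD, BDD>, BDD> unique_table;
static BDD find_or_add_node(int var, BDD hi, BDD lo) {
    if (hi == lo) return hi;                           // redundant node
    auto key = std::make_tuple(var, hi, lo);
    auto it = unique_table.find(key);
    if (it != unique_table.end()) return it->second;
    return unique_table[key] = new Node{var, hi, lo};  // leaked: sketch only
}

// Reduce_T / Reduce_F: cofactor on variable v (a no-op if v is not the root).
static BDD reduceT(int v, BDD f) { return !is_term(f) && f->var == v ? f->hi : f; }
static BDD reduceF(int v, BDD f) { return !is_term(f) && f->var == v ? f->lo : f; }

static BDD neg(BDD f) {                                // canonical negation
    if (f == T) return F;
    if (f == F) return T;
    return find_or_add_node(f->var, neg(f->hi), neg(f->lo));
}

static BDD bddOr(BDD a, BDD b) {                       // textbook Apply for OR
    if (a == T || b == T) return T;
    if (a == F) return b;
    if (b == F) return a;
    if (a == b) return a;
    int v = std::min(a->var, b->var);                  // root-most variable
    return find_or_add_node(v, bddOr(reduceT(v, a), reduceT(v, b)),
                               bddOr(reduceF(v, a), reduceF(v, b)));
}

// ExQuant: existentially quantify the root variable of c.
static BDD exQuant(BDD c, int v) { return bddOr(reduceT(v, c), reduceF(v, c)); }

static BDD prune(BDD f, BDD c) {
    if (c == T || is_term(f)) return f;
    if (c == neg(f)) return F;
    if (c == f) return T;
    int xf = f->var;
    if (!is_term(c) && xf > c->var)                    // x_c precedes x_f
        return prune(f, exQuant(c, c->var));
    if (reduceF(xf, c) == F)                           // c forces x_f = T
        return prune(reduceT(xf, f), reduceT(xf, c));
    if (reduceT(xf, c) == F)                           // c forces x_f = F
        return prune(reduceF(xf, f), reduceF(xf, c));
    BDD hT = prune(reduceT(xf, f), reduceT(xf, c));
    BDD hF = prune(reduceF(xf, f), reduceF(xf, c));
    return find_or_add_node(xf, hT, hF);               // collapses hT == hF
}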

Figure: Prune on one variable ordering produces no result.
Figure: Example of prune. Procedure Prune is applied to the left two BDDs and returns the right BDD.
Figure: But changing the variable order results in an inference from pruning the two functions in Figure [*].
[Images: Fig/branch_prune_bdd.eps ... Fig/branch_prune_example_2.eps]

Branch pruning can reveal inferences, but this depends on the variable ordering. Figure [*] shows Prune applied to two BDDs with no result. In Figure [*], BDDs representing the same two functions under a different variable ordering are pruned, revealing the inference $x_3=F$.

Branch pruning is similar to a procedure called generalized cofactor, or constrain (see Section [*] for a description). Both Prune$(f,c)$ and gcf$(f,c)$ agree with $f$ on interpretations where $c$ is satisfied, and both are generally simpler than $f$ in some respect. Both are highly dependent upon variable ordering, so both might be considered ``non-logical.'' Branch pruning is implemented in SBSAT because the BDDs it produces tend to be smaller. In any case, unlike with gcf, a BDD can never grow in size under branch pruning.
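As an illustration of the agreement property, the following demo continues the sketch above with hypothetical example functions $f = (x_0 \wedge x_1) \vee x_2$ and $c = x_0$, chosen only for illustration, and checks by enumeration that Prune$(f,c)$ agrees with $f$ on every interpretation satisfying $c$. Here Prune detects that $c$ forces $x_0 = T$ and returns the cofactor $x_1 \vee x_2$, which is smaller than $f$.

#include <cassert>
#include <cstdio>
#include <vector>

// Evaluate a BDD under a total assignment by following one root-to-leaf path.
static bool eval(BDD f, const std::vector<bool>& a) {
    while (!is_term(f)) f = a[f->var] ? f->hi : f->lo;
    return f == T;
}

int main() {
    // f = (x0 AND x1) OR x2  and  c = x0, with x0 nearest the root.
    BDD x0 = find_or_add_node(0, T, F);
    BDD x1 = find_or_add_node(1, T, F);
    BDD x2 = find_or_add_node(2, T, F);
    BDD f  = bddOr(find_or_add_node(0, x1, F), x2);
    BDD g  = prune(f, x0);                 // g is x1 OR x2
    for (int m = 0; m < 8; ++m) {          // all 2^3 assignments
        std::vector<bool> a{bool(m & 1), bool(m & 2), bool(m & 4)};
        if (eval(x0, a))                   // only where c is satisfied
            assert(eval(f, a) == eval(g, a));
    }
    std::puts("Prune(f,c) agrees with f on every model of c");
}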

There appear to be two gains from using branch pruning. First, it can make SMURFs smaller (see Section [*] for information about SMURFs). Second, by avoiding duplicated information, it often appears to make the evidence-combination rule of the LSGB search heuristic work better.

On the negative side, branch pruning can, in odd cases, lose local information. Although it may reveal some of the inferences that strengthening would (see below), it can still cause the number of choicepoints to increase. These two issues are related: branch pruning can spread an inference that is evident in one BDD over multiple BDDs (see Figure [*] for an example).

Figure: Example of branch pruning spreading an inference from one BDD to another. If $x_2$ is assigned 0 in $f$, then $x_4=0$ and $x_3=0$ are inferred. After applying Prune to $f$ and $c$ and replacing $f$ with $f^\prime$, obtaining the inference $x_3=0$ from the choice $x_2=0$ requires visiting $c$ to get $x_4=0$ and then $f^\prime$ to get $x_3=0$. Thus, branch pruning can increase work if not used properly. In this case, pruning in the reverse direction leads to a better result.
[Image: Fig/restrict.eps]


John Franco 2011-09-15