

Strengthening

This binary operation on BDDs helps reveal inferences that branch pruning misses owing to its sensitivity to variable ordering. Given two BDDs $b_1$ and $b_2$, strengthening conjoins $b_1$ with the projection of $b_2$ onto the variables of $b_1$: that is, it computes $b_1\wedge\exists{\vec x}\,b_2$, where $\vec x$ is the set of variables appearing in $b_2$ but not in $b_1$. Strengthening each $b_i$ against all other $b_j$ sometimes reveals additional inferences or equivalences of the kinds described above. Figure [*] shows an example. The following pseudo C++ code implements strengthening:


BDD Strengthen (BDD $b_1$, BDD $b_2$) {
   let ${\vec x} = \{x : x\in b_2, x\notin b_1\}$;
   for all ($x\in {\vec x}$) $b_2$ = ExQuant($b_2$, $x$);
   return $b_1\wedge b_2$;
}
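To make the operation concrete, here is a small self-contained Python sketch. It models each function as a set of satisfying assignments rather than as a true BDD, so the names (strengthen, exquant, conjoin) and the model-set representation are illustrative assumptions for this example, not the actual implementation. Existential quantification simply drops a variable from every model, which is why the projection of $b_2$ onto the variables of $b_1$ cannot blow up in size.

```python
# Minimal model-set sketch of strengthening; not a real BDD package.
# A "function" is a set of satisfying assignments, each a frozenset of
# (variable, value) pairs.  All names here are illustrative assumptions.

def variables(b):
    """Set of variables mentioned by the function's models."""
    return {v for m in b for v, _ in m}

def exquant(b, var):
    """Existentially quantify var away: drop it from every model."""
    return {frozenset((v, t) for v, t in m if v != var) for m in b}

def conjoin(b1, b2):
    """Conjunction: keep pairs of models that agree on shared variables."""
    out = set()
    for m1 in b1:
        for m2 in b2:
            d1, d2 = dict(m1), dict(m2)
            if all(d1[v] == d2[v] for v in d1.keys() & d2.keys()):
                out.add(frozenset({**d1, **d2}.items()))
    return out

def strengthen(b1, b2):
    """b1 conjoined with (exists x . b2), x = vars(b2) - vars(b1)."""
    for v in variables(b2) - variables(b1):
        b2 = exquant(b2, v)
    return conjoin(b1, b2)

# b1 encodes x3 -> x1 over {x1, x3}; b2 encodes NOT x3 over {x2, x3}.
b1 = {frozenset({('x1', True),  ('x3', True)}),
      frozenset({('x1', True),  ('x3', False)}),
      frozenset({('x1', False), ('x3', False)})}
b2 = {frozenset({('x2', True),  ('x3', False)}),
      frozenset({('x2', False), ('x3', False)})}

result = strengthen(b1, b2)
# Every surviving model sets x3 to False: the inference x3 = F is revealed.
assert all(dict(m)['x3'] is False for m in result)
```

In the example, quantifying $x_2$ out of $b_2$ leaves only the constraint $x_3=F$, and conjoining that with $b_1$ forces $x_3=F$ in every remaining model, mirroring the inference shown in the figure.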

Figure: ...then conjoin the two BDDs. Inference revealed is $x_3=F$.

Strengthening is a way to pass important information from one BDD to another without causing a size explosion. No explosion can occur because, before $b_1$ is conjoined with $b_2$, all variables in $b_2$ that do not occur in $b_1$ are existentially quantified away. If an inference (of the form $x=T$, $x=F$, $x=+y$, or $x=-y$) exists due to just two BDDs, then strengthening those BDDs against each other pairwise can ``move'' that inference, even if it was originally spread across both BDDs, into one of them. Because strengthening shares information between BDDs, it can be thought of as sharing intelligence and ``strengthening'' the relationships between functions; a smart search heuristic can exploit the added intelligence in these strengthened functions. We have found that, in general, strengthening decreases the number of choicepoints when used in conjunction with the LSGB heuristic, though in some unusual cases it can also increase the number of choicepoints. We believe this is due to the delicate nature of some problems, where duplicating information in the BDDs leads the heuristic astray.

Strengthening may also be applied to CNF formulas, in which case it is the same as applying Davis-Putnam resolution selectively to some of the clauses. When used on more complex functions, it is clearer how to apply it effectively, since the clauses being resolved are grouped meaningfully. Evidence for this comes from examples from Bounded Model Checking (see Section [*]).
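As a small illustration of the CNF connection, the sketch below performs one Davis-Putnam resolution step used as existential quantification over a clause set. The representation (literals as signed integers, clauses as frozensets) and the helper name exquant_cnf are assumptions made for this example: quantifying a variable out of a CNF replaces the clauses mentioning it with all non-tautological resolvents on it.

```python
# Sketch: existential quantification of a CNF variable by Davis-Putnam
# resolution.  Literals are signed ints (2 means x2, -2 means NOT x2);
# clauses are frozensets of literals.  exquant_cnf is an illustrative name.

def exquant_cnf(clauses, v):
    """Return a CNF equivalent to (exists x_v . clauses)."""
    pos  = [c for c in clauses if v in c]        # clauses containing  x_v
    neg  = [c for c in clauses if -v in c]       # clauses containing ~x_v
    rest = [c for c in clauses if v not in c and -v not in c]
    resolvents = set()
    for p in pos:
        for n in neg:
            r = (p - {v}) | (n - {-v})           # resolvent on x_v
            if not any(-lit in r for lit in r):  # drop tautologies
                resolvents.add(frozenset(r))
    return rest + sorted(resolvents, key=sorted)

# (x1 or x2) and (~x2 or x3): quantifying x2 away leaves the single
# resolvent (x1 or x3), exactly one selective resolution step.
cnf = [frozenset({1, 2}), frozenset({-2, 3})]
print(exquant_cnf(cnf, 2))   # -> [frozenset({1, 3})]
```

Strengthening a CNF-represented $b_1$ against $b_2$ would apply such steps only to the clauses of $b_2$, for only the variables outside $b_1$, which is what makes the resolution selective.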


John Franco 2011-09-15