The Parma Polyhedra Library (PPL) is a modern C++ library for the manipulation of numerical information that can be represented by points in some n-dimensional vector space. For instance, one of the key domains the PPL supports is that of rational convex polyhedra (Section Convex Polyhedra). Such domains are employed in several systems for the analysis and verification of hardware and software components, with applications spanning imperative, functional and logic programming languages, synchronous languages and synchronization protocols, real-time and hybrid systems. Even though the PPL is not meant to target a particular problem, the design of its interface has been largely influenced by the needs of this class of applications. That is the reason why the library implements a few operators that are more or less specific to static analysis applications, while lacking some other operators that might be useful when working, e.g., in the field of computational geometry.
The main features of the library are the following:
x + 2*y + 5*z <= 7 when you mean it;
In the following section we describe all the domains available to the PPL user. More detailed descriptions of these domains and of the operations they provide can be found in subsequent sections.
In the final section of this chapter (Section Using the Library), we provide some additional advice on the use of the library.
A semantic geometric descriptor is a subset of the vector space R^n. The PPL provides several classes of semantic GDs, identified by their C++ class name together with the class template parameters, if any. These classes include the simple classes:
T is a numeric type, chosen among the supported coefficient types, such as long long (or any of the C99 exact-width integer equivalents int8_t, int16_t, and so forth); and
ITV is an instance of the Interval template class.
Other semantic GDs, the compound classes, can be constructed (also recursively) from all the GD classes. These include:
D1 and D2 can be any semantic GD classes and
R is the reduction operation to be applied to the component domains of the product class.
A uniform set of operations is provided for creating, testing and maintaining each of the semantic GDs. However, as many of these depend on one or more syntactic GDs, we first describe the syntactic GDs.
A syntactic geometric descriptor is used for defining, modifying and inspecting a semantic GD. There are three kinds of syntactic GDs: basic GDs, constraint GDs and generator GDs. Some of these are generic and some specific. A generic syntactic GD can be used (in the appropriate context) with any semantic GD; clearly, different semantic GDs will usually provide different levels of support for the different subclasses of generic GDs. In contrast, the use of a specific GD may be restricted to apply to a given subset of the semantic GDs (i.e., some semantic GDs provide no support at all for them).
The basic GDs currently supported by the PPL are:
These classes, which are all generic syntactic GDs, are used to build the constraint and generator GDs as well as support many generic operations on the semantic GDs.
The PPL currently supports the following classes of generic constraint GDs:
Each linear constraint can be further classified to belong to one or more of the following syntactic subclasses:
Note that the subclasses are not disjoint.
Similarly, each linear congruence can be classified to belong to one or more of the following syntactic subclasses:
The library also supports systems, i.e., finite collections, of either linear constraints or linear congruences (but see the note below).
Each semantic GD provides optimal support for some of the subclasses of generic syntactic GDs listed above: here, the word "optimal" means that the considered semantic GD computes the best upward approximation of the exact meaning of the linear constraint or congruence. When a semantic GD operation is applied to a syntactic GD that is not optimally supported, it will either indicate its unsuitability (e.g., by throwing an exception) or it will apply an upward approximation semantics (possibly not the best one).
For instance, the semantic GD of topologically closed convex polyhedra provides optimal support for non-strict linear inequality and equality constraints, but it does not provide optimal support for strict inequalities. Some of its operations (e.g.,
add_congruence) will throw an exception if supplied with a non-trivial strict inequality constraint or a proper congruence; some other operations (e.g.,
refine_with_congruence) will compute an over-approximation.
Similarly, the semantic GD of rational boxes (i.e., multi-dimensional intervals) having integral values as interval boundaries provides optimal support for all interval constraints: even an interval constraint that cannot be represented exactly will be optimally approximated by a representable one.
The PPL currently supports two classes of generator GDs:
Rays, lines and parameters are specific to the mentioned semantic GDs and, therefore, they cannot be used by other semantic GDs. In contrast, as already mentioned above, points are basic geometric descriptors, since they are also used in generic PPL operations.
test for the named properties of the semantic GD.
return the total and external memory size in bytes.
checks that the semantic GD has a valid internal representation. (Some GDs provide this method with an optional Boolean argument that, when true, also requires a check for non-emptiness.)
return, respectively, the space and affine dimensions of the GD.
modify the space dimensions of the semantic GD; depending on the operation, the arguments can include: the number of space dimensions to be added or removed; a variable or set of variables denoting the actual dimensions to be used; and a partial function defining a mapping between the dimensions.
compare the semantic GD with an argument semantic GD of the same class.
modify the semantic GD, possibly with an argument semantic GD of the same class.
These find information about the bounds of the semantic GD, where the argument variable or linear expression defines the direction of the bound.
These perform several variations of the affine image and preimage operations where, depending on the operation, the arguments can include a variable representing the space dimension to which the transformation will be applied and linear expressions with possibly a relation symbol and denominator value that define the exact form of the transformation.
are the ASCII input and output operations.
These methods assume that the given semantic GD provides optimal support for the argument syntactic GD: if that is not the case, an invalid argument exception is thrown.
add_recycled_congruences(), the only assumption that can be made on the constraint GD after return (successful or exceptional) is that it can be safely destroyed.
If the argument constraint GD is optimally supported by the semantic GD, the methods behave the same as the corresponding
add_* methods listed above. Otherwise, the constraint GD is used only to a limited extent to refine the semantic GD, possibly not at all. Notice that, while repeating an add operation is pointless, this is not true for the refine operations. For example, in those cases where
raises an exception, a fragment of the form
may give more precise results than a single
Returns the indicated system of constraint GDs satisfied by the semantic GD.
Returns true if and only if the semantic GD can recycle the indicated constraint GD.
This takes a constraint GD as an argument and returns the relations holding between the semantic GD and the constraint GD. The possible relations are:
NOTHING(). This operator can also take a polyhedron generator GD as an argument and returns the relation
NOTHING() that holds between the generator GD and the semantic GD.
The Parma Polyhedra Library, for those cases where an exact result cannot be computed within the specified complexity limits, computes an upward approximation of the exact result. For semantic GDs this means that the computed result is a possibly strict superset of the set of points of R^n that constitutes the exact result. Notice that the PPL does not provide direct support to compute downward approximations (i.e., possibly strict subsets of the exact results). While downward approximations can often be computed from upward ones, the required algorithms and the conditions upon which they are correct are outside the current scope of the PPL. Beware, in particular, of the following possible pitfall: the library provides methods to compute upward approximations of set-theoretic difference, which is antitone in its second argument. Applying a difference method to a second argument that is not an exact representation or a downward approximation of reality would yield a result that, of course, is not an upward approximation of reality. It is the responsibility of the library user to provide the PPL's methods with approximations of reality that are consistent with respect to the desired results.
The Parma Polyhedra Library provides support for approximating integer computations using the geometric descriptors it provides. In this section we briefly explain these facilities.
When a geometric descriptor is used to approximate integer quantities, all the points with non-integral coordinates represent an imprecision of the description. Of course, removing all these points may be impossible (because of convexity) or too expensive. The PPL provides the operator
drop_some_non_integer_points to possibly tighten a descriptor by dropping some points with non-integer coordinates, using algorithms whose complexity is bounded by a parameter. The set of dimensions that represent integer quantities can be optionally specified. It is worth stressing the role of the word some in the operator name: in general, no optimality guarantee is provided.
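On the simple box domain, the idea behind dropping non-integer points can be sketched as follows (a Python illustration of the underlying idea, not the PPL implementation): the bounds of every dimension known to hold an integral quantity are rounded inwards, which loses no integral point.

```python
import math

def tighten_integer_box(box, integer_dims):
    """Drop some points with non-integer coordinates from a box.

    `box` is a list of (lo, hi) bounds, one pair per dimension.
    For every dimension in `integer_dims`, the bounds can be rounded
    inwards without losing any point with integral coordinates there.
    """
    tightened = []
    for d, (lo, hi) in enumerate(box):
        if d in integer_dims:
            lo, hi = math.ceil(lo), math.floor(hi)
        tightened.append((lo, hi))
    return tightened

# The box 1/2 <= x <= 7/2, 0 <= y <= 2, with x integral, tightens to
# 1 <= x <= 3, 0 <= y <= 2:
print(tighten_integer_box([(0.5, 3.5), (0, 2)], {0}))
```

On richer descriptors such as polyhedra the analogous tightening is more involved (and, as noted above, not performed optimally in general), but the underlying principle is the same.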
The Parma Polyhedra Library provides services that allow the computation of correct approximations of bounded arithmetic as available in widespread programming languages. Supported bit-widths are 8, 16, 32 and 64 bits, with some limited support for 128 bits. Supported representations are binary unsigned and two's complement signed. Supported overflow behaviors are:
For unsigned integers of width w, the wrapping function maps an unbounded integer z to z mod 2^w; for signed (two's complement) integers the wrapping function maps z, instead, to the unique value in the range [-2^(w-1), 2^(w-1) - 1] that is congruent to z modulo 2^w.
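The two wrapping functions can be sketched in Python (an illustration of the arithmetic just described, not PPL code; w is the bit-width):

```python
def wrap_unsigned(z, w):
    """Wrap the unbounded integer z to a w-bit binary unsigned value."""
    return z % (2 ** w)

def wrap_signed(z, w):
    """Wrap z to the unique w-bit two's complement value congruent
    to z modulo 2^w, i.e., in [-2^(w-1), 2^(w-1) - 1]."""
    m = 2 ** w
    r = z % m
    return r - m if r >= m // 2 else r

assert wrap_unsigned(260, 8) == 4      # 260 mod 256
assert wrap_signed(130, 8) == -126     # 130 - 256
assert wrap_signed(-129, 8) == 127     # -129 + 256
```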
One possibility for precisely approximating the semantics of programs that operate on bounded integer variables is to follow the approach described in [SK07]. The idea is to associate space dimensions to the unwrapped values of bounded variables. Suppose
j is an unsigned program variable associated to a space dimension labeled by the variable x. If x is constrained by some numerical abstraction to take values in a set S, then the program variable
j can only take values in the image of S under the wrapping function. There are two reasons why this is interesting: firstly, this allows for the retention of relational information by using a single numerical abstraction tracking multiple program variables. Secondly, the integers modulo 2^w, where w is the bit-width, form a ring of equivalence classes on which addition and multiplication are well defined. This means, e.g., that assignments with affine right-hand sides and involving only variables with the same bit-width and representation can be safely modeled by affine images. While upper bounds and widening can be used without any precaution, anything that can be reduced to intersection requires a preliminary wrapping phase, where the dimensions corresponding to bounded integer types are brought back to their natural domain. This necessity arises naturally in the analysis of conditionals and conversion operators, as well as in the realization of domain combinations.
The PPL provides a general wrapping operator that is parametric with respect to the set of space dimensions (variables) to be wrapped, the width, representation and overflow behavior of all these variables. An optional constraint system can, when given, improve the precision. This constraint system, which must only depend on variables with respect to which wrapping is performed, is assumed to represent the conditional or looping construct guard with respect to which wrapping is performed. Since wrapping requires the computation of upper bounds and due to non-distributivity of constraint refinement over upper bounds, passing a constraint system in this way can be more precise than refining the result of the wrapping operation afterwards. The general wrapping operator offered by the PPL also allows control of the complexity/precision ratio by means of two additional parameters: an unsigned integer encoding a complexity threshold, with higher values resulting in possibly improved precision; and a Boolean controlling whether space dimensions should be wrapped individually, something that results in much greater efficiency to the detriment of precision, or collectively.
Note that the PPL assumes that any space dimension subject to wrapping is being used to capture the value of a bounded integer variable. As a consequence, the library is free to drop, from the involved numerical abstraction, any point having a non-integer coordinate that corresponds to a space dimension subject to wrapping. It must be stressed that freedom to drop such points does not constitute an obligation to remove all of them (especially because this would be extraordinarily expensive on some numerical abstractions). The PPL provides operators for the more systematic removal of points with non-integral coordinates.
The wrapping operator will only remove some of these points as a by-product of its main task and only when this comes at a negligible extra cost.
In this section we introduce convex polyhedra, as considered by the library, in more detail. For more information about the definitions and results stated here see [BRZH02b], [Fuk98], [NW88], and [Wil93].
We denote by R^n the vector space on the field of real numbers R, endowed with the standard topology. The set of all non-negative reals is denoted by R_+. For each i ∈ {1, ..., n}, v_i denotes the i-th component of the (column) vector v ∈ R^n. We denote by 0 the vector of R^n, called the origin, having all components equal to zero. A vector v ∈ R^n can also be interpreted as a matrix in R^{n×1} and manipulated accordingly using the usual definitions for addition, multiplication (both by a scalar and by another matrix), and transposition, denoted by v^T.
The scalar product of v, w ∈ R^n, denoted ⟨v, w⟩, is the real number v^T w = v_1 w_1 + ... + v_n w_n.
For any S1, S2 ⊆ R^n, the Minkowski sum of S1 and S2 is: S1 + S2 = { v + w | v ∈ S1, w ∈ S2 }.
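For finite sets of vectors, the Minkowski sum can be computed pointwise; the following Python sketch illustrates the definition:

```python
def minkowski_sum(S1, S2):
    """Pointwise sums of two sets of vectors (tuples of equal dimension)."""
    return {tuple(a + b for a, b in zip(v, w)) for v in S1 for w in S2}

# Summing the endpoints of a horizontal segment with the vertices of a
# unit square yields the vertices of a 3x1 rectangle:
S1 = {(0, 0), (2, 0)}
S2 = {(0, 0), (0, 1), (1, 0), (1, 1)}
print(sorted(minkowski_sum(S1, S2)))
```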
For each vector a ∈ R^n and scalar b ∈ R, and for each relation symbol ⋈ ∈ {=, ≥, >}, the linear constraint ⟨a, x⟩ ⋈ b defines an affine hyperplane when ⋈ is '=', a topologically closed affine half-space when ⋈ is '≥', and a topologically open affine half-space when ⋈ is '>'.
Note that each hyperplane ⟨a, x⟩ = b can be defined as the intersection of the two closed affine half-spaces ⟨a, x⟩ ≥ b and ⟨-a, x⟩ ≥ -b. Also note that, when a = 0, the constraint is either a tautology (i.e., always true) or inconsistent (i.e., always false), so that it defines either the whole vector space R^n or the empty set ∅.
The set P ⊆ R^n is a not necessarily closed convex polyhedron (NNC polyhedron, for short) if and only if either P can be expressed as the intersection of a finite number of (open or closed) affine half-spaces of R^n or n = 0 and P = ∅. The set of all NNC polyhedra on the vector space R^n is denoted by P_n.
The set P ⊆ R^n is a closed convex polyhedron (closed polyhedron, for short) if and only if either P can be expressed as the intersection of a finite number of closed affine half-spaces of R^n or n = 0 and P = ∅. The set of all closed polyhedra on the vector space R^n is denoted by CP_n.
When ordering NNC polyhedra by the set inclusion relation, the empty set ∅ and the vector space R^n are, respectively, the smallest and the biggest elements of both the set of NNC polyhedra and the set of closed polyhedra. The vector space R^n is also called the universe polyhedron.
In theoretical terms, the set of NNC polyhedra on R^n is a lattice under set inclusion and the set of closed polyhedra is a sub-lattice of it.
An NNC polyhedron P is bounded if there exists a λ ∈ R_+ such that: P ⊆ { x ∈ R^n | -λ ≤ x_j ≤ λ for j = 1, ..., n }.
A bounded polyhedron is also called a polytope.
NNC polyhedra can be specified by using two possible representations, the constraints (or implicit) representation and the generators (or parametric) representation.
In the sequel, we will simply write ``equality'' and ``inequality'' to mean ``linear equality'' and ``linear inequality'', respectively; also, we will refer to either an equality or an inequality as a constraint.
By definition, each polyhedron is the set of solutions to a constraint system, i.e., a finite number of constraints. By using matrix notation, we have P = { x ∈ R^n | A1 x = b1, A2 x ≥ b2, A3 x > b3 },
where, for all i ∈ {1, 2, 3}, A_i ∈ R^{m_i × n} and b_i ∈ R^{m_i}, and m1, m2, m3 ∈ N are the number of equalities, the number of non-strict inequalities, and the number of strict inequalities, respectively.
Let X = {x1, ..., xk} ⊆ R^n be a finite set of vectors. For all scalars λ1, ..., λk ∈ R, the vector v = λ1 x1 + ... + λk xk is said to be a linear combination of the vectors in X. Such a combination is said to be a positive (or conic) combination, if every λ_j ∈ R_+; an affine combination, if λ1 + ... + λk = 1; and a convex combination, if it is both positive and affine.
We denote by lin(X) (resp., pos(X), aff(X), conv(X)) the set of all the linear (resp., positive, affine, convex) combinations of the vectors in X.
Let , where . We denote by the set of all convex combinations of the vectors in such that for some (informally, we say that there exists a vector of that plays an active role in the convex combination). Note that so that, if ,
It can be observed that the set of affine combinations is an affine space, the set of positive combinations is a topologically closed convex cone, the set of convex combinations is a topologically closed polytope, and the set defined above in terms of both points and closure points is an NNC polytope.
Let P be an NNC polyhedron. Then
A point of an NNC polyhedron is a vertex if and only if it cannot be expressed as a convex combination of any other pair of distinct points of the polyhedron. A ray of a polyhedron is an extreme ray if and only if it cannot be expressed as a positive combination of any other pair of rays of the polyhedron (where rays differing by a positive scalar factor are considered to be the same ray).
Each NNC polyhedron can be represented by finite sets of lines L, rays R, points P and closure points C of the polyhedron. The 4-tuple G = (L, R, P, C) is said to be a generator system for the polyhedron, in the sense that
where the symbol '+' denotes Minkowski sum.
When the polyhedron is topologically closed, it can be represented by finite sets of lines L, rays R and points P of the polyhedron. In this case, the 3-tuple G = (L, R, P) is said to be a generator system for the polyhedron since we have
Thus, in this case, every closure point of the polyhedron is a point of the polyhedron.
For any polyhedron and any generator system G = (L, R, P, C) for it, the polyhedron is empty if and only if P = ∅. Also, P must contain all the vertices of the polyhedron, although the polyhedron can be non-empty and have no vertices. In this case, as P is necessarily non-empty, it must contain points of the polyhedron that are not vertices. For instance, a half-space of R^2 corresponding to a single constraint can be represented by a generator system made of one line, one ray and one point that is not a vertex. It is also worth noting that the ray in such a system is not an extreme ray of the half-space.
A constraint system C for an NNC polyhedron is said to be minimized if no proper subset of C is a constraint system for the same polyhedron.
Similarly, a generator system G = (L, R, P, C) for an NNC polyhedron is said to be minimized if there does not exist another generator system for the same polyhedron whose components are subsets of L, R, P and C, at least one of them being a proper subset.
Any NNC polyhedron can be described by using a constraint system C, a generator system G, or both by means of the double description pair (DD pair) (C, G). The double description method is a collection of well-known as well as novel theoretical results showing that, given one kind of representation, there are algorithms for computing a representation of the other kind and for minimizing both representations by removing redundant constraints/generators.
Such changes of representation form a key step in the implementation of many operators on NNC polyhedra: this is because some operators, such as intersections and poly-hulls, are provided with a natural and efficient implementation when using one of the representations in a DD pair, while being rather cumbersome when using the other.
As indicated above, when an NNC polyhedron is necessarily closed, we can ignore the closure points of its generator system (as every closure point is also a point) and represent the polyhedron by its lines, rays and points alone. Similarly, it can be represented by a constraint system that has no strict inequalities. Thus a necessarily closed polyhedron can have a smaller representation than one that is not necessarily closed. Moreover, operators restricted to work on closed polyhedra only can be implemented more efficiently. For this reason the library provides two alternative ``topological kinds'' for a polyhedron, NNC and C. We shall abuse terminology by referring to the topological kind of a polyhedron as its topology.
In the library, the topology of each polyhedron object is fixed once and for all at the time of its creation and must be respected when performing operations on the polyhedron.
Unless it is otherwise stated, all the polyhedra, constraints and/or generators in any library operation must obey the following topological-compatibility rules:
Wherever possible, the library provides methods that, starting from a polyhedron of a given topology, build the corresponding polyhedron having the other topology.
The space dimension of an NNC polyhedron (resp., a C polyhedron) is the dimension n ∈ N of the corresponding vector space R^n. The space dimension of constraints, generators and other objects of the library is defined similarly.
Unless it is otherwise stated, all the polyhedra, constraints and/or generators in any library operation must obey the following (space) dimension-compatibility rules:
While the space dimension of a constraint, a generator or a system thereof is automatically adjusted when needed, the space dimension of a polyhedron can only be changed by explicit calls to operators provided for that purpose.
A finite set of points {x1, ..., xk} ⊆ R^n is affinely independent if, for all λ1, ..., λk ∈ R, the system of equations λ1 x1 + ... + λk xk = 0, λ1 + ... + λk = 0
implies that λ1 = ... = λk = 0.
The maximum number of affinely independent points in R^n is n + 1.
A non-empty NNC polyhedron has affine dimension k ∈ N if the maximum number of affinely independent points in the polyhedron is k + 1.
We remark that the above definition only applies to polyhedra that are not empty, so that the affine dimension ranges between 0 and the space dimension n. By convention, the affine dimension of an empty polyhedron is 0 (even though the ``natural'' generalization of the definition above would imply that the affine dimension of an empty polyhedron is -1).
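The affine dimension of the hull of a finite set of points can be computed as the rank of the vectors obtained by subtracting one of the points from the others; the following is a Python sketch of that computation (floating-point Gaussian elimination, illustrative only):

```python
def affine_dimension(points):
    """Affine dimension of the hull of `points`: the rank of {p - p0}."""
    if not points:
        return 0  # the conventional value for the empty polyhedron
    p0 = points[0]
    vecs = [[x - y for x, y in zip(p, p0)] for p in points[1:]]
    # Gaussian elimination to compute the rank of the difference vectors.
    rank, cols = 0, len(p0)
    for col in range(cols):
        pivot = next((r for r in range(rank, len(vecs))
                      if abs(vecs[r][col]) > 1e-9), None)
        if pivot is None:
            continue
        vecs[rank], vecs[pivot] = vecs[pivot], vecs[rank]
        for r in range(len(vecs)):
            if r != rank and abs(vecs[r][col]) > 1e-9:
                f = vecs[r][col] / vecs[rank][col]
                vecs[r] = [a - f * b for a, b in zip(vecs[r], vecs[rank])]
        rank += 1
    return rank

# A square has affine dimension 2; three collinear points span dimension 1.
print(affine_dimension([(0, 0), (1, 0), (0, 1), (1, 1)]))
print(affine_dimension([(0, 0), (1, 1), (2, 2)]))
```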
An NNC polyhedron is called rational if it can be represented by a constraint system where all the constraints have rational coefficients. It has been shown that an NNC polyhedron is rational if and only if it can be represented by a generator system where all the generators have rational coefficients.
The library only supports rational polyhedra. The restriction to rational numbers applies not only to polyhedra, but also to the other numeric arguments that may be required by the operators considered, such as the coefficients defining (rational) affine transformations.
In this section we briefly describe operations on NNC polyhedra that are provided by the library.
For any pair of NNC polyhedra P1 and P2, the intersection of P1 and P2, defined as the set intersection P1 ∩ P2, is the biggest NNC polyhedron included in both P1 and P2; similarly, the convex polyhedral hull (or poly-hull) of P1 and P2 is the smallest NNC polyhedron that includes both P1 and P2. The intersection and poly-hull of any pair of closed polyhedra are also closed.
In theoretical terms, the intersection and poly-hull operators defined above are the binary meet and the binary join operators on the lattices of NNC and closed polyhedra.
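On the simplest polyhedral domain, one-dimensional boxes (intervals), the meet and join just described reduce to interval intersection and interval hull; a Python sketch (illustrative, not the PPL API):

```python
def meet(a, b):
    """Intersection of two closed intervals; None encodes the empty set."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def join(a, b):
    """Smallest interval containing both (the 1-D convex hull)."""
    return (min(a[0], b[0]), max(a[1], b[1]))

assert meet((0, 4), (2, 6)) == (2, 4)
assert join((0, 1), (3, 5)) == (0, 5)  # a strict superset of the set union
```

As the last line shows, the join is in general bigger than the set-theoretic union: the union of [0, 1] and [3, 5] is not convex, so the smallest convex superset [0, 5] is returned instead.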
For any pair of NNC polyhedra P1 and P2, the convex polyhedral difference (or poly-difference) of P1 and P2 is defined as the smallest convex polyhedron containing the set-theoretic difference of P1 and P2.
In general, even though P1 and P2 are topologically closed polyhedra, their poly-difference may be a convex polyhedron that is not topologically closed. For this reason, when computing the poly-difference of two C polyhedra, the library will enforce the topological closure of the result.
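The following Python sketch illustrates, on one-dimensional intervals, why the poly-difference of two closed sets may fail to be closed; the Boolean flags on the bounds play the role of non-strict/strict constraints (illustrative only, not PPL code):

```python
def poly_difference(a, b):
    """Smallest convex set containing the set difference of two intervals.

    Each bound of the result carries a `closed` flag, so a not
    necessarily closed (NNC) interval such as [0, 2) is representable.
    None encodes the empty set.
    """
    (alo, ahi), (blo, bhi) = a, b
    if blo <= alo and ahi <= bhi:        # a is entirely covered by b
        return None
    if alo < blo <= ahi and ahi <= bhi:  # right part of a removed
        return ((alo, True), (blo, False))   # [alo, blo)
    if blo <= alo <= bhi < ahi:          # left part of a removed
        return ((bhi, False), (ahi, True))   # (bhi, ahi]
    return ((alo, True), (ahi, True))    # hull of what remains of a

# [0, 3] \ [2, 5] leaves [0, 2): convex, but not topologically closed.
print(poly_difference((0, 3), (2, 5)))
```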
Viewing a polyhedron as a set of tuples (its points), it is sometimes useful to consider the set of tuples obtained by concatenating an ordered pair of polyhedra. Formally, the concatenation of the polyhedra P ⊆ R^n and Q ⊆ R^m (taken in this order) is the polyhedron R ⊆ R^{n+m} such that R = { (x1, ..., xn, y1, ..., ym)^T | (x1, ..., xn)^T ∈ P, (y1, ..., ym)^T ∈ Q }.
Another way of seeing it is as follows: first embed polyhedron P into a vector space of dimension n + m and then add a suitably renamed-apart version of the constraints defining Q.
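On finite sets of points, concatenation is just tuple juxtaposition; a minimal Python sketch:

```python
def concatenate(P, Q):
    """All tuples of P extended with all tuples of Q (in this order)."""
    return {p + q for p in P for q in Q}

P = {(1, 2)}       # a subset of R^2
Q = {(3,), (4,)}   # a subset of R^1
assert concatenate(P, Q) == {(1, 2, 3), (1, 2, 4)}  # a subset of R^3
```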
The library provides two operators for adding a number i of space dimensions to an NNC polyhedron P ⊆ R^n, therefore transforming it into a new NNC polyhedron Q ⊆ R^{n+i}. In both cases, the added dimensions of the vector space are those having the highest indices.
add_space_dimensions_and_embed embeds the polyhedron P into the new vector space of dimension n + i and returns the polyhedron Q defined by all and only the constraints defining P (the variables corresponding to the added dimensions are unconstrained). For instance, when starting from a polyhedron P ⊆ R^2 and adding a third space dimension, the result will be the polyhedron Q = { (x0, x1, x2)^T ∈ R^3 | (x0, x1)^T ∈ P }.
In contrast, the operator
add_space_dimensions_and_project projects the polyhedron P into the new vector space of dimension n + i and returns the polyhedron Q whose constraint system, besides the constraints defining P, will include additional constraints on the added dimensions. Namely, the corresponding variables are all constrained to be equal to 0. For instance, when starting from a polyhedron P ⊆ R^2 and adding a third space dimension, the result will be the polyhedron Q = { (x0, x1, 0)^T ∈ R^3 | (x0, x1)^T ∈ P }.
The library provides two operators for removing space dimensions from an NNC polyhedron P ⊆ R^n, therefore transforming it into a new NNC polyhedron Q ⊆ R^m, where m ≤ n.
Given a set of variables, the operator
remove_space_dimensions removes all the space dimensions specified by the variables in the set. For instance, letting be the singleton set , then after invoking this operator with the set of variables the resulting polyhedron is
Given a space dimension less than or equal to that of the polyhedron, the operator
remove_higher_space_dimensions removes the space dimensions having indices greater than or equal to . For instance, letting defined as before, by invoking this operator with the resulting polyhedron will be
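The effect of the two removal operators on a single point can be sketched in Python (the sample point is an assumed example, not taken from the library text):

```python
def remove_space_dimensions(point, dims):
    """Project a point by dropping the coordinates whose index is in `dims`."""
    return tuple(x for i, x in enumerate(point) if i not in dims)

def remove_higher_space_dimensions(point, n):
    """Keep only the coordinates with index lower than n."""
    return point[:n]

p = (3, 1, 0, 2)
assert remove_space_dimensions(p, {1, 2}) == (3, 2)   # drop dimensions 1 and 2
assert remove_higher_space_dimensions(p, 2) == (3, 1)  # keep dimensions 0 and 1
```

On a polyhedron, the same projection is applied to every point; the library operators work directly on the constraint or generator representation instead of enumerating points.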
map_space_dimensions provided by the library maps the dimensions of the vector space according to a partial injective function defined on the set of space dimension indices. Dimensions corresponding to indices that are not mapped by the function are removed.
If the function is undefined everywhere, then the operator projects the argument polyhedron onto the zero-dimensional space R^0; otherwise the result is given by
expand_space_dimension provided by the library adds m new space dimensions to a polyhedron P ⊆ R^n, with n > 0, so that the added dimensions of the result are exact copies of a specified space dimension of P. More formally,
This operation has been proposed in [GDDetal04].
fold_space_dimensions provided by the library, given a polyhedron P ⊆ R^n, with n > 0, folds a set J of space dimensions of P into a single space dimension j, where j ∉ J. The result, whose space dimension is n minus the cardinality of J, is obtained by projecting away the dimensions in J after taking the poly-hull of the polyhedra obtained by mapping, in turn, each of the dimensions in J onto dimension j.
This operation has been proposed in [GDDetal04].
For each relation φ ⊆ R^n × R^m, we denote by φ(S) the image under φ of the set S ⊆ R^n; formally, φ(S) = { w ∈ R^m | ∃v ∈ S . (v, w) ∈ φ }.
Similarly, we denote by φ⁻¹(S') the preimage under φ of S' ⊆ R^m, that is φ⁻¹(S') = { v ∈ R^n | ∃w ∈ S' . (v, w) ∈ φ }.
If n = m, then the relation φ is said to be space dimension preserving.
The relation φ ⊆ R^n × R^n is said to be an affine relation if it can be defined by a finite conjunction of linear constraints, each relating an affine expression on the components of the second vector to an affine expression on the components of the first vector by one of the relation symbols =, ≥ or >.
As a special case, the relation φ is an affine function if and only if there exist a matrix A ∈ R^{n×n} and a vector b ∈ R^n such that, for each v, w ∈ R^n, (v, w) ∈ φ if and only if w = A v + b.
The set of NNC polyhedra is closed under the application of images and preimages of any space dimension preserving affine relation. The same property holds for the set of closed polyhedra, provided the affine relation makes no use of the strict relation symbols. Images and preimages of affine relations can be used to model several kinds of transition relations, including deterministic assignments of affine expressions, (affinely constrained) nondeterministic assignments and affine conditional guards.
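An affine function w = A v + b acts on a polyhedron by acting on its points (for a polytope, it suffices to transform its vertices); the following Python sketch applies an assumed shear transformation to the vertices of the unit square:

```python
def affine_image(A, b, points):
    """Apply the affine function v -> A v + b to each point in a set."""
    def apply(v):
        return tuple(sum(a * x for a, x in zip(row, v)) + c
                     for row, c in zip(A, b))
    return {apply(v) for v in points}

# Shear (x, y) -> (x + y, y) applied to the unit square's vertices:
A, b = [(1, 1), (0, 1)], (0, 0)
square = {(0, 0), (1, 0), (0, 1), (1, 1)}
assert affine_image(A, b, square) == {(0, 0), (1, 0), (1, 1), (2, 1)}
```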
A space dimension preserving relation can be specified by means of a shorthand notation:
As an example, assuming , the notation , where the primed variable does not occur, is meant to specify the affine relation defined by
The same relation is specified by , since occurs with coefficient 0.
The library allows for the computation of images and preimages of polyhedra under restricted subclasses of space dimension preserving affine relations, as described in the following.
Given a primed variable x'_k and an unprimed affine expression a_0 x_0 + ... + a_{n-1} x_{n-1} + b, the affine function mapping x'_k to that expression leaves all the other space dimensions unchanged. Thus the function maps any vector (v_0, ..., v_{n-1})^T to the vector obtained by replacing its k-th component with a_0 v_0 + ... + a_{n-1} v_{n-1} + b.
The affine image operator computes the affine image of a polyhedron under . For instance, suppose the polyhedron to be transformed is the square in generated by the set of points . Then, if the primed variable is and the affine expression is (so that , ), the affine image operator will translate to the parallelogram generated by the set of points with height equal to the side of the square and oblique sides parallel to the line . If the primed variable is as before (i.e., ) but the affine expression is (so that ), then the resulting polyhedron is the positive diagonal of the square.
The affine preimage operator computes the affine preimage of a polyhedron under . For instance, suppose now that we apply the affine preimage operator as given in the first example using primed variable and affine expression to the parallelogram ; then we get the original square back. If, on the other hand, we apply the affine preimage operator as given in the second example using primed variable and affine expression to , then the resulting polyhedron is the stripe obtained by adding the line to polyhedron .
Observe that, provided the coefficient of the considered variable in the affine expression is non-zero, the affine function is invertible.
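The single-update affine image, and its inverse in the invertible case, can be sketched in Python on a finite set of generating points (illustrative only; k is the index of the updated dimension and the sample transformation x := x + y is an assumed example):

```python
def single_update_image(points, k, coeffs, b):
    """Replace coordinate k of each point with sum(coeffs[i]*v[i]) + b."""
    out = set()
    for v in points:
        new_k = sum(c * x for c, x in zip(coeffs, v)) + b
        out.add(tuple(new_k if i == k else x for i, x in enumerate(v)))
    return out

def single_update_preimage(points, k, coeffs, b):
    """Invert the update; requires coeffs[k] != 0 (the invertible case)."""
    a_k = coeffs[k]
    out = set()
    for v in points:
        # Solve  v[k] = coeffs . v_old + b  for the old k-th coordinate;
        # the other coordinates are unchanged by the update.
        rest = sum(c * x for i, (c, x) in enumerate(zip(coeffs, v)) if i != k)
        old_k = (v[k] - b - rest) / a_k
        out.add(tuple(old_k if i == k else x for i, x in enumerate(v)))
    return out

square = {(0.0, 0.0), (3.0, 0.0), (0.0, 3.0), (3.0, 3.0)}
sheared = single_update_image(square, 0, (1, 1), 0)   # x := x + y
assert single_update_preimage(sheared, 0, (1, 1), 0) == square
```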
Given a primed variable x'_k and two unprimed affine expressions lb and ub, the bounded affine relation lb ≤ x'_k ≤ ub is defined as
Let F be the set of floating-point numbers representable in a certain format and let IF be the set of real intervals with bounds in F. We can define a floating-point interval linear form as an expression i_0 + i_1 x_1 + ... + i_n x_n,
where i_j ∈ IF for each j ∈ {0, 1, ..., n}.
Given such a linear form L and a primed variable x'_k, the affine form image operator computes the bounded affine image of a polyhedron under lb ≤ x'_k ≤ ub, where ub and lb are the upper and the lower bound of L, respectively.
Similarly, the generalized affine relation lhs' ⋈ rhs, where lhs and rhs are affine expressions and ⋈ is a relation symbol, is defined as
When ⋈ is the equality symbol and lhs is a single variable, the above affine relation becomes equivalent to the single-update affine function (hence the name given to this operator). It is worth stressing that the notation is not symmetric, because the variables occurring in expression lhs are interpreted as primed variables, whereas those occurring in rhs are unprimed; hence, swapping the two expressions does not yield an equivalent relation in general.
unconstrain computes the cylindrification [HMT71] of a polyhedron with respect to one of its variables. Formally, the cylindrification P' of an NNC polyhedron P ⊆ R^n with respect to the variable having index i is defined as follows: P' = { w ∈ R^n | ∃v ∈ P . ∀j ≠ i : w_j = v_j }.
Cylindrification is an idempotent operation; in particular, note that the computed result has the same space dimension as the original polyhedron. A variant of the operator above allows for the cylindrification of a polyhedron with respect to a finite set of variables.
The time-elapse operator has been defined in [HPR97]. Actually, the time-elapse operator provided by the library is a slight generalization of that one, since it also works on NNC polyhedra. For any two NNC polyhedra P and Q, the time-elapse between P and Q is the smallest NNC polyhedron containing the set { p + λq ∈ R^n | p ∈ P, q ∈ Q, λ ∈ R_+ }.
Note that, if P and Q are closed polyhedra, the above set is also a closed polyhedron. In contrast, when the polyhedra are not topologically closed, the above set might not be an NNC polyhedron.
Let P and Q be NNC polyhedra. Then:
Notice that an enlargement need not be a simplification, and vice versa; moreover, the identity function is (trivially) a meet-preserving enlargement and simplification.
The library provides a binary operator (simplify_using_context) for the domain of NNC polyhedra that returns a polyhedron which is a meet-preserving enlargement and simplification of its first argument, using the second argument as context.
The concept of meet-preserving enlargement and simplification also applies to the other basic domains (boxes, grids, BD and octagonal shapes). See below for a definition of the concept of meet-preserving simplification for powerset domains.
The library provides operators for checking the relation holding between an NNC polyhedron and either a constraint or a generator.
Suppose P is an NNC polyhedron and C an arbitrary constraint system representing P. Suppose also that c is a constraint and S the set of points that satisfy c. The possible relations between P and c are as follows.
The polyhedron P subsumes the generator g if adding g to any generator system representing P does not change P.
The library provides two widening operators for the domain of polyhedra. The first one, that we call H79-widening, mainly follows the specification provided in the PhD thesis of N. Halbwachs [Hal79], also described in [HPR97]. Note that the computation of the H79-widening of two polyhedra P and Q requires as a precondition that P is contained in Q (the same assumption was implicitly present in the cited papers).
The second widening operator, that we call BHRZ03-widening, is an instance of the specification provided in [BHRZ03a]. This operator also requires the precondition that P is contained in Q, and it is guaranteed to provide a result which is at least as precise as the H79-widening.
Both widening operators can be applied to NNC polyhedra. The user is warned that, in such a case, the results may not closely match the geometric intuition which is at the base of the specification of the two widenings. The reason is that, in the current implementation, the widenings are not directly applied to the NNC polyhedra, but rather to their internal representations. Implementation work is in progress and future versions of the library may provide an even better integration of the two widenings with the domain of NNC polyhedra.
If the polyhedra computed at two successive steps of an upward iteration sequence are stored in variables
p and
q, respectively, then the call
q.H79_widening_assign(p) will assign the widened polyhedron to variable
q. Namely, it is the bigger polyhedron q which is overwritten by the result of the widening. The smaller polyhedron is not modified, so that the usual convergence test (the inclusion of the new iterate in the previous one) can be easily coded as
p.contains(q). Note that, in the above context, a call such as
p.H79_widening_assign(q) is likely to result in undefined behavior, since the precondition will be missed (unless it happens that p and q denote the same polyhedron). The same observation holds for all flavors of widenings and extrapolation operators that are implemented in the library and for all the language interfaces.
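The calling convention above can be mimicked on a toy one-dimensional interval domain. This is a sketch for illustration only: the `Iterate` type and its methods are hypothetical and are not part of the PPL API; the only point is that the widening is invoked on the newer, bigger iterate, which is overwritten, while the argument is left untouched.

```cpp
#include <limits>

// Toy one-dimensional "domain": a closed interval [lo, hi] over doubles.
// Mirrors the convention of q.H79_widening_assign(p): the newer, bigger
// iterate q is overwritten, the previous iterate p is not modified.
struct Iterate {
  double lo, hi;

  bool contains(const Iterate& other) const {
    return lo <= other.lo && other.hi <= hi;
  }

  // Precondition: *this contains p (p is the previous, smaller iterate).
  void widening_assign(const Iterate& p) {
    const double inf = std::numeric_limits<double>::infinity();
    if (lo < p.lo) lo = -inf;  // unstable lower bound: drop it
    if (hi > p.hi) hi = inf;   // unstable upper bound: drop it
  }
};
```

As in the PPL, the convergence test of the iteration sequence can then be coded as `p.contains(q)`.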
When approximating a fixpoint computation using widening operators, a common tactic to improve the precision of the final result is to delay the application of widening operators. The usual approach is to fix a parameter k and only apply widenings starting from the k-th iteration.
The library also supports an improved widening delay strategy, that we call widening with tokens [BHRZ03a]. A token is a sort of wild card allowing for the replacement of the widening application by the exact upper bound computation: the token is used (and thus consumed) only when the widening would have resulted in an actual precision loss (as opposed to the potential precision loss of the classical delay strategy). Thus, all widening operators can be supplied with an optional argument, recording the number of available tokens, which is decremented when tokens are used. The approximated fixpoint computation will start with a fixed number of tokens, which will be used if and when needed. When there are no tokens left, the widening is always applied.
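The token mechanism can be sketched on a toy interval domain (all names here are hypothetical, not the PPL API). Since the precondition guarantees that the previous iterate is contained in the new one, the exact upper bound of the two iterates is the new iterate itself, so "replacing the widening by the exact upper bound" amounts to leaving the new iterate unchanged:

```cpp
#include <limits>

// Sketch of "widening with tokens" on a toy interval domain.
struct Itv {
  double lo, hi;
  bool operator==(const Itv& o) const { return lo == o.lo && hi == o.hi; }
};

Itv widen(Itv q, const Itv& p) {
  const double inf = std::numeric_limits<double>::infinity();
  if (q.lo < p.lo) q.lo = -inf;
  if (q.hi > p.hi) q.hi = inf;
  return q;
}

void widening_assign_with_tokens(Itv& q, const Itv& p, unsigned& tokens) {
  Itv widened = widen(q, p);
  if (tokens > 0 && !(widened == q)) {
    --tokens;  // the widening would cause an actual precision loss:
    return;    // pay one token and keep q, the exact upper bound
  }
  q = widened; // no tokens left (or no loss): apply the widening
}
```

When the widening would not lose precision, no token is consumed, matching the "used only when needed" behavior described above.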
Besides the two widening operators, the library also implements several extrapolation operators, which differ from widenings in that their use along an upper iteration sequence does not ensure convergence in a finite number of steps.
In particular, for each of the two widenings there is a corresponding limited extrapolation operator, which can be used to implement the widening ``up to'' technique as described in [HPR97]. Each limited extrapolation operator takes a constraint system as an additional parameter and uses it to improve the approximation yielded by the corresponding widening operator. Note that a convergence guarantee can only be obtained by suitably restricting the set of constraints that can occur in this additional parameter. For instance, in [HPR97] this set is fixed once and for all before starting the computation of the upward iteration sequence.
The bounded extrapolation operators further enhance each one of the limited extrapolation operators described above by intersecting the result of the limited extrapolation operation with the box obtained as a result of applying the CC76-widening to the smallest boxes enclosing the two argument polyhedra.
The PPL provides support for computations on non-relational domains, called boxes, and also the interval domains used for their representation.
An interval in R is a pair of bounds, called lower and upper. Each bound can be either (1) closed and bounded, (2) open and bounded, or (3) open and unbounded. If the bound is bounded, then it has a value in R. For each vector a in R^n and scalar b in R, and for each relation symbol from {=, >=, >}, the constraint <a, x> relop b is said to be an interval constraint if there exists an index i such that a_j = 0 for all j != i. Thus each interval constraint that is not a tautology or inconsistent has the form x_i relop b or -x_i relop b, with b in Q.
Letting B = (I_1, ..., I_n) be a sequence of n intervals and e_i be the vector in R^n with 1 in the i'th position and zeroes in every other position: if the lower bound of the i'th interval in B is bounded, the corresponding interval constraint is defined as <e_i, x> relop b, where b is the value of the bound and relop is >= if it is a closed bound and > if it is an open bound. Similarly, if the upper bound of the i'th interval in B is bounded, the corresponding interval constraint is defined as <-e_i, x> relop -b, where b is the value of the bound and relop is >= if it is a closed bound and > if it is an open bound.
A convex polyhedron P is said to be a box if and only if either P is the set of solutions to a finite set of interval constraints or the space dimension n is 0. Therefore any n-dimensional box P in R^n with n > 0 can be represented by a sequence B of n intervals, and P is a closed polyhedron if every bound in the intervals in B is either closed and bounded or open and unbounded.
The library provides a widening operator for boxes. Given two sequences of intervals defining two n-dimensional boxes, the CC76-widening applies, for each corresponding interval and bound, the interval constraint widening defined in [CC76]. For extra precision, this incorporates the widening with thresholds as defined in [BCCetal02], using a set of default threshold values.
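The upper-bound half of a threshold-based interval widening can be sketched as follows (an illustration of the idea behind [CC76] and [BCCetal02], not the PPL implementation; the function name and threshold set are hypothetical). An unstable upper bound is pushed to the next threshold above the new value instead of jumping directly to plus infinity; the lower bound is treated symmetrically in a full implementation.

```cpp
#include <limits>
#include <vector>

// Widen the upper bound of an interval using a sorted list of thresholds.
double widen_upper(double old_hi, double new_hi,
                   const std::vector<double>& thresholds /* sorted */) {
  if (new_hi <= old_hi)
    return old_hi;                   // stable bound: keep it
  for (double t : thresholds)
    if (new_hi <= t)
      return t;                      // snap to the first threshold above
  return std::numeric_limits<double>::infinity();  // no threshold left
}
```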
The PPL provides support for computations on numerical domains that, in selected contexts, can achieve a better precision/efficiency ratio with respect to the corresponding computations on a ``fully relational'' domain of convex polyhedra. This is achieved by restricting the syntactic form of the constraints that can be used to describe the domain elements.
For each vector a in R^n and scalar b in R, and for each relation symbol from {=, >=}, the linear constraint <a, x> relop b is said to be a bounded difference constraint if there exist two indices i and j such that:
A convex polyhedron P is said to be a bounded difference shape (BDS, for short) if and only if either P can be expressed as the intersection of a finite number of bounded difference constraints or the space dimension is 0.
For each vector a in R^n and scalar b in R, and for each relation symbol from {=, >=}, the linear constraint <a, x> relop b is said to be an octagonal constraint if there exist two indices i and j such that:
A convex polyhedron P is said to be an octagonal shape (OS, for short) if and only if either P can be expressed as the intersection of a finite number of octagonal constraints or the space dimension is 0.
Note that, since any bounded difference constraint is also an octagonal constraint, any BDS is also an OS. The name ``octagonal'' comes from the fact that, in a vector space of dimension 2, a bounded OS can have at most eight sides.
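The syntactic restriction distinguishing these constraint classes can be sketched as a purely illustrative check on the coefficient vector of a constraint (these helper functions are hypothetical, not PPL code): an octagonal constraint has at most two nonzero coefficients, each equal to +1 or -1, while a bounded difference additionally requires two nonzero coefficients to have opposite signs, so that the constraint mentions x_i - x_j.

```cpp
#include <vector>

// Check whether the coefficient vector a of <a, x> <= b is octagonal:
// at most two nonzero coefficients, each +1 or -1.
bool is_octagonal(const std::vector<int>& a) {
  int nonzero = 0;
  for (int c : a) {
    if (c == 0)
      continue;
    if (c != 1 && c != -1)
      return false;
    ++nonzero;
  }
  return nonzero <= 2;
}

// A bounded difference is an octagonal constraint whose two unit
// coefficients (when both present) cancel, i.e., have opposite signs.
bool is_bounded_difference(const std::vector<int>& a) {
  if (!is_octagonal(a))
    return false;
  int sum = 0, nonzero = 0;
  for (int c : a)
    if (c != 0) {
      sum += c;
      ++nonzero;
    }
  return nonzero < 2 || sum == 0;
}
```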
By construction, any BDS or OS is always topologically closed. Under the usual set inclusion ordering, the set of all BDSs (resp., OSs) on the vector space is a lattice having the empty set and the universe as the smallest and the biggest elements, respectively. In theoretical terms, it is a meet sub-lattice of ; moreover, the lattice of BDSs is a meet sublattice of the lattice of OSs. The least upper bound of a finite set of BDSs (resp., OSs) is said to be their bds-hull (resp., oct-hull).
As far as the representation of the rational inhomogeneous term of each bounded difference or octagonal constraint is concerned, several rounding-aware implementation choices are available, including:
The user interface for BDSs and OSs is meant to be as similar as possible to the one developed for the domain of closed polyhedra: in particular, all operators on polyhedra are also available for the domains of BDSs and OSs, even though they are typically characterized by a lower degree of precision. For instance, the bds-difference and oct-difference operators return (the smallest) over-approximations of the set-theoretical difference operator on the corresponding domains. In the case of (generalized) images and preimages of affine relations, suitable (possibly not-optimal) over-approximations are computed when the considered relations cannot be precisely modeled by only using bounded differences or octagonal constraints.
For the domains of BDSs and OSs, the library provides a variant of the widening operator for convex polyhedra defined in [CH78]. The implementation follows the specification in [BHMZ05a,BHMZ05b], resulting in an operator which is well-defined on the corresponding domain (i.e., it does not depend on the internal representation of BDSs or OSs), while still ensuring convergence in a finite number of steps.
The library also implements an extension of the widening operator for intervals as defined in [CC76]. The reader is warned that such an extension, even though being well-defined on the domain of BDSs and OSs, is not provided with a convergence guarantee and is therefore an extrapolation operator.
In this section we introduce rational grids as provided by the library. See also [BDHetal05] for a detailed description of this domain.
The library supports two representations for the grids domain: congruence systems and grid generator systems. We first describe the linear congruence relations which form the elements of a congruence system.
For any a, b and m in Q, the notation a =_m b denotes the congruence: there exists k in Z such that a - b = k*m.
For each vector a in Q^n and scalars b, m in Q, the notation <a, x> =_m b stands for the linear congruence relation in R^n defined by the set of vectors
{ v in R^n | there exists k in Z such that <a, v> = b + k*m };
when m != 0, the relation is said to be proper; <a, x> =_0 b (i.e., when m = 0) denotes the equality <a, x> = b. The scalar m is called the frequency or modulus and b the base value of the relation. Thus, provided a != 0, the relation <a, x> =_m b defines the set of affine hyperplanes
{ <a, x> = b + k*m | k in Z };
if a = 0, the relation defines the universe R^n when b =_m 0, and the empty set otherwise.
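Membership in a linear congruence relation can be illustrated with a small check on integer data (a sketch with a hypothetical helper, not PPL code): a point satisfies the relation when its scalar product with the coefficient vector differs from the base value by an integer multiple of the modulus, and the relation degenerates into an equality when the modulus is 0.

```cpp
#include <cstddef>
#include <vector>

// Does the integer point v satisfy  <a, x> = b (mod m) ?
bool satisfies_congruence(const std::vector<long>& a,
                          const std::vector<long>& v,
                          long b, long m) {
  long s = 0;                       // scalar product <a, v>
  for (std::size_t i = 0; i < a.size(); ++i)
    s += a[i] * v[i];
  if (m == 0)
    return s == b;                  // m == 0: plain equality
  return (s - b) % m == 0;          // otherwise: multiple of the modulus
}
```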
The set L is a rational grid if and only if either L is the set of vectors in R^n that satisfy a finite system C of congruence relations, or the space dimension n is 0.
We also say that L is described by C and that C is a congruence system for L.
The grid domain is the set of all rational grids described by finite sets of congruence relations in R^n.
If the congruence system C describes the empty grid, then we say that C is inconsistent. For example, the congruence system { 0 =_0 1 }, meaning that 0 = 1, and the system { v =_2 0, v =_2 1 }, for any variable v, meaning that the value of v must be both even and odd, are both inconsistent since both describe the empty grid.
When ordering grids by the set inclusion relation, the empty set and the vector space R^n (which is described by the empty set of congruence relations) are, respectively, the smallest and the biggest elements of the grid domain. The vector space R^n is also called the universe grid.
In set theoretical terms, the grid domain is a lattice under set inclusion.
Let X = {x_1, ..., x_k} be a finite set of vectors in R^n. For all scalars m_1, ..., m_k in Z, the vector m_1*x_1 + ... + m_k*x_k is said to be an integer combination of the vectors in X.
We denote by (resp., ) the set of all the integer (resp., integer and affine) combinations of the vectors in .
Let be a grid. Then
We can generate any rational grid in from a finite subset of its points, parameters and lines; each point in a grid is obtained by adding a linear combination of its generating lines to an integral combination of its parameters and an integral affine combination of its generating points.
If are each finite subsets of and
where the symbol '+' denotes Minkowski's sum, then L is a rational grid (see Section 4.4 in [Sch99] and also Proposition 8 in [BDHetal05]). The 3-tuple of finite sets of lines, parameters and points is then said to be a grid generator system for L.
Note that the grid if and only if the set of grid points . If , then where, for some , .
A minimized congruence system for is such that, if is another congruence system for , then . Note that a minimized congruence system for a non-empty grid has at most congruence relations.
Similarly, a minimized grid generator system for is such that, if is another grid generator system for , then and . Note that a minimized grid generator system for a grid has no more than a total of grid lines, parameters and points.
As for convex polyhedra, any grid can be described by using a congruence system for , a grid generator system for , or both by means of the double description pair (DD pair) . The double description method for grids is a collection of theoretical results very similar to those for convex polyhedra showing that, given one kind of representation, there are algorithms for computing a representation of the other kind and for minimizing both representations.
As for convex polyhedra, such changes of representation form a key step in the implementation of many operators on grids such as, for example, intersection and grid join.
The space dimension of a grid is the dimension of the corresponding vector space . The space dimension of congruence relations, grid generators and other objects of the library is defined similarly.
A non-empty grid L has affine dimension k, denoted by dim(L) = k, if the maximum number of affinely independent points in L is k + 1. The affine dimension of an empty grid is defined to be 0. Thus we have 0 <= dim(L) <= n.
In general, the operations on rational grids are the same as those for the other PPL domains and the definitions of these can be found in Section Operations on Convex Polyhedra. Below we just describe those operations that have features or behavior that is in some way special to the grid domain.
As for convex polyhedra (see Single-Update Affine Functions), the library provides affine image and preimage operators for grids: given a variable and linear expression , these determine the affine transformation that transforms any point in a grid to
The affine image operator computes the affine image of a grid under . For instance, suppose the grid to be transformed is the non-relational grid in generated by the set of grid points . Then, if the considered variable is and the linear expression is (so that , ), the affine image operator will translate to the grid generated by the set of grid points which is the grid generated by the grid point and parameters ; or, alternatively defined by the congruence system . If the considered variable is as before (i.e., ) but the linear expression is (so that ), then the resulting grid is the grid containing all the points whose coordinates are integral multiples of 3 and lie on line .
The affine preimage operator computes the affine preimage of a grid under . For instance, suppose now that we apply the affine preimage operator as given in the first example using variable and linear expression to the grid ; then we get the original grid back. If, on the other hand, we apply the affine preimage operator as given in the second example using variable and linear expression to , then the resulting grid will consist of all the points in where the coordinate is an integral multiple of 3.
Observe that provided the coefficient of the considered variable in the linear expression is non-zero, the affine transformation is invertible.
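The effect of an invertible affine map on a grid can be illustrated in one dimension (a toy sketch with hypothetical names, not the PPL implementation). Representing a one-dimensional grid by a base value and a modulus, the image under x -> a*x + c (with a nonzero) is again a grid: the base value is mapped through the function and the modulus is scaled by |a|.

```cpp
#include <cstdlib>

// A 1-D integer grid { base + k * modulus | k in Z };
// modulus == 0 denotes the single point base.
struct Grid1D {
  long base, modulus;
};

Grid1D affine_image(Grid1D g, long a, long c) {
  return Grid1D{a * g.base + c, std::labs(a) * g.modulus};
}

bool grid_contains(Grid1D g, long x) {
  if (g.modulus == 0)
    return x == g.base;
  return (x - g.base) % g.modulus == 0;
}
```

Because the map is invertible, applying the inverse map x -> (x - c)/a to the image recovers the original grid, mirroring the affine preimage behavior described above.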
Similarly to convex polyhedra (see Generalized Affine Relations), the library provides two other grid operators that are generalizations of the single update affine image and preimage operators for grids. The generalized affine image operator , where and are affine expressions and , is defined as
Note that, when the frequency is 0 and the left-hand side is a single variable v, so that the transfer function is an equality, the above operator is equivalent to the application of the standard affine image of the grid with respect to the variable v and the affine expression on the right-hand side.
Let L be any non-empty grid and e be a linear expression. If, for some f in Q, all the points in L satisfy the congruence e =_f b for some b in Q, then the maximum such f is called the frequency of L with respect to e.
The frequency operator provided by the library returns both the frequency f and a value b such that all the points of L satisfy the congruence e =_f b.
Observe that the above definition also applies to other simple objects in the library like polyhedra, octagonal shapes, bd-shapes and boxes, and in such cases the definition of frequency can be simplified. For instance, the frequency for an object P is defined if and only if there is a unique value b such that P saturates the equality e = b; in this case the frequency is 0 and the value returned is b.
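On a finite sample of the values taken by a linear expression, the frequency can be computed as a gcd (a conceptual sketch, not the PPL algorithm, which works on the grid representation rather than on samples): all sampled values are congruent modulo the gcd of their differences from the first value.

```cpp
#include <cstddef>
#include <cstdlib>
#include <numeric>
#include <vector>

// Frequency of a sampled expression: gcd of the differences from the
// first value. A result of 0 means the expression is constant.
long frequency(const std::vector<long>& vals) {
  long f = 0;
  for (std::size_t i = 1; i < vals.size(); ++i)
    f = std::gcd(f, std::labs(vals[i] - vals[0]));
  return f;
}
```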
For any two grids , the time-elapse between and , denoted , is the grid
The library provides operators for checking the relation holding between a grid and a congruence, a grid generator, a constraint or a (polyhedron) generator.
Suppose L is a grid and C an arbitrary congruence system representing L. Suppose also that c is a congruence relation. The possible relations between L and c are as follows.
For the relation between L and a constraint, suppose that c is a constraint and S the set of points that satisfy c. The possible relations between L and c are as follows.
A grid L subsumes a grid generator g if adding g to any grid generator system representing L does not change L.
A grid L subsumes a (polyhedron) point or closure point if adding the corresponding grid point to any grid generator system representing L does not change L. A grid L subsumes a (polyhedron) ray or line if adding the corresponding grid line to any grid generator system representing L does not change L.
The operator wrap_assign provided by the library allows for the wrapping of a subset of the set of space dimensions so as to fit the given bounded integer type and have the specified overflow behavior. In order to maximize the precision of this operator for grids, its exact behavior differs in some respects from that for the other simple classes of geometric descriptors.
Suppose L is a grid and J a subset of the set of space dimensions {0, ..., n-1}. Suppose also that the width of the bounded integer type is w, so that the range of values is [0, 2^w) if the type is unsigned and [-2^(w-1), 2^(w-1)) otherwise. Consider a space dimension j in J and a variable v for dimension j.
If the value in L for the variable v is a constant in the range of the type, then it is unchanged. Otherwise the result of the operation on L will depend on the specified overflow behavior.
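The arithmetic behind wrapping a single constant value can be sketched as follows (an illustration only; the PPL operator works on whole sets of values and on several space dimensions at once, and the helper below is hypothetical):

```cpp
// Wrap a mathematical integer into the range of a w-bit machine integer:
// [0, 2^w) for an unsigned type, [-2^(w-1), 2^(w-1)) for a signed one.
long wrap(long v, unsigned w, bool is_signed) {
  const long modulus = 1L << w;
  long r = v % modulus;
  if (r < 0)
    r += modulus;                  // canonical representative in [0, 2^w)
  if (is_signed && r >= modulus / 2)
    r -= modulus;                  // shift into the signed range
  return r;
}
```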
The library provides grid widening operators for the domain of grids. The congruence widening and generator widening follow the specifications provided in [BDHetal05]. The third widening uses either the congruence or the generator widening, the exact rule governing this choice being left to the implementation. Note that, as for the widenings provided for convex polyhedra, all the operations provided by the library for computing a widening of two grids require as a precondition that the first grid is contained in the second.
This is as for widening with tokens for convex polyhedra.
Besides the widening operators, the library also implements several extrapolation operators, which differ from widenings in that their use along an upper iteration sequence does not ensure convergence in a finite number of steps.
In particular, for each grid widening that is provided, there is a corresponding limited extrapolation operator, which can be used to implement the widening ``up to'' technique as described in [HPR97]. Each limited extrapolation operator takes a congruence system as an additional parameter and uses it to improve the approximation yielded by the corresponding widening operator. Note that, as in the case for convex polyhedra, a convergence guarantee can only be obtained by suitably restricting the set of congruence relations that can occur in this additional parameter.
The PPL provides the finite powerset construction; this takes a pre-existing domain and upgrades it to one that can represent disjunctive information (by using a finite number of disjuncts). The construction follows the approach described in [Bag98], also summarized in [BHZ04] where there is an account of generic widenings for the powerset domain (some of which are supported in the pointset powerset domain instantiation of this construction described in Section The Pointset Powerset Domain).
The domain is built from a pre-existing base-level domain D, which must include an entailment relation 'entails', a meet operation, a top element and a bottom element.
A set S of elements of D is called non-redundant with respect to the entailment relation if and only if the bottom element is not in S and no element of S entails a distinct element of S. The set of finite non-redundant subsets of D is denoted by P_fn(D). The function Omega, called Omega-reduction, maps a finite set into its non-redundant counterpart; it is defined, for each finite subset S of D, by
Omega(S) = S \ { d in S | d is the bottom element, or d strictly entails some d' in S },
where "d strictly entails d'" denotes that d entails d' and d is different from d'.
As the intended semantics of a powerset domain element S is the disjunction of the semantics of its elements, the finite set S is semantically equivalent to the non-redundant set Omega(S); the elements of S will be called disjuncts. The restriction to finite subsets reflects the fact that here disjunctions are implemented by explicit collections of disjuncts. As a consequence of this restriction, for any finite set S, Omega(S) is the (finite) set of the maximal non-bottom elements of S.
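Omega-reduction can be sketched on a toy base-level domain of closed one-dimensional intervals ordered by inclusion (an illustration of the definition above, not the PPL implementation; all names are hypothetical): bottom (empty) disjuncts are dropped, and so is any disjunct entailed by another one, keeping only the first occurrence of duplicates.

```cpp
#include <cstddef>
#include <vector>

struct Itv {
  double lo, hi;
};

bool is_empty(const Itv& d) { return d.lo > d.hi; }

// d1 entails d2 when d1 is below d2 in the inclusion order.
bool entails(const Itv& d1, const Itv& d2) {
  return is_empty(d1) || (d2.lo <= d1.lo && d1.hi <= d2.hi);
}

std::vector<Itv> omega_reduce(const std::vector<Itv>& s) {
  std::vector<Itv> result;
  for (std::size_t i = 0; i < s.size(); ++i) {
    if (is_empty(s[i]))
      continue;                       // bottom never contributes
    bool redundant = false;
    for (std::size_t j = 0; j < s.size() && !redundant; ++j) {
      if (i == j || !entails(s[i], s[j]))
        continue;
      // Equal disjuncts: keep only the first occurrence.
      redundant = entails(s[j], s[i]) ? (j < i) : true;
    }
    if (!redundant)
      result.push_back(s[i]);
  }
  return result;
}
```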
The finite powerset domain over a domain D is the set P_fn(D) of all finite non-redundant subsets of D. The domain includes an approximation ordering defined so that one powerset element is below another if and only if each disjunct of the first entails some disjunct of the second.
Therefore the top element is the singleton containing the top element of D, and the bottom element is the empty set.
An Omega-reduction can be explicitly requested by calling the operator
omega_reduce(), e.g., before performing the output of a powerset element. Note that all the documented operators automatically perform Omega-reductions on their arguments, when needed or appropriate.
In this section we briefly describe the generic operations on Powerset Domains that are provided by the library for any given base-level domain .
Given two powerset elements, the meet and upper bound operators provided by the library return, respectively, the Omega-reduced set of all pairwise meets of their disjuncts and the Omega-reduced set union.
Given a powerset element S and a base-level element d, the add disjunct operator provided by the library returns the powerset element Omega(S with d added).
If the given powerset element is not empty, then the collapse operator returns the singleton powerset consisting of an upper-bound of all the disjuncts.
The pointset powerset domain provided by the PPL is the finite powerset domain (defined in Section The Powerset Construction) whose base-level domain is one of the classes of semantic geometric descriptors listed in Section Semantic Geometric Descriptors.
In addition to the operations described for the generic powerset domain in Section Operations on the Powerset Construction, the PPL provides all the generic operations listed in Generic Operations on Semantic Geometric Descriptors. Here we just describe those operations that are particular to the pointset powerset domain.
Let S, S1 and S2 be Omega-reduced elements of a pointset powerset domain over the same base-level domain. Then:
The library provides a binary operator (
simplify_using_context) for the pointset powerset domain that returns a powerset which is a powerset meet-preserving, powerset-simplifying and disjunct meet-preserving simplification of its first argument using the second argument as context.
Notice that, due to the powerset simplification property, a meet-preserving powerset simplification is in general not an enlargement with respect to the ordering defined on the powerset lattice. Because of this, the operator provided by the library is only well-defined when the base-level domain is not itself a powerset domain.
Given two pointset powersets S1 and S2 over the same base-level domain and with the same space dimension, we say that S1 geometrically covers S2 if every point (in some disjunct) of S2 is also a point in some disjunct of S1. If S1 geometrically covers S2 and S2 geometrically covers S1, then we say that they are geometrically equal.
Given a pointset powerset S over a base-level semantic GD domain D, the pairwise merge operator takes pairs of distinct disjuncts in S whose upper bound in D (computed using the PPL operator
upper_bound_assign() for D) is the same as their set-theoretical union and replaces them by their union. This replacement is done recursively so that, for each pair of distinct disjuncts in the result set, their upper bound differs from their set-theoretical union.
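Pairwise merge can be sketched on a toy powerset of closed one-dimensional intervals (an illustration, not the PPL implementation; all names are hypothetical). For two closed intervals, the upper bound — their convex hull — coincides with the set-theoretical union exactly when they touch or overlap; such pairs are merged until none is left:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Itv {
  double lo, hi;
};

// For closed intervals, hull == union iff they touch or overlap.
bool hull_is_union(const Itv& a, const Itv& b) {
  return a.lo <= b.hi && b.lo <= a.hi;
}

std::vector<Itv> pairwise_merge(std::vector<Itv> s) {
  bool changed = true;
  while (changed) {                 // repeat until no pair can be merged
    changed = false;
    for (std::size_t i = 0; i < s.size() && !changed; ++i)
      for (std::size_t j = i + 1; j < s.size() && !changed; ++j)
        if (hull_is_union(s[i], s[j])) {
          s[i] = Itv{std::min(s[i].lo, s[j].lo),
                     std::max(s[i].hi, s[j].hi)};
          s.erase(s.begin() + j);   // the pair is replaced by its union
          changed = true;
        }
  }
  return s;
}
```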
The library implements a generalization of the extrapolation operator for powerset domains proposed in [BGP99]. The operator
BGP99_extrapolation_assign is made parametric by allowing for the specification of any PPL extrapolation operator for the base-level domain. Note that, even when the extrapolation operator for the base-level domain is known to be a widening on , the
BGP99_extrapolation_assign operator cannot guarantee the convergence of the iteration sequence in a finite number of steps (for a counter-example, see [BHZ04]).
The PPL library provides support for the specification of proper widening operators on the pointset powerset domain. In particular, this version of the library implements an instance of the certificate-based widening framework proposed in [BHZ03b].
A finite convergence certificate for an extrapolation operator is a formal way of ensuring that such an operator is indeed a widening on the considered domain. Given a widening operator on the base-level domain, together with the corresponding convergence certificate, the BHZ03 framework is able to lift it to a widening on the pointset powerset domain, ensuring convergence in a finite number of iterations.
Being highly parametric, the BHZ03 widening framework can be instantiated in many ways. The current implementation provides the template operator
BHZ03_widening_assign<Certificate, Widening>, which only exploits a fraction of this generality by allowing the user to specify the base-level widening function and the corresponding certificate. The widening strategy is fixed and uses two extrapolation heuristics: first, the upper bound operator for the base-level domain is tried; second, the BGP99 extrapolation operator is tried, possibly applying pairwise merging. If both heuristics fail to converge according to the convergence certificate, then an attempt is made to apply the base-level widening to the upper bound of the two arguments, possibly improving the result obtained by means of the difference operator for the base-level domain. For more details and a justification of the overall approach, see [BHZ03b] and [BHZ04].
The library provides several convergence certificates. Note that, for the domain of polyhedra, while BHRZ03_Certificate is compatible with both the BHRZ03 and the H79 widenings, H79_Certificate is only compatible with the latter. Note that using different certificates will change the results obtained, even when using the same base-level widening operator. It is also worth stressing that it is up to the user to ensure that the widening operator is actually compatible with a given convergence certificate. If such a requirement is not met, then only an extrapolation operator (with no convergence guarantee) will be obtained.
This section describes the PPL abstract domains that are used for approximating floating point computations in software analysis. We follow the approach described in [Min04] and, in more detail, in [Min05]. In the following, we consider the set of all floating point variables in the analyzed program, and we distinguish between the set of floating point numbers in the format used by the analyzer (that is, the machine running the PPL) and the set of floating point numbers in the format used by the machine that is expected to run the analyzed program. Recall that floating point numbers include the positive and negative infinities.
Generic concrete floating point expressions on are represented by the
Floating_Point_Expression abstract class. Its concrete derived classes are:
Opposite_Floating_Point_Expression, that is the negation (unary minus) of a floating point expression,
Sum_Floating_Point_Expression, that is the sum of two floating point expressions,
Difference_Floating_Point_Expression, that is the difference of two floating point expressions,
Multiplication_Floating_Point_Expression, that is the product of two floating point expressions, and
Division_Floating_Point_Expression, that is the division of two floating point expressions.
The set of all the possible values of a floating point expression at a given program point in a given abstract store can be overapproximated by a linear form with interval coefficients, that is, a linear expression of this kind:
i_0 + i_1 * v_1 + ... + i_n * v_n
where all the v_k are free floating point variables and i_0 and the i_k are intervals with boundaries in the floating point format used by the analyzer. This operation is called linearization and is performed by the
linearize method of the floating point expression classes.
Even though the intervals may be open, we will always use closed intervals in the documentation for the sake of simplicity, with the exception of unbounded intervals, which have infinite boundaries.
The
Linear_Form class provides common algebraic operations on linear forms: you can add or subtract two linear forms, and multiply or divide a linear form by a scalar. Since this section deals only with interval linear forms, our scalars will always be intervals with floating point boundaries. The operations on interval linear forms are intuitively defined coefficient-wise, in terms of the corresponding operations on intervals. Note that these operations always round the interval's lower bound towards minus infinity and the upper bound towards plus infinity in order to obtain a correct overapproximation.
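Outward rounding of interval bounds can be sketched as follows (an illustration only, not the PPL code). Instead of changing the FPU rounding mode, each bound computed in double precision is stepped one ulp outward with std::nextafter, which is sound, though slightly coarser than true directed rounding:

```cpp
#include <cmath>
#include <limits>

struct Interval {
  double lo, hi;
};

// Interval addition with outward rounding: the result is guaranteed
// to contain the exact real sum of the two intervals.
Interval add(const Interval& a, const Interval& b) {
  const double inf = std::numeric_limits<double>::infinity();
  return Interval{std::nextafter(a.lo + b.lo, -inf),
                  std::nextafter(a.hi + b.hi, inf)};
}
```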
A (composite) floating point abstract store is used to associate each floating point variable with its currently known approximation. The store is composed of two parts:
An interval abstract store is represented by a
Box with floating point boundaries, while a linear form abstract store is a map of the Standard Template Library. The
linearize method requires both stores as its arguments. Please see the documentation of floating point expression classes for more information.
The linearization of a floating point expression in the composite abstract store will be denoted by . There are two ways a linearization attempt can fail:
Three of the other abstract domains of the PPL (
BD_Shape,
Octagonal_Shape, and
Polyhedron) provide a few optimized methods to be used in the analysis of floating point computations. They are recognized by the fact that they take interval linear forms and/or interval abstract stores as their parameters.
Please see the methods' documentation for more information.
When adopting the double description method for the representation of convex polyhedra, the implementation of most of the operators may require an explicit conversion from one of the two representations into the other one, leading to algorithms having a worst-case exponential complexity. However, thanks to the adoption of lazy and incremental computation techniques, the library turns out to be rather efficient in many practical cases.
In earlier versions of the library, a number of operators were introduced in two flavors: a lazy version and an eager version, the latter having the operator name ending with
_and_minimize. In principle, only the lazy versions should be used. The eager versions were added to help a knowledgeable user obtain better performance in particular cases. Basically, by invoking the eager version of an operator, the user trades laziness for a better exploitation of the incrementality of the inner library computations. Starting from version 0.5, the lazy and incremental computation techniques have been refined to achieve a better integration: as a consequence, the lazy versions of the operators are now almost always more efficient than the eager versions.
One of the cases when an eager computation might still make sense is when the well-known fail-first principle comes into play. For instance, if you have to compute the intersection of several polyhedra and you strongly suspect that the result will become empty after a few of these intersections, then you may obtain a better performance by calling the eager version of the intersection operator, since the minimization process also enforces an emptiness check. Note anyway that the same effect can be obtained by interleaving the calls of the lazy operator with explicit emptiness checks.
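The fail-first tactic of interleaving the lazy operator with explicit emptiness checks can be sketched on plain intervals (this is not the PPL API; the helper is hypothetical): intersections are accumulated one at a time, and the computation is abandoned as soon as the partial result becomes empty.

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct Itv {
  double lo, hi;
};

// Intersect a sequence of intervals, checking for emptiness after each
// step so that a doomed computation stops as early as possible.
bool intersect_all(const std::vector<Itv>& itvs, Itv& result) {
  const double inf = std::numeric_limits<double>::infinity();
  Itv acc{-inf, inf};                   // start from the universe
  for (const Itv& it : itvs) {
    acc.lo = std::max(acc.lo, it.lo);   // one intersection step
    acc.hi = std::min(acc.hi, it.hi);
    if (acc.lo > acc.hi)
      return false;                     // empty: fail first, stop early
  }
  result = acc;
  return true;
}
```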
Starting from version 0.10, the use of the eager versions (i.e., those having names ending with _and_minimize) of these operators is deprecated; this is in preparation of their complete removal, which will occur starting from version 0.11.
In future versions of the PPL library, all practical instantiations of the disjunct domain for pointset_powerset and of the component domains for the partially_reduced_product domain will be fully supported. In version 0.10, however, these compound domains should not themselves occur as one of their argument domains, so their use comes with the following warning: the templates Pointset_Powerset<PSET> and Partially_Reduced_Product<D1, D2, R> should only be instantiated with simple (non-compound) domains for the disjunct domain template parameter PSET and the component domain template parameters D1 and D2.
The PPL library is mainly a collection of so-called ``concrete data types'': while providing the user with a clean and friendly interface, these types are not meant to be — i.e., they should not be — used polymorphically (since, e.g., most of the destructors are not declared virtual). In practice, this restriction means that the library types should not be used as public base classes to be derived from. A user wishing to extend the library types, adding new functionality, can often do so by using containment instead of inheritance; even when there is the need to override a protected method, non-public inheritance should suffice.
Most operators of the library depend on one or more parameters that are declared ``const'', meaning that they will not be changed by the application of the considered operator. Due to the adoption of lazy computation techniques, in many cases such a const-correctness guarantee only holds at the semantic level, whereas it does not necessarily hold at the implementation level. For a typical example, consider the extraction from a polyhedron of its constraint system representation. While this operation is not going to change the polyhedron, it might actually invoke the internal conversion algorithm and modify the generators representation of the polyhedron object, e.g., by reordering the generators and removing those that are detected as redundant. Thus, any previously computed reference to the generators of the polyhedron (be it a direct reference object or an indirect one, such as an iterator) will no longer be valid. For this reason, code fragments such as the following should be avoided, as they may result in undefined behavior:
As a rule of thumb, if a polyhedron plays any role in a computation (even as a const parameter), then any previously computed reference to parts of the polyhedron may have been invalidated. Note that, in the example above, the computation of the constraint system could have been placed after the uses of the iterator
i and the reference
p. Anyway, if really needed, it is always possible to take a copy of, instead of a reference to, the parts of interest of the polyhedron; in the case above, one could have taken a copy of the generator system, declaring it as a Generator_System object rather than as a const reference.
The same observations, modulo syntactic sugar, apply to the operators defined in the C interface of the library.
J. M. Bjorndalen and O. Anshus. Lessons learned in benchmarking - Floating point benchmarks: Can you trust them? In Proceedings of the Norsk informatikkonferanse 2005 (NIK 2005), pages 89-100, Bergen, Norway, 2005. Tapir Akademisk Forlag.
B. Blanchet, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, D. Monniaux, and X. Rival. Design and implementation of a special-purpose static program analyzer for safety-critical real-time embedded software. In T. Æ. Mogensen, D. A. Schmidt, and I. Hal Sudborough, editors, The Essence of Computation, Complexity, Analysis, Transformation. Essays Dedicated to Neil D. Jones [on occasion of his 60th birthday], volume 2566 of Lecture Notes in Computer Science, pages 85-108. Springer-Verlag, Berlin, 2002.
R. Bagnara, K. Dobson, P. M. Hill, M. Mundell, and E. Zaffanella. A linear domain for analyzing the distribution of numerical values. Report 2005.06, School of Computing, University of Leeds, UK, 2005.
R. Bagnara, K. Dobson, P. M. Hill, M. Mundell, and E. Zaffanella. A practical tool for analyzing the distribution of numerical values, 2006. Available at http://www.comp.leeds.ac.uk/hill/Papers/papers.html.
R. Bagnara, K. Dobson, P. M. Hill, M. Mundell, and E. Zaffanella. Grids: A domain for analyzing the distribution of numerical values. In G. Puebla, editor, Logic-based Program Synthesis and Transformation, 16th International Symposium, volume 4407 of Lecture Notes in Computer Science, pages 219-235, Venice, Italy, 2007. Springer-Verlag, Berlin.
T. Bultan, R. Gerber, and W. Pugh. Model-checking concurrent systems with unbounded integer variables: Symbolic representations, approximations, and experimental results. ACM Transactions on Programming Languages and Systems, 21(4):747-789, 1999.
R. Bagnara, P. M. Hill, E. Mazzi, and E. Zaffanella. Widening operators for weakly-relational numeric abstractions. Report
arXiv:cs.PL/0412043, 2004. Extended abstract. Contribution to the International workshop on “Numerical & Symbolic Abstract Domains” (NSAD'05, Paris, January 21, 2005). Available at http://arxiv.org/ and http://bugseng.com/products/ppl/.
R. Bagnara, P. M. Hill, E. Mazzi, and E. Zaffanella. Widening operators for weakly-relational numeric abstractions. Quaderno 399, Dipartimento di Matematica, Università di Parma, Italy, 2005. Available at http://www.cs.unipr.it/Publications/.
R. Bagnara, P. M. Hill, E. Mazzi, and E. Zaffanella. Widening operators for weakly-relational numeric abstractions. In C. Hankin and I. Siveroni, editors, Static Analysis: Proceedings of the 12th International Symposium, volume 3672 of Lecture Notes in Computer Science, pages 3-18, London, UK, 2005. Springer-Verlag, Berlin.
R. Bagnara, P. M. Hill, E. Ricci, and E. Zaffanella. Precise widening operators for convex polyhedra. In R. Cousot, editor, Static Analysis: Proceedings of the 10th International Symposium, volume 2694 of Lecture Notes in Computer Science, pages 337-354, San Diego, California, USA, 2003. Springer-Verlag, Berlin.
R. Bagnara, P. M. Hill, E. Ricci, and E. Zaffanella. Precise widening operators for convex polyhedra. Quaderno 312, Dipartimento di Matematica, Università di Parma, Italy, 2003. Available at http://www.cs.unipr.it/Publications/.
R. Bagnara, P. M. Hill, and E. Zaffanella. A new encoding and implementation of not necessarily closed convex polyhedra. Quaderno 305, Dipartimento di Matematica, Università di Parma, Italy, 2002. Available at http://www.cs.unipr.it/Publications/.
R. Bagnara, P. M. Hill, and E. Zaffanella. A new encoding of not necessarily closed convex polyhedra. In M. Carro, C. Vacheret, and K.-K. Lau, editors, Proceedings of the 1st CoLogNet Workshop on Component-based Software Development and Implementation Technology for Computational Logic Systems, pages 147-153, Madrid, Spain, 2002. Published as TR Number CLIP4/02.0, Universidad Politécnica de Madrid, Facultad de Informática.
R. Bagnara, P. M. Hill, and E. Zaffanella. A new encoding and implementation of not necessarily closed convex polyhedra. In M. Leuschel, S. Gruner, and S. Lo Presti, editors, Proceedings of the 3rd Workshop on Automated Verification of Critical Systems, pages 161-176, Southampton, UK, 2003. Published as TR Number DSSE-TR-2003-2, University of Southampton.
R. Bagnara, P. M. Hill, and E. Zaffanella. Widening operators for powerset domains. In B. Steffen and G. Levi, editors, Verification, Model Checking and Abstract Interpretation: Proceedings of the 5th International Conference (VMCAI 2004), volume 2937 of Lecture Notes in Computer Science, pages 135-148, Venice, Italy, 2003. Springer-Verlag, Berlin.
R. Bagnara, P. M. Hill, and E. Zaffanella. Widening operators for powerset domains. Quaderno 349, Dipartimento di Matematica, Università di Parma, Italy, 2004. Available at http://www.cs.unipr.it/Publications/.
R. Bagnara, P. M. Hill, and E. Zaffanella. The Parma Polyhedra Library: Toward a complete set of numerical abstractions for the analysis and verification of hardware and software systems. Quaderno 457, Dipartimento di Matematica, Università di Parma, Italy, 2006. Available at http://www.cs.unipr.it/Publications/. Also published as
arXiv:cs.MS/0612085, available from http://arxiv.org/.
R. Bagnara, P. M. Hill, and E. Zaffanella. Widening operators for powerset domains. Software Tools for Technology Transfer, 8(4/5):449-466, 2006. In the printed version of this article, all the figures have been improperly printed (rendering them useless). See [BHZ07c].
R. Bagnara, P. M. Hill, and E. Zaffanella. Applications of polyhedral computations to the analysis and verification of hardware and software systems. Quaderno 458, Dipartimento di Matematica, Università di Parma, Italy, 2007. Available at http://www.cs.unipr.it/Publications/. Also published as
arXiv:cs.CG/0701122, available from http://arxiv.org/.
R. Bagnara, P. M. Hill, and E. Zaffanella. An improved tight closure algorithm for integer octagonal constraints. Quaderno 467, Dipartimento di Matematica, Università di Parma, Italy, 2007. Available at http://www.cs.unipr.it/Publications/. Also published as
arXiv:0705.4618v2 [cs.DS], available from http://arxiv.org/.
R. Bagnara, P. M. Hill, and E. Zaffanella. Widening operators for powerset domains. Software Tools for Technology Transfer, 9(3/4):413-414, 2007. Erratum to [BHZ06b] containing all the figures properly printed.
R. Bagnara, P. M. Hill, and E. Zaffanella. An improved tight closure algorithm for integer octagonal constraints. In F. Logozzo, D. Peled, and L. Zuck, editors, Verification, Model Checking and Abstract Interpretation: Proceedings of the 9th International Conference (VMCAI 2008), volume 4905 of Lecture Notes in Computer Science, pages 8-21, San Francisco, USA, 2008. Springer-Verlag, Berlin.
R. Bagnara, P. M. Hill, and E. Zaffanella. The Parma Polyhedra Library: Toward a complete set of numerical abstractions for the analysis and verification of hardware and software systems. Science of Computer Programming, 72(1-2):3-21, 2008.
R. Bagnara, P. M. Hill, and E. Zaffanella. Applications of polyhedral computations to the analysis and verification of hardware and software systems. Theoretical Computer Science, 410(46):4672-4691, 2009.
R. Bagnara, P. M. Hill, and E. Zaffanella. Exact join detection for convex polyhedra and other numerical abstractions. Quaderno 492, Dipartimento di Matematica, Università di Parma, Italy, 2009. Available at http://www.cs.unipr.it/Publications/. A corrected and improved version (corrected an error in the statement of condition (3) of Theorem 3.6, typos corrected in statement and proof of Theorem 6.8) has been published in [BHZ09c].
R. Bagnara, P. M. Hill, and E. Zaffanella. Exact join detection for convex polyhedra and other numerical abstractions. Report
arXiv:cs.CG/0904.1783, 2009. Available at http://arxiv.org/ and http://bugseng.com/products/ppl/.
F. Besson, T. P. Jensen, and J.-P. Talpin. Polyhedral analysis for synchronous languages. In A. Cortesi and G. Filé, editors, Static Analysis: Proceedings of the 6th International Symposium, volume 1694 of Lecture Notes in Computer Science, pages 51-68, Venice, Italy, 1999. Springer-Verlag, Berlin.
V. Balasundaram and K. Kennedy. A technique for summarizing data access and its use in parallelism enhancing transformations. In B. Knobe, editor, Proceedings of the ACM SIGPLAN'89 Conference on Programming Language Design and Implementation (PLDI), volume 24(7) of ACM SIGPLAN Notices, pages 41-53, Portland, Oregon, USA, 1989. ACM Press.
R. Bagnara, F. Mesnard, A. Pescetti, and E. Zaffanella. The automatic synthesis of linear ranking functions: The complete unabridged version. Quaderno 498, Dipartimento di Matematica, Università di Parma, Italy, 2010. Superseded by [BMPZ12a].
R. Bagnara, F. Mesnard, A. Pescetti, and E. Zaffanella. The automatic synthesis of linear ranking functions: The complete unabridged version. Report
arXiv:cs.PL/1004.0944v2, 2012. Available at http://arxiv.org/ and http://bugseng.com/products/ppl/. Improved version of [BMPZ10].
R. Bagnara, E. Ricci, E. Zaffanella, and P. M. Hill. Possibly not closed convex polyhedra and the Parma Polyhedra Library. In M. V. Hermenegildo and G. Puebla, editors, Static Analysis: Proceedings of the 9th International Symposium, volume 2477 of Lecture Notes in Computer Science, pages 213-229, Madrid, Spain, 2002. Springer-Verlag, Berlin.
R. Bagnara, E. Ricci, E. Zaffanella, and P. M. Hill. Possibly not closed convex polyhedra and the Parma Polyhedra Library. Quaderno 286, Dipartimento di Matematica, Università di Parma, Italy, 2002. See also [BRZH02c]. Available at http://www.cs.unipr.it/Publications/.
P. Cousot and R. Cousot. Static determination of dynamic properties of programs. In B. Robinet, editor, Proceedings of the Second International Symposium on Programming, pages 106-130, Paris, France, 1976. Dunod, Paris, France.
P. Cousot and R. Cousot. Systematic design of program analysis frameworks. In Proceedings of the Sixth Annual ACM Symposium on Principles of Programming Languages, pages 269-282, San Antonio, TX, USA, 1979. ACM Press.
P. Cousot and R. Cousot. Comparing the Galois connection and widening/narrowing approaches to abstract interpretation. In M. Bruynooghe and M. Wirsing, editors, Proceedings of the 4th International Symposium on Programming Language Implementation and Logic Programming, volume 631 of Lecture Notes in Computer Science, pages 269-295, Leuven, Belgium, 1992. Springer-Verlag, Berlin.
P. Cousot and N. Halbwachs. Automatic discovery of linear restraints among variables of a program. In Conference Record of the Fifth Annual ACM Symposium on Principles of Programming Languages, pages 84-96, Tucson, Arizona, 1978. ACM Press.
N. V. Chernikova. Algorithm for finding a general formula for the non-negative solutions of a system of linear equations. U.S.S.R. Computational Mathematics and Mathematical Physics, 4(4):151-158, 1964.
N. V. Chernikova. Algorithm for finding a general formula for the non-negative solutions of a system of linear inequalities. U.S.S.R. Computational Mathematics and Mathematical Physics, 5(2):228-233, 1965.
K. Fukuda and A. Prodon. Double description method revisited. In M. Deza, R. Euler, and Y. Manoussakis, editors, Combinatorics and Computer Science, 8th Franco-Japanese and 4th Franco-Chinese Conference, Brest, France, July 3-5, 1995, Selected Papers, volume 1120 of Lecture Notes in Computer Science, pages 91-111. Springer-Verlag, Berlin, 1996.
K. Fukuda. Polyhedral computation FAQ. Swiss Federal Institute of Technology, Lausanne and Zurich, Switzerland, available at http://www.ifor.math.ethz.ch/~fukuda/polyfaq/polyfaq.html, 1998.
D. Gopan, F. DiMaio, N. Dor, T. W. Reps, and M. Sagiv. Numeric domains with summarized dimensions. In K. Jensen and A. Podelski, editors, Tools and Algorithms for the Construction and Analysis of Systems, 10th International Conference, TACAS 2004, volume 2988 of Lecture Notes in Computer Science, pages 512-529, Barcelona, Spain, 2004. Springer-Verlag, Berlin.
E. Gawrilow and M. Joswig.
polymake: An approach to modular software design in computational geometry. In Proceedings of the 17th Annual Symposium on Computational Geometry, pages 222-231, Medford, MA, USA, 2001. ACM.
P. Granger. Static analysis of linear congruence equalities among variables of a program. In S. Abramsky and T. S. E. Maibaum, editors, TAPSOFT'91: Proceedings of the International Joint Conference on Theory and Practice of Software Development, Volume 1: Colloquium on Trees in Algebra and Programming (CAAP'91), volume 493 of Lecture Notes in Computer Science, pages 169-192, Brighton, UK, 1991. Springer-Verlag, Berlin.
P. Granger. Static analyses of congruence properties on rational numbers (extended abstract). In P. Van Hentenryck, editor, Static Analysis: Proceedings of the 4th International Symposium, volume 1302 of Lecture Notes in Computer Science, pages 278-292, Paris, France, 1997. Springer-Verlag, Berlin.
N. Halbwachs. Détermination Automatique de Relations Linéaires Vérifiées par les Variables d'un Programme. Thèse de 3ème cycle d'informatique, Université scientifique et médicale de Grenoble, Grenoble, France, March 1979.
N. Halbwachs. Delay analysis in synchronous programs. In C. Courcoubetis, editor, Computer Aided Verification: Proceedings of the 5th International Conference (CAV'93), volume 697 of Lecture Notes in Computer Science, pages 333-346, Elounda, Greece, 1993. Springer-Verlag, Berlin.
T. A. Henzinger and P.-H. Ho. A note on abstract interpretation strategies for hybrid automata. In P. J. Antsaklis, W. Kohn, A. Nerode, and S. Sastry, editors, Hybrid Systems II, volume 999 of Lecture Notes in Computer Science, pages 252-264. Springer-Verlag, Berlin, 1995.
N. Halbwachs, Y.-E. Proy, and P. Raymond. Verification of linear hybrid systems by means of convex approximations. In B. Le Charlier, editor, Static Analysis: Proceedings of the 1st International Symposium, volume 864 of Lecture Notes in Computer Science, pages 223-237, Namur, Belgium, 1994. Springer-Verlag, Berlin.
T. A. Henzinger, J. Preussig, and H. Wong-Toi. Some lessons from the HyTech experience. In Proceedings of the 40th Annual Conference on Decision and Control, pages 2887-2892. IEEE Computer Society Press, 2001.
J. Jaffar, M. J. Maher, P. J. Stuckey, and R. H. C. Yap. Beyond finite domains. In A. Borning, editor, Principles and Practice of Constraint Programming: Proceedings of the Second International Workshop, volume 874 of Lecture Notes in Computer Science, pages 86-94, Rosario, Orcas Island, Washington, USA, 1994. Springer-Verlag, Berlin.
F. Masdupuy. Array operations abstraction using semantic analysis of trapezoid congruences. In Proceedings of the 6th ACM International Conference on Supercomputing, pages 226-235, Washington, DC, USA, 1992. ACM Press.
A. Miné. A new numerical abstract domain based on difference-bound matrices. In O. Danvy and A. Filinski, editors, Proceedings of the 2nd Symposium on Programs as Data Objects (PADO 2001), volume 2053 of Lecture Notes in Computer Science, pages 155-172, Aarhus, Denmark, 2001. Springer-Verlag, Berlin.
A. Miné. A few graph-based relational numerical abstract domains. In M. V. Hermenegildo and G. Puebla, editors, Static Analysis: Proceedings of the 9th International Symposium, volume 2477 of Lecture Notes in Computer Science, pages 117-132, Madrid, Spain, 2002. Springer-Verlag, Berlin.
A. Miné. Relational abstract domains for the detection of floating-point run-time errors. In D. Schmidt, editor, Programming Languages and Systems: Proceedings of the 13th European Symposium on Programming, volume 2986 of Lecture Notes in Computer Science, pages 3-17, Barcelona, Spain, 2004. Springer-Verlag, Berlin.
T. S. Motzkin, H. Raiffa, G. L. Thompson, and R. M. Thrall. The double description method. In H. W. Kuhn and A. W. Tucker, editors, Contributions to the Theory of Games - Volume II, number 28 in Annals of Mathematics Studies, pages 51-73. Princeton University Press, Princeton, New Jersey, 1953.
T. Nakanishi, K. Joe, C. D. Polychronopoulos, and A. Fukuda. The modulo interval: A simple and practical representation for program analysis. In Proceedings of the 1999 International Conference on Parallel Architectures and Compilation Techniques, pages 91-96, Newport Beach, California, USA, 1999. IEEE Computer Society.
G. Nelson and D. C. Oppen. Fast decision algorithms based on Union and Find. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science (FOCS'77), pages 114-119, Providence, RI, USA, 1977. IEEE Computer Society Press. The journal version of this paper is [NO80].
G. Nelson and D. C. Oppen. Fast decision procedures based on congruence closure. Journal of the ACM, 27(2):356-364, 1980. An earlier version of this paper is [NO77].
V. R. Pratt. Two easy theories whose combination is hard. Memo sent to Nelson and Oppen concerning a preprint of their paper [NO77], September 1977.
T. W. Reps, G. Balakrishnan, and J. Lim. Intermediate-representation recovery from low-level code. In J. Hatcliff and F. Tip, editors, Proceedings of the 2006 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-based Program Manipulation, pages 100-111, Charleston, South Carolina, USA, 2006. ACM Press.
A. Simon and A. King. Taming the wrapping of integer arithmetic. In H. Riis Nielson and G. Filé, editors, Static Analysis: Proceedings of the 14th International Symposium, volume 4634 of Lecture Notes in Computer Science, pages 121-136, Kongens Lyngby, Denmark, 2007. Springer-Verlag, Berlin.
R. Sen and Y. N. Srikant. Executable analysis using abstract interpretation with circular linear progressions. In Proceedings of the 5th IEEE/ACM International Conference on Formal Methods and Models for Co-Design (MEMOCODE 2007), pages 39-48, Nice, France, 2007. IEEE Computer Society Press.
R. Sen and Y. N. Srikant. Executable analysis with circular linear progressions. Technical Report IISc-CSA-TR-2007-3, Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India, 2007.
H. Weyl. Elementare theorie der konvexen polyeder. Commentarii Mathematici Helvetici, 7:290-306, 1935. English translation in [Wey50].
H. Weyl. The elementary theory of convex polyhedra. In H. W. Kuhn, editor, Contributions to the Theory of Games - Volume I, number 24 in Annals of Mathematics Studies, pages 3-18. Princeton University Press, Princeton, New Jersey, 1950. Translated from [Wey35] by H. W. Kuhn.
D. K. Wilde. A library for doing polyhedral operations. Master's thesis, Oregon State University, Corvallis, Oregon, December 1993. Also published as IRISA Publication interne 785, Rennes, France, 1993.
A finite set of points $\{x_1, \ldots, x_k\} \subseteq \mathbb{R}^n$ is linearly independent if, for all $\lambda \in \mathbb{R}^k$, the set of equations
$$\sum_{j=1}^{k} \lambda_j x_j = 0$$
implies that, for each $j = 1, \ldots, k$, $\lambda_j = 0$.
The maximum number of linearly independent points in $\mathbb{R}^n$ is $n$. Note that linear independence implies affine independence, but the converse is not true.
If $A$ is an $m \times n$ matrix, the maximum number of linearly independent rows of $A$, viewed as vectors of $\mathbb{R}^n$, equals the maximum number of linearly independent columns of $A$, viewed as vectors of $\mathbb{R}^m$.
The maximum number of linearly independent rows (columns) of a matrix $A$ is the rank of $A$ and is denoted by $\operatorname{rank}(A)$.
A polyhedron is a convex set.
Let $\mathcal{P} = \{ x \in \mathbb{R}^n \mid Ax \le b \}$ be a non-empty polyhedron where $\operatorname{rank}(A) = n$. Let $V$ be the set of vertices and $R$ the set of extreme rays of $\mathcal{P}$. Let also $\operatorname{conv}(V)$ be the set of convex combinations of $V$ and $\operatorname{cone}(R)$ the set of positive combinations of $R$. Then
$$\mathcal{P} = \operatorname{conv}(V) + \operatorname{cone}(R).$$
Informally, this theorem states that, whenever a polyhedron has a vertex, there exists a decomposition such that each point of $\mathcal{P}$ is the sum of a convex combination of the vertices and a positive combination of the extreme rays.
The conditions that $\mathcal{P}$ is not empty and $\operatorname{rank}(A) = n$ are equivalent to the condition that $\mathcal{P}$ has a vertex. (See also Nemhauser and Wolsey, Integer and Combinatorial Optimization, Propositions 4.1 and 4.2 on pages 92 and 93.)
Under the same hypotheses of Minkowski's theorem, if $\mathcal{P}$ is a rational polyhedron then all the vertices in $V$ have rational coefficients and we can consider a set $R$ of extreme rays having rational coefficients only.
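As an illustration (our example, not taken from the text), consider the half-strip in $\mathbb{R}^2$ bounded by $0 \le y \le 1$ and $x \ge 0$. Minkowski's theorem yields the decomposition

```latex
\mathcal{P}
  = \{ (x, y) \in \mathbb{R}^2 \mid 0 \le y \le 1,\; x \ge 0 \}
  = \operatorname{conv}\bigl\{ (0,0),\, (0,1) \bigr\}
    + \operatorname{cone}\bigl\{ (1,0) \bigr\}
```

where the convex hull of the two vertices gives the segment on the $y$-axis and the single extreme ray translates it arbitrarily far in the positive $x$ direction.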
The second theorem, called Weyl's theorem, states that any system of generators having rational coefficients defines a rational polyhedron:
If $V$ is a rational $p \times n$ matrix, $R$ is a rational $r \times n$ matrix and
$$\mathcal{P} = \operatorname{conv}(V) + \operatorname{cone}(R),$$
then $\mathcal{P}$ is a rational polyhedron.
In fact, since $\mathcal{P}$ consists of the sums of convex combinations of the rows of $V$ with positive combinations of the rows of $R$, we can think of $V$ as the matrix of vertices and $R$ as the matrix of rays.
A set $C \subseteq \mathbb{R}^n$ is a cone if $\lambda x \in C$ for each $x \in C$ and each real number $\lambda \ge 0$.
The polyhedron $C = \{ x \in \mathbb{R}^n \mid Ax \le 0 \}$ is a convex cone and is called a polyhedral cone.
A polyhedral cone is either pointed, having the origin as its only vertex, or has no vertices at all.
Given a polyhedron $\mathcal{P} = \{ x \in \mathbb{R}^n \mid Ax \le b \}$, the lineality space of $\mathcal{P}$ is the set
$$\{ y \in \mathbb{R}^n \mid Ay = 0 \}$$
and it is denoted by $\operatorname{lin.space}(\mathcal{P})$.
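For example (an illustration of ours), a half-plane contains a whole line through the origin, and that line is exactly its lineality space:

```latex
\mathcal{P} = \{ (x, y) \in \mathbb{R}^2 \mid x \ge 0 \},
\qquad
\operatorname{lin.space}(\mathcal{P}) = \{ (0, y) \mid y \in \mathbb{R} \}
```

Here the constraint matrix is $A = (-1 \;\; 0)$, and $Ay = 0$ forces the first coordinate to zero while leaving the second one free.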
To simplify the operations on polyhedra, each polyhedron is first transformed to a homogeneous cone in which the original polyhedron is embedded.
The transformation changes the inhomogeneous system of constraints $Ax \le b$ in $n$ variables, representing a polyhedron $\mathcal{P} \subseteq \mathbb{R}^n$, into a homogeneous system in $n + 1$ variables, representing a polyhedral cone $\mathcal{C} \subseteq \mathbb{R}^{n+1}$, so that each point $x \in \mathcal{P}$ corresponds to a point $(x, \xi) \in \mathcal{C}$ where $\xi > 0$. That is,
$$\mathcal{C} = \bigl\{ (x, \xi) \in \mathbb{R}^{n+1} \bigm| \hat{A} \cdot (x, \xi)^{\mathrm{T}} \le 0 \bigr\},$$
where $\hat{A}$ is the $(m + 1) \times (n + 1)$ matrix having, for its first $m$ rows, the submatrix $(A \mid -b)$; and, for the $(m+1)$'st row, $(0, \ldots, 0, -1)$. We call $\mathcal{C}$ the corresponding polyhedral cone for $\mathcal{P}$.
The $(m+1)$'st row represents the positivity constraint $\xi \ge 0$.
Note that the intersection of $\mathcal{C}$ with the hyperplane defined by the equality $\xi = 1$ is $\{ (x, 1) \mid x \in \mathcal{P} \}$, which can be identified with $\mathcal{P}$. Therefore, it is always possible to transform a polyhedron to its corresponding polyhedral cone and then recover $\mathcal{P}$ by means of this intersection.
As $\mathcal{C}$ always includes the origin and, hence, is non-empty, by Minkowski's theorem it can also be represented by a system of generators.
The systems of generators for $\mathcal{P}$ and $\mathcal{C}$ are such that each vertex $v$ of $\mathcal{P}$ corresponds to the ray $(v, 1)$ of $\mathcal{C}$, each ray $r$ of $\mathcal{P}$ corresponds to the ray $(r, 0)$ of $\mathcal{C}$, and each line $l$ of $\mathcal{P}$ corresponds to the line $(l, 0)$ of $\mathcal{C}$.
Thus, in the cone $\mathcal{C}$, a ray derived from a vertex of $\mathcal{P}$ differs from a ray derived from a ray of $\mathcal{P}$ only in that, for a vertex, the $(n+1)$'st term is different from zero while, for a ray, it is zero.
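A small worked instance (our example): homogenizing the segment $[1, 2] \subseteq \mathbb{R}$ gives a cone in $\mathbb{R}^2$ whose extreme rays carry the two vertices with last coordinate $1$:

```latex
\mathcal{P} = \{ x \in \mathbb{R} \mid 1 \le x \le 2 \}
\;\longmapsto\;
\mathcal{C} = \{ (x, \xi) \in \mathbb{R}^2 \mid \xi - x \le 0,\; x - 2\xi \le 0,\; \xi \ge 0 \}
```

The vertices $1$ and $2$ of $\mathcal{P}$ become the rays $(1, 1)$ and $(2, 1)$ of $\mathcal{C}$, and intersecting $\mathcal{C}$ with the hyperplane $\xi = 1$ recovers $\mathcal{P}$.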
Let $\mathcal{P}$ be a polyhedron and $\mathcal{C}$ the corresponding polyhedral cone. Then the dual representations, i.e., the systems of constraints and of generators representing $\mathcal{C}$, form the double description for $\mathcal{P}$.
Note that, in a double description for a non-empty polyhedron, the system of constraints subsumes the positivity constraint, while the system of generators (which has only rays and lines, corresponding to the vertices, rays and lines of $\mathcal{P}$) implicitly assumes the origin as a point, so that the cone represented by the generators is non-empty.
In the PPL, a polyhedron is represented by one or both of the representations in its double description. Thus, in the sequel, by the PPL representation of a polyhedron, we refer to the corresponding representation of its polyhedral cone.
Let $\mathcal{P}$ be a convex polyhedron (or polytope) in $\mathbb{R}^n$. For a real $n$-vector $a$ and a real number $b$, a linear inequality $\langle a, x \rangle \le b$ (briefly denoted by $(a, b)$) is called valid for $\mathcal{P}$ if it is satisfied by all points $x \in \mathcal{P}$.
Given a polyhedron generated by vertices, rays and lines, we say that:
Note that, in the PPL representation of a polyhedron $\mathcal{P}$, vertices are represented as rays, so that this concept of a redundant ray also applies to the vertices of $\mathcal{P}$.
If $(a, b)$ is a valid inequality for $\mathcal{P}$, and $F = \mathcal{P} \cap \{ x \in \mathbb{R}^n \mid \langle a, x \rangle = b \}$, then $F$ is called a face of $\mathcal{P}$ and we say that the inequality $(a, b)$ represents $F$. A face is said to be proper if $F \ne \emptyset$ and $F \ne \mathcal{P}$.
When $F$ is non-empty, we say that $(a, b)$ supports $\mathcal{P}$.
The empty polyhedron and the universe polyhedron both have no proper faces: the only face of an empty polyhedron is the polyhedron itself, while the faces of the universe polyhedron are itself and the empty set.
Let $\mathcal{P}$ be a non-empty polyhedron. The set
$$\operatorname{lin.space}(\mathcal{P}) + p,$$
where $p$ is a point of $\mathcal{P}$ and the symbol `$+$' denotes Minkowski's sum, is a minimal proper face of the polyhedron $\mathcal{P}$ whenever it is a proper face of $\mathcal{P}$.
A proper face $F$ of $\mathcal{P}$ is a facet (or maximal proper face) of $\mathcal{P}$ if it is not strictly included in any other proper face of $\mathcal{P}$. The affine dimension of a facet is equal to $\dim(\mathcal{P}) - 1$.
Let $\mathcal{P}$ be a polyhedron in $\mathbb{R}^n$. The set of all faces of $\mathcal{P}$ is a lattice under inclusion: the minimal face is the empty set, while the maximal face is the polyhedron $\mathcal{P}$ itself.
Let $\mathcal{P}$ be a polyhedron in $\mathbb{R}^n$ and $\mathcal{C}$ be the polyhedral cone in $\mathbb{R}^{n+1}$ obtained from $\mathcal{P}$ by homogenization; then:
Given the decomposition $\mathcal{P} = \operatorname{conv}(V) + \operatorname{cone}(R)$ of a polyhedron, the set $\operatorname{cone}(R)$ is called the ray space of $\mathcal{P}$ and denoted by $\operatorname{ray.space}(\mathcal{P})$.
Thus a polyhedron can always be decomposed into the sum of the convex hull of its vertices and its ray space.
Note that, since both components of this decomposition are polyhedra, their affine dimensions can be computed using the definition of affine dimension given for polyhedra.
The spaces defined above are connected by the consistency rules shown below.
The proofs of these properties can be obtained by considering the definitions of affine dimension and the decomposition of a polyhedron.
Let us consider a ray $r$ and an inequality constraint $\langle a, x \rangle \ge 0$, where $a \in \mathbb{R}^n$. Then we say that: $r$ saturates the constraint if $\langle a, r \rangle = 0$; and $r$ verifies the constraint if $\langle a, r \rangle > 0$.
Similarly, considering an equality $\langle a, x \rangle = 0$: the ray $r$ saturates the constraint if $\langle a, r \rangle = 0$.
A constraint (i.e., an equality or an inequality) is satisfied by a ray if the ray saturates or verifies the constraint.
Let $\mathcal{C}$ be a polyhedral cone and let $F_1$ and $F_2$ be proper faces of $\mathcal{C}$. Then $F_1$ is equal to $F_2$ if and only if the set of constraints that are saturated by $F_1$ is equal to the set of constraints that are saturated by $F_2$.
A saturation matrix is a bit matrix that represents the connection between constraints and generators of a polyhedron. There are two kinds of saturation matrices, one having rows indexed by constraints and columns indexed by generators (sat_g), and one (that is the transposed version of the previous one) having rows indexed by generators and columns indexed by constraints (sat_c).
For instance, in the saturation matrix sat_g, the elements are defined as follows:
For efficiency reasons, the PPL uses both the sat_g and sat_c matrices.
In an $n$-dimensional vector space, the following saturation rules hold.
These rules are a consequence of the saturation concept.
Let $\mathcal{C} = \{ x \in \mathbb{R}^n \mid Ax \le 0 \}$ be a polyhedral cone. Then the minimal proper face of $\mathcal{C}$ in an $n$-dimensional space can also be represented as
$$\{ x \in \mathbb{R}^n \mid Ax = 0 \}.$$
To see this, note that the minimal proper face of a polyhedral cone is equal to its lineality space, which by definition consists of all $x \in \mathbb{R}^n$ that satisfy $Ax = 0$.
Let $A$ be the representing matrix of constraints of a cone $\mathcal{C}$ and $R$ the set of rays that generate $\mathcal{C}$. Then two rays $r_1$ and $r_2$ of $R$ are adjacent rays if the minimal face of $\mathcal{C}$ containing them both contains no other extreme ray of $\mathcal{C}$.
To remove redundant constraints/generators we will use the following characterization:
It is useful to note that:
Floating point numbers can be used to represent finite families of integer numbers. In this section we collect some closure properties of these families that are exploited in the PPL.
In order not to depend on the particular family of floating point numbers considered, we consider an abstraction that is parametric in the number $n$ of bits in the mantissa and gives no limit to the magnitude of the exponent $e$. For $n \ge 1$, let
$$D_n = \{\, m \cdot 2^e \mid m, e \in \mathbb{N},\; m \le 2^n - 1 \,\}.$$
Let $\operatorname{trunc} \colon \mathbb{R} \to \mathbb{Z}$ denote the truncation function defined by $\operatorname{trunc}(x) = \lfloor x \rfloor$ if $x \ge 0$ and $\operatorname{trunc}(x) = \lceil x \rceil$ if $x < 0$. Notice that $\operatorname{trunc}$ is an odd function, that is, it satisfies $\operatorname{trunc}(-x) = -\operatorname{trunc}(x)$ for all $x \in \mathbb{R}$. For $x, y \in \mathbb{Z}$, with $y \ne 0$, we also write
$$x \mathbin{\operatorname{div}} y = \operatorname{trunc}(x / y), \qquad x \bmod y = x - y \cdot (x \mathbin{\operatorname{div}} y).$$
These are the integer division and remainder functions as defined by the C99 standard [ISO/IEC 9899:1999(E), Programming Languages - C (ISO and ANSI C99 Standard)].
Proposition A If $x, y \in D_n$ and $y \ne 0$, then $x \bmod y \in D_n$.
The proof is given in the next three lemmas.
Lemma 1 Let . Then . Furthermore, if then there exist and such that .
Proof Let . There is a non-negative integer such that . Then with and . Here so that . The same argument shows that odd integers larger than do not in fact belong to , since the corresponding value of would exceed the bound in the definition.
For the second part, let . Let with odd and . Then is an odd integer that belongs to since , using the first part. Hence we may take which is non-negative since otherwise would not be an integer as assumed.
Lemma 2 If , and does not divide , then .
Proof By Lemma 1 above we may assume that and with , odd integers, and , . Let . The goal is to prove that : we may assume that , that is, that for otherwise and there is nothing to prove.
In other words, this integer is and therefore it is smaller than .
In all cases, we wrote as the product of a power of 2 and an element of , and this product is another element of .
Lemma 3 For , with , we have
Proof Throughout the proof we write and . First, assume that and that . Let , by the property above. We have
Next, assume that and that . Let . We have
Finally, assume that and that . Let , again by the property above. We have
This completes the proof.
Lemma 4 If , then .
Proof Let and with , odd integers, and , . Then , and therefore it belongs to , since so that it belongs to .
Lemma 5 If , , then .
Proof With the same notation as in the previous lemma, both and : but all positive odd integers up to and including belong to , so that does as well. By Lemma 1, .