Bibliography

Here are some papers that you may want to read for a full understanding of the Parma Polyhedra Library, the theoretical principles on which it is based, the algorithms and data structures it employs, and the history of all this.

The corresponding BibTeX source is also available.

[Anc91] C. Ancourt. Génération automatique de codes de transfert pour multiprocesseurs à mémoires locales. PhD thesis, Université de Paris VI, Paris, France, March 1991.
Parallel tasks generated by automatic parallelizers do not take advantage of supercomputer memory hierarchies. This thesis presents algorithms to transform a parallel task into an equivalent one that uses data with fast access memory. Algorithms to automatically generate code that moves data between two different memory levels of a (super)computer are presented. These copy codes move back and forth the array elements that are accessed when an elementary processor executes an array reference located in a set of loops. This set of array elements is characterized by a set of integer points in Z^p that is not necessarily a convex polyhedron. In the case of data transfers from global memory to local memory, it is possible to copy a superset of the accessed elements, for instance their convex hull. A trade-off has to be made between local memory space, transfer volume and loop bound complexity. Copying data back from local memory to global memory is more difficult because global memory consistency must be preserved. Each processor (or process) should only copy its own results to avoid errors and, secondarily, to decrease global memory traffic. The input of our main algorithm is an integer convex polyhedron defining the computation space and an affine function representing the index expressions. Its output is a set of nested loops containing a new array reference whose execution copies exactly the accessed elements. Each element is copied only once. Loop bound expressions use integer divisions to generate non-convex sets. For most practical programs this algorithm provides code that is optimal in the number of data movements and in control overhead. Combined with a dependence analysis phase, it can be used to generate data movements in distributed memory multiprocessors. When the partitioning of data in local memories is specified, it eliminates most of the execution guards used to compute only local values on each processor.
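
The footprint of an affine array reference inside a loop, as described in this abstract, is in general a set of integer points rather than a contiguous (convex) range. A minimal, purely illustrative sketch (the function name and encoding are hypothetical, not from the thesis):

```python
def accessed_elements(n, f):
    """Indices of a 1-D array touched by the reference A[f(i)] inside
    'for i in range(n)'; for a strided affine f the set is a grid
    segment, not a contiguous interval."""
    return sorted({f(i) for i in range(n)})

# A[2*i + 1] for i in 0..4 touches only the odd indices
assert accessed_elements(5, lambda i: 2 * i + 1) == [1, 3, 5, 7, 9]
```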

[Bag97] R. Bagnara. Data-Flow Analysis for Constraint Logic-Based Languages. PhD thesis, Dipartimento di Informatica, Università di Pisa, Pisa, Italy, March 1997. Printed as Report TD-1/97.
We aim at the development of precise, practical, and theoretically well-founded data-flow analyzers for constraint logic-based languages. The design and development of such an analyzer raised a number of research problems that we had to address. A hierarchy of constraint systems is introduced that is suitable for designing and combining abstract domains. The rational construction of a generic domain for the structural analysis of CLP programs is presented. We also address the problem of the “missing occur-check” in many implemented languages. We introduce a new family of domains, based on constraint propagation techniques, for the abstraction of the numerical leaves that occur in the terms of CLP languages. Despite the fact that groundness analysis for logic-based languages is a widely studied subject, a novel domain for groundness analysis is presented that outperforms the existing domains from several points of view. Finally, we present a bottom-up analysis technique for CLP languages that allows for the precise derivation of both call- and success-patterns, preserving the connection between them.

[Bag98] R. Bagnara. A hierarchy of constraint systems for data-flow analysis of constraint logic-based languages. Science of Computer Programming, 30(1-2):119-155, 1998. [ .pdf ]
Many interesting analyses for constraint logic-based languages are aimed at the detection of monotonic properties, that is to say, properties that are preserved as the computation progresses. Our basic claim is that most, if not all, of these analyses can be described within a unified notion of constraint domains. We present a class of constraint systems that allows for a smooth integration within an appropriate framework for the definition of non-standard semantics of constraint logic-based languages. Such a framework is also presented and motivated. We then show how such domains can be built, as well as construction techniques that induce a hierarchy of domains with interesting properties. In particular, we propose a general methodology for domain combination with asynchronous interaction (i.e., the interaction is not necessarily synchronized with the domains' operations). By following this methodology, interesting combinations of domains can be expressed with all the semantic elegance of concurrent constraint programming languages.

[BDH+05] R. Bagnara, K. Dobson, P. M. Hill, M. Mundell, and E. Zaffanella. A linear domain for analyzing the distribution of numerical values. Report 2005.06, School of Computing, University of Leeds, UK, 2005. [ .pdf ]
This paper explores the abstract domain of grids, a domain that is able to represent sets of equally spaced points and hyperplanes over an n-dimensional vector space. Such a domain is useful for the static analysis of the patterns of distribution of the values program variables can take. Besides the bare abstract domain, we present a complete set of operations on grids that includes all that is necessary to define the abstract semantics and the widening operators required to compute it in a finite number of steps. The definition of the domain and its operations exploit well-known techniques from linear algebra as well as a dual representation that allows, among other things, for a concise and efficient implementation.
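
As a toy illustration of the grids described in this abstract, a one-dimensional grid can be viewed as an arithmetic progression {r + m·k : k ∈ Z}; the join of two such grids is again a grid, computable with gcd arithmetic. The sketch below is only a hypothetical illustration, not the paper's (much more general, relational) domain:

```python
from math import gcd

def in_grid(x, r, m):
    """Membership in the 1-D grid {r + m*k : k in Z}; m = 0 denotes the singleton {r}."""
    return x == r if m == 0 else (x - r) % m == 0

def grid_join(r1, m1, r2, m2):
    """Smallest 1-D grid containing both input grids."""
    m = gcd(gcd(m1, m2), abs(r1 - r2))
    return (r1 % m, m) if m else (r1, 0)

# {0, 4, 8, ...} joined with {2, 6, 10, ...} is the grid of even integers
assert grid_join(0, 4, 2, 4) == (0, 2)
assert in_grid(6, 0, 2) and not in_grid(7, 0, 2)
```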

[BDH+07] R. Bagnara, K. Dobson, P. M. Hill, M. Mundell, and E. Zaffanella. Grids: A domain for analyzing the distribution of numerical values. In G. Puebla, editor, Logic-based Program Synthesis and Transformation, 16th International Symposium, volume 4407 of Lecture Notes in Computer Science, pages 219-235, Venice, Italy, 2007. Springer-Verlag, Berlin. [ .pdf ]
This paper explores the abstract domain of grids, a domain that is able to represent sets of equally spaced points and hyperplanes over an n-dimensional vector space. Such a domain is useful for the static analysis of the patterns of distribution of the values program variables can take. We present the domain, its representation and the basic operations on grids necessary to define the abstract semantics. We show how the definition of the domain and its operations exploit well-known techniques from linear algebra as well as a dual representation that allows, among other things, for a concise and efficient implementation.

[BDH+06] R. Bagnara, K. Dobson, P. M. Hill, M. Mundell, and E. Zaffanella. A practical tool for analyzing the distribution of numerical values, 2006. Available at http://www.comp.leeds.ac.uk/hill/Papers/papers.html.
The abstract domain of grids (or lattices) is a domain that is able to represent sets of equally spaced points and hyperplanes over an n-dimensional vector space. Such a domain is useful for the static analysis of the patterns of distribution of the values that program variables can take. This paper explores how this domain may be used in program analysis, describing grid operations such as affine image, affine preimage and widenings needed by such an application. The paper also shows how any grid may be approximated by a less precise non-relational grid and describes how such an approximation can be computed. Illustrative examples show how the domain may be used in the analysis of programs containing simple assignment statements, while loops and recursive procedures.

[BHMZ04] R. Bagnara, P. M. Hill, E. Mazzi, and E. Zaffanella. Widening operators for weakly-relational numeric abstractions. Report arXiv:cs.PL/0412043, 2004. Extended abstract. Contribution to the International workshop on “Numerical & Symbolic Abstract Domains” (NSAD'05, Paris, January 21, 2005). Available at http://arxiv.org/ and http://bugseng.com/products/ppl/. [ .pdf ]
We discuss the divergence problems recently identified in some extrapolation operators for weakly-relational numeric domains. We identify the cause of the divergences and point out that resorting to more concrete, syntactic domains can be avoided by researching suitable algorithms for the elimination of redundant constraints in the chosen representation.

[BHZ02b] R. Bagnara, P. M. Hill, and E. Zaffanella. A new encoding of not necessarily closed convex polyhedra. In M. Carro, C. Vacheret, and K.-K. Lau, editors, Proceedings of the 1st CoLogNet Workshop on Component-based Software Development and Implementation Technology for Computational Logic Systems, pages 147-153, Madrid, Spain, 2002. Published as TR Number CLIP4/02.0, Universidad Politécnica de Madrid, Facultad de Informática. [ .pdf ]
[BHZ02a] R. Bagnara, P. M. Hill, and E. Zaffanella. A new encoding and implementation of not necessarily closed convex polyhedra. Quaderno 305, Dipartimento di Matematica, Università di Parma, Italy, 2002. Available at http://www.cs.unipr.it/Publications/.
Convex polyhedra, commonly employed for the analysis and verification of both hardware and software, may be defined either by a finite set of linear inequality constraints or by finite sets of generating points and rays of the polyhedron. Although most implementations of the polyhedral operations assume that the polyhedra are topologically closed (i.e., all the constraints defining them are non-strict), several analyzers and verifiers need to compute on a domain of convex polyhedra that are not necessarily closed (NNC). The usual approach to implementing NNC polyhedra is to embed them into closed polyhedra in a vector space having one extra dimension and reuse the tools and techniques already available for closed polyhedra. Previously, this embedding has been designed so that a constant number of constraints and a linear number of generators have to be added to the original NNC specification of the polyhedron. In this paper we explore an alternative approach: while still using an extra dimension to represent the NNC polyhedron by a closed polyhedron, the new embedding adds a linear number of constraints and a constant number of generators. We discuss the relative benefits of these two implementations and how the choice of representation can affect the efficiency of the polyhedral operations. As far as the issue of providing a non-redundant description of the NNC polyhedron is concerned, we generalize the results established in a previous paper so that they apply to both encodings.
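
The embedding this abstract refers to can be sketched concretely: a strict constraint a·x > b becomes the non-strict a·x − ε ≥ b in a space with one extra slack dimension ε, together with the bounds 0 ≤ ε ≤ 1 (the NNC polyhedron is recovered from the points having ε > 0). The tuple encoding below is hypothetical and only illustrates the constraint-side translation, not the constraint/generator trade-off studied in the paper:

```python
def encode_nnc(constraints):
    """Translate NNC constraints, given as (a, b, strict) and meaning
    a.x > b (strict) or a.x >= b, into closed constraints (a', b') over
    the extended space (x, eps), each meaning a'.(x, eps) >= b'."""
    n = len(constraints[0][0])
    closed = [(a + [-1 if strict else 0], b) for (a, b, strict) in constraints]
    closed.append(([0] * n + [1], 0))    # eps >= 0
    closed.append(([0] * n + [-1], -1))  # eps <= 1
    return closed

# x > 0 becomes x - eps >= 0, together with the bounds on eps
assert encode_nnc([([1], 0, True)]) == [([1, -1], 0), ([0, 1], 0), ([0, -1], -1)]
```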

[BHRZ05] R. Bagnara, P. M. Hill, E. Ricci, and E. Zaffanella. Precise widening operators for convex polyhedra. Science of Computer Programming, 58(1-2):28-56, 2005. [ .pdf ]
In the context of static analysis via abstract interpretation, convex polyhedra constitute the most used abstract domain among those capturing numerical relational information. Since the domain of convex polyhedra admits infinite ascending chains, it has to be used in conjunction with appropriate mechanisms for enforcing and accelerating the convergence of fixpoint computations. Widening operators provide a simple and general characterization for such mechanisms. For the domain of convex polyhedra, the original widening operator proposed by Cousot and Halbwachs amply deserves the name of standard widening, since most analysis and verification tools that employ convex polyhedra also employ that operator. Nonetheless, there is an unfulfilled demand for more precise widening operators. In this paper, after a formal introduction to the standard widening in which we clarify some aspects that are often overlooked, we embark on the challenging task of improving on it. We present a framework for the systematic definition of new widening operators that are never less precise than a given widening. The framework is then instantiated on the domain of convex polyhedra so as to obtain a new widening operator that improves on the standard widening by combining several heuristics. A preliminary experimental evaluation has yielded promising results. We also suggest an improvement to the well-known widening delay technique that makes it possible to gain precision while preserving its overall simplicity.
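
The role a widening plays can be seen on the much simpler domain of intervals, where the standard operator extrapolates any unstable bound to infinity. This toy sketch illustrates the general idea only; it is not the Cousot-Halbwachs polyhedral widening discussed in the paper:

```python
def widen(a, b):
    """Standard interval widening: a bound that is not stable between
    successive iterates jumps straight to infinity, forcing convergence."""
    (l1, u1), (l2, u2) = a, b
    return (l1 if l1 <= l2 else float('-inf'),
            u1 if u1 >= u2 else float('inf'))

# the infinite ascending chain [0,1], [0,2], [0,3], ... stabilizes in one step
assert widen((0, 1), (0, 2)) == (0, float('inf'))
assert widen((0, float('inf')), (0, 3)) == (0, float('inf'))
```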

[BHMZ05a] R. Bagnara, P. M. Hill, E. Mazzi, and E. Zaffanella. Widening operators for weakly-relational numeric abstractions. Quaderno 399, Dipartimento di Matematica, Università di Parma, Italy, 2005. Available at http://www.cs.unipr.it/Publications/. [ .pdf ]
We discuss the construction of proper widening operators on several weakly-relational numeric abstractions. Our proposal differs from previous ones in that we actually consider the semantic abstract domains, whose elements are geometric shapes, instead of the (more concrete) syntactic abstract domains of constraint networks and matrices. Since the closure by entailment operator preserves geometric shapes, but not their syntactic expressions, our widenings are immune from the divergence issues that could be faced by the previous approaches when interleaving the applications of widening and closure. The new widenings, which are variations of the standard widening for convex polyhedra defined by Cousot and Halbwachs, can be made as precise as the previous proposals working on the syntactic domains. The implementation of each new widening relies on the availability of an effective reduction procedure for the considered constraint description: we provide such an algorithm for the domain of octagonal shapes.

[BHMZ05b] R. Bagnara, P. M. Hill, E. Mazzi, and E. Zaffanella. Widening operators for weakly-relational numeric abstractions. In C. Hankin and I. Siveroni, editors, Static Analysis: Proceedings of the 12th International Symposium, volume 3672 of Lecture Notes in Computer Science, pages 3-18, London, UK, 2005. Springer-Verlag, Berlin.
We discuss the construction of proper widening operators on several weakly-relational numeric abstractions. Our proposal differs from previous ones in that we actually consider the semantic abstract domains, whose elements are geometric shapes, instead of the (more concrete) syntactic abstract domains of constraint networks and matrices. Since the closure by entailment operator preserves geometric shapes, but not their syntactic expressions, our widenings are immune from the divergence issues that could be faced by the previous approaches when interleaving the applications of widening and closure. The new widenings, which are variations of the standard widening for convex polyhedra defined by Cousot and Halbwachs, can be made as precise as the previous proposals working on the syntactic domains. The implementation of each new widening relies on the availability of an effective reduction procedure for the considered constraint description: we provide such an algorithm for the domain of octagonal shapes.

[BMPZ10] R. Bagnara, F. Mesnard, A. Pescetti, and E. Zaffanella. The automatic synthesis of linear ranking functions: The complete unabridged version. Quaderno 498, Dipartimento di Matematica, Università di Parma, Italy, 2010. Superseded by [BMPZ12a].
The classical technique for proving termination of a generic sequential computer program involves the synthesis of a ranking function for each loop of the program. Linear ranking functions are particularly interesting because many terminating loops admit one and algorithms exist to automatically synthesize one. In this paper we present two such algorithms: one based on work by Sohn and Van Gelder dating back to 1991; the other, due to Podelski and Rybalchenko, dating back to 2004. Remarkably, while the two algorithms synthesize a linear ranking function under exactly the same set of conditions, the former is mostly unknown to the termination analysis community and its general applicability had never been put forward before the present paper. We thoroughly justify both algorithms, prove their correctness, compare their worst-case complexity, experimentally evaluate their efficiency, and present an open-source implementation of them that makes it easy to include termination-analysis capabilities in automatic program verifiers.
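
The notion of linear ranking function used in this line of work can be illustrated with a toy checker: a candidate f must be bounded below on the loop guard and strictly decrease across the loop body. Note that the paper's algorithms synthesize such functions (via linear programming duality); the hypothetical sketch below merely checks a given candidate on sampled states:

```python
def is_linear_ranking(c, d, guard, update, samples):
    """Check that f(x) = c*x + d is nonnegative and decreases by at least 1
    on every sampled state that satisfies the loop guard."""
    f = lambda x: c * x + d
    return all(f(x) >= 0 and f(x) - f(update(x)) >= 1
               for x in samples if guard(x))

# "while x > 0: x = x - 1" admits the linear ranking function f(x) = x
assert is_linear_ranking(1, 0, lambda x: x > 0, lambda x: x - 1, range(-5, 100))
# f(x) = -x is not a ranking function for the same loop
assert not is_linear_ranking(-1, 0, lambda x: x > 0, lambda x: x - 1, range(-5, 100))
```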

[BMPZ12a] R. Bagnara, F. Mesnard, A. Pescetti, and E. Zaffanella. The automatic synthesis of linear ranking functions: The complete unabridged version. Report arXiv:cs.PL/1004.0944v2, 2012. Available at http://arxiv.org/ and http://bugseng.com/products/ppl/. Improved version of [BMPZ10].
The classical technique for proving termination of a generic sequential computer program involves the synthesis of a ranking function for each loop of the program. Linear ranking functions are particularly interesting because many terminating loops admit one and algorithms exist to automatically synthesize one. In this paper we present two such algorithms: one based on work by Sohn and Van Gelder dating back to 1991; the other, due to Podelski and Rybalchenko, dating back to 2004. Remarkably, while the two algorithms synthesize a linear ranking function under exactly the same set of conditions, the former is mostly unknown to the termination analysis community and its general applicability had never been put forward before the present paper. We thoroughly justify both algorithms, prove their correctness, compare their worst-case complexity, experimentally evaluate their efficiency, and present an open-source implementation of them that makes it easy to include termination-analysis capabilities in automatic program verifiers.

[BMPZ12b] R. Bagnara, F. Mesnard, A. Pescetti, and E. Zaffanella. A new look at the automatic synthesis of linear ranking functions. Information and Computation, 215:47-67, 2012.
The classical technique for proving termination of a generic sequential computer program involves the synthesis of a ranking function for each loop of the program. Linear ranking functions are particularly interesting because many terminating loops admit one and algorithms exist to automatically synthesize one. In this paper we present two such algorithms: one based on work by Sohn and Van Gelder dating back to 1991; the other, due to Podelski and Rybalchenko, dating back to 2004. Remarkably, while the two algorithms synthesize a linear ranking function under exactly the same set of conditions, the former is mostly unknown to the termination analysis community and its general applicability had never been put forward before the present paper. We thoroughly justify both algorithms, prove their correctness, compare their worst-case complexity, experimentally evaluate their efficiency, and present an open-source implementation of them that makes it easy to include termination-analysis capabilities in automatic program verifiers.

[BRZH02a] R. Bagnara, E. Ricci, E. Zaffanella, and P. M. Hill. Possibly not closed convex polyhedra and the Parma Polyhedra Library. In M. V. Hermenegildo and G. Puebla, editors, Static Analysis: Proceedings of the 9th International Symposium, volume 2477 of Lecture Notes in Computer Science, pages 213-229, Madrid, Spain, 2002. Springer-Verlag, Berlin. [ .pdf ]
The domain of convex polyhedra is employed in several systems for the analysis and verification of hardware and software components. Current applications span imperative, functional and logic languages, synchronous languages and synchronization protocols, real-time and hybrid systems. Since the seminal work of P. Cousot and N. Halbwachs, convex polyhedra have thus played an important role in the formal methods community and several critical tasks rely on their software implementations. Despite this, existing libraries for the manipulation of convex polyhedra are still research prototypes and suffer from limitations that make their usage problematic, especially in critical applications. Furthermore, there is inadequate support for polyhedra that are not necessarily closed (NNC), i.e., polyhedra that are described by systems of constraints where strict inequalities are allowed to occur. This paper presents the Parma Polyhedra Library, a new, robust and complete implementation of NNC convex polyhedra, concentrating on the distinctive features of the library and on the novel theoretical underpinnings.

[BRZH02b] R. Bagnara, E. Ricci, E. Zaffanella, and P. M. Hill. Possibly not closed convex polyhedra and the Parma Polyhedra Library. Quaderno 286, Dipartimento di Matematica, Università di Parma, Italy, 2002. See also [BRZH02c]. Available at http://www.cs.unipr.it/Publications/.
The domain of convex polyhedra is employed in several systems for the analysis and verification of hardware and software components. Current applications span imperative, functional and logic languages, synchronous languages and synchronization protocols, real-time and hybrid systems. Since the seminal work of P. Cousot and N. Halbwachs, convex polyhedra have thus played an important role in the formal methods community and several critical tasks rely on their software implementations. Despite this, existing libraries for the manipulation of convex polyhedra are still research prototypes and suffer from limitations that make their usage problematic, especially in critical applications. These limitations concern inaccuracies in the documentation of the underlying theory, code and interfaces; numeric overflow and underflow; use of not fully dynamic data-structures and poor mechanisms for error handling and recovery. In addition, there is inadequate support for polyhedra that are not necessarily closed (NNC), i.e., polyhedra that are described by systems of constraints where strict inequalities are allowed to occur. This paper presents the Parma Polyhedra Library, a new, robust and complete implementation of NNC convex polyhedra, concentrating on the distinctive features of the library and on the novel theoretical underpinnings.

[BRZH02c] R. Bagnara, E. Ricci, E. Zaffanella, and P. M. Hill. Errata for technical report “Quaderno 286”. Available at http://www.cs.unipr.it/Publications/, 2002. See [BRZH02b].
[BHRZ03a] R. Bagnara, P. M. Hill, E. Ricci, and E. Zaffanella. Precise widening operators for convex polyhedra. In R. Cousot, editor, Static Analysis: Proceedings of the 10th International Symposium, volume 2694 of Lecture Notes in Computer Science, pages 337-354, San Diego, California, USA, 2003. Springer-Verlag, Berlin. [ .pdf ]
Convex polyhedra constitute the most used abstract domain among those capturing numerical relational information. Since the domain of convex polyhedra admits infinite ascending chains, it has to be used in conjunction with appropriate mechanisms for enforcing and accelerating convergence of the fixpoint computation. Widening operators provide a simple and general characterization for such mechanisms. For the domain of convex polyhedra, the original widening operator proposed by Cousot and Halbwachs amply deserves the name of standard widening since most analysis and verification tools that employ convex polyhedra also employ that operator. Nonetheless, there is an unfulfilled demand for more precise widening operators. In this paper, after a formal introduction to the standard widening where we clarify some aspects that are often overlooked, we embark on the challenging task of improving on it. We present a framework for the systematic definition of new and precise widening operators for convex polyhedra. The framework is then instantiated so as to obtain a new widening operator that combines several heuristics and uses the standard widening as a last resort so that it is never less precise. A preliminary experimental evaluation has yielded promising results.

[BHRZ03b] R. Bagnara, P. M. Hill, E. Ricci, and E. Zaffanella. Precise widening operators for convex polyhedra. Quaderno 312, Dipartimento di Matematica, Università di Parma, Italy, 2003. Available at http://www.cs.unipr.it/Publications/.
Convex polyhedra constitute the most used abstract domain among those capturing numerical relational information. Since the domain of convex polyhedra admits infinite ascending chains, it has to be used in conjunction with appropriate mechanisms for enforcing and accelerating convergence of the fixpoint computation. Widening operators provide a simple and general characterization for such mechanisms. For the domain of convex polyhedra, the original widening operator proposed by Cousot and Halbwachs amply deserves the name of standard widening, since most analysis and verification tools that employ convex polyhedra also employ that operator. Nonetheless, there is a still-unfulfilled demand for more precise widening operators. In this paper, after a formal introduction to the standard widening in which we clarify some aspects that are often overlooked, we embark on the challenging task of improving on it. We present a framework for the systematic definition of new and precise widening operators for convex polyhedra. The framework is then instantiated so as to obtain a new widening operator that combines several heuristics and uses the standard widening as a last resort, so that it is never less precise. A preliminary experimental evaluation has yielded promising results. We also suggest an improvement to the well-known widening delay technique that makes it possible to gain precision while preserving its overall simplicity.

[BHZ03a] R. Bagnara, P. M. Hill, and E. Zaffanella. A new encoding and implementation of not necessarily closed convex polyhedra. In M. Leuschel, S. Gruner, and S. Lo Presti, editors, Proceedings of the 3rd Workshop on Automated Verification of Critical Systems, pages 161-176, Southampton, UK, 2003. Published as TR Number DSSE-TR-2003-2, University of Southampton. [ .pdf ]
Convex polyhedra, commonly employed for the analysis and verification of both hardware and software, may be defined either by a finite set of linear inequality constraints or by finite sets of generating points and rays of the polyhedron. Although most implementations of the polyhedral operations assume that the polyhedra are topologically closed (i.e., all the constraints defining them are non-strict), several analyzers and verifiers need to compute on a domain of convex polyhedra that are not necessarily closed (NNC). The usual approach to implementing NNC polyhedra is to embed them into closed polyhedra in a vector space having one extra dimension and reuse the tools and techniques already available for closed polyhedra. Previously, this embedding has been designed so that a constant number of constraints and a linear number of generators have to be added to the original NNC specification of the polyhedron. In this paper we explore an alternative approach: while still using an extra dimension to represent the NNC polyhedron by a closed polyhedron, the new embedding adds a linear number of constraints and a constant number of generators. We discuss the relative benefits of these two implementations and how the choice of representation can affect the efficiency of the polyhedral operations. As far as the issue of providing a non-redundant description of the NNC polyhedron is concerned, we generalize the results established in a previous paper so that they apply to both encodings.

[BHZ03b] R. Bagnara, P. M. Hill, and E. Zaffanella. Widening operators for powerset domains. In B. Steffen and G. Levi, editors, Verification, Model Checking and Abstract Interpretation: Proceedings of the 5th International Conference (VMCAI 2004), volume 2937 of Lecture Notes in Computer Science, pages 135-148, Venice, Italy, 2003. Springer-Verlag, Berlin. [ .pdf ]
The finite powerset construction upgrades an abstract domain by allowing for the representation of finite disjunctions of its elements. In this paper we define two generic widening operators for the finite powerset abstract domain. Both widenings are obtained by lifting any widening operator defined on the base-level abstract domain and are parametric with respect to the specification of a few additional operators. We illustrate the proposed techniques by instantiating our widenings on powersets of convex polyhedra, a domain for which no non-trivial widening operator was previously known.
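
To see why lifting a widening to powersets is non-obvious, consider the naive strategy of collapsing each disjunction to its hull and widening at the base level: it is convergent, but it discards all disjunctive information. The paper's operators are precisely about doing better; the sketch below only shows this hypothetical baseline, with intervals standing in for polyhedra:

```python
def hull(S):
    """Interval hull of a finite set of intervals."""
    return (min(l for l, _ in S), max(u for _, u in S))

def widen(a, b):
    """Standard interval widening (unstable bounds go to infinity)."""
    (l1, u1), (l2, u2) = a, b
    return (l1 if l1 <= l2 else float('-inf'),
            u1 if u1 >= u2 else float('inf'))

def naive_powerset_widen(S1, S2):
    """Collapse both disjunctions to their hulls, then widen: convergent,
    but the result is always a single disjunct."""
    return {widen(hull(S1), hull(S2))}

assert naive_powerset_widen({(0, 1), (3, 4)}, {(0, 2), (3, 5)}) == {(0, float('inf'))}
```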

[BHZ04] R. Bagnara, P. M. Hill, and E. Zaffanella. Widening operators for powerset domains. Quaderno 349, Dipartimento di Matematica, Università di Parma, Italy, 2004. Available at http://www.cs.unipr.it/Publications/.
The finite powerset construction upgrades an abstract domain by allowing for the representation of finite disjunctions of its elements. In this paper we define two generic widening operators for the finite powerset abstract domain. Both widenings are obtained by lifting any widening operator defined on the base-level abstract domain and are parametric with respect to the specification of a few additional operators. We illustrate the proposed techniques by instantiating our widenings on powersets of convex polyhedra, a domain for which no non-trivial widening operator was previously known.

[BHZ05] R. Bagnara, P. M. Hill, and E. Zaffanella. Not necessarily closed convex polyhedra and the double description method. Formal Aspects of Computing, 17(2):222-257, 2005.
Since the seminal work of Cousot and Halbwachs, the domain of convex polyhedra has been employed in several systems for the analysis and verification of hardware and software components. Although most implementations of the polyhedral operations assume that the polyhedra are topologically closed (i.e., all the constraints defining them are non-strict), several analyzers and verifiers need to compute on a domain of convex polyhedra that are not necessarily closed (NNC). The usual approach to implementing NNC polyhedra is to embed them into closed polyhedra in a higher dimensional vector space and reuse the tools and techniques already available for closed polyhedra. In this work we highlight and discuss the issues underlying such an embedding for those implementations that are based on the double description method, where a polyhedron may be described by a system of linear constraints or by a system of generating rays and points. Two major achievements are the definition of a theoretically clean, high-level user interface and the specification of an efficient procedure for removing redundancies from the descriptions of NNC polyhedra.
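
The double description method mentioned here keeps a constraint representation and a generator representation of the same polyhedron in sync. As a toy consistency check (with a hypothetical pair-based encoding, a constraint (a, b) meaning a·x ≥ b), the unit square can be described either by four half-planes or, dually, by its four vertices:

```python
# Two descriptions of the unit square [0,1] x [0,1]
constraints = [([1, 0], 0), ([-1, 0], -1), ([0, 1], 0), ([0, -1], -1)]
vertices = [(0, 0), (1, 0), (0, 1), (1, 1)]

def satisfies(x, cs):
    """True iff point x satisfies every constraint a.x >= b in cs."""
    return all(sum(ai * xi for ai, xi in zip(a, x)) >= b for a, b in cs)

# every generator satisfies every constraint ...
assert all(satisfies(v, constraints) for v in vertices)
# ... while points outside the square violate some constraint
assert not satisfies((2, 0), constraints)
```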

[BHZ06b] R. Bagnara, P. M. Hill, and E. Zaffanella. Widening operators for powerset domains. Software Tools for Technology Transfer, 8(4/5):449-466, 2006. In the printed version of this article, all the figures have been improperly printed (rendering them useless). See [BHZ07c]. [ .pdf ]
The finite powerset construction upgrades an abstract domain by allowing for the representation of finite disjunctions of its elements. While most of the operations on the finite powerset abstract domain are easily obtained by “lifting” the corresponding operations on the base-level domain, the problem of endowing finite powersets with a provably correct widening operator is still open. In this paper we define three generic widening methodologies for the finite powerset abstract domain. The widenings are obtained by lifting any widening operator defined on the base-level abstract domain and are parametric with respect to the specification of a few additional operators that allow all the flexibility required to tune the complexity/precision trade-off. As far as we know, this is the first time that the problem of deriving non-trivial, provably correct widening operators in a domain refinement is tackled successfully. We illustrate the proposed techniques by instantiating our widening methodologies on powersets of convex polyhedra, a domain for which no non-trivial widening operator was previously known.

[BHZ07c] R. Bagnara, P. M. Hill, and E. Zaffanella. Widening operators for powerset domains. Software Tools for Technology Transfer, 9(3/4):413-414, 2007. Erratum to [BHZ06b] containing all the figures properly printed.
[BHZ06a] R. Bagnara, P. M. Hill, and E. Zaffanella. The Parma Polyhedra Library: Toward a complete set of numerical abstractions for the analysis and verification of hardware and software systems. Quaderno 457, Dipartimento di Matematica, Università di Parma, Italy, 2006. Available at http://www.cs.unipr.it/Publications/. Also published as arXiv:cs.MS/0612085, available from http://arxiv.org/. [ .pdf ]
Since its inception as a student project in 2001, initially just for the handling (as the name implies) of convex polyhedra, the Parma Polyhedra Library has been continuously improved and extended by joining scrupulous research on the theoretical foundations of (possibly non-convex) numerical abstractions to a total adherence to the best available practices in software development. Even though it is still not fully mature and functionally complete, the Parma Polyhedra Library already offers a combination of functionality, reliability, usability and performance that is not matched by similar, freely available libraries. In this paper, we present the main features of the current version of the library, emphasizing those that distinguish it from other similar libraries and those that are important for applications in the field of analysis and verification of hardware and software systems.

[BHZ07a] R. Bagnara, P. M. Hill, and E. Zaffanella. Applications of polyhedral computations to the analysis and verification of hardware and software systems. Quaderno 458, Dipartimento di Matematica, Università di Parma, Italy, 2007. Available at http://www.cs.unipr.it/Publications/. Also published as arXiv:cs.CG/0701122, available from http://arxiv.org/. [ .pdf ]
Convex polyhedra are the basis for several abstractions used in static analysis and computer-aided verification of complex and sometimes mission critical systems. For such applications, the identification of an appropriate complexity-precision trade-off is a particularly acute problem, so that the availability of a wide spectrum of alternative solutions is mandatory. We survey the range of applications of polyhedral computations in this area; give an overview of the different classes of polyhedra that may be adopted; outline the main polyhedral operations required by automatic analyzers and verifiers; and look at some possible combinations of polyhedra with other numerical abstractions that have the potential to improve the precision of the analysis. Areas where further theoretical investigations can result in important contributions are highlighted.

[BHZ07b] R. Bagnara, P. M. Hill, and E. Zaffanella. An improved tight closure algorithm for integer octagonal constraints. Quaderno 467, Dipartimento di Matematica, Università di Parma, Italy, 2007. Available at http://www.cs.unipr.it/Publications/. Also published as arXiv:0705.4618v2 [cs.DS], available from http://arxiv.org/. [ .pdf ]
Integer octagonal constraints (a.k.a. Unit Two Variables Per Inequality or UTVPI integer constraints) constitute an interesting class of constraints for the representation and solution of integer problems in the fields of constraint programming and formal analysis and verification of software and hardware systems, since they couple algorithms having polynomial complexity with a relatively good expressive power. The main algorithms required for the manipulation of such constraints are the satisfiability check and the computation of the inferential closure of a set of constraints. The latter is called tight closure to mark the difference with the (incomplete) closure algorithm that does not exploit the integrality of the variables. In this paper we present and fully justify an O(n^3) algorithm to compute the tight closure of a set of UTVPI integer constraints.
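As a toy illustration of the integrality-exploiting tightening step that distinguishes tight closure from plain rational closure (a hedged sketch with names of our own invention, not the paper's full O(n^3) algorithm):

```python
def tighten_unary(b):
    """For an integer variable x, the rational constraint 2*x <= b
    tightens to 2*x <= 2*floor(b/2)."""
    return 2 * (b // 2)

def combine_and_tighten(a, b):
    """From x + y <= a and x - y <= b one infers 2*x <= a + b;
    over the integers this yields x <= floor((a + b) / 2)."""
    return (a + b) // 2

assert tighten_unary(7) == 6           # 2x <= 7 becomes 2x <= 6
assert combine_and_tighten(3, 4) == 3  # 2x <= 7 becomes x <= 3
```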

[BHZ08a] R. Bagnara, P. M. Hill, and E. Zaffanella. An improved tight closure algorithm for integer octagonal constraints. In F. Logozzo, D. Peled, and L. Zuck, editors, Verification, Model Checking and Abstract Interpretation: Proceedings of the 9th International Conference (VMCAI 2008), volume 4905 of Lecture Notes in Computer Science, pages 8-21, San Francisco, USA, 2008. Springer-Verlag, Berlin. [ .pdf ]
Integer octagonal constraints (a.k.a. Unit Two Variables Per Inequality or UTVPI integer constraints) constitute an interesting class of constraints for the representation and solution of integer problems in the fields of constraint programming and formal analysis and verification of software and hardware systems, since they couple algorithms having polynomial complexity with a relatively good expressive power. The main algorithms required for the manipulation of such constraints are the satisfiability check and the computation of the inferential closure of a set of constraints. The latter is called tight closure to mark the difference with the (incomplete) closure algorithm that does not exploit the integrality of the variables. In this paper we present and fully justify an O(n^3) algorithm to compute the tight closure of a set of UTVPI integer constraints.

[BHZ08b] R. Bagnara, P. M. Hill, and E. Zaffanella. The Parma Polyhedra Library: Toward a complete set of numerical abstractions for the analysis and verification of hardware and software systems. Science of Computer Programming, 72(1-2):3-21, 2008. [ .pdf ]
Since its inception as a student project in 2001, initially just for the handling (as the name implies) of convex polyhedra, the Parma Polyhedra Library has been continuously improved and extended by joining scrupulous research on the theoretical foundations of (possibly non-convex) numerical abstractions to a total adherence to the best available practices in software development. Even though it is still not fully mature and functionally complete, the Parma Polyhedra Library already offers a combination of functionality, reliability, usability and performance that is not matched by similar, freely available libraries. In this paper, we present the main features of the current version of the library, emphasizing those that distinguish it from other similar libraries and those that are important for applications in the field of analysis and verification of hardware and software systems.

[BHZ10] R. Bagnara, P. M. Hill, and E. Zaffanella. Exact join detection for convex polyhedra and other numerical abstractions. Computational Geometry: Theory and Applications, 43(5):453-473, 2010. [ .pdf ]
Deciding whether the union of two convex polyhedra is itself a convex polyhedron is a basic problem in polyhedral computations, having important applications in the field of constrained control and in the synthesis, analysis, verification and optimization of hardware and software systems. In such application fields, though, general convex polyhedra are just one among many so-called numerical abstractions, which range from restricted families of (not necessarily closed) convex polyhedra to non-convex geometrical objects. We thus tackle the problem from an abstract point of view: for a wide range of numerical abstractions that can be modeled as bounded join-semilattices (that is, partial orders where any finite set of elements has a least upper bound), we show necessary and sufficient conditions for the equivalence between the lattice-theoretic join and the set-theoretic union. For the case of closed convex polyhedra (which, as far as we know, is the only one already studied in the literature), we improve upon the state of the art by providing a new algorithm with a better worst-case complexity. The results and algorithms presented for the other numerical abstractions are new to this paper. All the algorithms have been implemented, experimentally validated, and made available in the Parma Polyhedra Library.
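To make the lattice-join-versus-union question concrete, here is a hedged one-dimensional sketch on the abstraction of closed intervals (all names are ours; this is a toy illustration, not the paper's algorithm):

```python
def exact_interval_join(i1, i2):
    """For closed intervals on the reals, the lattice join (the smallest
    interval containing both) equals the set-theoretic union iff the
    intervals overlap or touch."""
    (l1, u1), (l2, u2) = i1, i2
    join = (min(l1, l2), max(u1, u2))
    exact = max(l1, l2) <= min(u1, u2)   # overlap/adjacency condition
    return join, exact

assert exact_interval_join((0, 2), (1, 5)) == ((0, 5), True)   # union is an interval
assert exact_interval_join((0, 1), (3, 4)) == ((0, 4), False)  # join overapproximates
```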

[BHZ09a] R. Bagnara, P. M. Hill, and E. Zaffanella. Applications of polyhedral computations to the analysis and verification of hardware and software systems. Theoretical Computer Science, 410(46):4672-4691, 2009. [ .pdf ]
Convex polyhedra are the basis for several abstractions used in static analysis and computer-aided verification of complex and sometimes mission critical systems. For such applications, the identification of an appropriate complexity-precision trade-off is a particularly acute problem, so that the availability of a wide spectrum of alternative solutions is mandatory. We survey the range of applications of polyhedral computations in this area; give an overview of the different classes of polyhedra that may be adopted; outline the main polyhedral operations required by automatic analyzers and verifiers; and look at some possible combinations of polyhedra with other numerical abstractions that have the potential to improve the precision of the analysis. Areas where further theoretical investigations can result in important contributions are highlighted.

[BHZ09d] R. Bagnara, P. M. Hill, and E. Zaffanella. Weakly-relational shapes for numeric abstractions: Improved algorithms and proofs of correctness. Formal Methods in System Design, 35(3):279-323, 2009. [ .pdf ]
Weakly-relational numeric constraints provide a compromise between complexity and expressivity that is adequate for several applications in the field of formal analysis and verification of software and hardware systems. We address the problems to be solved for the construction of full-fledged, efficient and provably correct abstract domains based on such constraints. We first propose to work with semantic abstract domains, whose elements are geometric shapes, instead of the (more concrete) syntactic abstract domains of constraint networks and matrices on which the previous proposals are based. This allows us to solve, once and for all, the problem whereby closure by entailment, a crucial operation for the realization of such domains, seemed to impede the realization of proper widening operators. In our approach, the implementation of widenings relies on the availability of an effective reduction procedure for the considered constraint description: one for the domain of bounded difference shapes already exists in the literature; we provide algorithms for the significantly more complex cases of rational and integer octagonal shapes. We also improve upon the state of the art by presenting, along with their proofs of correctness, closure by entailment algorithms of reduced complexity for domains based on rational and integer octagonal constraints. The consequences of implementing weakly-relational numerical domains with floating point numbers are also discussed.
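For the simpler domain of bounded difference constraints mentioned above, closure by entailment is classically an all-pairs shortest-path computation; a minimal sketch (our own names, not the paper's reduced-complexity algorithms for octagons):

```python
INF = float('inf')

def dbm_closure(m):
    """Closure by entailment for difference constraints x_i - x_j <= m[i][j]:
    all-pairs shortest paths (Floyd-Warshall). Returns None when a negative
    cycle makes the system unsatisfiable."""
    n = len(m)
    d = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    if any(d[i][i] < 0 for i in range(n)):
        return None
    return d

# x0 - x1 <= 3 and x1 - x2 <= 4 entail x0 - x2 <= 7
m = [[0, 3, INF],
     [INF, 0, 4],
     [INF, INF, 0]]
assert dbm_closure(m)[0][2] == 7
```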

[BHZ09b] R. Bagnara, P. M. Hill, and E. Zaffanella. Exact join detection for convex polyhedra and other numerical abstractions. Quaderno 492, Dipartimento di Matematica, Università di Parma, Italy, 2009. Available at http://www.cs.unipr.it/Publications/. A corrected and improved version (corrected an error in the statement of condition (3) of Theorem 3.6, typos corrected in statement and proof of Theorem 6.8) has been published in [BHZ09c]. [ .pdf ]
Deciding whether the union of two convex polyhedra is a convex polyhedron is a basic problem in polyhedral computation having important applications in the field of constrained control and in the synthesis, analysis, verification and optimization of hardware and software systems. In these application fields, though, general convex polyhedra are only one among many so-called numerical abstractions: these range from restricted families of (not necessarily closed) convex polyhedra to non-convex geometrical objects. We thus tackle the problem from an abstract point of view: for a wide range of numerical abstractions that can be modeled as bounded join-semilattices (that is, partial orders where any finite set of elements has a least upper bound), we show necessary and sufficient conditions for the equivalence between the lattice-theoretic join and the set-theoretic union. For the case of closed convex polyhedra (which, as far as we know, is the only one already studied in the literature), we improve upon the state of the art by providing a new algorithm with a better worst-case complexity. The results and algorithms presented for the other numerical abstractions are new to this paper. All the algorithms have been implemented, experimentally validated, and made available in the Parma Polyhedra Library.

[BHZ09c] R. Bagnara, P. M. Hill, and E. Zaffanella. Exact join detection for convex polyhedra and other numerical abstractions. Report arXiv:cs.CG/0904.1783, 2009. Available at http://arxiv.org/ and http://bugseng.com/products/ppl/. [ .pdf ]
Deciding whether the union of two convex polyhedra is itself a convex polyhedron is a basic problem in polyhedral computations, having important applications in the field of constrained control and in the synthesis, analysis, verification and optimization of hardware and software systems. In such application fields, though, general convex polyhedra are just one among many so-called numerical abstractions, which range from restricted families of (not necessarily closed) convex polyhedra to non-convex geometrical objects. We thus tackle the problem from an abstract point of view: for a wide range of numerical abstractions that can be modeled as bounded join-semilattices (that is, partial orders where any finite set of elements has a least upper bound), we show necessary and sufficient conditions for the equivalence between the lattice-theoretic join and the set-theoretic union. For the case of closed convex polyhedra (which, as far as we know, is the only one already studied in the literature), we improve upon the state of the art by providing a new algorithm with a better worst-case complexity. The results and algorithms presented for the other numerical abstractions are new to this paper. All the algorithms have been implemented, experimentally validated, and made available in the Parma Polyhedra Library.

[BK89] V. Balasundaram and K. Kennedy. A technique for summarizing data access and its use in parallelism enhancing transformations. In B. Knobe, editor, Proceedings of the ACM SIGPLAN'89 Conference on Programming Language Design and Implementation (PLDI), volume 24(7) of ACM SIGPLAN Notices, pages 41-53, Portland, Oregon, USA, 1989. ACM Press.
[BFT00] A. Bemporad, K. Fukuda, and F. D. Torrisi. Convexity recognition of the union of polyhedra. Report AUT00-13, Automatic Control Laboratory, ETHZ, Zurich, Switzerland, 2000. [ http ]
In this paper we consider the following basic problem in polyhedral computation: Given two polyhedra in R^d, P and Q, decide whether their union is convex, and, if so, compute it. We consider the three natural specializations of the problem: (1) when the polyhedra are given by half-spaces (H-polyhedra), (2) when they are given by vertices and extreme rays (V-polyhedra), and (3) when both H- and V-polyhedral representations are available. Both the bounded (polytopes) and the unbounded case are considered. We show that the first two problems are polynomially solvable, and that the third problem is strongly-polynomially solvable.

[BFT01] A. Bemporad, K. Fukuda, and F. D. Torrisi. Convexity recognition of the union of polyhedra. Computational Geometry: Theory and Applications, 18(3):141-154, 2001.
In this paper we consider the following basic problem in polyhedral computation: Given two polyhedra in R^d, P and Q, decide whether their union is convex, and, if so, compute it. We consider the three natural specializations of the problem: (1) when the polyhedra are given by halfspaces (H-polyhedra), (2) when they are given by vertices and extreme rays (V-polyhedra), and (3) when both H- and V-polyhedral representations are available. Both the bounded (polytopes) and the unbounded case are considered. We show that the first two problems are polynomially solvable, and that the third problem is strongly-polynomially solvable.

[BFM11] M. Benerecetti, M. Faella, and S. Minopoli. Towards efficient exact synthesis for linear hybrid systems. In Proceedings of 2nd International Symposium on Games, Automata, Logics and Formal Verification (GandALF 2011), volume 54 of Electronic Proceedings in Theoretical Computer Science, pages 263-277, Minori, Amalfi Coast, Italy, 2011.
We study the problem of automatically computing the controllable region of a Linear Hybrid Automaton, with respect to a safety objective. We describe the techniques that are needed to effectively and efficiently implement a recently-proposed solution procedure, based on polyhedral abstractions of the state space. Supporting experimental results are presented, based on an implementation of the proposed techniques on top of the tool PHAVer.

[BFM13] M. Benerecetti, M. Faella, and S. Minopoli. Automatic synthesis of switching controllers for linear hybrid systems: Safety control. Theoretical Computer Science, 493:116-138, 2013.
In this paper we study the problem of automatically generating switching controllers for the class of Linear Hybrid Automata, with respect to safety objectives. While the same problem has been already considered in the literature, no sound and complete solution has been provided so far. We identify and solve inaccuracies contained in previous characterizations of the problem, providing a sound and complete symbolic fixpoint procedure to compute the set of states from which a controller can keep the system in a given set of desired states. While the overall procedure may not terminate, we prove the termination of each iteration, thus paving the way to an effective implementation. The techniques needed to effectively and efficiently implement the proposed solution procedure, based on polyhedral abstractions of the state space, are thoroughly illustrated and discussed. Finally, some supporting and promising experimental results, based on the implementation of the proposed techniques on top of the tool PHAVer, are presented.

[BJT99] F. Besson, T. P. Jensen, and J.-P. Talpin. Polyhedral analysis for synchronous languages. In A. Cortesi and G. Filé, editors, Static Analysis: Proceedings of the 6th International Symposium, volume 1694 of Lecture Notes in Computer Science, pages 51-68, Venice, Italy, 1999. Springer-Verlag, Berlin.
We define an operational semantics for the Signal language and design an analysis which allows one to verify properties pertaining to the relation between values of the numeric and boolean variables of a reactive system. A distinguishing feature of the analysis is that it is expressed and proved correct with respect to the source program rather than on an intermediate representation of the program. The analysis calculates a safe approximation to the set of reachable states by a symbolic fixed point computation in the domain of convex polyhedra, using a novel widening operator based on the convex hull representation of polyhedra.

[BA05] J. M. Bjørndalen and O. Anshus. Lessons learned in benchmarking - Floating point benchmarks: Can you trust them? In Proceedings of the Norsk informatikkonferanse 2005 (NIK 2005), pages 89-100, Bergen, Norway, 2005. Tapir Akademisk Forlag. [ .pdf ]
Benchmarks are important tools for understanding the implications of design choices for systems, and for studying increasingly complex hardware architectures and software systems. One of the assumptions for benchmarking within systems research seems to be that the execution time of floating point operations does not change much with different input values. We report on a problem where a textbook benchmark showed significant variation in execution time depending on the input values, and how a small fraction of denormalized floating point values (a representation automatically used by the CPU to represent values close to zero) in the benchmark could lead to the wrong conclusions about the relative efficiency of PowerPC and Intel P4 machines. Furthermore, a parallel version of the same benchmark is demonstrated to incorrectly indicate scalability problems in the application or communication subsystem. There is significant overhead in handling these exceptions on-chip on modern Intel hardware, even if the program can continue uninterrupted. We have observed that the execution time of benchmarks can increase by up to two orders of magnitude. In one benchmark, 3.88% denormalized numbers in a matrix slowed down the benchmark by a factor of 3.83. We suggest some remedies and guidelines for avoiding the problem.
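The denormalized (subnormal) values discussed here are easy to produce; a small illustration, assuming IEEE 754 double precision (the timing effects are hardware-dependent and not shown):

```python
import sys

# Subnormal (denormalized) doubles are nonzero values smaller than the
# smallest *normal* double; many CPUs handle them much more slowly.
smallest_normal = sys.float_info.min      # 2**-1022, about 2.2e-308
subnormal = smallest_normal / 2**10       # 2**-1032: representable only as a subnormal

assert 0.0 < subnormal < smallest_normal
assert subnormal * 2**10 == smallest_normal  # exact: only the exponent shifted
```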

[BCC+02] B. Blanchet, P. Cousot, R. Cousot, J. Feret, L. Mauborgne, A. Miné, D. Monniaux, and X. Rival. Design and implementation of a special-purpose static program analyzer for safety-critical real-time embedded software. In T. Æ. Mogensen, D. A. Schmidt, and I. Hal Sudborough, editors, The Essence of Computation, Complexity, Analysis, Transformation. Essays Dedicated to Neil D. Jones [on occasion of his 60th birthday], volume 2566 of Lecture Notes in Computer Science, pages 85-108. Springer-Verlag, Berlin, 2002.
We report on a successful preliminary experience in the design and implementation of a special-purpose Abstract Interpretation based static program analyzer for the verification of safety-critical embedded real-time software. The analyzer is both precise (zero false alarms in the considered experiment) and efficient (less than one minute of analysis for 10,000 lines of code). Even though it is based on a simple interval analysis, many features have been added to obtain the desired precision: expansion of small arrays, widening with several thresholds, loop unrolling, trace partitioning, relations between loop counters and other variables. The efficiency of the tool mainly comes from a clever representation of abstract environments based on balanced binary search trees.

[BGP99] T. Bultan, R. Gerber, and W. Pugh. Model-checking concurrent systems with unbounded integer variables: Symbolic representations, approximations, and experimental results. ACM Transactions on Programming Languages and Systems, 21(4):747-789, 1999. [ .ps ]
Model checking is a powerful technique for analyzing large, finite-state systems. In an infinite state system, however, many basic properties are undecidable. In this article, we present a new symbolic model checker which conservatively evaluates safety and liveness properties on programs with unbounded integer variables. We use Presburger formulas to symbolically encode a program's transition system, as well as its model-checking computations. All fixpoint calculations are executed symbolically, and their convergence is guaranteed by using approximation techniques. We demonstrate the promise of this technology on some well-known infinite-state concurrency problems.

[Che64] N. V. Chernikova. Algorithm for finding a general formula for the non-negative solutions of a system of linear equations. U.S.S.R. Computational Mathematics and Mathematical Physics, 4(4):151-158, 1964.
[Che65] N. V. Chernikova. Algorithm for finding a general formula for the non-negative solutions of a system of linear inequalities. U.S.S.R. Computational Mathematics and Mathematical Physics, 5(2):228-233, 1965.
The present note proposes a computational scheme for finding a general formula for the non-negative solutions of a system of linear inequalities analogous to the scheme described in [Che64] for finding a general formula for the non-negative solutions of a system of linear equations.

[Che68] N. V. Chernikova. Algorithm for discovering the set of all solutions of a linear programming problem. U.S.S.R. Computational Mathematics and Mathematical Physics, 8(6):282-293, 1968.
In this paper two versions of a canonical algorithm for discovering all the optimal solutions of a linear programming problem with the condition of non-negativeness of the variables are presented: the first for the case of canonical notation, the second for the standard notation.

[CC76] P. Cousot and R. Cousot. Static determination of dynamic properties of programs. In B. Robinet, editor, Proceedings of the Second International Symposium on Programming, pages 106-130, Paris, France, 1976. Dunod, Paris, France. [ .pdf ]
In high level languages, compile time type verifications are usually incomplete, and dynamic coherence checks must be inserted in object code. For example, in PASCAL one must dynamically verify that the values assigned to subrange type variables, or index expressions lie between two bounds, or that pointers are not nil, ... We present here a general algorithm allowing most of these certifications to be done at compile time.

[CC79] P. Cousot and R. Cousot. Systematic design of program analysis frameworks. In Proceedings of the Sixth Annual ACM Symposium on Principles of Programming Languages, pages 269-282, San Antonio, TX, USA, 1979. ACM Press. [ .pdf ]
Semantic analysis of programs is essential in optimizing compilers and program verification systems. It encompasses data flow analysis, data type determination, generation of approximate invariant assertions, etc. Several recent papers (among others Cousot & Cousot[77a], Graham & Wegman[76], Kam & Ullmann[76], Kildall[73], Rosen[78], Tarjan[76], Wegbreit[75]) have introduced abstract approaches to program analysis which are tantamount to the use of a program analysis framework (A, t, γ) where A is a lattice of (approximate) assertions, t is an (approximate) predicate transformer and γ is an often implicit function specifying the meaning of the elements of A. This paper is devoted to the systematic and correct design of program analysis frameworks with respect to a formal semantics.

[CC92] P. Cousot and R. Cousot. Comparing the Galois connection and widening/narrowing approaches to abstract interpretation. In M. Bruynooghe and M. Wirsing, editors, Proceedings of the 4th International Symposium on Programming Language Implementation and Logic Programming, volume 631 of Lecture Notes in Computer Science, pages 269-295, Leuven, Belgium, 1992. Springer-Verlag, Berlin. [ .pdf ]
The use of infinite abstract domains with widening and narrowing for accelerating the convergence of abstract interpretations is shown to be more powerful than the Galois connection approach restricted to finite lattices (or lattices satisfying the chain condition).
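A minimal sketch of the widening/narrowing idea on the interval domain, for the loop `i = 0; while i < 100: i += 1` (the standard textbook interval widening, with names of our own; nothing here is specific to the paper):

```python
NEG, POS = float('-inf'), float('inf')

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):
    """Standard interval widening: bounds that are still growing jump to infinity."""
    return (a[0] if a[0] <= b[0] else NEG,
            a[1] if a[1] >= b[1] else POS)

def f(x):
    """Abstract transfer at the loop head of `i = 0; while i < 100: i += 1`:
    the entry state [0,0] joined with (x meet i <= 99) incremented."""
    guarded = (x[0], min(x[1], 99))
    return join((0, 0), (guarded[0] + 1, guarded[1] + 1))

x = (0, 0)
while True:                     # ascending iteration with widening
    nxt = widen(x, f(x))
    if nxt == x:
        break
    x = nxt
assert x == (0, POS)            # widening overshoots to [0, +inf)
x = f(x)                        # one narrowing (descending) step
assert x == (0, 100)            # recovers the exact bound
```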

[CH78] P. Cousot and N. Halbwachs. Automatic discovery of linear restraints among variables of a program. In Conference Record of the Fifth Annual ACM Symposium on Principles of Programming Languages, pages 84-96, Tucson, Arizona, 1978. ACM Press. [ .pdf ]
The model of abstract interpretation of programs developed by Cousot & Cousot [1976] and Cousot & Cousot [1977] is applied to the static determination of linear equality or inequality relations among variables of programs.

[Dan63] G. B. Dantzig. Linear Programming and Extensions. Princeton University Press, Princeton, NJ, 1963.
[Fea88] P. Feautrier. Parametric integer programming. RAIRO Recherche Opérationnelle, 22(3):243-268, 1988.
When analysing computer programs (especially numerical programs in which arrays are used extensively), one is often confronted with integer programming problems. These problems have three peculiarities: feasible points are ranked according to lexicographic order rather than the usual linear economic function; the feasible set depends on integer parameters; one is interested only in exact solutions. The difficulty is somewhat alleviated by the fact that problem sizes are usually quite small. In this paper we show that: the classical simplex algorithm has no difficulty in handling lexicographic ordering; the algorithm may be executed in symbolic mode, thus giving the solution of continuous parametric problems; the method may be extended to problems in integers. We prove that the resulting algorithm always terminates and give an estimate of its complexity.

[FCB07] P. Feautrier, J.-F. Collard, and C. Bastoul. PIP/PipLib: A Solver for Parametric Integer Programming Problems, 5.0 edition, July 2007. Distributed with PIP/PipLib 1.4.0.
This manual is for PIP and PipLib version 1.4.0, a software tool that solves Parametric Integer Programming problems. That is, PIP finds the lexicographic minimum of the set of integer points which lie inside a convex polyhedron, when that polyhedron depends linearly on one or more integral parameters.

[Fuk98] K. Fukuda. Polyhedral computation FAQ. Swiss Federal Institute of Technology, Lausanne and Zurich, Switzerland, available at http://www.ifor.math.ethz.ch/~fukuda/polyfaq/polyfaq.html, 1998.
This is an FAQ to answer some basic questions arising from certain geometric computations in general dimensional (mostly Euclidean) space. The main areas to be covered are the convex hull computation of a finite point set, the vertex enumeration for a convex polytope, and the computation of Voronoi diagrams and Delaunay triangulations, in R^d. We illustrate typical solution processes with small examples and publicly available codes such as cdd+ and lrs.

[FP96] K. Fukuda and A. Prodon. Double description method revisited. In M. Deza, R. Euler, and Y. Manoussakis, editors, Combinatorics and Computer Science, 8th Franco-Japanese and 4th Franco-Chinese Conference, Brest, France, July 3-5, 1995, Selected Papers, volume 1120 of Lecture Notes in Computer Science, pages 91-111. Springer-Verlag, Berlin, 1996. [ .ps.gz ]
The double description method is a simple and useful algorithm for enumerating all extreme rays of a general polyhedral cone in R^d, despite the fact that we can hardly state any interesting theorems on its time and space complexities. In this paper, we reinvestigate this method, introduce some new ideas for efficient implementations, and show some empirical results indicating its practicality in solving highly degenerate problems.
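The core refinement step of the double description method can be sketched in a few lines (a deliberate simplification: the adjacency test that keeps the description irredundant, a central topic of the paper, is omitted here, so redundant rays may appear on larger inputs; all names are ours):

```python
def add_halfspace(rays, a):
    """One incremental step of the double description method: refine the
    generating rays of a cone with the halfspace a . x >= 0. Rays are split
    into positive, negative and zero parts; each positive/negative pair is
    combined into a new ray lying on the hyperplane a . x = 0."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    pos = [r for r in rays if dot(a, r) > 0]
    neg = [r for r in rays if dot(a, r) < 0]
    zero = [r for r in rays if dot(a, r) == 0]
    new = [tuple(dot(a, p) * n[i] - dot(a, n) * p[i] for i in range(len(p)))
           for p in pos for n in neg]
    return pos + zero + new

# First quadrant of R^2 (rays (1,0) and (0,1)), cut with x - y >= 0:
assert set(add_halfspace([(1, 0), (0, 1)], (1, -1))) == {(1, 0), (1, 1)}
```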

[GJ00] E. Gawrilow and M. Joswig. polymake: A framework for analyzing convex polytopes. In G. Kalai and G. M. Ziegler, editors, Polytopes - Combinatorics and Computation, pages 43-74. Birkhäuser, 2000.
polymake is a software tool designed for the algorithmic treatment of polytopes and polyhedra. We give an overview of its functionality as well as its structure. This paper can be seen as a first approximation to a polymake handbook. The tutorial starts with the very basics and ends with a few polymake applications to research problems. Then we present the main features of the system, including the interfaces to other software products.

[GJ01] E. Gawrilow and M. Joswig. polymake: An approach to modular software design in computational geometry. In Proceedings of the 17th Annual Symposium on Computational Geometry, pages 222-231, Medford, MA, USA, 2001. ACM.
polymake is a software package designed for the study of the combinatorics and the geometry of convex polytopes and polyhedra. It offers access to a wide variety of algorithms and tools within a common framework. As a key design feature, it allows the functionality of a great variety of other software packages to be incorporated in a modular way.

[GDD+04] D. Gopan, F. DiMaio, N. Dor, T. W. Reps, and M. Sagiv. Numeric domains with summarized dimensions. In K. Jensen and A. Podelski, editors, Proceedings of the 10th International Conference Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2004), volume 2988 of Lecture Notes in Computer Science, pages 512-529, Barcelona, Spain, 2004. Springer-Verlag, Berlin.
We introduce a systematic approach to designing summarizing abstract numeric domains from existing numeric domains. Summarizing domains use summary dimensions to represent potentially unbounded collections of numeric objects. Such domains are of benefit to analyses that verify properties of systems with an unbounded number of numeric objects, such as shape analysis, or systems in which the number of numeric objects is bounded, but large.

[Gra91] P. Granger. Static analysis of linear congruence equalities among variables of a program. In S. Abramsky and T. S. E. Maibaum, editors, TAPSOFT'91: Proceedings of the International Joint Conference on Theory and Practice of Software Development, Volume 1: Colloquium on Trees in Algebra and Programming (CAAP'91), volume 493 of Lecture Notes in Computer Science, pages 169-192, Brighton, UK, 1991. Springer-Verlag, Berlin.
In this paper, a new kind of static (or semantic) analysis is defined: congruence analysis, which is conceived to discover properties of the following type: “the integer valued variable X is congruent to c modulo m”, where c and m are automatically determined integers. This analysis is then related to an algebraic framework and wholly characterized. Moreover, we show by an example how it can be useful for automatic vectorization. Finally, we present some extensions of it, namely its combination with the analysis of bounds, and also some analyses defined when the modulus of congruences is given a priori.

[Gra97] P. Granger. Static analyses of congruence properties on rational numbers (extended abstract). In P. Van Hentenryck, editor, Static Analysis: Proceedings of the 4th International Symposium, volume 1302 of Lecture Notes in Computer Science, pages 278-292, Paris, France, 1997. Springer-Verlag, Berlin.
We present several new static analysis frameworks applying to rational numbers, and more precisely, designed for discovering congruence properties satisfied by rational (or real) variables of programs. Two of them deal with additive congruence properties and generalize linear equation analysis [M. Karr, Affine Relationships among Variables of a Program, Acta Informatica, 6:133-151, 1976] and congruence analysis on integer numbers [P. Granger, Static Analysis of Arithmetical Congruences, International Journal of Computer Mathematics, 30:165-190, 1989], [P. Granger, Static Analysis of Linear Congruence Equalities among Variables of a Program, TAPSOFT'91: Proceedings of the International Joint Conference on Theory and Practice of Software Development, Volume 1: Colloquium on Trees in Algebra and Programming (CAAP'91), Lecture Notes in Computer Science, 493, pp. 169-192]. The others are based on multiplicative congruence properties in the set of positive rational numbers. Among other potential applications, we exemplify the interest of all these analyses for optimizing the representation of rational or real valued variables.

[GR77] D. Goldfarb and J. K. Reid. A practical steepest-edge simplex algorithm. Mathematical Programming, 12(1):361-371, 1977.
It is shown that suitable recurrences may be used in order to implement in practice the steepest-edge simplex linear programming algorithm. In this algorithm each iteration is along an edge of the polytope of feasible solutions on which the objective function decreases most rapidly with respect to distance in the space of all the variables. Results of computer comparisons on medium-scale problems indicate that the resulting algorithm requires fewer iterations but about the same overall time as the algorithm of Harris [8], which may be regarded as approximating the steepest-edge algorithm. Both show a worthwhile advantage over the standard algorithm.

[Hal79] N. Halbwachs. Détermination Automatique de Relations Linéaires Vérifiées par les Variables d'un Programme. Thèse de 3ème cycle d'informatique, Université scientifique et médicale de Grenoble, Grenoble, France, March 1979.
[Hal93] N. Halbwachs. Delay analysis in synchronous programs. In C. Courcoubetis, editor, Computer Aided Verification: Proceedings of the 5th International Conference (CAV'93), volume 697 of Lecture Notes in Computer Science, pages 333-346, Elounda, Greece, 1993. Springer-Verlag, Berlin.
Linear relation analysis [CH78, Hal79] was proposed a long time ago as an abstract interpretation that permits the discovery of linear relations invariantly satisfied by the variables of a program. Here, we propose to apply this general method to variables used to count delays in synchronous programs. The “regular” behavior of these counters makes the results of the analysis especially precise. These results can be applied to code optimization and to the verification of real-time properties of programs.

[HPR94] N. Halbwachs, Y.-E. Proy, and P. Raymond. Verification of linear hybrid systems by means of convex approximations. In B. Le Charlier, editor, Static Analysis: Proceedings of the 1st International Symposium, volume 864 of Lecture Notes in Computer Science, pages 223-237, Namur, Belgium, 1994. Springer-Verlag, Berlin. [ .html ]
We present a new application of the abstract interpretation by means of convex polyhedra, to a class of hybrid systems, i.e., systems involving both discrete and continuous variables. The result is an efficient automatic tool for approximate, but conservative, verification of reachability properties of these systems.

[HKP95] N. Halbwachs, A. Kerbrat, and Y.-E. Proy. POLyhedra INtegrated Environment. Verimag, France, version 1.0 of POLINE edition, September 1995. Documentation taken from source code.
[HPR97] N. Halbwachs, Y.-E. Proy, and P. Roumanoff. Verification of real-time systems using linear relation analysis. Formal Methods in System Design, 11(2):157-185, 1997.
Linear Relation Analysis [11] is an abstract interpretation devoted to the automatic discovery of invariant linear inequalities among numerical variables of a program. In this paper, we apply such an analysis to the verification of quantitative time properties of two kinds of systems: synchronous programs and linear hybrid systems.

[HMT71] L. Henkin, J. D. Monk, and A. Tarski. Cylindric Algebras: Part I. North-Holland, Amsterdam, 1971.
[HH95] T. A. Henzinger and P.-H. Ho. A note on abstract interpretation strategies for hybrid automata. In P. J. Antsaklis, W. Kohn, A. Nerode, and S. Sastry, editors, Hybrid Systems II, volume 999 of Lecture Notes in Computer Science, pages 252-264. Springer-Verlag, Berlin, 1995. [ .html ]
We report on several abstract interpretation strategies that are designed to improve the performance of HyTech, a symbolic model checker for linear hybrid systems. We (1) simultaneously compute the target region from different directions, (2) conservatively approximate the target region by dropping constraints, and (3) iteratively refine the approximation until sufficient precision is obtained. We consider the standard abstract convex-hull operator and a novel abstract extrapolation operator.

[HPWT01] T. A. Henzinger, J. Preussig, and H. Wong-Toi. Some lessons from the HyTech experience. In Proceedings of the 40th Annual Conference on Decision and Control, pages 2887-2892. IEEE Computer Society Press, 2001. [ .html ]
We provide an overview of the current status of the tool HyTech, and reflect on some of the lessons learned from our experiences with the tool. HyTech is a symbolic model checker for mixed discrete-continuous systems that are modeled as automata with piecewise-constant polyhedral differential inclusions. The use of a formal input language and automated procedures for state-space traversal lay the foundation for formally verifying properties of hybrid dynamical systems. We describe some recent experiences analyzing three hybrid systems. We point out the successes and limitations of the tool. The analysis procedure has been extended in a number of ways to address some of the tool's shortcomings. We evaluate these extensions, and conclude with some desiderata for verification tools for hybrid systems.

[HHL90] L. Huelsbergen, D. Hahn, and J. Larus. Exact dependence analysis using data access descriptors. Technical Report 945, Department of Computer Science, University of Wisconsin, Madison, 1990.
Data Access Descriptors provide a method for summarizing and representing the portion of an array accessed by a program statement. A Data Access Descriptor does not, however, indicate if its characterization is exact or conservative, nor does it record the temporal order of accesses. Exactness is necessary to expose maximal parallelism. Temporal information is necessary to calculate direction vectors for inter-loop dependences. This paper presents an extension to basic Data Access Descriptors that identifies exact representations. We illustrate the value of extended Data Access Descriptors by showing how to calculate information typically provided by direction vectors and by refining potential conflicts between statements with array kill information.

[JMSY94] J. Jaffar, M. J. Maher, P. J. Stuckey, and R. H. C. Yap. Beyond finite domains. In A. Borning, editor, Principles and Practice of Constraint Programming: Proceedings of the Second International Workshop, volume 874 of Lecture Notes in Computer Science, pages 86-94, Rosario, Orcas Island, Washington, USA, 1994. Springer-Verlag, Berlin.
[KBB+06] L. Khachiyan, E. Boros, K. Borys, K. Elbassioni, and V. Gurvich. Generating all vertices of a polyhedron is hard. Discrete and Computational Geometry, 2006. Invited contribution.
We show that generating all negative cycles of a weighted graph is a hard enumeration problem, in both the directed and undirected cases. More precisely, given a family of such cycles, it is an NP-complete problem to decide whether this family can be extended or there are no other negative directed cycles in the graph, implying that directed negative cycles cannot be generated in polynomial output time, unless P=NP. As a corollary, we solve in the negative two well-known generating problems from linear programming: (i) Given an (infeasible) system of linear inequalities, generating all minimal infeasible subsystems is hard. Yet, for generating maximal feasible subsystems the complexity remains open. (ii) Given a (feasible) system of linear inequalities, generating all vertices of the corresponding polyhedron is hard. Yet, in the case of bounded polyhedra the complexity remains open.

[NJPF99] T. Nakanishi, K. Joe, C. D. Polychronopoulos, and A. Fukuda. The modulo interval: A simple and practical representation for program analysis. In Proceedings of the 1999 International Conference on Parallel Architectures and Compilation Techniques, pages 91-96, Newport Beach, California, USA, 1999. IEEE Computer Society.
In this paper, the modulo interval, an extension of the traditional interval on real numbers, and its useful mathematical properties are presented as a representation for program analysis. With only two additional parameters beyond the interval on real numbers, namely the modulus and the residue, the modulo interval can represent information on programs having cyclicity, such as loop indices and array subscripts, with reasonable complexity and greater accuracy. Well-defined arithmetic and set operations on the modulo interval make the implementation of compilers simple and reliable. Moreover, the application of the modulo interval to program analysis for parallelizing compilers is discussed in this paper.
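As a sketch of the idea (an illustrative reconstruction, not the authors' implementation; the class and method names are invented), a modulo interval pairs an ordinary interval [lo, hi] with a modulus m and residue r, denoting the set {x : lo ≤ x ≤ hi and x ≡ r (mod m)}:

```python
from math import gcd

class ModuloInterval:
    """Illustrative sketch of the set {x : lo <= x <= hi and x % m == r}."""

    def __init__(self, lo, hi, m, r):
        self.lo, self.hi, self.m, self.r = lo, hi, m, r % m

    def __contains__(self, x):
        return self.lo <= x <= self.hi and x % self.m == self.r

    def add(self, other):
        # Abstract addition: bounds add, and the sum is only guaranteed
        # to be congruent modulo gcd(m1, m2).
        m = gcd(self.m, other.m)
        return ModuloInterval(self.lo + other.lo, self.hi + other.hi,
                              m, (self.r + other.r) % m)

# Loop indices i = 0, 3, 6, ..., 99 are captured exactly,
# unlike with a plain interval [0, 99]:
indices = ModuloInterval(0, 99, 3, 0)
```

A plain interval must report 7 as a possible index, while the modulo interval excludes it at the cost of just the two extra parameters.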

[NF01] T. Nakanishi and A. Fukuda. Modulo interval arithmetic and its application to program analysis. Transactions of Information Processing Society of Japan, 42(4):829-837, 2001.
[Jea02] B. Jeannet. Convex Polyhedra Library, release 1.1.3c edition, March 2002. Documentation of the “New Polka” library.
[Kuh56] H. W. Kuhn. Solvability and consistency for linear equations and inequalities. American Mathematical Monthly, 63:217-232, 1956.
[Le 92] H. Le Verge. A note on Chernikova's algorithm. Publication interne 635, IRISA, Campus de Beaulieu, Rennes, France, 1992. [ Source code ]
This paper describes an implementation of Chernikova's algorithm for finding an irredundant set of vertices for a given polyhedron defined by a set of linear inequalities and equations. This algorithm can also be used for the dual problem: given a set of extremal rays and vertices, find the associated irredundant set of facet supporting hyperplanes. The method is an extension of Chernikova's initial algorithm (for the non-negative domain), and is mainly based on the polyhedral cone duality principle. A new enhancement for extremal ray detection is presented, together with its effects on a class of polyhedra.

[HLW94] H. Le Verge, V. Van Dongen, and D. K. Wilde. Loop nest synthesis using the polyhedral library. Publication interne 830, IRISA, Campus de Beaulieu, Rennes, France, 1994.
A new method to synthesize loop nests, given a polyhedral domain, the context domain, and the loop nesting order, is described. The method is based on functions in the IRISA polyhedral library.

[LW97] V. Loechner and D. K. Wilde. Parameterized polyhedra and their vertices. International Journal of Parallel Programming, 25(6):525-549, 1997.
Algorithms specified for parametrically sized problems are more general purpose and more reusable than algorithms for fixed sized problems. For this reason, there is a need for representing and symbolically analyzing linearly parameterized algorithms. An important class of parallel algorithms can be described as systems of parameterized affine recurrence equations (PAREs). In this representation, linearly parameterized polyhedra are used to describe the domains of variables. This paper describes an algorithm which computes the set of parameterized vertices of a polyhedron, given its representation as a system of parameterized inequalities. This provides an important tool for the symbolic analysis of the parameterized domains used to define variables and computation domains in PAREs. A library of operations on parameterized polyhedra based on the Polyhedral Library has been written in C and is freely distributed.

[Loe99] V. Loechner. PolyLib: A library for manipulating parameterized polyhedra. Available at http://icps.u-strasbg.fr/~loechner/polylib/, March 1999. Declares itself to be a continuation of [Wil93].
[Mas92] F. Masdupuy. Array operations abstraction using semantic analysis of trapezoid congruences. In Proceedings of the 6th ACM International Conference on Supercomputing, pages 226-235, Washington, DC, USA, 1992. ACM Press.
With the growing use of vector supercomputers, efficient and accurate data structure analyses are needed. What we propose in this paper is to use the quite general framework of Cousot's abstract interpretation for the particular analysis of multi-dimensional array indexes. Since such indexes are integer tuples, a relational integer analysis is first required. This analysis results from a combination of existing ones that are interval and congruence based. Two orthogonal problems are directly concerned with the results of such an analysis: parallelization/vectorization via dependence analysis, and the data locality problem arising in array storage management. After introducing the analysis algorithm, this paper describes on a complete example how to use it in order to optimize array storage.

[Mas93] F. Masdupuy. Array Indices Relational Semantic Analysis Using Rational Cosets and Trapezoids. Thèse d'informatique, École Polytechnique, Palaiseau, France, December 1993.
Semantic analysis of program numerical variables consists in statically and automatically discovering properties verified at execution time. Different sets of properties (equality, inequality and congruence relations) have already been studied. This thesis proposes a generalization of some of the above patterns. More specifically, abstract interpretation is used to design, on the one hand, a set of properties generalizing intervals and cosets on Z and, on the other hand, a generalization of trapezoids and linear congruence equation systems on Zn. A rational abstraction of these properties is defined to get safe approximations, with a polynomial complexity in the number of variables considered, of the operators on integer properties. These analyses, in general more precise than the combination of the analyses they derive from, allow one to dynamically choose the kind of properties (inequality or congruence relations) leading to relevant information for the program under consideration. The relational analysis described corresponds to numerous patterns encountered in the field of scientific computation. It is very well adapted to the analysis of array index variables and also to the abstract description of integer arrays.

[Min01a] A. Miné. A new numerical abstract domain based on difference-bound matrices. In O. Danvy and A. Filinski, editors, Proceedings of the 2nd Symposium on Programs as Data Objects (PADO 2001), volume 2053 of Lecture Notes in Computer Science, pages 155-172, Aarhus, Denmark, 2001. Springer-Verlag, Berlin.
This paper presents a new numerical domain for static analysis by abstract interpretation. This domain allows us to represent invariants of the form (x − y ≤ c) and (±x ≤ c), where x and y are variable values and c is an integer or real constant. Abstract elements are represented by Difference-Bound Matrices, widely used by model-checkers, but we had to design new operators to meet the needs of abstract interpretation. The result is a complete lattice of infinite height featuring widening, narrowing and common transfer functions. We focus on giving an efficient O(n²) representation and graph-based O(n³) algorithms, where n is the number of variables, and claim that this domain always performs more precisely than the well-known interval domain. To illustrate the precision/cost tradeoff of this domain, we have implemented simple abstract interpreters for toy imperative and parallel languages which allowed us to prove some non-trivial algorithms correct.
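The Difference-Bound Matrix representation mentioned in the abstract can be sketched as follows (a hypothetical minimal version under the common convention that entry m[i][j] bounds x_j − x_i; not code from the paper). Shortest-path closure computes the tightest implied bounds, the canonical form on which the domain's comparison and join operators rely:

```python
import math

INF = math.inf  # no constraint between the two variables

def closure(m):
    """Floyd-Warshall shortest-path closure of a difference-bound matrix.
    Returns the tightest equivalent DBM, or None if the constraints are
    infeasible (i.e. the potential graph has a negative cycle)."""
    n = len(m)
    c = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                c[i][j] = min(c[i][j], c[i][k] + c[k][j])
    for i in range(n):
        if c[i][i] < 0:
            return None  # negative cycle: the abstract element is empty
        c[i][i] = 0
    return c

# x1 - x0 <= 3 and x2 - x1 <= 4 imply the tighter bound x2 - x0 <= 7:
m = [[0,   3,   INF],
     [INF, 0,   4],
     [INF, INF, 0]]
```

On this closed form, inclusion is a pointwise comparison of matrices and join a pointwise maximum, which is what makes the domain cheap relative to general polyhedra.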

[Min01b] A. Miné. The octagon abstract domain. In Proceedings of the Eighth Working Conference on Reverse Engineering (WCRE'01), pages 310-319, Stuttgart, Germany, 2001. IEEE Computer Society Press.
This article presents a new numerical abstract domain for static analysis by abstract interpretation. It extends our previously proposed DBM-based numerical abstract domain and allows us to represent invariants of the form (±x ±y ≤ c), where x and y are program variables and c is a real constant. We focus on giving an efficient representation based on Difference-Bound Matrices (O(n²) memory cost, where n is the number of variables) and graph-based algorithms for all common abstract operators (O(n³) time cost). This includes a normal form algorithm to test equivalence of representations and a widening operator to compute least fixpoint approximations.

[Min02] A. Miné. A few graph-based relational numerical abstract domains. In M. V. Hermenegildo and G. Puebla, editors, Static Analysis: Proceedings of the 9th International Symposium, volume 2477 of Lecture Notes in Computer Science, pages 117-132, Madrid, Spain, 2002. Springer-Verlag, Berlin.
This article presents the systematic design of a class of relational numerical abstract domains from non-relational ones. Constructed domains represent sets of invariants of the form (vj − vi ∈ C), where vj and vi are two variables, and C lives in an abstraction of P(Z), P(Q), or P(R). We will call this family of domains weakly relational domains. The underlying concept allowing this construction is an extension of potential graphs and shortest-path closure algorithms in exotic-like algebras. Example constructions are given in order to retrieve well-known domains as well as new ones. Such domains can then be used in the Abstract Interpretation framework in order to design various static analyses. A major benefit of this construction is its modularity, allowing one to quickly implement new abstract domains from existing ones.

[Min04] A. Miné. Relational abstract domains for the detection of floating-point run-time errors. In D. Schmidt, editor, Programming Languages and Systems: Proceedings of the 13th European Symposium on Programming, volume 2986 of Lecture Notes in Computer Science, pages 3-17, Barcelona, Spain, 2004. Springer-Verlag, Berlin.
We present a new idea to adapt relational abstract domains to the analysis of IEEE 754-compliant floating-point numbers in order to statically detect, through Abstract Interpretation-based static analyses, potential floating-point run-time exceptions such as overflows or invalid operations. In order to take the non-linearity of rounding into account, expressions are modeled as linear forms with interval coefficients. We show how to extend already existing numerical abstract domains, such as the octagon abstract domain, to efficiently abstract transfer functions based on interval linear forms. We discuss specific fixpoint stabilization techniques and give some experimental results.

[Min05] A. Miné. Weakly Relational Numerical Abstract Domains. PhD thesis, École Polytechnique, Paris, France, March 2005.
The goal of this thesis is to design techniques related to the automatic analysis of computer programs. One major application is the creation of tools to discover bugs before they actually happen, an important goal in a time when critical yet complex tasks are performed by computers. We will work in the Abstract Interpretation framework, a theory of sound approximations of program semantics. We will focus, in particular, on numerical abstract domains that specialise in the automatic discovery of properties of the numerical variables of programs. In this thesis, we introduce new numerical abstract domains: the zone abstract domain (which can discover invariants of the form X − Y ≤ c), the zone congruence domain (X ≡ Y + a [b]), and the octagon domain (±X ±Y ≤ c), among others. These domains rely on the classical notions of potential graphs, difference bound matrices, and algorithms for shortest-path closure computation. They lie, in terms of cost and precision, between non-relational domains (such as the interval domain), which are very imprecise, and classical relational domains (such as the polyhedron domain), which are very costly. We will call them “weakly relational”. We also introduce some methods to apply relational domains to the analysis of floating-point numbers, which was previously only possible using imprecise, non-relational domains. Finally, we introduce the so-called linearisation and symbolic constant propagation generic symbolic methods to enhance the precision of any numerical domain, for only a slight increase in cost. The techniques presented in this thesis have been integrated within Astrée, an analyser for critical embedded avionic software, and were instrumental in proving the absence of run-time errors in the fly-by-wire software used in Airbus A340 and A380 planes. Experimental results show the usability of our methods on real-life applications.

[MRTT53] T. S. Motzkin, H. Raiffa, G. L. Thompson, and R. M. Thrall. The double description method. In H. W. Kuhn and A. W. Tucker, editors, Contributions to the Theory of Games - Volume II, number 28 in Annals of Mathematics Studies, pages 51-73. Princeton University Press, Princeton, New Jersey, 1953.
The purpose of this paper is to present a computational method for the determination of the value and of all solutions of a two-person zero-sum game with a finite number of pure strategies, and for the solution of general finite systems of linear inequalities and corresponding maximization problems.

[NO77] G. Nelson and D. C. Oppen. Fast decision algorithms based on Union and Find. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science (FOCS'77), pages 114-119, Providence, RI, USA, 1977. IEEE Computer Society Press. The journal version of this paper is [NO80].
[NO80] G. Nelson and D. C. Oppen. Fast decision procedures based on congruence closure. Journal of the ACM, 27(2):356-364, 1980. An earlier version of this paper is [NO77].
The notion of the congruence closure of a relation on a graph is defined and several algorithms for computing it are surveyed. A simple proof is given that the congruence closure algorithm provides a decision procedure for the quantifier-free theory of equality. A decision procedure is then given for the quantifier-free theory of LISP list structure based on the congruence closure algorithm. Both decision procedures determine the satisfiability of a conjunction of literals of length n in average time O(n log n) using the fastest known congruence closure algorithm. It is also shown that if the axiomatization of the theory of list structure is changed slightly, the problem of determining the satisfiability of a conjunction of literals becomes NP-complete. The decision procedures have been implemented in the authors' simplifier for the Stanford Pascal Verifier.
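The congruence closure computation surveyed in the paper can be illustrated with a deliberately naive fixpoint version (invented names, quadratic passes over all function terms; the fast algorithms the paper surveys use union/find with careful bookkeeping instead):

```python
class CongruenceClosure:
    """Naive congruence closure, for exposition only.
    Terms are constants (strings) or tuples ('f', arg1, arg2, ...)."""

    def __init__(self, terms):
        self.terms = set()
        for t in terms:
            self._register(t)
        self.parent = {t: t for t in self.terms}

    def _register(self, t):
        self.terms.add(t)
        if isinstance(t, tuple):
            for a in t[1:]:
                self._register(a)

    def find(self, t):
        # Union-find with path halving.
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]
            t = self.parent[t]
        return t

    def merge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb
            self._propagate()

    def _propagate(self):
        # Congruence rule: f(a1..ak) = f(b1..bk) whenever ai = bi for all i.
        changed = True
        while changed:
            changed = False
            fs = [t for t in self.terms if isinstance(t, tuple)]
            for s in fs:
                for t in fs:
                    if (s[0] == t[0] and len(s) == len(t)
                            and self.find(s) != self.find(t)
                            and all(self.find(x) == self.find(y)
                                    for x, y in zip(s[1:], t[1:]))):
                        self.parent[self.find(s)] = self.find(t)
                        changed = True

# Asserting f(a) = a forces f(f(a)) = a by congruence:
cc = CongruenceClosure([('f', ('f', 'a')), ('f', 'a'), 'a'])
cc.merge(('f', 'a'), 'a')
```

The final example is exactly the kind of consequence a decision procedure for the quantifier-free theory of equality must derive.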

[NW88] G. L. Nemhauser and L. A. Wolsey. Integer and Combinatorial Optimization. Wiley Interscience Series in Discrete Mathematics and Optimization. John Wiley & Sons, 1988.
[NR00] S. P. K. Nookala and T. Risset. A library for Z-polyhedral operations. Publication interne 1330, IRISA, Campus de Beaulieu, Rennes, France, 2000.
Polyhedra are commonly used for representing iteration domains of loop nests with unit stride: the iteration domain of a loop is associated with the set of integer points contained in a polyhedron. Z-polyhedra are a natural extension of polyhedra, in the sense that they represent iteration domains of loop nests with non-unit stride (they are polyhedra intersected with integral lattices). The polyhedral library (Polylib) has been developed for computing on polyhedra; it is now widely used in the automatic parallelization research community. This report describes the implementation of the extension of Polylib to Z-polyhedra. We describe algorithms used for computing on lattices and Z-polyhedra, and we provide technical documentation for the Z-polyhedral library (data structures, functions available).

[PS98] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Dover Publications, second edition, 1998.
[Pra77] V. R. Pratt. Two easy theories whose combination is hard. Memo sent to Nelson and Oppen concerning a preprint of their paper [NO77], September 1977.
We restrict attention to the validity problem for unquantified disjunctions of literals (possibly negated atomic formulae) over the domain of integers, or what is just as good, the satisfiability problem for unquantified conjunctions. When = is the only predicate symbol and all function symbols are left uninterpreted, or when >= is the only predicate symbol (taking its standard interpretation on the integers) and the only terms are variables and integers, then satisfiability is decidable in polynomial time. However when >= and uninterpreted function symbols are allowed to appear together, satisfiability becomes an NP-complete problem. This combination of the two theories can arise for example when reasoning about arrays (the uninterpreted function symbols) and subscript manipulation (where >= arises in considering subscript bounds). These results are unaffected by the presence of successor, which also arises commonly in reasoning about subscript manipulation.

[QRW00] F. Quilleré, S. V. Rajopadhye, and D. Wilde. Generation of efficient nested loops from polyhedra. International Journal of Parallel Programming, 28(5):469-498, 2000.
Automatic parallelization in the polyhedral model is based on affine transformations from an original computation domain (iteration space) to a target space-time domain, often with a different transformation for each variable. Code generation is an often ignored step in this process that has a significant impact on the quality of the final code. It involves making a trade-off between code size and control code simplification/optimization. Previous methods of doing code generation are based on loop splitting; however, they have nonoptimal behavior when working on parameterized programs. We present a general parameterized method for code generation based on the dual representation of polyhedra. Our algorithm uses a simple recursion on the dimensions of the domains, and enables fine control over the tradeoff between code size and control overhead.

[QRR96] P. Quinton, S. Rajopadhye, and T. Risset. On manipulating Z-polyhedra. Technical Report 1016, IRISA, Campus Universitaire de Beaulieu, Rennes, France, July 1996.
We address the problem of computation on Z-polyhedra, which are intersections of polyhedra and integral lattices. We introduce a canonical representation for Z-polyhedra which allows one to perform comparisons and transformations of Z-polyhedra with the help of a computational kernel on polyhedra. This contribution is a step towards the manipulation of images of polyhedra by affine functions, and has applications in the domain of automatic parallelization and parallel VLSI synthesis.

[QRR97] P. Quinton, S. Rajopadhye, and T. Risset. On manipulating Z-polyhedra using a canonic representation. Parallel Processing Letters, 7(2):181-194, 1997.
Z-Polyhedra are intersections of polyhedra and integral lattices. They arise in the domain of automatic parallelization and VLSI array synthesis. In this paper, we address the problem of computation on Z-polyhedra. We introduce a canonical representation for Z-polyhedra which allows one to perform comparisons and transformations of Z-polyhedra with the help of a computational kernel on polyhedra.

[RBL06] T. W. Reps, G. Balakrishnan, and J. Lim. Intermediate-representation recovery from low-level code. In J. Hatcliff and F. Tip, editors, Proceedings of the 2006 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-based Program Manipulation, pages 100-111, Charleston, South Carolina, USA, 2006. ACM Press.
The goal of our work is to create tools that an analyst can use to understand the workings of COTS components, plugins, mobile code, and DLLs, as well as memory snapshots of worms and virus-infected code. This paper describes how static analysis provides techniques that can be used to recover intermediate representations that are similar to those that can be created for a program written in a high-level language.

[Ric02] E. Ricci. Rappresentazione e manipolazione di poliedri convessi per l'analisi e la verifica di programmi. Laurea dissertation, University of Parma, Parma, Italy, July 2002. In Italian. [ .pdf ]
[SS07a] R. Sen and Y. N. Srikant. Executable analysis using abstract interpretation with circular linear progressions. In Proceedings of the 5th IEEE/ACM International Conference on Formal Methods and Models for Co-Design (MEMOCODE 2007), pages 39-48, Nice, France, 2007. IEEE Computer Society Press.
We propose a new abstract domain for static analysis of executable code. Concrete states are abstracted using Circular Linear Progressions (CLPs). CLPs model computations using a finite word length, as is seen in any real-life processor. The finite abstraction allows handling overflow scenarios in a natural and straightforward manner. Abstract transfer functions have been defined for a wide range of operations which makes this domain easily applicable for analyzing code for a wide range of ISAs. CLPs combine the scalability of interval domains with the discreteness of linear congruence domains. We also present a novel, lightweight method to track linear equality relations between static objects that is used by the analysis to improve precision. The analysis is efficient, the total space and time overhead being quadratic in the number of static objects being tracked.

[SS07b] R. Sen and Y. N. Srikant. Executable analysis with circular linear progressions. Technical Report IISc-CSA-TR-2007-3, Department of Computer Science and Automation, Indian Institute of Science, Bangalore, India, 2007.
We propose a new abstract domain for static analysis of executable code. Concrete state is abstracted using Circular Linear Progressions (CLPs). CLPs model computations using a finite word length, as is seen in any real-life processor. The finite abstraction allows handling overflow scenarios in a natural and straightforward manner. Abstract transfer functions have been defined for a wide range of operations which makes this domain easily applicable for analysing code for a wide range of ISAs. CLPs combine the scalability of interval domains with the discreteness of linear congruence domains. We also present a novel, lightweight method to track linear equality relations between static objects that is used by the analysis to improve precision. The analysis is efficient, the total space and time overhead being quadratic in the number of static objects being tracked.

[Sho81] R. E. Shostak. Deciding linear inequalities by computing loop residues. Journal of the ACM, 28(4):769-779, 1981.
V. R. Pratt has shown that the real and integer feasibility of sets of linear inequalities of the form x <= y + c can be decided quickly by examining the loops in certain graphs. Pratt's method is generalized, first to real feasibility of inequalities in two variables and arbitrary coefficients, and ultimately to real feasibility of arbitrary sets of linear inequalities. The method is well suited to applications in program verification.

[Sch99] A. Schrijver. Theory of Linear and Integer Programming. Wiley Interscience Series in Discrete Mathematics and Optimization. John Wiley & Sons, 1999.
[SK07] A. Simon and A. King. Taming the wrapping of integer arithmetic. In H. Riis Nielson and G. Filé, editors, Static Analysis: Proceedings of the 14th International Symposium, volume 4634 of Lecture Notes in Computer Science, pages 121-136, Kongens Lyngby, Denmark, 2007. Springer-Verlag, Berlin.
Variables in programs are usually confined to a fixed number of bits and results that require more bits are truncated. Due to the use of 32-bit and 64-bit variables, inadvertent overflows are rare. However, a sound static analysis must reason about overflowing calculations and conversions between unsigned and signed integers; the latter remaining a common source of subtle programming errors. Rather than polluting an analysis with the low-level details of modelling two's complement wrapping behaviour, this paper presents a computationally lightweight solution based on polyhedral analysis which eliminates the need to check for wrapping when evaluating most (particularly linear) assignments.

[Sri93] D. Srivastava. Subsumption and indexing in constraint query languages with linear arithmetic constraints. Annals of Mathematics and Artificial Intelligence, 8(3-4):315-343, 1993.
Bottom-up evaluation of a program-query pair in a constraint query language (CQL) starts with the facts in the database and repeatedly applies the rules of the program, in iterations, to compute new facts, until we have reached a fixpoint. Checking if a fixpoint has been reached amounts to checking if any “new” facts were computed in an iteration. Such a check also enhances efficiency in that subsumed facts can be discarded, and not be used to make any further derivations in subsequent iterations, if we use semi-naive evaluation. We show that the problem of subsumption in CQLs with linear arithmetic constraints is co-NP-complete, and present a deterministic algorithm, based on the divide and conquer strategy, for this problem. We also identify polynomial-time sufficient conditions for subsumption and non-subsumption in CQLs with linear arithmetic constraints. We adapt indexing strategies from spatial databases for efficiently indexing facts in such a CQL: such indexing is crucial for performance in the presence of large databases. Based on a recent algorithm by Lassez and Lassez [LL] for quantifier elimination, we present an incremental version of the algorithm to check for subsumption in CQLs with linear arithmetic constraints.

[SW70] J. Stoer and C. Witzgall. Convexity and Optimization in Finite Dimensions I. Springer-Verlag, Berlin, 1970.
[War03] H. S. Warren, Jr. Hacker's Delight. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 2003.
[Wil93] D. K. Wilde. A library for doing polyhedral operations. Master's thesis, Oregon State University, Corvallis, Oregon, December 1993. Also published as IRISA Publication interne 785, Rennes, France, 1993.
Polyhedra are geometric representations of linear systems of equations and inequalities. Since polyhedra are used to represent the iteration domains of nested loop programs, procedures for operating on polyhedra are useful for doing loop transformations and other program restructuring transformations which are needed in parallelizing compilers. Thus a need for a library of polyhedral operations has recently been recognized in the parallelizing compiler community. Polyhedra are also used in the definition of domains of variables in systems of affine recurrence equations (SARE). Alpha is a language which is based on the SARE formalism in which all variables are declared over finite unions of polyhedra. This report describes a library of polyhedral functions which was developed to support the Alpha language environment, and which is general enough to satisfy the needs of researchers doing parallelizing compilers. This report describes the data structures used to represent domains, gives motivations for the major design decisions, and presents the algorithms used for doing polyhedral operations. This library has been written and tested, and has been in use since the beginning of 1993 by research facilities in Europe and Canada. The library is freely distributed by ftp.

[Wey35] H. Weyl. Elementare Theorie der konvexen Polyeder. Commentarii Mathematici Helvetici, 7:290-306, 1935. English translation in [Wey50].
[Wey50] H. Weyl. The elementary theory of convex polyhedra. In H. W. Kuhn, editor, Contributions to the Theory of Games, Volume I, number 24 in Annals of Mathematics Studies, pages 3-18. Princeton University Press, Princeton, New Jersey, 1950. Translated from [Wey35] by H. W. Kuhn.