CADE 2017


From parallel theorem proving to parallel SAT-solving and back
Maria Paola Bonacina (Università degli Studi di Verona)
In this talk we touch upon selected discoveries and ideas in parallel theorem proving, revisiting the key role played by the Clause-Diffusion method in moving parallel reasoning from the parallelization of inferences to the parallelization of search, which remains the dominant paradigm today. We explore both historical and conceptual connections between parallel theorem proving and parallel satisfiability solving, and envision the potential for new forays into parallel model-based theorem proving. No less surprising are the connections between parallel reasoning and sequential settings that nonetheless require the cooperation of inference systems, such as satisfiability modulo combinations of theories.

Parallel Answer Set Programming
Andrea Formisano (Università di Perugia)
In this talk we explore the main techniques and approaches presented in the literature and aimed at enabling the use of parallelism in Answer Set Programming solvers. This survey explores the approaches along two dimensions: (1) one dimension is the underlying language, ranging from Datalog to full ASP; (2) the second dimension is the granularity of parallelism, from fine-grained GPU-based parallelism to very coarse-grained parallelism on distributed platforms.

Parallel QBF Solving: State of the Art Techniques and Future Perspectives
Florian Lonsing (TU Wien)
Quantified Boolean formulas (QBF) are a formalism that extends propositional logic by explicit existential or universal quantification of propositional variables. Due to the PSPACE-completeness of the QBF satisfiability problem, QBFs are naturally suited to encoding problems from the complexity class PSPACE or from any of its subclasses in the polynomial hierarchy.
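
To make the semantics concrete, here is a minimal recursive QBF evaluator (a sketch with illustrative names, not a real solver): it expands each quantifier over both truth values, which already hints at why the problem is PSPACE-complete.

```python
def eval_qbf(prefix, matrix, assignment=None):
    """Recursively evaluate a QBF in prenex form.

    prefix: list of (quantifier, variable) pairs, outermost first,
            e.g. [('forall', 'x'), ('exists', 'y')]
    matrix: a function mapping a dict of variable assignments to bool
    """
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    quantifier, var = prefix[0]
    results = []
    for value in (False, True):
        assignment[var] = value
        results.append(eval_qbf(prefix[1:], matrix, assignment))
    # 'forall' needs both branches to hold, 'exists' needs at least one
    return all(results) if quantifier == 'forall' else any(results)

# forall x exists y. (x <-> y) is true: choose y = x
print(eval_qbf([('forall', 'x'), ('exists', 'y')],
               lambda a: a['x'] == a['y']))   # True
# exists y forall x. (x <-> y) is false: no single y matches both x
print(eval_qbf([('exists', 'y'), ('forall', 'x')],
               lambda a: a['x'] == a['y']))   # False
```

The recursion uses space linear in the number of variables but exponential time, matching the PSPACE intuition; actual QBF solvers prune this tree aggressively, and the quantifier ordering visible in the recursion is exactly what constrains parallelization.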

We review the development of parallel approaches to solving QBFs, starting from their origins in the early 2000s. Although there has been considerable inspiration from related techniques in parallel SAT solving, parallel QBF solving has received relatively little attention compared to SAT. One reason for this is the presence of nested quantifiers in QBFs. The resulting quantifier ordering imposes limitations on how parallel computing resources can be exploited and complicates the implementation of robust parallel QBF solvers.

From a general perspective, parallel QBF solving has considerable potential. In contrast to sequential SAT solving, where conflict-driven clause learning (CDCL) is omnipresent in state-of-the-art solvers, sequential QBF solving features several distinct solving paradigms with individual strengths. That variety in the sequential QBF solver landscape opens promising directions for improving the state of the art in parallel QBF solving.

Solving very hard problems: Cube-and-Conquer, a hybrid SAT solving method
Marijn Heule (University of Texas at Austin)
Many search problems, from artificial intelligence to combinatorics, explore large search spaces to determine the presence or absence of a certain object. These problems are hard due to combinatorial explosion and have traditionally been considered infeasible. The brute-force method, which at least implicitly explores all possibilities, is a general approach to searching systematically through such spaces. Brute force has long been regarded as suitable only for simple problems. This has changed in the last two decades, thanks to progress in satisfiability (SAT) solving, which turns brute force into a powerful approach for dealing with many problems easily and automatically.

We illustrate the strength of SAT via the Boolean Pythagorean Triples problem, a long-standing open problem in Ramsey theory. Our parallel SAT solver allowed us to solve the problem on a cluster in about two days using 800 cores, demonstrating the linear-time speedups achievable on many hard problems. Due to the general interest in this mathematical problem, our result requires a formal proof. Exploiting recent progress in unsatisfiability proof checking, we produced and verified a clausal proof of the result, which is almost 200 terabytes in size. These techniques show great promise for attacking a variety of challenging problems arising in mathematics and computer science.
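
As a rough illustration of the cube-and-conquer idea (a sketch with invented names, not the solver used for the result above): the "cube" phase splits a CNF formula by fixing sign combinations of a few variables, and the "conquer" phase solves each extended formula independently, here with a toy DPLL procedure.

```python
from itertools import product

def assign(clauses, lit):
    """Simplify a CNF (list of lists of ints) under literal lit."""
    out = []
    for c in clauses:
        if lit in c:
            continue                  # clause satisfied, drop it
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None               # empty clause: conflict
        out.append(reduced)
    return out

def dpll(clauses):
    """Toy DPLL: unit propagation plus chronological branching."""
    if clauses is None:
        return False                  # conflict reached
    if not clauses:
        return True                   # all clauses satisfied
    units = [c[0] for c in clauses if len(c) == 1]
    if units:
        return dpll(assign(clauses, units[0]))
    v = abs(clauses[0][0])            # branch on first variable seen
    return dpll(assign(clauses, v)) or dpll(assign(clauses, -v))

def cubes(variables, k):
    """All sign combinations of the first k variables."""
    vs = variables[:k]
    return [[v if s else -v for v, s in zip(vs, signs)]
            for signs in product((True, False), repeat=len(vs))]

def cube_and_conquer(clauses, variables, k=2):
    # The cubes cover the whole search space, so the formula is SAT
    # iff at least one cube-extended instance is SAT.
    for cube in cubes(variables, k):
        extended = clauses + [[lit] for lit in cube]
        if dpll(extended):
            return True
    return False

# (x1 or x2) and (-x1 or x2) and (-x2 or x3)
print(cube_and_conquer([[1, 2], [-1, 2], [-2, 3]], [1, 2, 3]))  # True
```

In the real method, a lookahead solver chooses the split variables so that the cubes balance the workload, and each cube is handed to a CDCL solver; since the cubes are independent, they parallelize trivially across a cluster.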

Parallel Model-based Diagnosis on Multi-Core Computers
Dietmar Jannach (TU Dortmund)
Model-Based Diagnosis (MBD) is a principled and domain-independent way of analyzing why a system under examination is not behaving as expected. Given an abstract description (model) of the system's components and their behavior when functioning normally, MBD techniques rely on observations about the actual system behavior to reason about possible causes when there are discrepancies between the expected and observed behavior.

Due to its generality, MBD has been successfully applied in a variety of application domains over the years. In many cases, constraint reasoning is used to simulate the system's behavior and to test different hypotheses about the reasons for an observed failure. Despite advances in constraint reasoning, the generation of diagnoses can be computationally challenging because traditional diagnosis algorithms do not exploit the capabilities of modern multi-core computers. In this talk I will discuss different recently proposed schemes for parallelizing the diagnostic reasoning process and present the results of a series of empirical evaluations for different application scenarios.
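
To make the reasoning concrete: in the classical Reiter-style formulation, minimal diagnoses are the minimal hitting sets of the conflict sets reported by the reasoner (sets of components that cannot all be healthy). A brute-force sketch with hypothetical component names:

```python
from itertools import combinations

def diagnoses(components, conflicts):
    """Minimal diagnoses = minimal hitting sets of the conflict sets.

    components: set of component names
    conflicts:  list of sets of components that cannot all be working
    """
    found = []
    for size in range(len(components) + 1):
        for cand in combinations(sorted(components), size):
            s = set(cand)
            # keep candidates that hit every conflict and are not
            # supersets of an already-found (smaller) diagnosis
            if all(s & c for c in conflicts) and \
               not any(set(d) <= s for d in found):
                found.append(cand)
    return found

# toy example: two conflicts involving an adder A1 and multipliers M1, M2
print(diagnoses({'A1', 'M1', 'M2'},
                [{'A1', 'M1'}, {'A1', 'M2'}]))
# [('A1',), ('M1', 'M2')]
```

Real MBD systems avoid this exhaustive enumeration (e.g. via hitting-set trees and conflict-directed search), but the mutual independence of the candidate checks at each cardinality is one reason the computation lends itself to parallelization.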

Concurrent Data Structures for Multi-Core Model Checking
Alfons Laarman (Universiteit Leiden)
Model checking was long considered to require too much memory bandwidth for efficient parallelization on modern multi-cores. Novel concurrent data structures, however, have shown that the procedure can scale linearly, and often ideally, up to at least 64 cores. The cornerstone of this solution is a concurrent hash table, carefully designed to maximize locality and minimize contention. We discuss how this design has since been used to parallelize state-of-the-art compression techniques based on binary trees and on decision diagrams (BDDs and MDDs).
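
The central operation of such a table is find-or-put: atomically insert a visited state unless some worker already stored it. A simplified sketch, in which a Python lock stands in for the single hardware compare-and-swap a real lockless table would use on each bucket (a real implementation also needs resizing, memoized hashes, and in-progress markers; all names here are illustrative):

```python
import threading

EMPTY = None

class LocklessSet:
    """Sketch of a fixed-size open-addressing table with find-or-put."""

    def __init__(self, size=1024):
        self.slots = [EMPTY] * size
        self._cas_lock = threading.Lock()   # models one CAS instruction

    def _cas(self, idx, expected, new):
        """Stand-in for an atomic compare-and-swap on one bucket."""
        with self._cas_lock:
            if self.slots[idx] == expected:
                self.slots[idx] = new
                return True
            return False

    def find_or_put(self, state):
        """Return True if state was already present, False if we inserted it.
        Linear probing keeps successive slots in the same cache lines,
        which is the locality argument behind the design."""
        n = len(self.slots)
        h = hash(state) % n
        for i in range(n):
            idx = (h + i) % n
            if self._cas(idx, EMPTY, state):
                return False            # we claimed an empty bucket
            if self.slots[idx] == state:
                return True             # some worker already stored it
        raise RuntimeError("table full")

table = LocklessSet()
print(table.find_or_put("s0"))  # False: new state
print(table.find_or_put("s0"))  # True: seen before
```

In a model checker, every worker calls find-or-put on each successor state; a True result means the state (and the subgraph behind it) need not be explored again.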

P3: Model Checking using distributed PDR
Matteo Marescotti (Università della Svizzera italiana)
Property Directed Reachability (PDR) is an efficient model checking technique. However, real-world verification problems are often too hard, causing PDR solvers to fail. Since sequential CPU speed has stopped growing and solvers are already highly optimized, parallelization is the next logical direction, yet this area of research is still poorly investigated. This talk introduces P3 (Parallelly Performed PDR), an algorithm that combines three distinct parallel techniques for PDR with the aim of exploiting the massive computational power offered by distributed computing environments. The techniques are 1) partitioning of the input problem, 2) exchange of learned reachability information, and 3) algorithm portfolios. We implemented P3 on top of the SMT-based model checker Spacer and experimented on SV-COMP benchmarks. Our experiments show up to an order-of-magnitude speedup, and twice as many instances solved within the timeout, compared to the sequential implementation.
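
Of the three techniques, the portfolio component is the easiest to sketch: race differently-configured solvers on the same problem and keep the first answer. The sketch below uses an invented toy stand-in for the solver; configuration names and fields are illustrative.

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def solve(problem, config):
    """Toy stand-in for a PDR engine: sleeps, then 'decides' the problem."""
    time.sleep(config["delay"])
    return config["name"], "safe" if problem % 2 == 0 else "unsafe"

def run_portfolio(problem, configurations):
    """Race one solver per configuration; return the first answer."""
    with ThreadPoolExecutor(max_workers=len(configurations)) as pool:
        futures = [pool.submit(solve, problem, c) for c in configurations]
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:
            f.cancel()          # best effort; running losers are abandoned
        return done.pop().result()

configs = [{"name": "pdr-eager", "delay": 0.01},
           {"name": "pdr-lazy", "delay": 0.2}]
print(run_portfolio(42, configs))
```

P3 goes beyond a plain portfolio by also partitioning the input problem and exchanging learned reachability lemmas between the solvers, which a first-to-finish race like this cannot show.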

Harnessing over a Million CPU Cores to Solve a Single Hard Mixed Integer Programming Problem on a Supercomputer
Yuji Shinano (Zuse Institute Berlin)
Mixed integer programming (MIP) is a general form for modeling combinatorial optimization problems and has many industrial applications. The performance of MIP solvers (software packages to solve MIPs) has improved tremendously over the last two decades, and these solvers have been used to solve many real-world problems. However, given the trajectory of modern computer hardware, parallelization is of pivotal importance. Here, ParaSCIP is the most successful parallel MIP solver in terms of solving previously unsolvable instances from the well-known benchmark set MIPLIB on supercomputers: it solved two instances from MIPLIB2003 and 12 from MIPLIB2010 for the first time to optimality, using up to 80,000 cores. Additionally, SCIP-Jack, a specialized version of ParaSCIP for solving Steiner tree problems, solved three open instances from the Steiner tree benchmark set PUC.

ParaSCIP has been developed using the Ubiquity Generator (UG) framework, a general software package for parallelizing state-of-the-art branch-and-bound based solvers. The UG framework is currently being used to develop ParaXpress, a distributed-memory parallelization of the commercial MIP solver Xpress, and to parallelize PIPS-SBB, a solver for stochastic MIPs. Since Xpress is a multi-threaded solver and ParaSCIP can run at least 80,000 processes in parallel on a single MIP, ParaXpress could handle over a million CPU cores; the parallelization of PIPS-SBB by the UG framework has the same potential. In this talk, the overall design of the UG framework and its latest extensions to harness over a million CPU cores will be presented, and preliminary computational results will be provided.

Parallel SAT Solving: Past, Present and Future
Carsten Sinz (Karlsruhe Institute of Technology)
In this talk we will give an overview of the history of parallelization approaches for SAT solvers, ranging from techniques like guiding paths in DPLL to diversification and lemma exchange in recent parallel solvers. We will also review theoretical limits of parallelizability and some effects that can be observed in parallel solvers. The talk closes with an outlook on current challenges and upcoming techniques.
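
The guiding-path technique can be sketched as follows: a guiding path records, for each decision on the solver's stack, whether the opposite branch still has to be explored ("open") or is already covered ("closed"); splitting hands the shallowest open branch (the one closest to the root) to an idle worker. The representation below is a minimal sketch under that description, not any particular solver's data structure.

```python
def split_guiding_path(path):
    """Split off a subproblem from a guiding path.

    path: list of (variable, value, state) triples, root first, where
          state is 'open' if the opposite branch is still unexplored.
    Returns the new worker's guiding path, or None if nothing is open.
    """
    for i, (var, value, state) in enumerate(path):
        if state == "open":
            # new worker: same prefix, opposite branch, nothing open above
            new_path = [(v, val, "closed") for v, val, _ in path[:i]]
            new_path.append((var, not value, "closed"))
            # the original worker no longer owns that branch
            path[i] = (var, value, "closed")
            return new_path
    return None

path = [(1, True, "closed"), (2, False, "open"), (3, True, "open")]
print(split_guiding_path(path))
# [(1, True, 'closed'), (2, True, 'closed')]
```

Repeatedly splitting in this way load-balances the search tree across workers; modern portfolio solvers instead (or additionally) diversify solver configurations and exchange learned lemmas, as mentioned above.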