Formal Methods Seminar#

The seminar usually takes place on Tuesday at 2 p.m. It is of course open to anyone interested. If you want to give a talk or propose a speaker, please contact .


Upcoming talks


Previous talks

Thursday, March 2#

Solvability of Matrix-Exponential Equations. Amaury Pouly (MPI-SWS, Saarbrücken)

We consider the problem of checking whether a switching system can reach a particular state. We show that even this basic problem is undecidable in general, though decidable in some restricted cases. A switching system is a particular form of hybrid system, very commonly used to model electrical systems with several states: it consists of a finite number of states and, in each state, the system evolves according to a linear differential equation. This type of model is also used to over-approximate the possible behaviours of a physical system when the transition rules between states are unclear. Our result shows a surprising connection with a number of theorems from algebraic and transcendental number theory, and with Hilbert's Tenth Problem. One possible interpretation of this work is that switching systems, despite their innocent look, are already too powerful a model of computation.
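
In symbols (a generic formulation for orientation, not notation taken from the talk), such a switching system evolves as

```latex
\dot{x}(t) = A_{q(t)}\, x(t), \qquad q(t) \in Q, \quad A_q \in \mathbb{R}^{n \times n},
```

where Q is the finite set of states (modes) and q(t) records the active state at time t; the reachability question asks whether some admissible switching signal and some time T drive a given initial point x(0) to a given target x(T).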


14h00, room A006

Tuesday, January 17#

Minimal probabilistic automata have to make irrational choices. Mahsa Shirmohammadi (Oxford Univ.)

In this talk, we answer the question of whether, given a probabilistic automaton (PA) with rational transition probabilities, there always exists a minimal equivalent PA that also has rational transition probabilities. The PA and its minimal equivalent PA accept all finite words with equal probabilities.

We approach this problem by exhibiting a connection with a longstanding open question concerning nonnegative matrix factorization (NMF). NMF is the problem of decomposing a given nonnegative n×m matrix M into a product of a nonnegative n×d matrix W and a nonnegative d×m matrix H. In 1993 Cohen and Rothblum posed the problem of whether a rational matrix M always has an NMF of minimal inner dimension d whose factors W and H are also rational. We answer this question negatively, by exhibiting a matrix for which W and H require irrational entries. As a corollary we obtain a negative answer to the question on rationality of minimal probabilistic automata.
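
In symbols (our notation, summarising the statement above), an NMF of inner dimension d is a factorisation

```latex
M = W H, \qquad M \in \mathbb{Q}_{\ge 0}^{\,n \times m}, \quad W \in \mathbb{R}_{\ge 0}^{\,n \times d}, \quad H \in \mathbb{R}_{\ge 0}^{\,d \times m},
```

the smallest such d is the nonnegative rank of M, and the Cohen-Rothblum question asks whether, for rational M, this minimum is always attained by rational W and H.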

This talk is based on two papers in ICALP 2016 and SODA 2017.


14h00, room A008

Thursday, January 12#

Towards SAT Solving in Batches. Sophie Tourret (NII Tokyo)

In this presentation, I will talk about my ongoing work on performing SAT solving in batches, i.e., making multiple decisions at once in the CDCL loop. There are theoretical obstacles to this: the admissible strategy of CDCL gives conflict resolution and unit propagation a higher priority than decisions, and in the current CDCL framework, if this strategy is not followed, a deadlock can occur in which no rule can be applied while the satisfiability of the input formula is still unknown. A new formalism, inspired by results from [1,2], will be presented to solve this problem. The method considered to verify this claim (Isabelle) and the practical context motivating this approach (SAT solving on GPUs) will also be introduced.

[1] C. Weidenbach. Automated Reasoning Building Blocks. In Correct System Design, Springer, 2015, pp. 172-188.

[2] J. C. Blanchette, M. Fleury, and C. Weidenbach. A Verified SAT Solver Framework with Learn, Forget, Restart, and Incrementality. In IJCAR, 2016.


14h00, room B011

Tuesday, October 4#

Bit-Exact Automated Reasoning About Floating-Point Arithmetic. Martin Brain (Oxford Univ.)

The majority of new instrumentation and control systems, even those that are safety critical, make use of floating-point numbers. Thus the value of computer-generated assurance evidence (including verification, test coverage, etc.) depends on the correctness, completeness and efficiency of automated reasoning about floating-point expressions. In this talk we will review the SMT-LIB Theory of Floating-Point, its semantics and the rationale behind key design decisions, as well as survey the current state of the art in solver technology and future research directions. We aim to provide system integrators with sufficient information to integrate floating-point support into SMT interfaces, and solver developers with enough ideas to work on the next generation of floating-point reasoning.
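
As a small illustration of what bit-exact reasoning buys, the sketch below uses Z3's Python bindings for the SMT-LIB floating-point theory (our choice of solver and API, not something prescribed by the talk) to search for single-precision values witnessing that floating-point addition is not associative:

```python
from z3 import FP, Float32, RNE, fpAdd, fpIsNaN, fpIsInf, fpNEQ, Not, Solver, sat

x, y, z = FP('x', Float32()), FP('y', Float32()), FP('z', Float32())
rm = RNE()  # IEEE rounding mode: round to nearest, ties to even

s = Solver()
for v in (x, y, z):                      # keep the witness to ordinary numbers
    s.add(Not(fpIsNaN(v)), Not(fpIsInf(v)))
# ask for values where (x + y) + z differs from x + (y + z)
s.add(fpNEQ(fpAdd(rm, fpAdd(rm, x, y), z), fpAdd(rm, x, fpAdd(rm, y, z))))

if s.check() == sat:
    print(s.model())                     # a bit-exact counterexample to associativity
```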


14h00, room A006

Wednesday, March 23#

Towards the synthesis of secure components. Yannick Chevalier (Univ. Toulouse III)

At Usenix Enigma 2016, the head of the NSA's "tailored access" team listed the flaws most commonly exploited to break into a system. Social engineering aside, the favourite entry point into computer systems is the gap between the components (protocols, software, ...) that system administrators intended to deploy and the software actually in place.

This gap comes from obsolete components being kept in service and from patches not being applied, but also from components that do not faithfully implement their specification. This talk will address the latter point, and the work paving the way towards the secure synthesis of communicating components, among them cryptographic protocols and Web services and applications.

The talk will first deal with the definition of an operational semantics from a declaration of message exchanges (joint work with Michaël Rusinowitch), and then with service-synthesis techniques when messages are abstracted away (joint work with MR and Walid Belkhir). Future work will consist in combining these two approaches to automatically generate implementations of protocols and services from secure components that have already been proved.


14h00, room C103

Tuesday, March 22#

Behavioral verification of higher-order programs. Sylvain Salvati (Inria Bordeaux, LaBRI)

Higher-order constructions are making their way into mainstream programming languages like Java, C++, Python, Rust, ... These constructions bring new challenges to the verification of programs, as they make their control flow more complex.

In this talk, I will present how methods coming from denotational semantics can show that the verification of certain properties of higher-order programs is decidable. These properties are expressed by means of finite-state automata running on the possibly infinite execution trees generated by the programs, and can capture safety properties but also liveness and fairness properties.


14h00, room C103

Monday, March 21#

The transitivity problem of Turing machines. Nicolas Ollinger (Univ. Orléans, LIFO)

A Turing machine is topologically transitive if every partial configuration -- that is, a state, a head position, and a finite portion of the tape -- can reach any other partial configuration, provided that both are completed into proper configurations. In this talk, we study the computational complexity of transitivity by combining a reduction technique called embedding, which preserves dynamical properties, with a remarkably small minimal aperiodic Turing machine called SMART. We further study the computational complexity of minimality, the property that every configuration reaches every partial configuration.

This is joint work with Julien Cassaigne from I2M in Marseille, Anahí Gajardo and Rodrigo Torres-Avilés from the University of Concepción, Chile.


14h00, room C103

Formal proofs for robot swarms: gathering in the real plane. Xavier Urbain (ENSIIE)

Networks of mobile robots have been receiving increasing attention from the distributed-algorithms community in recent years. Analysing the feasibility of certain cooperative tasks for swarms of autonomous robots is, however, extremely difficult; the Pactole formal framework was designed to support mechanised formal verification (with Coq) of this emerging model. It makes it possible both to certify impossibility results and to prove the correctness of distributed protocols. As an illustration, I will present in this talk an original, universal algorithm for the gathering of oblivious robots in the real plane. This algorithm is proved correct thanks to our formal development.


15h00, room C103

Tuesday, March 15#

Automated Verification of Asynchronously Communicating Systems. Gwen Salaün (Ensimag)

Recent software is mostly constructed by reusing and composing existing components abstracted as finite state machines. Asynchronous communication is a classic interaction mechanism used for such software systems. However, analyzing communicating systems interacting asynchronously via FIFO buffers is an undecidable problem. A typical approach is to make the corresponding state space finite by limiting communication cycles in behavioral models or by fixing buffer sizes. In this talk, our focus is on systems that are likely to be unbounded and therefore yield infinite state spaces. We do not want to restrict the system by imposing any arbitrary bounds. We first present the synchronizability property and then introduce a notion of stability. We prove that once the system is stable for a specific buffer bound, it remains stable whatever larger bounds are chosen for buffers. This enables us to check certain properties on the system for that bound and to ensure that the system will preserve them whatever larger bounds are used for buffers. We also prove that computing this bound is undecidable, but we show how we succeed in computing it for many typical examples using heuristics and equivalence checking. We will also give an overview of several properties of interest that communicating systems must satisfy when they are obtained by projection from choreography specifications.


14h00, room A006

Monday, March 7#

Incremental Language-Independent Program Verification. Vlad Rusu (Inria Lille)

Reachability Logic (RL) is a recently introduced formalism for defining the operational semantics of programming languages and for specifying program properties. As a program logic, RL can be seen as a language-independent generalisation of Hoare logics. Thus, using RL, one can specify programs independently of the languages in which they are written, and one can design program-verification techniques that are also language-independent. Several such techniques have already been proposed, all of which have a circular nature: they take as input a set of RL formulas (specifying a given program that one wants to verify, as well as some of its subprograms), and allow formulas in the set to be used circularly in proofs of other formulas, or even in their own proof. Such circular reasoning is essential for dealing with repetitive behaviour (e.g., loops or recursive calls). The reasoning is sound, but only in a monolithic way, in the sense that if a proof succeeds on a given set of formulas then all the formulas in the set are semantically valid; but if the proof fails, then nothing can be inferred about any formula's validity or invalidity. In practice, this means that one is left with no information after unsuccessful proof attempts.

In this paper we deal with this issue by proposing an incremental method for proving RL formulas. The proposed method takes as input a given formula specifying a given program fragment, together with a (possibly empty) set of helper formulas that specify some of its subprograms and that were already proved valid (e.g., using our method or any other sound method). Then, certain conditions equivalent to RL formula validity are automatically checked. In case of success, the newly proved formula can be incrementally used as a helper in the proofs of other formulas, thereby proving increasingly larger program fragments. In case of failure, unlike with monolithic methods, one is not left without information: one still knows the validity of the already proved helper formulas, and can build on this knowledge to make progress in the program's proof. We illustrate the approach by successfully verifying the nontrivial Knuth-Morris-Pratt string-matching algorithm, whose verification had previously been attempted without success using a monolithic method.


11h00, room C103

Wednesday, March 9#

Characterization of Reachable Attractors Using Petri Net Unfoldings. Thomas Chatain (ENS Cachan & LSV)

Attractors of network dynamics represent the long-term behaviours of the modelled system. Their characterization is therefore crucial for understanding the response and differentiation capabilities of a dynamical system. In the scope of qualitative models of interaction networks, the computation of attractors reachable from a given state of the network faces combinatorial issues due to the state space explosion.

In this paper, we present a new algorithm that exploits the concurrency between transitions of parallel acting components in order to reduce the search space. The algorithm relies on Petri net unfoldings that can be used to compute a compact representation of the dynamics. We illustrate the applicability of the algorithm with Petri net models of cell signalling and regulation networks, Boolean and multi-valued. The proposed approach aims at being complementary to existing methods for deriving the attractors of Boolean models, while being generic since it applies to any safe Petri net.


14h00, room A006

Monday, December 14#

An introduction to reverse mathematics. Ludovic Patey (Univ. Paris Diderot)

Owing to the curse of Gödel's incompleteness theorems, we are condemned for eternity to look for new axioms to complete our mathematical theories. In this foundational endeavour, reverse mathematics asks what minimal axioms are necessary to prove the theorems of everyday mathematics. In this presentation, we will introduce the tools of reverse mathematics and illustrate the two main observations of the field: most classical theorems require only weak axioms, and many of these theorems are equivalent to one of just five sets of axioms.

[1] S. Simpson, Subsystems of second order arithmetic, 2nd ed., CUP, 2009

[2] D. Hirschfeldt, Slicing the truth, to appear, 2013


14h00, room B013

Tuesday, July 7#

Contextuality, Cohomology & Paradox. Shane Mansfield (Univ. Oxford)

Contextuality is a key feature of quantum mechanics that provides an important non-classical resource for quantum information and computation. Sheaf theory can be used to provide a general treatment of contextuality in quantum theory. However, contextual phenomena are found in other fields as well: for example, database theory. I will discuss my recent work in collaboration with Abramsky, Barbosa, Kishida & Lal (http://arxiv.org/abs/1502.03097) in which we develop this unified view of contextuality. This provides two main contributions: firstly, it exposes a remarkable connection between contextuality and logical paradoxes; secondly, an important class of contextuality arguments can be demonstrated to have a topological origin. More specifically, it is shown that "All-vs-Nothing" proofs of contextuality are witnessed by cohomological obstructions.


14h00, room C005

Tuesday, June 16#

Logical relations for coherence of effect subtyping. Piotr Polesiuk (Université de Wroclaw)

In a programming language with subtyping, a coercion semantics makes explicit the conversions from one type to another. Such a semantics is typically defined on typing derivations rather than on typing judgments. To avoid semantic ambiguity, it is expected to be coherent, i.e., independent of the typing derivation for a given typing judgment. In this talk I will present step-indexed logical relations for establishing the coherence of coercion semantics of programming languages with subtyping. To illustrate the effectiveness of the proof method, I will give an overview of a coherence proof for a type-directed CPS translation of a call-by-value lambda calculus with delimited continuations. Additionally, I will pinpoint the most interesting aspects of a Coq formalization of this work, in particular those that deal with step-indexing.


14h00, room A006

Tuesday, June 9#

Display calculi for dynamic modal logics. Sabine Frittella (LIF, Marseille)

First, we introduce display calculi and the key notions needed to understand and use them; we assume familiarity with the following notions: normal modal logic, Kripke frames, sequent calculi and lattices. Then we present the meta-theorem for cut-elimination for display calculi. Finally, we present the design of a display calculus for the logic EAK (Epistemic Actions and Knowledge).


14h00, room C103

Tuesday, May 5#

Static program analysis by abstract interpretation and SMT decision procedures. Julien Henry (Verimag)

Static program analysis aims at automatically proving that a program satisfies certain properties. Abstract interpretation is a theoretical framework for computing program invariants, that is, properties of the program variables that hold for every execution. The precision of the computed invariants depends on many parameters, in particular on the abstract domain and on the iteration order used during the invariant computation. Our work proposes several extensions of this method that improve the precision of the analysis.
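
As a toy illustration of the kind of invariant computation at stake (a minimal sketch on a hypothetical program, not an example from this work): an interval analysis with widening and one narrowing step for the loop `x = 0; while x < 100: x = x + 1`, whose loop-head invariant is 0 <= x <= 100.

```python
NEG_INF, POS_INF = float("-inf"), float("inf")

def join(a, b):                    # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):               # push unstable bounds to infinity to force termination
    lo = old[0] if new[0] >= old[0] else NEG_INF
    hi = old[1] if new[1] <= old[1] else POS_INF
    return (lo, hi)

def analyze():
    x = (0.0, 0.0)                                   # abstract value of x at the loop head
    while True:
        guard = (x[0], min(x[1], 99.0))              # refine with the guard x < 100
        body = (guard[0] + 1.0, guard[1] + 1.0)      # abstract effect of x = x + 1
        new = widen(x, join((0.0, 0.0), body))       # merge with the loop entry, then widen
        if new == x:                                 # post-fixpoint reached: (0, +inf)
            break
        x = new
    # one descending ("narrowing") iteration recovers the precise upper bound
    guard = (x[0], min(x[1], 99.0))
    body = (guard[0] + 1.0, guard[1] + 1.0)
    x = join((0.0, 0.0), body)                       # loop-head invariant: 0 <= x <= 100
    exit_state = (max(x[0], 100.0), x[1])            # after the loop: x == 100
    return x, exit_state

print(analyze())   # ((0.0, 100.0), (100.0, 100.0))
```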

Abstract interpretation can also be made more precise by distinguishing all execution paths of the program, at the cost of an exponential blow-up in complexity. Satisfiability modulo theories (SMT), whose decision procedures have improved considerably over the last decade, makes it possible to represent these sets of paths implicitly. We propose to use this implicit SMT-based representation and to combine it with state-of-the-art iteration orders to obtain more precise analyses.

We then propose to couple SMT and abstract interpretation in new algorithms, called Modular Path Focusing and Property-Guided Path Focusing, which compute loop and function summaries in a modular way, guided by error traces. Our technique has several uses: it can show that an error state is unreachable, but it can also infer preconditions for loops and functions.

We apply our static analysis methods to the estimation of the worst-case execution time (WCET). We first show how to express this problem via optimization modulo theories, and why a naive SMT encoding of the problem yields formulas that are too hard for all current solvers. We propose a simple and efficient way to reduce the solving time of SMT solvers considerably, by adding to the formulas certain implied properties obtained by static analysis. We show that this particular WCET case suggests that SMT solvers in general could be improved by methods derived from static program analysis.

Finally, we present the implementation of Pagai, a static analyser for LLVM that computes numerical invariants using the various methods proposed. We have compared the implemented techniques on open-source programs and on benchmarks used by the community.


14h00, room B013

Friday, April 24#

Fractals, undecidability and multi-tape automata. Timo Jolivet (Institut de Mathématiques de Toulouse)

We prove that the property of having empty interior is undecidable for a simple and natural family of fractals (defined by 2D affine iterated function systems). The methods used involve multi-tape automata and a variant of Post's correspondence problem. This is joint work with Jarkko Kari.


14h00, room A006

Tuesday, April 21#

Of Coq and interactive proofs. Arnaud Spiwack (CRI, Mines ParisTech)

The Coq proof assistant is a language, based on Martin-Löf's dependent type theory, which is a synthesis of a (functional) programming language and a mathematical language. As such, Coq, and dependent type theory in general, has a unique outlook on formal methods and program correctness. Proofs and programs can be interleaved in many ways, making it possible to keep proofs of program properties local. At the extreme end of this spectrum, it is possible to write programs in the form of constructive mathematical proofs.

In this talk, I will give an overview of the Coq proof assistant and then I will open the hood and show how the interactive proof facilities of Coq are implemented. Historically, they used an idea due to Robin Milner for the proof assistant LCF (which gave us the ML programming language), but dependent type theories such as Coq allow for a more robust design. The API and implementation of the interactive proof facilities of Coq rely on the notion of monad which has been popularised by the Haskell programming language. This yields a powerful and flexible API whose deduction rules closely match those of Coq, whereas the implementation based on Milner's design was limited to an approximation of these rules.


11h00, room A006

Distributed system design: there are still pitfalls on the road ahead. Cyril Cassagnes (Université du Luxembourg)

Heterogeneous and distributed computer systems are becoming the norm. That is why, in this talk, we will discuss the two following aspects. The first aspect is systems heterogeneity, in both software and hardware; for software, this heterogeneity also shows in the functional and non-functional properties of the components or agents forming the system. The second aspect of these systems is related to data. The value of digital data has moved information systems into a strategic position, and they must process more and more data; this has led to large distributed systems, which aim to be distributed in order to achieve better performance through distributed computation and storage. In both cases, it is important to be able to verify the behaviour of the system, or of some of the algorithms that constitute it, especially when these systems are deployed in our daily environment. Indeed, these systems are complex to implement or model and keep growing in complexity, which is why tools are required. For instance, for the first aspect it is relevant to be able to expose emergent behaviour resulting from sub-system interactions at implementation time of a system of systems, in order to guarantee a level of safety and/or predictability of the system.


14h00, room B013

Friday, April 17#

Limit sets of cellular automata associated with a probability measure. Martin Delacourt (CMM, Santiago de Chile)

In the world of dynamical systems, one is often interested in asymptotic behaviours and in particular in the limit set, that is, the set of points of the system that can be visited arbitrarily late during an evolution. To refine this picture of asymptotic behaviours, we add a probability measure on the set of initial configurations and forget the parts of the space that tend to disappear.

We consider here the particular case of cellular automata, a classical model of discrete dynamical systems. They consist of infinitely many cells forming a regular graph, each holding a state from a finite set; their evolution applies the same local rule at every point simultaneously.

We will give a characterization of the limit sets associated with a probability measure, describe which large classical families of subshifts can be realized, and present a Rice theorem for these sets.


14h00, room A006

Tuesday, April 14#

Coinduction, Equilibrium and Rationality of Escalation. Pierre Lescanne (ENS Lyon)

Escalation is the behaviour of players who play forever in the same game. Such a situation is a typical field of application for coinduction, the tool conceived for reasoning about infinite mathematical structures. In particular, we use coinduction to study formally the game called the "dollar auction", which is considered the paradigm of escalation. Unlike what has been accepted since 1971, we show that, provided one assumes that the other agent will always stop, bidding is rational, because it results in a subgame perfect equilibrium. We show that this is not the only rational strategy profile (the only subgame perfect equilibrium). Indeed, if an agent stops and will stop at every step, whereas the other agent keeps bidding, we claim that he is rational as well, because this corresponds to another subgame perfect equilibrium. In the infinite dollar auction game, the behaviour in which both agents stop at each step is not a Nash equilibrium, hence not a subgame perfect equilibrium, hence not rational. Fortunately, the notion of rationality based on coinduction fits with common sense and experience. Finally, the possibility of a rational escalation in an arbitrary game can be expressed as a predicate on games, and the rationality of escalation in the dollar auction game can be proved as a theorem which we have verified in the proof assistant Coq.

In this talk we will recall the principles of infinite extensive games and use them to introduce coinduction and equilibria (Nash equilibrium, subgame perfect equilibrium). We will show how one can prove that the two strategy profiles presented above are equilibria, and how this leads to a "rational" escalation in the dollar auction. We will show that escalation may even happen in a much simpler game, called the 0,1-game.


14h00, room B013

Thursday, April 9#

Approximate Consensus in Highly Dynamic Networks. Bernadette Charron-Bost (CNRS, LIX)

(joint work with Matthias Függer and Thomas Nowak)

We investigate the approximate consensus problem in highly dynamic networks in which topology may change continually and unpredictably. We prove that in both synchronous and partially synchronous networks, approximate consensus is solvable if and only if permanently, the communication graph has a rooted spanning tree. The striking point is that the spanning tree may continually change over time without preventing nodes from converging to consensus. Actually the class of averaging algorithms, which have the benefit of being memoryless and requiring no process identifiers, entirely captures the solvability issue of approximate consensus in that the problem is solvable if and only if it can be solved using any averaging algorithm.

We develop a proof strategy which consists in a reduction to the "non-split" networks. It dramatically improves the best known upper bound on the decision times of averaging algorithms and yields a non-averaging algorithm for approximate consensus in non-anonymous networks with a decision time that is quadratic in the number of nodes. We also prove that a general upper bound on the decision times of averaging algorithms has to be exponential, shedding light on the high price of anonymity. Finally we apply our results to networked systems with a fixed topology and benign fault models to show that with n processes, up to 2n-3 link faults per round can be tolerated for approximate consensus, increasing by a factor 2 the tight bound for exact consensus established by Santoro and Widmayer.
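
To make the notion of an averaging algorithm concrete, here is a minimal sketch (the network model, graph schedule and parameters are our own illustrative choices, not taken from the talk): at every round each node replaces its value by the average of the values it hears together with its own, and under the rooted-spanning-tree condition the values contract towards a common limit.

```python
import random

def averaging_round(values, edges):
    """One synchronous round: node i averages its own value with those it hears.
    `edges` is a set of directed pairs (sender, receiver)."""
    n = len(values)
    return [
        (values[i] + sum(values[j] for j in range(n) if (j, i) in edges))
        / (1 + sum(1 for j in range(n) if (j, i) in edges))
        for i in range(n)
    ]

n = 5
values = [float(i) for i in range(n)]                 # arbitrary initial inputs
for _ in range(60):
    # a dynamic graph that changes every round but always has node 0 as a root
    edges = {(0, i) for i in range(1, n)}
    edges.add((random.randrange(n), random.randrange(n)))
    values = averaging_round(values, edges)

print(values)  # all entries are now within a tiny epsilon of each other
```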


15h00, room C103

Tuesday, April 7#

Model-checking term rewriting systems: reachability analysis guided by temporal properties. Vincent Hugot (Inria Lille)

We present a model-checking framework summarised as a variant of reachability analysis for term rewriting systems. Reachability analysis endeavours to show that "a bad state is never reached", and relies on the computation of an over-approximation of the set of reachable terms, which is of course not calculable in general. We aim to generalise this approach, in the sense of lifting it from the specific statement "a bad state is never reached" to arbitrary specifications in some temporal logic -- though in the case we study, the statements dictate the order in which the transitions of the system may occur.

For now, the method supports a fragment of LTL sufficiently expressive to describe common safety properties. In a first step, translation rules reformulate the verification problem into an equivalent expression of propositional logic whose atoms are comparisons of languages obtained through rewriting, without regard for decidability; we call such a formula a rewrite proposition. In the second step, the rewrite proposition is given a concrete representation in terms of tree automata with or without constraints. Since the general problem is undecidable, these representations are sometimes approximations, for which we use constructions (tree automata completion) studied separately for reachability analysis; the end result is a positively approximated decision procedure for the general problem, i.e. a procedure that yields "yes, the system is correct" or "maybe". (Actually there may be several ways to approximate the proposition, and thus a set of such procedures is generated.)

We draw a few perspectives regarding the future development of this method, in particular the extension of the supported logic, and the introduction of rewrite strategies into the mix, e.g. innermost.


11h00, room B013

Quantum attacks against iterated block ciphers. Marc Kaplan (Telecom ParisTech)

We study the amplification of security against quantum attacks provided by iteration of block ciphers. In the classical case, the meet-in-the-middle attack is a generic attack against such constructions: it reduces the time required to break double iterations to only twice the time it takes to attack a single ideal block cipher, whereas a naive cryptographer would expect double iteration to lead to a quadratic improvement of security. We introduce the quantized version of this attack, which is an application of Ambainis' celebrated quantum algorithm for the Element Distinctness problem. We then prove the optimality of the attack against two iterated ideal ciphers. An important consequence is that if a quantum adversary is capable of breaking a single encryption in time t, the time to break two iterated ideal ciphers is at least t^(4/3). In other words, the amplification of security provided by iteration is much larger against quantum adversaries than against classical ones. We then investigate security amplification by composition further by examining the case of four iterations. We quantize a recent technique introduced by Dinur, Dunkelman, Keller and Shamir, called the dissection attack. In the classical case, this attack saves a great amount of memory. Surprisingly, the quantized version also leads to savings in time compared to a meet-in-the-middle approach. This seems to indicate that as the number of iterations grows, the resistance against quantum attacks decreases.
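
To fix intuition for the classical baseline being quantized, here is a minimal sketch of the meet-in-the-middle attack on double encryption, written against a toy 16-bit add-rotate-xor cipher of our own (purely illustrative; it is not the ideal cipher of the talk's analysis):

```python
def toy_encrypt(key, block):
    """Stand-in for a 16-bit block cipher with a 16-bit key (illustrative only)."""
    x = block & 0xFFFF
    for _ in range(4):                         # add-rotate-xor rounds, all invertible
        x = (x + key) & 0xFFFF
        x = ((x << 5) | (x >> 11)) & 0xFFFF
        x ^= key
    return x

def toy_decrypt(key, block):
    x = block & 0xFFFF
    for _ in range(4):                         # undo the rounds in reverse
        x ^= key
        x = ((x >> 5) | (x << 11)) & 0xFFFF
        x = (x - key) & 0xFFFF
    return x

def meet_in_the_middle(pairs):
    """Candidate (k1, k2) for C = E_k2(E_k1(P)) in ~2*2^16 cipher calls, not 2^32."""
    (p0, c0), (p1, c1) = pairs
    table = {}
    for k1 in range(1 << 16):                  # forward half: middle value -> possible k1
        table.setdefault(toy_encrypt(k1, p0), []).append(k1)
    survivors = []
    for k2 in range(1 << 16):                  # backward half: meet in the middle
        for k1 in table.get(toy_decrypt(k2, c0), []):
            if toy_encrypt(k2, toy_encrypt(k1, p1)) == c1:   # filter with a second pair
                survivors.append((k1, k2))
    return survivors

k1, k2, p0, p1 = 0x1234, 0xBEEF, 0x0042, 0x7777
pairs = [(p, toy_encrypt(k2, toy_encrypt(k1, p))) for p in (p0, p1)]
print(meet_in_the_middle(pairs))               # contains the true pair (0x1234, 0xBEEF)
```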


14h00, room B013

Wednesday, April 1st#

Model Checking Parameterized Asynchronous Shared-Memory Systems. Antoine Durand-Gasselin (TU Munich)

This is joint work with Javier Esparza, Pierre Ganty and Rupak Majumdar. These results have been submitted to CAV.

We characterize the complexity of liveness verification for parameterized systems consisting of a leader process and arbitrarily many anonymous and identical contributors. Processes communicate through a shared, bounded-value register. While each operation on the register is atomic, there is no synchronization primitive to execute a sequence of operations atomically.

We analyze the case in which processes are modeled by finite-state machines or pushdown machines and the property is given by a Büchi automaton over the alphabet of read and write actions of the leader. We show that the problem is decidable, and has a surprisingly low complexity: it is NP-complete when all processes are finite-state machines, and is PSPACE-hard and in NEXPTIME when they are pushdown machines. This complexity is lower than for the non-parameterized case: liveness verification of finitely many finite-state machines is PSPACE-complete, and undecidable for two pushdown machines.

For finite-state machines, we characterize infinite behaviors using existential abstraction and semilinear constraints. For pushdown machines, we show how contributor computations of high stack height can be simulated by computations of many contributors, each with low stack height. Together, our results characterize the complexity of verification for parameterized systems under the assumptions of anonymity and asynchrony.


14h00, room B013

Tuesday, March 31#

Controller synthesis for multi-component stochastic games with multiple objectives. Nicolas Basset (University of Oxford)

I will present the results of two recent papers on the synthesis of controllers (also called strategies) for systems modelled as stochastic games with multiple winning conditions. In the first paper (CONCUR 2014), we introduce a parallel product of stochastic games. We show how the compositional, assume-guarantee verification rules for probabilistic automata (one-player stochastic games) can be lifted to our setting, and we explain how to compose winning strategies for the components into a global winning strategy for the composed game. In the second paper (TACAS 2015), we address the almost-sure satisfaction of several mean-payoff objectives. We give a method for synthesising epsilon-optimal strategies for this type of winning condition: given a Pareto-optimal objective vector v, the synthesised strategy keeps the multi-dimensional mean payoff above the threshold vector v - epsilon. We use this synthesis method as a building block in the compositional method of the first paper (CONCUR 2014), which we illustrate on a case study from avionics. This is joint work with Clemens Wiltsche, Marta Kwiatkowska and Ufuk Topcu.


14h00, room B013

Thursday, March 26#

Formal safety proofs in Coq: robot networks and compilation of synchronous languages. Lionel Rieg (Collège de France)

In a first part, I will present the Pactole project, which aims at building a formal framework in Coq for reasoning about robot networks. This framework must be flexible enough to accommodate the various model variants (presence of Byzantine robots, sensing capabilities of the robots, ...) by changing its parameters. I will illustrate the interest of this framework on a fundamental problem: gathering (on a line), which asks robots to gather at a single point. In full generality this problem is known to be impossible, with a formal proof in Pactole to back it up. Nevertheless, I will give an algorithm that works for all but a single initial position, precisely the one used to show impossibility in the general case. We will then see how to generalize these results to higher dimensions.

In a second part, I will turn to the compilation of the synchronous language Esterel. The goal of this work is to build a proved compiler from Esterel to circuits, as CompCert does from C to assembly. Unlike usual languages, synchronous languages speak about time in a precise way, which strongly influences their design. The difficulties of compilation are thus of a very different nature, notably because the target is not a programming language. I will present the challenges of this compilation and some of the semantics that relate the source language to its translation into circuits.


15h00, room B011

Tuesday, March 24#

Optimal Non-Perfect Uniform Secret Sharing Schemes. Tarik Kaced (Univ. Hong Kong)

A secret sharing scheme is non-perfect if some subsets of participants cannot recover the secret value but have some information about it. This work is dedicated to the search of efficient non-perfect secret sharing schemes. The efficiency is measured by means of the information ratio, the ratio between the maximum length of the shares and the length of the secret value.

In order to study perfect and non-perfect secret sharing schemes with all generality, we describe the structure of the schemes through their access function, a real function that measures the amount of information that every subset of participants knows about the secret value. We present new tools for the construction of secret sharing schemes. In particular, we construct a secret sharing scheme for every access function.

We find a new lower bound on the information ratio that is better than the ones previously known. In particular, this bound is tight for uniform access functions. The access function of a secret sharing scheme is uniform if all participants play the same role in a scheme (e.g. ramp secret sharing schemes). Moreover, we construct a secret sharing scheme with optimal information ratio for every rational uniform access function.


11h00, room A006

Automatic verification of numerical program invariants by constrained optimization and policy iteration. Assalé Adjé (ONERA Toulouse)

In general, the verification of a property of numerical programs is undecidable. However, the theory of abstract interpretation makes it possible to prove that a property holds by constructing a computable over-approximation of the possible values taken by the program variables, on which the property is true.

In this presentation, I will show how to construct such an over-approximation using constrained-optimization methods. I will then consider the following problem: how to obtain the most precise over-approximation possible. Indeed, when one wants to prove, for instance, that the reachable values are bounded, the most precise over-approximation is sought. I will show how a fixed-point algorithm coming from game theory, namely policy iteration, yields more precise bounds.


14h00, room A006

Friday, March 13#

Implementability issues for real-time distributed systems. Thomas Chatain (LSV)

Formal models for real-time distributed systems, like time Petri nets and networks of timed automata, have proved useful for the verification of real-time systems. On the other hand, the question of using these models as specifications for designing real-time systems raises several difficulties. Here we focus on those related to the distributed nature of the system. Implementing a model may be possible at the cost of some transformations, which make it suitable for the target device. In this talk, I present several results about the semantics of distributed real-time systems and provide methods for the design of such systems.


14h00, room A006

Tuesday, March 10#

Formal Verification of (Electronic) Exams. Jannik Dreier (ETH Zürich)

Nowadays, students can be assessed not only by means of classical pencil-and-paper tests, but also using electronic exams. Such electronic exams can be taken in examination centers, or sometimes even from home via the internet. Electronic exams are appealing as they can reach larger audiences by simplifying the exam process, but they are exposed to new threats, including not only dishonest students, but also outside attackers, and misbehaving authorities. These threats are amplified by two issues: the lack of understanding of what security means for electronic exams (except the old concern about student cheating), and the absence of tools to verify whether an exam system is secure. In this talk, I will present formal definitions of several fundamental authentication, privacy and verifiability properties for exam protocols. As case studies, I will discuss three exam protocols: two recent electronic exam protocols, and the pencil-and-paper exam used by the University of Grenoble, showing that our approach can also cover classical exam systems. The automatic analysis using ProVerif highlights several weaknesses, underlining the need for formal verification.


11h00, room B013

F*: Higher-Order Effectful Program Verification. Chantal Keller (Microsoft Research Cambridge & MSR-Inria Joint Centre)

F* is an ML-like language designed for program verification. It is based on refinement types, a powerful extension of ML types for expressing properties of programs. These properties are automatically checked via the generation of verification conditions that are then discharged by SMT solvers. F* provides a uniform framework to deal both with effectful and possibly non-terminating programs, for which one might want to establish some properties, and with pure, terminating programs, which can also be used inside the logical refinements. To this end, it relies on a fine-grained control of effects and a termination checker based on a semantic criterion. F* finally generates code for various backends, ranging from OCaml to JavaScript.

After giving an overview of this language, I will present some aspects of its internals, in particular the treatment of effects and termination, and the generation of verification conditions all the way down to SMT solvers. I will finally discuss ongoing work on a full certification of F*.


14h00, room B013

Wednesday, March 4#

Information-flow Security: Definitions, Characterizations and Verification Complexity. Henning Schnoor (Univ. Kiel)

We develop a theory for state-based noninterference in a setting where different security policies apply in different parts of a given system. Our theory comprises appropriate security definitions, characterizations of these definitions, for instance in terms of unwindings, algorithms for analyzing the security of systems with local policies, and corresponding complexity results. Additionally, we study the verification complexity for established notions of noninterference such as IP-security and TA-security. In both of these cases, we obtain NL algorithms, significantly improving on previously known bounds.


14h00, room A006

Thursday, March 5#

Monitoring of Temporal First-order Properties with Aggregations. Eugen Zalinescu (ETH Zurich)

In system monitoring, one is often interested in checking properties of aggregated data. Current policy monitoring approaches are limited in the kinds of aggregations they handle. To rectify this, we extend an expressive language, metric first-order temporal logic, with aggregation operators. Our extension is inspired by the aggregation operators common in database query languages like SQL. We provide a monitoring algorithm for this enriched policy specification language. We show that, in comparison to related data processing approaches, our language is better suited for expressing policies, and our monitoring algorithm has competitive performance.


11h00, room A006

Tuesday, February 24#

Limit measures in cellular automata. Benjamin Hellouin (Univ. Andres Bello, Santiago)

I am interested in the behaviour of cellular automata on a random initial configuration, which amounts to studying their action on the space of probability measures. I will give an overview of my research on the limit measures of these systems, which can be seen as their typical asymptotic behaviour, or as their output when they are viewed as probabilistic algorithms.

In dimension 1, the measures reachable from a simple initial measure, e.g. the uniform measure, can be completely described by computability conditions. Extending this result to higher dimensions is ongoing work that has already produced partial results. However, this approach cannot yield limit measures with full support, which can only be reached by surjective dynamics.

Although surjective cellular automata are capable of computation, their action on measures exhibits a strong rigidity that is still poorly understood. Experiments suggest the conjecture that some subclasses make every simple initial measure converge to the uniform measure. We give the first complete proof of this phenomenon, for a cellular automaton with particular algebraic properties, using self-similar structures in its time evolution.


14h00, room A006

Tuesday, February 17#

Distributed algorithms for optimizing the deployment of MEMS microrobots: are formal methods needed? Hicham Lakhlef (Univ. Franche-Comté)

In this seminar I will present some of the work done during my PhD, on distributed algorithms for the reconfiguration of MEMS microrobots.

MEMS microrobots are attracting growing attention, since they can carry out many missions and tasks in a wide range of applications, including odour localization, fire fighting, medical services, security and rescue. They can perform tasks in environments that are unstructured, complex, dynamic and unknown. To carry out these missions and tasks, MEMS microrobots must run redeployment protocols in order to adapt to the working conditions. These reconfiguration protocols have to cope with the characteristics of MEMS nodes, in particular their limited resources (memory, energy). The algorithms should also be efficient, scalable, robust, and use only local information. In this seminar, I present efficient reconfiguration algorithms for modular MEMS microrobots. The goal of these algorithms is to optimize the logical topology of the microrobots using distributed redeployment protocols. I present and compare two distributed algorithms for reconfiguring a chain into a square: the first is dynamic in a wake/sleep sense and guarantees network connectivity during the reconfiguration process, while the second, which is faster, does not guarantee connectivity during reconfiguration. I show how to use parallelism of movements to optimize the running time of the algorithms and to minimize the number of node movements, while keeping the network connected and the algorithms robust and scalable. Finally, I present a reconfiguration algorithm that handles any starting topology.

As perspectives, I will argue that formal methods are needed to prove that microrobot systems programmed with distributed protocols will always behave as expected, under changing conditions and in a variety of uncertain environments.


14h00, room A006

A formal-proof framework for networks of mobile robots. Xavier Urbain (CNAM)

Networks of mobile robots have been receiving increasing attention from the distributed-algorithms community in recent years. While using swarms of robots cooperating on various tasks is an appealing prospect, analysing the feasibility of certain tasks in this emerging setting is extremely difficult, in particular when some robots exhibit so-called Byzantine behaviours, that is, arbitrary or even hostile ones. To obtain formal guarantees in this context, we propose a mechanised formal framework based on the Coq proof assistant and tailored to robot networks. In this talk we focus mainly on impossibility results, which are fundamental in that they establish what can or cannot be achieved, and allow one to derive bounds and, thereby, the optimality of certain solutions. Using an assistant such as Coq, which works in higher-order logic, lets us easily express quantifications over algorithms, the algorithms being treated as abstract objects. We illustrate the possibilities offered by our development by presenting the first formal proofs (and thus certifications) of results such as the impossibility of convergence for oblivious robots when a third of them are Byzantine, or the impossibility of gathering for an even number of robots evolving in R.


15h00, room A006

Tuesday, June 12#

Reachability Analysis of Hybrid Systems. Erika Abraham (RWTH Aachen)

Hybrid systems exhibit both dynamic and discrete behavior. Typically, the dynamic behavior stems from the continuous evolution of physical systems, whereas the discrete behavior stems from the execution steps of discrete controllers.

As a modeling language we consider hybrid automata, extending discrete automata with continuous dynamics. Given a hybrid automaton model of a hybrid system, the perhaps most basic question one could be interested in is whether certain model states can be reached from a given initial state. Though this reachability problem sounds simple, it is undecidable for all but some very simple subclasses of hybrid automata.

Nevertheless, there are different techniques to solve it in an incomplete manner. We discuss methods that use different geometric objects like polytopes, zonotopes, ellipsoids etc. to represent state sets, and apply a forward fixedpoint-based search to over-approximate the set of reachable states. Such methods need to determine successor sets under both discrete jumps and continuous evolution. The latter is often done by flowpipe construction, paving the whole flow by a set of geometric objects of the chosen type.
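
As a toy illustration of flowpipe construction (a one-dimensional, hand-rolled example of ours, far simpler than the polytope, zonotope or ellipsoid representations mentioned above): cover the continuous evolution of x' = -x from an initial interval by a sequence of boxes whose union over-approximates the reachable states.

```python
def step(lo, hi, dt):
    """One forward step for the flow x' = -x starting anywhere in [lo, hi].
    The derivative over the box lies in [-hi, -lo], so [lo - hi*dt, hi - lo*dt]
    is a coarse enclosure of where trajectories can be after dt (sound for this
    contracting example; real tools use much tighter flowpipe representations)."""
    return lo - hi * dt, hi - lo * dt

def flowpipe(lo, hi, horizon, dt):
    """Union of boxes covering all states reachable from [lo, hi] within `horizon`."""
    boxes = [(lo, hi)]
    for _ in range(round(horizon / dt)):
        lo, hi = step(lo, hi, dt)
        boxes.append((lo, hi))
    return boxes

boxes = flowpipe(0.8, 1.2, horizon=1.0, dt=0.05)
print("number of boxes:", len(boxes))
print("over-approximation of reachable x:",
      (min(b[0] for b in boxes), max(b[1] for b in boxes)))
```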


14h00, room C005

Tuesday, May 20#

b2llvm: B developments onto the LLVM. David Deharbe (Univ. Rio Grande do Norte)

We present b2llvm, a code generator for the B method targeting the LLVM intermediate format. b2llvm currently handles the following elements of the B language: simple data types, imperative instructions and component compositions. The translation rules from some essential constructs of B implementations to LLVM source code are presented and illustrated with examples. Some verification and validation strategies are proposed.


14h00, room B013

Tuesday, April 29#

Proofs, Counterexamples, and Codatatypes for Isabelle/HOL. Jasmin Blanchette (TU Munich)

Isabelle is a generic proof assistant that supports HOL (Higher-Order Logic), ZF (Zermelo-Fraenkel Set Theory), and TLA+ (Temporal Logic of Actions). My research so far has focused on Isabelle/HOL: (1) I enhanced the popular Sledgehammer proof tool, which is based on external automatic provers (e.g., E, SPASS, Vampire, and Z3); (2) I designed the SAT-based counterexample generator Nitpick; (3) together with a few colleagues, I am adding support for codatatypes and corecursive functions. In this talk, I will briefly summarize these strands of research and my future plans.


14h00, room B013

Wednesday, March 19#

Some contributions of Linear Logic to complexity theory. Clément Aubert (IML)

In proof theory, Linear Logic offers a formalisation of proofs that places their dynamics at the centre of the investigation and pays close attention to structural operations. Since the introduction of its so-called bounded and light variants, Linear Logic has contributed to complexity theory in several ways. We will present various advances, focusing on implicit characterizations of complexity classes, that is, characterizations that are independent of measures on abstract machine models. Using a range of syntactic and semantic tools, Linear Logic has provided characterizations of parallel execution, of non-determinism, and of so-called time- or space-efficient programs. We will give a quick overview of these methods, relating them to other implicit approaches to complexity and highlighting our contributions.


14h00, room B013

Friday, November 8#

Knowledge is Power, and so is Communication. Rohit Parikh (CUNY)

The BDI theory says that people's actions are influenced by two factors, what they believe and what they want. Thus we can influence people's actions by what we choose to tell them or by the knowledge that we withhold. Shakespeare's Beatrice-Benedick case in Much Ado about Nothing is an old example.

Currently we often use Kripke structures to represent knowledge (and belief). So we will address the following issues:

a) How can we bring about a state of knowledge, represented by a Kripke structure, not only about facts, but also about the knowledge of others, among a group of agents?

b) What kind of a theory of action under uncertainty can we use to predict how people will act under various states of knowledge?

c) How can A say something credible to B when their interests (their payoff matrices) are in partial conflict?
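
To make the Kripke-structure setting above concrete, here is a minimal sketch (a toy two-agent example of our own, not from the talk) of worlds, per-agent accessibility relations, and the usual semantics of "agent a knows phi", including knowledge about other agents' knowledge:

```python
# worlds, one accessibility relation per agent, and atomic facts per world
worlds = {"w1", "w2", "w3"}
access = {
    "alice": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2"), ("w3", "w3")},
    "bob":   {(w, w) for w in worlds},        # Bob can tell all worlds apart
}
facts = {"w1": {"p"}, "w2": set(), "w3": {"p"}}

def holds(world, prop):
    """prop is an atom (a string) or a triple ('K', agent, subformula)."""
    if isinstance(prop, str):
        return prop in facts[world]
    _, agent, sub = prop                      # knowledge operator K_agent
    return all(holds(v, sub) for (u, v) in access[agent] if u == world)

print(holds("w1", ("K", "bob", "p")))                  # True: Bob knows p
print(holds("w1", ("K", "alice", "p")))                # False: Alice considers w2 possible
print(holds("w1", ("K", "bob", ("K", "alice", "p"))))  # knowledge about others' knowledge
```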


15h00, room C005

Tuesday, October 15#

Tutorial on Quantum Communication. Iordanis Kerenidis (CNRS, LIAFA)

Part of the Journées Informatique Quantique 2013, October 14 and 15.


14h00, room A008

Tuesday, October 22#

Journée Doctorants.


room A008

Wednesday, October 9#

Parameterized Model Checking of Fault-tolerant Distributed Algorithms by Abstraction. Josef Widder (TU Wien)

Fault-tolerant distributed algorithms are central for building reliable, spatially distributed systems. In order to ensure that these algorithms actually make systems more reliable, we must ensure that these algorithms are actually correct. Unfortunately, model checking state-of-the-art fault-tolerant distributed algorithms (such as Paxos) is currently out of reach except for very small systems.

In order to be eventually able to model check such fault-tolerant distributed algorithms also for larger systems, several problems have to be addressed. In this talk, I will present recent work that considers modeling and verification of fault-tolerant algorithms that basically only contain threshold guards to control the flow of the algorithm. I will present an abstraction technique that allows parameterized model checking of safety and liveness properties of such algorithms. Our experimental data shows that our abstraction is precise, and allows automated verification of simple, fault-tolerant distributed algorithms for various fault models. As threshold guards are widely used in fault-tolerant distributed algorithms (and also in Paxos), efficient methods to handle them bring us closer to the above mentioned goal.


14h00, room C005

Tuesday, October 8#

The dining philosophers revisited: a new algebraic approach to an old modeling problem? David Janin (LaBRI)

In the field of distributed algorithms, the dining philosophers problem is a typical algorithmic and modeling problem. How do we model distributed states? Local transitions? Their combinations? The resulting global transitions? Correctness criteria? There are already many modeling proposals. In this talk, we will provide yet another one. We will show how inverse semigroup theory -- a mathematical theory developed since the 1950s yet quite unknown in computer science -- and the associated language theory provide robust concepts and tools for such modeling.

This is joint (exploratory) work with Anne Dicky, to be presented at MSR 2013 in Rennes. Oddly enough, this approach was inspired by recent experiments in music system modeling and programming.


15h00, room B013

Tuesday, September 24#

Symbolic Probabilistic Analysis of Off-line Guessing. Bruno Conchinha (ETH Zurich)

We introduce a probabilistic framework for the automated analysis of security protocols. Our framework provides a general method for expressing properties of cryptographic primitives, modeling an attacker who is more powerful than conventional Dolev-Yao attackers. It allows modeling equational properties of cryptographic primitives as well as property statements about their weaknesses, e.g. primitives leaking partial information about messages or the use of weak algorithms for random number generation. Moreover, we can use these properties to find attacks and estimate their success probability. Existing symbolic methods can neither model such properties nor find such attacks.

We show that the probability estimates we obtain are negligibly different from those yielded by a generalized random oracle model based on sampling terms into bitstrings while respecting the stipulated properties of cryptographic primitives.

As case studies, we use a prototype implementation of our framework to model non-trivial properties of RSA encryption and automatically estimate the probability of off-line guessing attacks on the EKE protocol.


14h00, room C005

Tuesday, July 23#

Malicious-but-cautious certificate authorities. Mark Ryan (University of Birmingham)

The "certificate authority" model for authenticating public keys of websites has been attacked in recent years, and several proposals have been made to reinforce it. We develop and extend a recent proposal called "certificate transparency", so that it properly handles certificate revocation. We show how extended certificate transparency has applications beyond the web. In particular, we detail how it can be used to build a secure end-to-end email or messaging system using PKI with no trusted parties. We believe this finally makes end-to-end encrypted email as usable and user-friendly as encrypted web browsing is today. Underlying these ideas is a new attacker model appropriate for cloud computing, which we call "malicious-but-cautious".


14h00, room C005

Tuesday, June 25#

The genus of regular languages. Florian Deloup (Institut de Mathématiques de Toulouse, Univ. Toulouse III)

If one forgets part of its structure, an automaton can be viewed as a graph, for which the notion of genus is well defined. Intuitively, a graph has genus at most g if it can be embedded into a sphere with g handles. Of course, from the computer-science point of view, automata are richer than graphs: they compute languages, and a given regular language is recognized by infinitely many automata. It is therefore natural to define the genus of a regular language L as the minimal genus of an automaton recognizing L. We will present some results and open questions about this new invariant of regular languages.


14h00, room C005


Wednesday, May 15#

Evaluation is MSOL compatible. Sylvain Salvati (INRIA, LaBRI)

We consider the simply-typed lambda calculus with fixpoint operators. Evaluation of a term gives as a result the Böhm tree of the term. We show that evaluation is compatible with monadic second-order logic (MSOL). This means that for a fixed finite vocabulary of terms, the MSOL properties of Böhm trees of terms are effectively MSOL properties of terms themselves. Theorems of this kind have been known for some graph operations: unfolding, and Muchnik iteration. Similarly to those results, our main theorem has diverse applications. It can be used to show decidability results, to construct classes of graphs with decidable MSOL theory, or to obtain MSOL formulas expressing properties of terms. Another application is decidability of a control-flow synthesis problem.

It is a joint work with Igor Walukiewicz.


14h00, room C005

Tuesday, April 16#

Analyzing Probabilistic Programs: Pushing the Limits of Automation. Joost-Pieter Katoen (Aachen University)

Probabilistic programs are pivotal for cryptography, privacy and quantum programs; they are typically only a few lines long, but hard to understand and analyse. The main challenge for their automated analysis is the infinite domain of the program variables combined with the occurrence of parameters. Here, parameters can range over concrete probabilities but can also encompass various aspects of the program, such as the number of participants, the size of certain variables, etc.

McIver and Morgan have extended the seminal proof method of Floyd and Hoare for sequential programs to probabilistic programs by making the program annotations real- rather than Boolean-valued expressions in the program variables. As in traditional program analysis, the crucial annotations are the loop invariants. The problem of synthesizing loop invariants is very challenging for probabilistic programs, where real-valued, quantitative invariants are needed.

In this talk, we present a novel approach to synthesize loop invariants for the class of linear probabilistic programs. In order to provide an operational interpretation to the quantitative annotations, we give a connection between McIver and Morgan's wp-semantics of probabilistic programs and parameterized Markov decision processes.
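To give a flavour of the real-valued annotations (an illustrative calculation of mine, not an example taken from the talk), here is the weakest pre-expectation, in the style of McIver and Morgan, of the post-expectation x for a single fair probabilistic choice:

  \[
    \mathrm{wp}\bigl(\{x := x+1\}\ [\tfrac{1}{2}]\ \{\mathrm{skip}\},\; x\bigr)
      \;=\; \tfrac{1}{2}\,\mathrm{wp}(x := x+1,\; x) + \tfrac{1}{2}\,\mathrm{wp}(\mathrm{skip},\; x)
      \;=\; \tfrac{1}{2}(x+1) + \tfrac{1}{2}\,x
      \;=\; x + \tfrac{1}{2}.
  \]

For loops, the analogous role is played by a quantitative invariant bounding such expected values, which is what the synthesis approach presented in the talk aims to produce automatically.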


14h00, room C005

Thursday and Friday, April 11 and 12#

Tuesday, April 9#

Sound Security Protocol Transformations. Binh Thanh Nguyen (ETH Zurich)

We propose a class of protocol transformations, which can be used to

  1. develop (families of) security protocols by refinement and
  2. abstract existing protocols to increase the efficiency of verification tools.
We prove the soundness of these transformations with respect to an expressive security property specification language covering secrecy and authentication properties. Our work clarifies and significantly extends the scope of earlier work in this area. We illustrate the usefulness of our approach on a family of key establishment protocols.


14h00, room C005

Wednesday, March 27#

Induction in SMT: Seminar day.

11:00 - 12:00 Decidability of inductive theorems based on rewriting induction. Takahito Aoto

Rewriting induction (Reddy, 1990) is a method for proving inductive theorems of term rewriting systems (TRSs). It is known that, in general, it is undecidable whether a given equation is an inductive theorem of a given TRS. Toyama (2002) and Falke and Kapur (2006) gave characterizations of equations and TRSs for which this question is decidable, based on rewriting induction, that is, using rewriting induction as the underlying decision procedure. We give an abstract view of this approach, and try to integrate some known decidable classes.

15:00 - 16:00 Rewriting Induction + Linear Arithmetic = Decision Procedure. Stephan Falke.

This talk discusses new results on the decidability of inductive validity of conjectures. For these results, a class of term rewrite systems (TRSs) with built-in linear integer arithmetic is introduced and it is shown how these TRSs can be used in the context of inductive theorem proving. The proof method developed for inductive theorem proving couples (implicit) inductive reasoning with a decision procedure for the theory of linear integer arithmetic with (free) constructors. The effectiveness of the new decidability results on a large class of conjectures is demonstrated by an evaluation of the prototype implementation Sail2.

16:00 - 17:00 Noetherian Induction for First-Order Reasoning. Sorin Stratulat.

Noetherian induction is a powerful mathematical principle that allows one to reason about sets with an unbounded number of elements, provided that there is a Noetherian ordering that forbids the construction of any infinite strictly decreasing sequence of elements. In first-order logic, we can distinguish two instantiations of it, depending on the kind of elements we are reasoning about: terms or (first-order) formulas. The term-based principles rely on schemata. An induction schema attaches a set of formulas, the induction hypotheses (IHs), to another formula, called the induction conclusion, with the intention that the IHs be used in the proof of the induction conclusion. Widely used in proof assistants, this principle can easily be integrated into sequent-based inference systems as a separate inference rule. The induction reasoning is local, at the schema level, and the induction orderings may differ from one schema to another. However, it is not lazy, since the IHs must be defined (sometimes long) before their use, or may end up not being used at all. Moreover, it cannot naturally manage mutual induction with other formulas from the proof.

The formula-based principles originate from Musser's inductionless induction (or proof by consistency) method, based on the Knuth-Bendix completion algorithm. The reasoning methods that implement them are lazy and allow mutual induction. On the other hand, they are reductive, i.e. the formulas newly derived in the proof should be smaller than (or equal to), at the ground level, the currently processed formula, and the induction ordering is unique for a given proof. In this talk, we sketch a cycle-based induction method that generalizes and keeps the best features of the above methods. The application of any IH is validated by a cycle of formulas governed by an induction ordering. The reasoning is local, at the cycle level, free of reductiveness constraints, and naturally performs mutual and lazy induction. From a theoretical point of view, our method synthesizes the overall usage of Noetherian induction reasoning in first-order logic. From a practical point of view, it allows for less restrictive specifications, more efficient implementations and proof certification processes.


room B013

Wednesday, March 27#

Obfuscation by Partial Evaluation of Distorted Interpreters. Professor Neil Jones, University of Copenhagen (visiting UL - LORIA)

(with Roberto Giacobazzi, Isabella Mastroeni, Verona; from PEPM 2012)

How to construct a general program obfuscator? We present a novel approach to automatically generating obfuscated code P' from any program P whose source code is given.

Start with a (program-executing) interpreter interp for the language in which P is written. Then “distort” interp so it is still correct, but its specialization P' with respect to P is transformed code that is equivalent to the original program, but harder to understand or analyze.

Potency of the obfuscator is proved with respect to a general model of the attacker, modeled as an approximate (abstract) interpreter. A systematic approach to distortion is to make program P obscure by transforming it to P' on which (abstract) interpretation is incomplete. Interpreter distortion can be done by making residual in the specialization process sufficiently many interpreter operations to defeat an attacker in extracting sensible information from transformed code. Our method is applied to: code flattening, data-type obfuscation, and opaque predicate insertion. The technique is language independent and can be exploited for designing obfuscating compilers.
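As a toy illustration of one of these distortions (my own sketch, not the paper's construction; the encoding enc(v) = 2*v + 13 and the tiny expression language are invented for the example): the interpreter below is still correct, because it decodes its final answer, but specializing it with respect to a fixed program would residualize the enc/dec arithmetic and produce equivalent yet harder-to-read code.

  def enc(v): return 2 * v + 13          # distorted internal representation of values
  def dec(c): return (c - 13) // 2       # inverse of enc

  def interp(expr, env):
      # expr is ('num', n) | ('var', x) | ('add', e1, e2); env maps names to encoded values.
      tag = expr[0]
      if tag == "num":
          return enc(expr[1])
      if tag == "var":
          return env[expr[1]]
      if tag == "add":                   # decode, add, re-encode: overall behaviour unchanged
          return enc(dec(interp(expr[1], env)) + dec(interp(expr[2], env)))
      raise ValueError(tag)

  def run(expr, **inputs):
      return dec(interp(expr, {k: enc(v) for k, v in inputs.items()}))

  print(run(("add", ("var", "x"), ("num", 1)), x=41))   # prints 42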


14h00, room A008

Tuesday, March 26#

Multi-Focus Rewrite Rules. Christophe Calvès (UL - LORIA)

Rewrite rules are frequently used to perform program transformations. This includes compilation (optimisations, code generation, ...) and evaluation to specify program semantics, for state exploration or just interpretation. This talk presents multi-focus rewrite rules and strategies, able to traverse, match and rewrite, in a single rule, several subterms of a subject. These rules are particularly well suited to the specification of programming-language semantics, as it often requires considering several elements (code fragment, memory, ...) to perform actions.

We will demonstrate the modularity and compositionality of our approach with the specification, in TOM (a compiler embedding algebraic terms and rewrite rules in many languages), of the semantics of an ML-like toy language.


14h00, room A008

Thursday, March 21#

Quantum Computing Day (UL - LORIA)

On Thursday, March 21, a day devoted to quantum computing will take place in room C103.


  • 10h00 - Welcome
  • 10h30 - 11h15 Gilles Dowek - TBA
  • 11h15 - 12h00 Pablo Arrighi - Quantum Cellular Automata: structure, universality
  • 14h00 - 14h45 Maarten Van den Nest - Classical simulation complexity of quantum circuits
  • 14h45 - 15h30 Emmanuel Jeandel - Dynamics of models of computation
  • 16h15 - 16h45 Benoit Valiron - Quipper: a scalable quantum programming language
  • 16h45 - 17h15 Vincent Nesme - Quantum walks and filter banks
  • 17h15 - 17h45 Alejandro Diaz-Caro - Vectorial types, non-determinism and probabilistic systems: towards a computational quantum logic


10h00-18h00, room C103

Tuesday, March 12#

Russian Cards, communication protocols, and cryptography. Hans van Ditmarsch (CNRS - LORIA)

Consider the following logic puzzle, known as 'Russian Cards':

From a pack of seven known cards 0,1,2,3,4,5,6, Alice and Bob each draw three cards and Eve gets the remaining card. How can Alice and Bob openly (publicly) inform each other about their cards, without Eve learning, for any of their cards, who holds it?

Clearly Alice and Bob are required to devise a secure protocol to communicate their secrets to each other, and Eve is the eavesdropper. Unlike traditional security settings, initially the three agents share knowledge: how many cards there are, and that everybody knows his own cards. This riddle, similar riddles, and things such as schemes for bidding in Bridge, have played a role in the development of information-based cryptography. One original publication (not treating this riddle) is by Fischer & Wright: Bounds on Secret Key Exchange Using a Random Deal of Cards, Journal of Cryptology, 1996.

I will present and discuss solutions to the riddle. There are several solutions. I will explain its interest to the dynamic epistemic logic community: various 'near solutions' can be described in terms of knowledge, common knowledge, and preservation of knowledge properties after update (typically, making ignorance public can leak information). I will also briefly present some recent results more or less in combinatorial mathematics. This includes work with my - until recently - collaborators in Sevilla. Some relevant publications:

  • M.H. Albert, R.E.L. Aldred, M.D. Atkinson, H.P. van Ditmarsch, C.C. Handley, Safe communication for card players by combinatorial designs for two-step protocols. Australasian Journal of Combinatorics, 33:33-46, 2005.
  • Andrés Cordón-Franco, Hans van Ditmarsch, David Fernández Duque, Joost J. Joosten, Fernando Soler-Toscano. A secure additive protocol for card players. Australasian Journal of Combinatorics 54: 163-175, 2012.
  • Colleen Swanson, Douglas Stinson. Combinatorial Solutions Providing Improved Security for the Generalized Russian Cards Problem. Designs, Codes and Cryptography; Published online first November 2012.
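Returning to the riddle itself, the following brute-force check (a sketch of mine, not material from the talk) verifies one well-known announcement for the hand {0,1,2}, namely the seven lines of the Fano plane, against the two informal requirements: Bob always learns Alice's hand, and Eve never learns the owner of any card other than her own.

  from itertools import combinations

  DECK = set(range(7))
  # One good announcement for Alice's hand {0,1,2}: the lines of the Fano plane.
  ANNOUNCEMENT = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

  def informative(ann):
      # Whatever Alice's hand in ann and whatever Bob's hand, exactly one member
      # of ann is disjoint from Bob's hand, so Bob can deduce Alice's hand.
      return all(sum(1 for h in ann if not (h & set(bob))) == 1
                 for alice in ann
                 for bob in combinations(DECK - alice, 3))

  def safe(ann):
      # For every card e that Eve may hold and every other card x, among the hands
      # of ann avoiding e there is one containing x and one not containing x,
      # so Eve cannot tell whether x belongs to Alice or to Bob.
      for e in DECK:
          consistent = [h for h in ann if e not in h]
          for x in DECK - {e}:
              if all(x in h for h in consistent) or all(x not in h for h in consistent):
                  return False
      return True

  print(informative(ANNOUNCEMENT), safe(ANNOUNCEMENT))   # prints: True True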


14h, room A008

Friday, March 1#

Deterministic regular expressions. Benoît Groz (Université de Tel-Aviv)

Deterministic regular expressions are used in several XML schema languages (DTDs, XML Schema). We present techniques for testing in linear time whether a regular expression is deterministic. We also show that membership of a word in the language of an expression can be decided in linear time for large families of deterministic expressions. Our algorithms build data structures over the syntax tree of the expression, whereas the classical algorithms for these problems rely on the construction of the Glushkov automaton, a quadratic construction when the alphabet size is unbounded.
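For contrast with the linear-time techniques of the talk, here is a sketch of my own of the classical Glushkov-based determinism test (the AST encoding and helper names are assumptions): an expression is deterministic iff no two distinct positions carrying the same letter appear together in its First set, or together in the Follow set of some position.

  def analyze(e, letters, follow):
      # e is ('sym', a) | ('alt', l, r) | ('cat', l, r) | ('star', e1).
      # Returns (nullable, first, last); positions are indices into `letters`;
      # `follow` maps each position to the set of positions that may follow it.
      tag = e[0]
      if tag == "sym":
          p = len(letters)
          letters.append(e[1])
          follow[p] = set()
          return False, {p}, {p}
      if tag == "alt":
          n1, f1, l1 = analyze(e[1], letters, follow)
          n2, f2, l2 = analyze(e[2], letters, follow)
          return n1 or n2, f1 | f2, l1 | l2
      if tag == "cat":
          n1, f1, l1 = analyze(e[1], letters, follow)
          n2, f2, l2 = analyze(e[2], letters, follow)
          for p in l1:
              follow[p] |= f2
          return n1 and n2, f1 | (f2 if n1 else set()), l2 | (l1 if n2 else set())
      if tag == "star":
          n, f, l = analyze(e[1], letters, follow)
          for p in l:
              follow[p] |= f
          return True, f, l
      raise ValueError(tag)

  def clash(positions, letters):
      # True if two distinct positions in the set carry the same letter.
      seen = set()
      for p in positions:
          if letters[p] in seen:
              return True
          seen.add(letters[p])
      return False

  def is_deterministic(e):
      letters, follow = [], {}
      _, first, _ = analyze(e, letters, follow)
      return not clash(first, letters) and not any(clash(s, letters) for s in follow.values())

  # (a|b)*a is the classic non-deterministic expression, b*a is deterministic.
  print(is_deterministic(("cat", ("star", ("alt", ("sym", "a"), ("sym", "b"))), ("sym", "a"))))  # False
  print(is_deterministic(("cat", ("star", ("sym", "b")), ("sym", "a"))))                         # True

This construction can be quadratic in the size of the expression, which is precisely the cost the syntax-tree-based algorithms of the talk avoid.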


14h, room A008

Tuesday, February 12#

Parikh Matrices of Words and Their Properties. Kumbakonam Govindarajan Subramanian (Prof. at Universiti Sains Malaysia)

Words and their combinatorial properties constitute a growing area of research that can be classified under discrete mathematics. A number of fundamental and significant mathematical results have been obtained in this study. This area can also be considered as a study of discrete sequences, or a study of the combinatorial properties of free monoids. A word is a finite or infinite sequence of symbols taken from a finite set, called the alphabet. A subword of a given word w is a subsequence of w. In the recent past, the notion of a special matrix associated with a word, called the Parikh matrix, has been introduced and investigated. This matrix gives information on the number of certain subwords of a given word. This talk is intended to provide some background and highlight interesting results established by researchers in this topic of Parikh matrices. An extension of the notion of a Parikh matrix to picture arrays, which are rectangular arrangements of symbols, will also be indicated.
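As a concrete illustration of the definition (my own sketch; the Parikh matrix of a word is the product of the elementary upper-triangular matrices of its letters), here is a small computation:

  def parikh_matrix(word, alphabet):
      # Product of the elementary (k+1) x (k+1) upper-triangular matrices of the
      # letters of `word` over the ordered alphabet a1 < a2 < ... < ak.
      # The entry in row i, column j (0-indexed, i < j) counts the occurrences of
      # the scattered subword a_{i+1} a_{i+2} ... a_j in the word.
      k = len(alphabet)
      m = [[int(i == j) for j in range(k + 1)] for i in range(k + 1)]    # identity
      for letter in word:
          q = alphabet.index(letter)
          e = [[int(i == j) for j in range(k + 1)] for i in range(k + 1)]
          e[q][q + 1] = 1                                                # elementary matrix of the letter
          m = [[sum(m[i][r] * e[r][j] for r in range(k + 1)) for j in range(k + 1)]
               for i in range(k + 1)]
      return m

  # Over {a < b}, the word "abab" has 2 a's, 2 b's and 3 occurrences of the subword ab:
  print(parikh_matrix("abab", "ab"))   # [[1, 2, 3], [0, 1, 2], [0, 0, 1]]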


14h, room A008

Friday, February 15#

Proofs of copyless message-passing programs in Heap-Hop. Etienne Lozes (LSV - Univ. Cassel)

Designing message-passing programs can become difficult when the cost of copying a message from the sender's address space to the receiver's becomes a critical factor for the program's efficiency. One solution to this problem is to use an address space shared by the sender and the receiver. In this approach, copying can be avoided, but sender and receiver must carefully agree on how the memory is accessed. Jules Villard and I have introduced an extension of separation logic that makes it possible to prove such programs correct with respect to memory accesses. Inspired by the Sing# language, our system introduces a notion of channel contract which guarantees stronger properties than those usually established by separation logic: proved programs are guaranteed to be free of communication errors (forgotten messages, unexpected messages) and of memory leaks. I will illustrate these ideas with the Heap-Hop tool, and give an overview of our ongoing work on the theory of channel contracts.


14h, room A008

Tuesday, February 5#

Everlasting Security through Quantum Cryptography. Dominique Unruh (University of Tartu, Tallinn)

Cryptography is a powerful tool for processing confidential data. Cryptographic protocols are, however, only as secure as the underlying encryption schemes. And we do not know whether these might be broken at some point in the future. This leaves us in a difficult situation: If we process highly sensitive data using cryptographic protocols, an attacker might simply record all messages. Should the underlying encryption scheme be broken in the future, the attacker will then be able to decrypt all confidential data in retrospect. For highly confidential data (such as, e.g., medical records) such a situation is not acceptable.

A way out is "everlasting security". A protocol with everlasting security guarantees that all data involved in the protocol stays secure, even if at some point in the future all the underlying encryption schemes are broken. Unfortunately, with traditional cryptographic techniques, everlasting security can only be achieved in very limited situations.

In this talk, we explain how everlasting security can be achieved for a wide variety of tasks by using quantum cryptography, i.e., by making use of quantum mechanical effects in the cryptographic protocol.

(No prior knowledge on quantum cryptography is required.)


14h, room A008

Tuesday, December 18#

Algorithmic Structuring and Compression of Proofs. Stefan Hetzl (INRIA Saclay)

This talk will describe a method for the compression of formal proofs by the computation of new lemmas. We use tree grammars for an efficient treatment of the term-structure of a first-order proof. Our algorithm works by first minimizing a tree grammar which, in a second step, induces a formula that serves as abbreviating lemma. Finally, I will outline an approach to extending this work to a method for inductive theorem proving.


14h, room A008

Friday, November 9#

Proving cryptographic schemes in Agda (Dependently typed functional programming for Alice and Bob). Nicolas Pouillard (DemTech project, IT University of Copenhagen (ITU))

In order to gain confidence in cryptographic schemes and primitive operations, cryptographers use a combination of theories. These theories enable us to mathematically prove security properties of these constructions. Probability theory, game theory, number theory and complexity theory are often all needed at the same time. We aim at realizing these proof arguments in Type Theory using Agda. Our current approach is to carefully use abstractions and dependently typed functional programming techniques to elegantly prove security properties. Finally we aim at reducing the usual gap between the formal description of the construction and its actual implementation. We do so by writing in a functional style supporting extraction into low level circuits.


11h, room A008

Tuesday, October 23#

Bisimulations and congruence proofs in higher-order languages. Sergueï Lenglet (Université de Lorraine, LORIA)

Bisimilarities are equivalences that relate the terms of a language that have the same behaviour (for some definition of "same behaviour"). To prove that a bisimilarity is sound, one must prove that it is a congruence, i.e., preserved by contexts: if two terms are bisimilar, then they remain bisimilar when placed in any context. This is a particularly difficult problem in higher-order languages, such as the lambda-calculus or the higher-order pi-calculus (in which the exchanged messages contain executable processes). For languages derived from the lambda-calculus, there is a systematic method for proving congruence, called Howe's method. We will see how to adapt Howe's method to the higher-order pi-calculus.

At the end of the talk, I will also briefly mention my other work, to give you an idea of my research interests and possibly to initiate collaborations within the department.


14h, room A008

Tuesday, October 16#

Unification in Blind Signatures. Serdar Erbatur (SUNY Albany)

Blind signatures are signature schemes that keep the content confidential and have applications in modern cryptography for electronic voting and for digital cash schemes. We study unification modulo axioms that originate from RSA implementation of blind signatures.

In this talk, we will present one such axiom, called E. The axiom E has an undecidable unification problem. It specifies that a binary operator distributes synchronously over another binary operator, hence the name synchronous distributivity. Furthermore, it appears that this is the weakest equational theory with an undecidable unification problem. Next, we briefly present the axiom F, which is similar to E, and show that it has a decidable unification problem. This last axiom is not related to blind signatures but satisfies a property of El Gamal encryption. These are only some of many possible axiomatizations of algebraic properties of blinding schemes. The results we obtained will be useful for analyzing protocols with blinding once the algorithms are integrated into protocol analysis tools.


14h, room A008

Tuesday, July 10#

Corruption evidence in electronic voting. Mark D. Ryan (University of Birmingham)

Coercion-resistance is a fundamental, and strong, property of electronic voting systems. It states that a voter should be able to cast his vote as intended, even in the presence of a coercer who may try to force him to cast a different vote. Especially in remote voting, coercion-resistance is extremely hard to achieve and leads either to schemes of questionable usability or to trust assumptions that are difficult to meet in practice.

We propose a change of perspective, replacing the requirement of coercion-resistance with a requirement that we call "corruption-evidence": there should be public evidence of the amount of coercion and other corruption that has taken place during a particular execution of the voting system. Depending on the significance of this amount, election authorities can decide to consider the election valid or not, leading to disincentivisation of coercion.


11h, room A008

Tuesday, May 15#

Attack-Defense Tree Methodology for Security Assessment. Barbara Kordy (Université du Luxembourg)

Attack-defense trees are a novel methodology for graphical security modeling and assessment. They extend the well established formalism of attack trees, by allowing nodes that represent defensive measures to appear at any level of the tree. This extension enlarges the modeling capabilities of attack trees and makes the new formalism suitable for representing interactions between possible attacks and corresponding defensive measures.

This talk will give an overview of the attack-defense tree methodology. After introducing the syntax as well as possible semantics for attack-defense trees, a number of interesting questions, including equivalent representations of security scenarios, quantitative analysis using attack-defense trees and computational complexity of the considered model, will be discussed.

Bio: Barbara Kordy is a postdoctoral researcher at the University of Luxembourg. She is currently leading the ATREES project, which focuses on modeling and evaluating vulnerability scenarios using the attack-defense tree methodology. Her research interests include formal methods for security assessment, privacy and anonymity issues, and the analysis of security protocols. She received her Ph.D. and master's degree in computer science from the University of Orléans in France. She also holds a master's degree in mathematics obtained at the University of Silesia in Poland.


14:00, amphi C

Monday, April 23#

Bisimulations for delimited-control operators. Sergueï Lenglet (Université Paris-Diderot, PPS)

We present the first results on the definition of a behavioural theory for a lambda-calculus extended with the delimited-control operators shift and reset. We first define a notion of behavioural equivalence, which we then seek to characterize using bisimulations (applicative and normal-form bisimulations). We also study the relationship between this behavioural equivalence and another equivalence based on a CPS transformation.


14:00, room A008

Thursday, April 5#

Malleability in Modern Cryptography. Markulf Kohlweiss (Microsoft Research Cambridge)

In recent years, malleable cryptographic primitives have advanced from being seen as a weakness allowing for attacks, to being considered a potentially useful feature. Malleable primitives are cryptographic objects that allow for meaningful computations, as most notably in the example of fully homomorphic encryption. Malleability is, however, a notion that is difficult to capture both in the hand-written and the formal security analysis of protocols.

In my work, I look at malleability from both angles. On one hand, it is a source of worrying attacks that have, e.g., to be mitigated in a verified implementation of the transport layer security (TLS) standard used for securing the Internet. On the other hand, malleability is a feature that helps to build efficient protocols, such as delegatable anonymous credentials and fast and resource friendly proofs of computations for smart metering. We are building a zero-knowledge compiler for a high-level relational language (ZQL), that systematically optimizes and verifies the use of such cryptographic evidence.

We recently discovered that malleability is also applicable to verifiable shuffles, an important building block for universally verifiable, multi-authority election schemes. We construct a publicly verifiable shuffle that for the first time uses one compact proof to prove the correctness of an entire multi-step shuffle. In our work, we examine notions of malleability for non-interactive zero-knowledge (NIZK) proofs. We start by defining a malleable proof system, and then consider ways to meaningfully 'control' the malleability of the proof system. In our shuffle application controlled-malleable proofs allow each mixing authority to take as input a set of encrypted votes and a controlled-malleable NIZK proof that these are a shuffle of the original encrypted votes submitted by the voters; it then permutes and re-randomizes these votes and updates the proof by exploiting its controlled malleability.

Short Bio: I am a postdoc at Microsoft Research Cambridge in the Programming Principles and Tools group. I did my PhD at the COSIC (Computer Security and Industrial Cryptography) group at the K.U.Leuven, and my master thesis at IBM Research Zurich. My research focus is on privacy-enhancing protocols and formal verification of cryptographic protocols.


14:00, room A006

Friday, April 6#

Composing objects and services: a formal model driven development (MDD) approach. Paul Gibson (Telecom & Management SudParis)

In this talk, we will present the problem of composing systems and reasoning about the correctness of such compositions. We will present semantics linking objects and services, and discuss the use of formal methods during MDD of systems of services. Different formalisms will be used to highlight the important issues, but we will focus on our ongoing research using Event-B to model requirements, refined into designs, that are to be transformed into OO code (with concrete examples in Java).


14:00, room A008

Friday, March 23#

Tableaux modulo theories using superdeduction: an application to the verification of B proof rules with the Zenon automated theorem prover. David Delahaye (CNAM)

We propose a method for developing tableaux modulo theories using the principles of superdeduction, in which the theory is used to enrich the deduction system with new deduction rules. This method is presented in the framework of the Zenon automated theorem prover, and we describe an application to the set theory of the B method. This provides an alternative proof tool to that of Atelier B, which can in particular be used to verify B proof rules. We also present benchmarks on which this tool is able to automatically verify part of the rules coming from the database of added rules maintained by Siemens IC-MOL.


14:00, room A008

Tuesday, March 20#

Towards Automated Security Proofs of Block-Cipher Based Cryptographic Primitives. Martin Gagné (VERIMAG, Université Joseph Fourier)

A block cipher algorithm (e.g. AES, Blowfish, DES, Serpent and Twofish) is a symmetric key algorithm that takes a key, a fixed size input message block and produces a fixed size output block. Block ciphers are among the most ubiquitous algorithms in symmetric cryptography, and can be used to construct encryption schemes, message authentication codes, collision-resistant hash functions, and many other primitives.

As with any other cryptographic construction, we require that these primitives be proven secure, usually through a reduction of the security of the primitive to the security of the block cipher.

Since block-cipher based primitives are usually constructed using a fairly small set of operations, we believe that Hoare logic is an ideal tool for automatically proving their security. We present a Hoare logic for proving the security of block cipher modes of operation. Our logic is fairly simple, requires only three invariants, and can be used to prove the security of all the common modes of operation. We also present a second Hoare logic for proving the security of message authentication codes. This logic is much more complicated than the first, and we discuss the different challenges posed by this primitive.


14:00, room A008

Monday, March 19#

Uninitialized models of computation and dynamical systems. Emmanuel Jeandel (LIF, Université de Provence)

In this talk, we consider the usual models of computation (Turing machines, counter machines) as dynamical systems, corresponding to the application of their transition function to an arbitrary configuration. In this context, it is in particular necessary to observe their behaviour starting from any configuration, rather than from a given initial configuration.

The problem then becomes completely different. For instance, the proof of the undecidability of the existence of a configuration on which a Turing machine does not halt (Hooper, 1966) is much more delicate than the undecidability of the halting of a Turing machine started from a given configuration (Turing, 1937), and this phenomenon recurs for most models of computation.

After a general presentation of the problem, we will try in this talk to illustrate the methods that allow one to solve or circumvent the problem in the setting of tilings by Wang tiles, a model of computation that lends itself very well to a dynamical treatment.


14:00, room A008

Tuesday, March 13#

Specifying Kermeta through Rewriting Logic. Christophe Calvès (INRIA Lille-Nord Europe)

Kermeta is a Domain-Specific Language designed as a kernel for metamodel engineering. It unifies metamodeling, constraints, semantics and transformations into a coherent and statically typed language. In a nutshell, it has an object-oriented base similar to Java but with handling of metamodeling aspects such as attributes, associations, multiplicity, ... and a "design by contract" approach in the style of Eiffel.

Kermeta enables users to give a semantics to their metamodels. Unfortunately, even though Kermeta is mature software, it has not been (formally) specified yet. This lack prevents any (formal) verification of a metamodel's semantics.

In this talk we will present a specification of Kermeta's semantics through the formal definition of a compiler from Kermeta to an object-oriented stack-based abstract machine. Both the compiler and the abstract machine are defined in the specification framework K, based on rewriting logic.

We will show the problems raised by Kermeta's particularities (such as attribute/reference handling, design by contract, multiple inheritance, genericity, ...), how to formalize them and how to compile them correctly.


14:00, room A008

Thursday, December 8#

A Bishop-style constructive theory of posets. Marian Baroni, University 'Dunarea de Jos' of Galati (Romania).

Partially ordered sets are examined from the point of view of Bishop’s constructive mathematics, which can be viewed as the constructive core of mathematics and whose theorems can be translated into many formal systems of computable mathematics. The features of Bishop’s constructivism are illustrated by a simple example, the least-upper-bound principle. Unlike the classical case, this statement does not hold constructively. However, the order completeness of the real number set can be expressed constructively by an equivalent condition for the existence of supremum, a condition of order locatedness which is vacuously true in the classical setting. A generalization of this condition leads to a reasonable definition of order completeness for an arbitrary partially ordered set.
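For reference, the order-locatedness condition alluded to above can be stated as follows (my rendering of the standard Bishop-style formulation): a nonempty set S of real numbers that is bounded above has a supremum if and only if

  \[
    \forall x\,\forall y\ \bigl(\, x < y \;\Longrightarrow\;
        (\exists s \in S)(s > x) \;\lor\; (\forall s \in S)(s \le y) \,\bigr).
  \]

Classically this condition is trivially satisfied, which is why the least-upper-bound principle holds without restriction in classical analysis.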


16:00, room A006

Tuesday, December 6#

Computability and effective mathematics. Antoine Taveneaux, Université Paris Diderot, LIAFA.

In many mathematical theorems, the existential quantifier is not "effective", in the sense that the object whose existence is proved is not computable.

Computability theory is mainly concerned with comparing sets of integers from the point of view of their mutual computability. We will see that one can also compare the effectiveness of theorems, and in a sense do "computability on theorems".

This talk will be given in French and assumes almost no background in computability theory.


14:00, room A006

Monday, November 28#

Running Mixnet-Based Elections with Helios. Olivier Pereira, UCL Crypto Group (Louvain-la-Neuve).

The Helios voting system is an open-audit web-based voting system that has been used by various institutions in real-stake elections during the last few years. While it targets a simple election workflow, the homomorphic tallying process used in Helios limits its suitability for many elections (large number of candidates, specific ballot-filling rules, ...).

We present a variant of Helios that allows an efficient mixnet-based tallying procedure, and document the various choices we made in terms of election workflow and algorithm selection. In particular, we propose a modified version of the TDH2 scheme of Shoup and Gennaro that we found particularly suitable for the encryption of the ballots.

Our Helios variant has been tested during two multi-thousand-voter elections. The lessons learned from the first of these elections motivated some changes to our procedure, which were successfully tested during the second election.

This is joint work with Philippe Bulens and Damien Giry.


10:00, room A008

Tuesday, November 15#

Obfuscation by Partial Evaluation of Distorted Interpreters. Roberto Giacobazzi, Univ. Verona (Italy).

Joint work with Neil D. Jones and Isabella Mastroeni.

How to construct a general program obfuscator? We present a novel approach to automatically generating obfuscated code P' from any program P with source clear code. Start with a (program-executing) interpreter Interp for the language in which P is written. Then “distort” Interp so it is still correct, but its specialization P' with respect to P is transformed code that is equivalent to the original program, but harder to understand or analyze. Potency of the obfuscator is proved with respect to a general model of the attacker, modeled as an approximate (abstract) interpreter. A systematic approach to distortion is to make program P obscure by transforming it to P' on which (abstract) interpretation is incomplete. Interpreter distortion can be done by making residual in the specialization process sufficiently many interpreter operations to defeat an attacker in extracting sensible information from transformed code. Our method is applied to: code flattening, data-type obfuscation, and opaque predicate insertion. The technique is language independent and can be exploited for designing obfuscating compilers.


10:30, room B011

A decision procedure for trace equivalence. Vincent Cheval, LSV (Cachan).

We consider security properties of cryptographic protocols that can be modeled using the notion of trace equivalence. The notion of equivalence is crucial when specifying privacy-type properties, like anonymity, vote-privacy, and unlinkability.

In this paper, we give a calculus that is close to the applied pi calculus and that allows one to capture most existing protocols that rely on classical cryptographic primitives. First, we propose a symbolic semantics for our calculus relying on constraint systems to represent infinite sets of possible traces, and we reduce the decidability of trace equivalence to deciding a notion of symbolic equivalence between sets of constraint systems. Second, we develop an algorithm allowing us to decide whether two sets of constraint systems are in symbolic equivalence or not. Altogether, this yields the first decidability result of trace equivalence for a general class of processes that may involve else branches and/or private channels (for a bounded number of sessions).


14:00, room B011

Tuesday, October 18#

Kolmogorov complexity and "no information" notion. Alexander Shen, CNRS, LIRMM (Montpellier).

There are many ways to formalize the intuitive idea of "a message that has no information" or "a message that has full information" about something. One can use the logic of knowledge, Shannon's information theory, or consider protocols (deterministic, non-deterministic, probabilistic, etc.). We discuss these approaches and then switch to the algorithmic information theory approach and explain A. Muchnik's result about a (kind of) cryptography framework in algorithmic information theory.


14:00, room C005 (salle du conseil)

Friday, June 10#

Dynamic logic and its parents. Daniel Leivant, Indiana University.

Modal deductive formalisms about imperative programs, such as dynamic logic, can be seen as syntactic sugar for more explicit forms of reasoning, such as first order theories and second order logic. This view suggests notions of completeness for dynamic logic, which we argue are more informative and appropriate than the traditional notion of relative completeness.


14:00, room C103

Wednesday, May 11#

Dynamic security policies. Frederic Prost, LIG (Grenoble).

The non-interference property, which consists in proving that two parts of a program do not interact with each other, is the basis of many program analyses: parallelization, program slicing, dead-code elimination, etc. In the field of confidentiality, the goal is to show that there is no information flow from the "private" to the "public" part. This property is, however, too strict to account for real applications: for instance, when checking a password there is formally an interaction between the private domain (the value of the password) and the public domain (access is granted or not depending on the code entered). Such information leaks are nevertheless considered acceptable. We introduce a notion of confidentiality policy, in the form of associated rewrite rules (one could see it as a finer kind of typing), to specify that such leaks are "permitted" and to check that a program is correct with respect to a security policy. Similarly, since a bank card is blocked after three unsuccessful attempts, we show how, by adding actions to the rules defining the security policy, the policy can be modified during the computation, and how to check that the program remains valid with respect to its security policy.
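A minimal sketch of the password example (my own illustration, not the rewriting-based formalism of the talk): the policy permits releasing exactly one bit of the secret per query, namely the outcome of the equality test, and the policy itself changes during execution by blocking after three failed attempts.

  class PasswordGate:
      # The only information flow from the secret to the public side that the
      # policy permits is the boolean result of an equality test, and the policy
      # is updated at run time: after max_tries failures the gate is blocked.
      def __init__(self, secret, max_tries=3):
          self.secret, self.tries_left = secret, max_tries

      def check(self, guess):
          if self.tries_left == 0:
              raise PermissionError("policy changed: access blocked")
          self.tries_left -= 1
          return guess == self.secret    # permitted leak: one bit per attempt

  gate = PasswordGate(secret="1234")
  print(gate.check("0000"))              # False: only the comparison outcome is released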


15:30, room B013

Tuesday, May 10#

An Introduction to Program Equivalence. Vasileios Koutavas, Trinity College (Dublin).

Contextual equivalence is a relation between program terms which guarantees that related terms can replace each other inside any arbitrary system without changing the behaviour of the system. This relation reveals fundamental symmetries of program terms that abstract away from concrete syntax and illuminate the semantics of programming languages. Its implications span from programming language design to software engineering, to system security, to program verification.

To understand contextual equivalence, and to develop reasoning principles for it, we need to develop a theory that can express the basic interactions between a program and its environment, and to express contextual equivalence within that theory. In this talk we will review the theories available today and then concentrate on those based on bisimulation, which we will examine for a variety of functional, imperative, and nondeterministic languages.


10:00, room A008

Tuesday, April 5#

Constraint-based assistance for model design. Marie de Roquemaurel, IRIT, Université de Toulouse III.

Embedded computer systems are becoming ubiquitous in many domains, and more and more industrial actors seek to model their real-world problems, to check the consistency of their models, and to use efficient tools to solve them. In this work, we consider problems modelled with the tools currently used by the model-driven engineering community, such as the EMF framework (Eclipse Modeling Framework) to represent the embedded computer system, or the OCL constraint language (Object Constraint Language) to express the business properties. To check these properties efficiently, the chosen approach consists in transforming the problem into a constraint satisfaction problem and using efficient Artificial Intelligence techniques to solve it. The first experimental results are promising.

In this talk, I will present this approach, and we will see that it has the advantage of supporting constraint validation and the completion of partial models, while also being able to take a minimization criterion into account.

Keywords: system design, model-driven engineering, model transformation, synthesis, constraints


14:00, room A006

Tuesday, February 15#

Decidable Logics Combining Heap Structures and Data. Gennaro Parlato, CNRS, LIAFA (Paris).

We define a new logic, STRAND, that allows reasoning with heap-manipulating programs using deductive verification and SMT solvers. STRAND logic (“STRucture ANd Data” logic) formulas express constraints involving heap structures and the data they contain; they are defined over a class of pointer-structures R defined using MSO-defined relations over trees, and are of the form ∃x∀y.φ(x, y), where φ is a monadic second-order logic (MSO) formula with additional quantification that combines structural constraints as well as data-constraints, but where the data-constraints are only allowed to refer to x and y.

The salient aspects of the logic are: (a) the logic is powerful, allowing existential and universal quantification over the nodes, and complex combinations of data and structural constraints; (b) checking Hoare-triples for linear blocks of statements with pre-conditions and post-conditions expressed as Boolean combinations of the existential and universal STRAND formulas reduces to satisfiability of a STRAND formula; (c) there are powerful decidable fragments of STRAND, one semantically defined and one syntactically defined, where the decision procedure works by combining the theory of MSO over trees and the quantifier-free theory of the underlying data-logic.

We demonstrate the effectiveness and practicality of the logic by checking verification conditions generated in proving properties of several heap-manipulating programs, using a tool that combines an MSO decision procedure over trees (MONA) with an SMT solver for integer constraints (Z3).


14:00, room A008

Tuesday, February 8#

Linear Dependent Types for Certified Resource Consumption. Marco Gaboardi, Università di Bologna.

I present the ideas underlying the design of an interactive programming framework useful to ensure the reliability and correctness of programs with respect to their resource consumption. The key ingredient of this framework is a type system, dlPCF, mixing ideas from linear logic and dependent types. I introduce dlPCF and I show that it is sound and relatively complete. Completeness holds in a strong sense: dlPCF is not only able to precisely capture the functional behaviour of programs (i.e. how the output relates to the input) but also some of their intensional properties, namely the complexity of their evaluation. Additionally, dlPCF is parametrized on the underlying language of index terms, which can be tuned so as to sacrifice completeness for tractability.


14:00, room A008

Tuesday, January 25#

Formal analysis of security protocols: the case of electronic voting. Steve Kremer, INRIA, LSV (Cachan).

Security protocols are distributed programs which aim at establishing security guarantees, such as confidentiality, authentication or anonymity, while being executed over an untrusted network. Nowadays, security protocols are present in many aspects of our everyday life: electronic commerce, mobile phones, e-banking, etc. Symbolic methods, which often come with automated tool support, have shown extremely useful for finding structural flaws in such protocols or proving their absence.

In this talk we report on our efforts carried out over the last years to apply such techniques to electronic voting protocols. The specificities of electronic voting have raised several challenges: they make use of esoteric cryptographic primitives and need to guarantee complex security properties, such as receipt-freeness or verifiability. We have modelled and analyzed such protocols and their properties in the applied pi calculus, a formal language for expressing security protocols. Formally defining the protocols and properties allowed us to highlight many (hidden) assumptions and has proven to be a non-trivial task in itself. Regarding automated analysis, this line of work raised a number of challenges in formal analysis of security protocols, some of which are still the subject of active research.


14:00, room A008

Tuesday, January 18#

Solving Simple Stochastic Tail Games. Florian Horn, CNRS, LIAFA (Paris).

Stochastic games are a natural model for open reactive processes: one player represents the controller and his opponent represents a hostile environment. The evolution of the system depends on the decisions of the players, supplemented by random transitions. There are two main algorithmic problems on such games: computing the values (quantitative analysis) and deciding whether a player can win with probability 1 (qualitative analysis). In this paper we reduce the quantitative analysis to the qualitative analysis: we provide an algorithm for computing values which uses qualitative analysis as a sub-procedure. The correctness proof of this algorithm reveals several nice properties of perfect-information stochastic tail games, in particular the existence of optimal strategies.


14:00, room A008

Here is the link to the Calendar of the seminar.

To add this calendar to your zimbra calendars: on your zimbra account,

  1. right click on 'Calendars',
  2. select 'Link to shared calendar',
  3. enter as email address, 'FMSeminar' as path (and you can change the name and color).

You can also use the ics file.

How to reach LORIA.

See explanations here and a map there.

Doctoral training.

The FM Seminar is part of the doctoral training ED IAEM.

Each PhD student must print this form once and bring it to the seminars to be validated by the organizer.

Other seminars may be part of the doctoral training (ask the corresponding organizers).
