!!!Formal Methods Seminar

The seminar usually takes place on Tuesday at 2 p.m.  
It is of course open to anyone interested. If you want to give a talk or propose a speaker, please contact <img src="http://www.loria.fr/~hoyrup/addr.png" align='top'>.

\\

%%tabbedSection 
%%tab-Talks 2017

__Upcoming talks__



!Tuesday, June 13
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Verifying Quantum Computations, Graphically.__ [Ross Duncan|http://personal.strath.ac.uk/ross.duncan/] (Univ. of Strathclyde, Glasgow)

Quantum computers are coming soon, and with them software whose functioning depends on the principles of quantum mechanics.  Unfortunately, the “logic” which underlies quantum mechanics is very different to the logics usually employed by computer scientists for the verification of software.  How then can we verify the correctness of quantum software?  In this talk I will present the ZX-calculus, a formal graphical language based on the algebraic structure of quantum theory, and demonstrate how it can be used to verify quantum computations with the help of an automated rewrite system.
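For readers who want to experiment with this kind of check, the open-source PyZX library implements ZX-calculus rewriting; the sketch below is not part of the talk and assumes PyZX's documented API ({{Circuit}}, {{add_gate}}, {{full_reduce}}, {{compare_tensors}}). It verifies a textbook circuit identity by rewriting.

{{{
# Illustrative sketch (assumes the PyZX library): check that a CNOT
# conjugated by Hadamards equals a CNOT with control and target swapped.
import pyzx as zx

c_a = zx.Circuit(2)
for g in [("HAD", 0), ("HAD", 1), ("CNOT", 0, 1), ("HAD", 0), ("HAD", 1)]:
    c_a.add_gate(*g)

c_b = zx.Circuit(2)
c_b.add_gate("CNOT", 1, 0)

# Compose one circuit with the adjoint of the other and let the
# automated ZX-calculus simplifier rewrite the resulting diagram.
g = (c_a.adjoint() + c_b).to_graph()
zx.simplify.full_reduce(g)

print(zx.compare_tensors(c_a, c_b))  # True: the two circuits are equal
}}}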


\\
''14h00, room A008''
/%

----

__Previous talks__



!Thursday, April 20
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__On The Unreasonable Effectiveness of Boolean SAT Solvers.__ [Vijay Ganesh|https://ece.uwaterloo.ca/~vganesh/] (Univ. of Waterloo, Canada)

Modern conflict-driven clause-learning (CDCL) Boolean SAT solvers routinely solve very large industrial SAT instances in relatively short periods of time. This phenomenon has stumped both theoreticians and practitioners since Boolean satisfiability is an NP-complete problem widely believed to be intractable. It is clear that these solvers somehow exploit the structure of real-world instances. However, to date there have been few results that precisely characterize this structure or shed any light on why these SAT solvers are so efficient.

In this talk, I will present results that provide a deeper empirical understanding of why CDCL SAT solvers are so efficient, which may eventually lead to a complexity-theoretic result. Our results can be divided into two parts. First, I will talk about structural parameters that can characterize industrial instances and shed light on why they are easier to solve, even though they may contain millions of variables, compared to small crafted instances with hundreds of variables. Second, I will talk about the internals of CDCL SAT solvers and describe why they are particularly suited to solving industrial instances.
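For readers unfamiliar with these solvers, the snippet below shows what driving a CDCL solver looks like from the outside; it is an illustration only (not from the talk) and assumes the python-sat package and its documented {{Glucose3}} interface.

{{{
# Minimal sketch (assumes the python-sat package): feed a CNF formula
# to a CDCL solver.  Variables are integers; a negative literal is a
# negated variable.
from pysat.solvers import Glucose3

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3) and (x1 or x3)
clauses = [[1, 2], [-1, 3], [-2, -3], [1, 3]]

with Glucose3(bootstrap_with=clauses) as solver:
    if solver.solve():
        print("SAT, model:", solver.get_model())
    else:
        print("UNSAT")
}}}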

Brief Bio:

Dr. Vijay Ganesh has been an assistant professor at the University of Waterloo since 2012. Prior to that he was a research scientist at MIT, having completed his PhD in computer science at Stanford University in 2007. Vijay's primary area of research is the theory and practice of automated reasoning aimed at software engineering, formal methods, security, and mathematics. In this context he has led the development of many SAT/SMT solvers, most notably STP, the Z3 string solver, MapleSAT, and MathCheck. He has also proved several decidability and complexity results relating to the satisfiability problem for various mathematical theories. For his research, he has won over 21 awards, including an ACM Test of Time Award at CCS 2016, two Google Faculty Research Awards in 2011 and 2013, and a Ten-Year Most Influential Paper Award at DATE 2008.

\\
''14h00, room A006''
/%


!Thursday, March 2
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Solvability of Matrix-Exponential Equations.__ [Amaury Pouly|https://www.pouly.fr/] (MPI-SWS, Saarbrücken)

We consider the problem of checking whether a switching system can reach a particular state. We show that this problem is undecidable in general, but decidable in some restricted cases. A switching system is a particular form of hybrid system very commonly used to model electrical systems with several states: it consists of a finite number of states, and in each state the system evolves according to a linear differential equation. This type of model is also used to over-approximate the possible behaviours of a physical system when the transition rules between states are unclear. Our result shows a surprising connection with a number of theorems from algebraic and transcendental number theory, and with Hilbert's Tenth Problem. One possible interpretation of this work is that switching systems, despite their innocent look, are already too powerful a model of computation.

\\
''14h00, room A006''
/%

!Tuesday, January 17
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Minimal probabilistic automata have to make irrational choices.__ [Mahsa Shirmohammadi|https://www.cs.ox.ac.uk/people/mahsa.shirmohammadi/] (Oxford Univ.)

In this talk, we answer the question of whether, given a probabilistic automaton (PA) with rational transition probabilities, there always exists a minimal equivalent PA that also has rational transition probabilities. Here, equivalence means that the PA and its minimal equivalent accept every finite word with the same probability.

We approach this problem by exhibiting a connection with a longstanding open question concerning nonnegative matrix factorization (NMF). NMF is the problem of decomposing a given nonnegative n×m matrix M into a product of a nonnegative n×d matrix W and a nonnegative d×m matrix H. In 1993 Cohen and Rothblum posed the problem of whether a rational matrix M always has an NMF of minimal inner dimension d whose factors W and H are also rational. We answer this question negatively, by exhibiting a matrix for which W and H require irrational entries. As a corollary we obtain a negative answer to the question on rationality of minimal probabilistic automata.
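As a concrete (and entirely illustrative) point of reference for the object discussed above, the sketch below computes an approximate nonnegative factorization M ≈ WH with the classical Lee-Seung multiplicative updates; the talk, in contrast, is about whether an exact factorization of minimal inner dimension can always be chosen with rational entries.

{{{
# Toy nonnegative matrix factorization M ~= W @ H with W, H >= 0,
# using multiplicative updates (illustration only, assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [2.0, 1.0, 5.0]])
n, m, d = M.shape[0], M.shape[1], 2        # d is the inner dimension

W = rng.random((n, d)) + 0.1               # strictly positive start
H = rng.random((d, m)) + 0.1
eps = 1e-9                                 # avoid division by zero

for _ in range(500):
    H *= (W.T @ M) / (W.T @ W @ H + eps)   # update keeps H >= 0
    W *= (M @ H.T) / (W @ H @ H.T + eps)   # update keeps W >= 0

print(np.round(W @ H, 3))                  # close to M
}}}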

This talk is based on two papers in ICALP 2016 and SODA 2017.

\\
''14h00, room A008''
/%


!Thursday, January 12
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Towards SAT Solving in Batches.__ [Sophie Tourret|http://researchmap.jp/tourret/?lang=english] (NII Tokyo)

In this presentation, I will talk about my ongoing work on how to perform SAT solving in batches, i.e., by making multiple decisions at once in the CDCL loop. There are theoretical obstacles to this: the admissible strategy of CDCL assumes that conflict resolution and unit propagation have a higher priority than decisions, and in the current CDCL framework, if this strategy is not followed, a deadlock can occur in which no rule can be applied while the satisfiability of the input formula is still unknown. A new formalism inspired by results from [[1,2] will be presented to solve this problem. The method considered for verifying this claim (Isabelle) and the practical context motivating the approach (SAT solving on GPUs) will also be introduced.

[[1] Weidenbach, C. Automated Reasoning Building Blocks. In: Correct System Design, Springer, 2015, pp. 172-188.

[[2] Blanchette, J. C., Fleury, M. & Weidenbach, C. A Verified SAT Solver Framework with Learn, Forget, Restart, and Incrementality. IJCAR, 2016.

\\
''14h00, room B011''
/%

/%
%%tab-2016

!Tuesday, October 4
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Bit-Exact Automated Reasoning About Floating-Point Arithmetic.__ [Martin Brain|https://www.cs.ox.ac.uk/people/martin.brain/] (Oxford Univ.)

The majority of new instrumentation and control systems, even those
that are safety critical, make use of floating-point numbers. Thus
the value of computer-generated assurance evidence (including
verification, test coverage, etc.) depends on the correctness,
completeness and efficiency of automated reasoning about
floating-point expressions. In this talk we will review the SMT-LIB
Theory of Floating-Point, its semantics and the rationale behind key
design decisions, as well as surveying the current state of the art
in solver technology and future research directions. We aim to
provide system integrators with sufficient information to integrate
floating-point support into SMT interfaces, and solver developers with
enough ideas to work on the next generation of floating-point
reasoning.
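As a small self-contained taste of bit-exact floating-point reasoning (an illustration, not material from the talk), one can ask an SMT solver whether addition of IEEE single-precision numbers is associative; the sketch below uses Z3's Python bindings, whose floating-point functions ({{Float32}}, {{fpAdd}}, {{RNE}}, {{fpIsNaN}}, ...) are assumed from the Z3 API.

{{{
# Is (x + y) + z == x + (y + z) for IEEE binary32 with round-to-nearest?
# A solver for the SMT floating-point theory answers "no" and returns a
# concrete counterexample.  (Sketch assuming the z3-solver package.)
from z3 import (FP, Float32, Not, RNE, Solver, fpAdd, fpEQ,
                fpIsInf, fpIsNaN, sat)

x, y, z = FP('x', Float32()), FP('y', Float32()), FP('z', Float32())
rm = RNE()  # rounding mode: round to nearest, ties to even

s = Solver()
for v in (x, y, z):                      # rule out NaN/infinite inputs
    s.add(Not(fpIsNaN(v)), Not(fpIsInf(v)))
s.add(Not(fpEQ(fpAdd(rm, fpAdd(rm, x, y), z),
               fpAdd(rm, x, fpAdd(rm, y, z)))))

if s.check() == sat:
    print("associativity fails, e.g.:", s.model())
}}}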

\\
''14h00, room A006''
/%


!Wednesday, March 23
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Towards the synthesis of secure components.__ [Yannick Chevalier|https://www.irit.fr/~Yannick.Chevalier/] (Univ. Toulouse III)

At Usenix Enigma 2016, the head of the NSA's "tailored access" team
listed the favourite weaknesses used to get into a system. Social
engineering aside, the preferred entry point into computer systems is
the gap between the components (protocols, software, ...) that system
administrators intended to deploy and the software actually running.

This gap comes from obsolete components being kept in service, from
unapplied patches, but also from components that do not faithfully
implement their specification. This talk will focus on the latter
point, and on work paving the way towards the secure synthesis of
communicating components, including cryptographic protocols as well
as Web services and applications.

The talk will first address the definition of an operational semantics
from a declaration of message exchanges (joint work with Michaël
Rusinowitch), and then service-synthesis techniques when messages are
abstracted away (joint work with Michaël Rusinowitch and Walid
Belkhir). Future work will consist in combining these two approaches
to automatically generate implementations of protocols and services
from secure components that have already been proved.

\\
''14h00, room C103''
/%

!Tuesday, March 22
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Behavioral verification of higher-order programs.__ [Sylvain Salvati|http://www.labri.fr/perso/salvati/] (Inria Bordeaux, LaBRI)

Higher-order constructions make their way into mainstream
programming languages like Java, C++, Python, Rust, etc. These
constructions bring new challenges to the verification of programs, as
they make their control flow more complex.

In this talk, I will present how methods coming from denotational
semantics can prove the verification of certain properties of
higher-order programs decidable. These properties are expressed by means
of finite-state automata running on the possibly infinite execution trees
generated by the programs, and can capture safety properties but also
liveness and fairness properties.

\\
''14h00, room C103''
/%


!Monday, March 21
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__The transitivity problem of Turing machines.__ [Nicolas Ollinger|http://www.univ-orleans.fr/lifo/Members/Nicolas.Ollinger/] (Univ. Orléans, LIFO)

A Turing machine is topologically transitive if every partial configuration -- that is, a state, a head position, plus a finite portion of the tape -- can reach any other partial configuration, provided that they are completed into proper configurations. In this talk, we study the computational complexity of transitivity, combining a reduction technique called embedding, which preserves dynamical properties, with a remarkably small minimal aperiodic Turing machine called SMART. We further study the computational complexity of minimality, the property that every configuration reaches every partial configuration.

This is joint work with Julien Cassaigne from I2M in Marseille, Anahí Gajardo and Rodrigo Torres-Avilés from the University of Concepción, Chile.

\\
''14h00, room C103''
/%

%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Formal proofs for robot swarms: gathering in the real plane.__ [Xavier Urbain| http://www.ensiie.fr/~urbain/] (ENSIIE)

Networks of mobile robots have received growing attention from the
distributed-computing community over the past few years. Analysing the
feasibility of certain cooperative tasks for swarms of autonomous
robots is nevertheless extremely hard; the Pactole formal framework
was designed to support mechanised formal verification (with Coq) of
this emerging model. It makes it possible both to certify
impossibility results and to prove the correctness of distributed
protocols. As an illustration, I will present in this talk an original
universal algorithm for the gathering of oblivious robots in the real
plane. This algorithm is proved correct thanks to our formal
development.

\\
''15h00, room C103''
/%


!Tuesday, March 15
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Automated Verification of Asynchronously Communicating Systems.__ [Gwen Salaün|http://convecs.inria.fr/people/Gwen.Salaun/] (Ensimag)

Recent software is mostly constructed by reusing and
composing existing components abstracted as finite state machines.
Asynchronous communication is a classic interaction mechanism used for
such software systems. However, analyzing communicating systems
interacting asynchronously via FIFO buffers is an undecidable
problem. A typical approach is to make the corresponding state space
finite by limiting the presence of communication cycles in behavioral
models or by fixing buffer sizes.  In this talk, our focus is on
systems that are likely to be unbounded and therefore result in
infinite systems. We do not want to restrict the system by imposing
any arbitrary bounds.  We first present the synchronizability property
and then introduce a notion of stability. We prove that once the
system is stable for a specific buffer bound, it remains stable
whatever larger bounds are chosen for buffers. This enables us to
check certain properties on the system for that bound and to ensure
that the system will preserve them whatever larger bounds are used for
buffers. We also prove that computing this bound is undecidable but we
show how we succeed in computing these bounds for many typical
examples using heuristics and equivalence checking.  We will also
overview several properties of interest that communicating systems
must satisfy when obtained via projection from choreography
specifications.

\\
''14h00, room A006''
/%


!Monday, March 7
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Incremental Language-Independent Program Verification.__ [Vlad Rusu|http://researchr.org/profile/vladrusu] (Inria Lille)

Reachability Logic (RL) is a recently introduced formalism for defining the operational semantics of programming languages and for specifying program properties. As a program logic, RL can be seen as a language-independent generalisation of Hoare Logics. Thus, using RL one can specify programs independently of the languages in which the programs are written, and one can design program-verification techniques that are also language-independent. Several such techniques have already been proposed, all of which have a circular nature: they take as input a set of RL formulas (specifying a given program that one wants to verify, as well as some of its subprograms), and allow formulas in the set to be circularly used in proofs of other formulas, or even in their own proof. Such circular reasoning is essential for dealing with repetitive behaviour (e.g., loops or recursive calls). The reasoning is sound, but only in a monolithic way, in the sense that if a proof succeeds on a given set of formulas then all the formulas in the set are semantically valid; but if the proof fails, then nothing can be inferred about any formula’s validity or invalidity. In practice, this means that one is left with no information after unsuccessful proof attempts.

In this paper we deal with this issue by proposing an incremental method for proving RL formulas. The proposed method takes as input a given formula specifying a given program fragment, together with a (possibly empty) set of helper formulas that specify some of its subprograms and that were already proved valid (e.g., using our method or any other sound method). Then, certain conditions equivalent to RL formula validity are automatically checked. In case of success, the newly proved formula can be incrementally used as a helper in the proofs of other formulas, thereby proving increasingly larger program fragments. But in case of failure, unlike with monolithic methods, one is not left without information: one still knows the validity of the already proved helper formulas, and can build on this knowledge to make progress in the program's proof. We illustrate the approach by successfully verifying the nontrivial Knuth-Morris-Pratt string-matching algorithm, whose verification had previously been attempted without success using a monolithic method.

\\
''11h00, room C103''
/%

!Wednesday, March 9
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Characterization of Reachable Attractors Using Petri Net Unfoldings.__ [Thomas Chatain|http://www.lsv.ens-cachan.fr/~chatain/?l=fr] (ENS Cachan & LSV)

Attractors of network dynamics represent the long-term behaviours of the modelled system. Their characterization is therefore crucial for understanding the response and differentiation capabilities of a dynamical system. In the scope of qualitative models of interaction networks, the computation of attractors reachable from a given state of the network faces combinatorial issues due to the state space explosion.

In this paper, we present a new algorithm that exploits the concurrency between transitions of parallel acting components in order to reduce the search space. The algorithm relies on Petri net unfoldings that can be used to compute a compact representation of the dynamics. We illustrate the applicability of the algorithm with Petri net models of cell signalling and regulation networks, Boolean and multi-valued. The proposed approach aims at being complementary to existing methods for deriving the attractors of Boolean models, while being generic since it applies to any safe Petri net.

\\
''14h00, room A006''
/%

/%
%%tab-2015


!Monday, December 14
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__An introduction to reverse mathematics.__ [Ludovic Patey|http://www.ludovicpatey.com/] (Univ. Paris Diderot)

Following the curse of Gödel's incompleteness theorems, we are condemned for eternity to look for new axioms to complete our mathematical theories. In this foundational quest, reverse mathematics asks which axioms are minimally necessary to prove the theorems of everyday mathematics. In this presentation, we will introduce the tools of reverse mathematics and illustrate the two main observations of the field:
- Most classical theorems require only weak axioms.
- Many of these theorems are equivalent to one of five sets of axioms.

[[1] S. Simpson, Subsystems of second order arithmetic, 2nd ed., CUP, 2009

[[2] D. Hirschfeldt, Slicing the truth, to appear, 2013

\\
''14h00, room B013''
/%


!Tuesday, July 7
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Contextuality, Cohomology & Paradox.__ [Shane Mansfield|https://www.cs.ox.ac.uk/people/shane.mansfield/] (Univ. Oxford)

Contextuality is a key feature of quantum mechanics that provides an important non-classical resource for quantum information and computation. Sheaf theory can be used to provide a general treatment of contextuality in quantum theory. However, contextual phenomena are found in other fields as well: for example, database theory. I will discuss my recent work in collaboration with Abramsky, Barbosa, Kishida & Lal ([http://arxiv.org/abs/1502.03097|http://arxiv.org/abs/1502.03097]) in which we develop this unified view of contextuality. This provides two main contributions: firstly, it exposes a remarkable connection between contextuality and logical paradoxes; secondly, an important class of contextuality arguments can be demonstrated to have a topological origin. More specifically, it is shown that "All-vs-Nothing" proofs of contextuality are witnessed by cohomological obstructions.

\\
''14h00, room C005''
/%


!Tuesday, June 16
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Logical relations for coherence of effect subtyping.__ Piotr Polesiuk (University of Wroclaw)

In a programming language with subtyping, a coercion semantics makes the
conversions from one type to another explicit. Such a semantics is typically
defined on typing derivations rather than on typing judgments. To avoid
semantic ambiguity, it is expected to be coherent, i.e.,
independent of the typing derivation for a given typing judgment.
In this talk I will present step-indexed
logical relations for establishing the coherence of coercion semantics
of programming languages with subtyping. To illustrate the effectiveness
of the proof method, I will give an overview of a coherence proof for
a type-directed CPS translation of a call-by-value lambda
calculus with delimited continuations. Additionally, I will pinpoint the most interesting aspects of a Coq formalization of this work, in particular those that deal with step-indexing.

\\
''14h00, room A006''
/%


!Tuesday, June 9
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Display calculi for dynamic modal logics.__ [Sabine Frittella|https://sites.google.com/site/sabinefrittellalogic/] (LIF, Marseille)

First, we introduce display calculi and the key notions needed to understand and use them; we assume familiarity with the following notions: normal modal logic, Kripke frames, sequent calculi and lattices. Then we expose the meta-theorem for cut-elimination for display calculi. Finally, we present the design of a display calculus for the logic EAK (Epistemic Actions and Knowledge).

\\
''14h00, room C103''
/%


!Tuesday, May 5
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Static program analysis by abstract interpretation and SMT decision procedures.__ [Julien Henry|http://www-verimag.imag.fr/~jhenry/] (Verimag)

Static program analysis aims at automatically proving that a program satisfies certain properties. Abstract interpretation is a theoretical framework for computing program invariants, that is, properties of the program variables that hold for every execution. The precision of the computed invariants depends on many parameters, in particular on the abstract domain and on the iteration order used during the invariant computation. Our work proposes several extensions of this method that improve the precision of the analysis.

Abstract interpretation can also be made more precise by distinguishing all execution paths of the program, at the price of an exponential blow-up in complexity. The satisfiability modulo theories (SMT) problem, whose resolution techniques have improved considerably over the last decade, makes it possible to represent these sets of paths implicitly. We propose to use this implicit SMT-based representation and to combine it with state-of-the-art iteration orders in order to obtain more precise analyses.

We then propose to couple SMT and abstract interpretation in new algorithms called Modular Path Focusing and Property-Guided Path Focusing, which compute loop and function summaries in a modular way, guided by error traces. Our technique has several uses: it can show that an error state is unreachable, but it can also infer preconditions for loops and functions.

We apply our static analysis methods to the estimation of the worst-case execution time (WCET). We first present how to express this problem via optimization modulo theories, and why a naive SMT encoding of the problem generates formulas that are too hard for all current solvers. We propose a simple and efficient way to considerably reduce the computation time of the SMT solvers by adding to the formulas certain implied properties obtained by static analysis. We show that this particular WCET case suggests that SMT solvers in general could be improved thanks to methods derived from static program analysis.

Finally, we present the implementation of Pagai, a static analyser for LLVM, which computes numerical invariants using the various methods we propose. We have compared the different techniques implemented on open-source programs and on benchmarks used by the community.
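To give non-specialists a flavour of the invariants mentioned above (a toy sketch unrelated to the Pagai implementation itself), the snippet below runs an interval analysis with widening on the loop {{x = 0; while x < 100: x = x + 2}} and finds a sound loop-head invariant.

{{{
# Interval-domain analysis of:  x = 0; while x < 100: x = x + 2
# The loop-head invariant over-approximates the values of x reachable
# in every execution.  (Toy illustration of abstract interpretation.)
INF = float('inf')

def join(a, b):                  # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):                 # classic interval widening
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def body(iv):                    # abstract effect of one loop iteration
    lo, hi = iv
    lo, hi = lo, min(hi, 99)     # guard: x < 100
    return (lo + 2, hi + 2)      # assignment: x = x + 2

head = (0, 0)                    # x = 0 before the loop
while True:
    new = widen(head, join(head, body(head)))
    if new == head:              # post-fixpoint reached
        break
    head = new

print("loop-head invariant:", head)      # (0, inf) with plain widening
exit_iv = (max(head[0], 100), head[1])   # refine with exit guard x >= 100
print("at loop exit:", exit_iv)          # guarantees x >= 100 at exit
}}}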

\\
''14h00, room B013''
/%


!Friday, April 24
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Fractals, undecidability and multi-tape automata.__ [Timo Jolivet|http://jolivet.org/timo/] (Institut de Mathématiques de Toulouse)

We prove that the property of having empty interior is undecidable for a simple and natural family of fractals (defined by 2D affine iterated function systems). The methods used involve multi-tape automata and a variant of Post's correspondence problem. This is joint work with Jarkko Kari.

\\
''14h00, room A006''
/%

!Tuesday, April 21
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Of Coq and interactive proofs.__ [Arnaud Spiwack|http://assert-false.net/arnaud/] (CRI, Mines ParisTech)

The Coq proof assistant is a language, based on Martin-Löf's dependent type theory, which is a synthesis of a (functional) programming language and a mathematical language. As such, Coq, and dependent type theory in general, has a unique outlook on formal methods and program correctness. Proofs and programs can be interleaved in many ways, making it possible to keep proofs of program properties local. At the extreme end of this spectrum, it is possible to write programs in the form of constructive mathematical proofs.

In this talk, I will give an overview of the Coq proof assistant and then I will open the hood and show how the interactive proof facilities of Coq are implemented. Historically, they used an idea due to Robin Milner for the proof assistant LCF (which gave us the ML programming language), but dependent type theories such as Coq allow for a more robust design. The API and implementation of the interactive proof facilities of Coq rely on the notion of monad which has been popularised by the Haskell programming language. This yields a powerful and flexible API whose deduction rules closely match those of Coq, whereas the implementation based on Milner's design was limited to an approximation of these rules.

\\
''11h00, room A006''
/%

%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Distributed system design: there are still pitfalls on the road ahead.__ [Cyril Cassagnes|http://wwwen.uni.lu/snt/people/cyril_cassagnes] (Université du Luxembourg)

Heterogeneous and distributed computer systems are becoming the norm. In this talk we will therefore discuss the following two aspects:
The first aspect is the heterogeneity of systems, in both software and hardware. For software, this heterogeneity also appears through the functional and non-functional properties of the components or agents forming the system.
The second aspect of these systems is related to data. The value of digital data has moved information systems, which must process more and more data, into a strategic position. This has led to large distributed systems: such systems are distributed in order to achieve better performance thanks to the distribution of computation and storage.
In both cases, it is important to be able to verify the behaviour of the system or of some of the algorithms that constitute it, especially when these systems are deployed in our daily environment. Indeed, these systems are complex to implement or model and keep increasing in complexity; this is why tools are required.
For instance, for the first aspect it is relevant to be able to expose emergent behaviour resulting from sub-system interactions at implementation time of a system of systems, in order to guarantee a level of safety and/or predictability of the system.

\\
''14h00, room B013''
/%


!Friday, April 17
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Limit sets of cellular automata associated with a probability measure.__ [Martin Delacourt|http://www.dim.uchile.cl/~mdelacourt/] (CMM, Santiago de Chile)

In the world of dynamical systems, one is often interested in 
asymptotic behaviours and in particular in the limit set, that is, 
the set of points of the system that can be visited arbitrarily late 
during an evolution. To refine the picture of asymptotic behaviours, 
one adds a probability measure on the set of initial configurations 
and forgets the parts of the space that tend to disappear.

  Here we consider the particular case of cellular automata, which are 
a classical model of discrete dynamical systems. Made up of infinitely 
many cells forming a regular graph, each in a state taken from a 
finite set, they evolve by applying the same local rule at every point 
simultaneously.

  We will give a characterization of the limit sets associated with a 
probability measure, of the large classical families of subshifts that 
can be realized, and a Rice theorem for these sets.

\\
''14h00, room A006''
/%


!Tuesday, April 14
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Coinduction, Equilibrium and Rationality of Escalation.__ [Pierre Lescanne|http://perso.ens-lyon.fr/pierre.lescanne/] (ENS Lyon)

Escalation is the behavior of players who play forever in the same
game.  Such a situation is a typical field for application of
coinduction which is the tool conceived for reasoning in infinite
mathematical structures. In particular, we use coinduction to study
formally the game called ''dollar auction'', which is considered
the paradigm of escalation.  Contrary to what has been accepted since 1971, we
show that, provided one assumes that the other agent will always stop,
bidding is rational, because it results in a subgame perfect
equilibrium.  We show that this is not the only rational strategy
profile (i.e., not the only subgame perfect equilibrium).  Indeed if an agent
stops and will stop at every step, whereas the other agent keeps
bidding, we claim that he is rational as well because this corresponds
to another subgame perfect equilibrium.  In the infinite dollar
auction game the behavior in which both agents stop at each step is
not a Nash equilibrium, hence is not a subgame perfect equilibrium,
hence is not rational.  Fortunately, the notion of rationality based
on coinduction fits with common sense and experience.  Finally the
possibility of a rational escalation in an arbitrary game can be
expressed as a predicate on games and the rationality of escalation in
the dollar auction game can be proved as a theorem which we have
verified in the proof assistant Coq.  In this talk we will recall the
principles of infinite extensive games and use them to introduce
coinduction and equilibria (Nash equilibrium, subgame perfect
equilibrium).  We will show how one can prove that the two strategy
profiles presented above are equilibria and how this leads to a
"rational" escalation in the dollar auction.  We will show that
escalation may even happen in much simpler game named 0,1-game.

\\
''14h00, room B013''
/%


!Thursday, April 9
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Approximate Consensus in Highly Dynamic Networks.__ [Bernadette Charron-Bost|http://www.lix.polytechnique.fr/~charron/Presentation.html] (CNRS, LIX)

(joint work with Matthias Függer and Thomas Nowak)

We investigate the approximate consensus problem in highly dynamic networks in which the topology may change continually and unpredictably. We prove that in both synchronous and partially synchronous networks, approximate consensus is solvable if and only if the communication graph permanently has a rooted spanning tree. The striking point is that the spanning tree may continually change over time without preventing nodes from converging to consensus. Actually the class of averaging algorithms, which have the benefit of being memoryless and requiring no process identifiers, entirely captures the solvability issue of approximate consensus, in that the problem is solvable if and only if it can be solved using any averaging algorithm.

We develop a proof strategy which consists in a reduction to "non-split" networks. It dramatically improves the best known upper bound on the decision times of averaging algorithms and yields a non-averaging algorithm for approximate consensus in non-anonymous networks with a decision time that is quadratic in the number of nodes. We also prove that a general upper bound on the decision times of averaging algorithms has to be exponential, shedding light on the high price of anonymity.

Finally we apply our results to networked systems with a fixed topology and benign fault models to show that with n processes, up to 2n-3 link faults per round can be tolerated for approximate consensus, increasing by a factor of 2 the tight bound for exact consensus established by Santoro and Widmayer.
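To illustrate the kind of algorithm at stake (a toy sketch, not the algorithms or bounds of the paper), the snippet below runs a plain averaging algorithm on a directed communication graph that changes at every round but always keeps node 0 as a root reaching all nodes; the values converge towards a common limit.

{{{
# Toy averaging algorithm on a dynamic directed graph: at each round,
# every node replaces its value by the average of the values it hears
# (its own included).  The graph changes every round but always
# contains a spanning tree rooted at node 0, so values converge.
import random

random.seed(1)
n = 6
x = [random.uniform(0.0, 10.0) for _ in range(n)]

for _ in range(60):
    # random star rooted at node 0, plus a few extra random edges
    in_neighbors = {i: {i, 0} for i in range(n)}
    for _ in range(4):
        u, v = random.randrange(n), random.randrange(n)
        in_neighbors[v].add(u)
    x = [sum(x[j] for j in in_neighbors[i]) / len(in_neighbors[i])
         for i in range(n)]

print([round(v, 4) for v in x])   # all entries (almost) equal
}}}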

\\
''15h00, room C103''
/%



!Tuesday, April 7
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Model-checking term rewriting systems: reachability analysis guided by temporal properties.__ [Vincent Hugot|http://tcs.vhugot.com/] (Inria Lille)

We present a model-checking framework summarised as a variant of
reachability analysis for term rewriting systems.  Reachability
analysis endeavours to show that "a bad state is never reached", and
relies on the computation of an over-approximation of the set of
reachable terms, which is of course not calculable in general. We
aim to generalise this approach, in the sense of lifting it from
the specific statement "a bad state is never reached" to arbitrary
specifications in some temporal logic -- though in the case we study,
the statements dictate the order in which the transitions of the
system may occur.

For now, the method supports a fragment of LTL sufficiently
expressive to describe common safety properties. In a first
step, translation rules reformulate the verification problem
into an equivalent expression of propositional logic whose
atoms are comparisons of languages obtained through rewriting,
without regard for decidability; we call such a formula a rewrite
proposition. In the second step, the rewrite proposition is given a
concrete representation in terms of tree automata with or without
constraints. Since the general problem is undecidable, these
representations are sometimes approximations, for which we use
constructions (tree automata completion) studied separately for
reachability analysis; the end result is a positively approximated
decision procedure for the general problem, i.e., a procedure that
yields "yes, the system is correct" or "maybe".  (Actually there may
be several ways to approximate the proposition, and thus a set of
such procedures is generated).

We draw a few perspectives regarding the future development of this
method, in particular the extension of the supported logic, and the
introduction of rewrite strategies into the mix, e.g. innermost.

\\
''11h00, room B013''
/%




%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Quantum attacks against iterated block ciphers.__ [Marc Kaplan|http://perso.telecom-paristech.fr/~kaplan/] (Telecom ParisTech)

We study the amplification of security against quantum attacks provided by iteration of block ciphers. In the classical case, the Meet-in-the-middle attack is a generic attack against those constructions. This attack reduces the time required to break double iterations to only twice the time it takes to attack a single ideal block cipher, where a naive cryptographer would expect double iteration to lead to a quadratic improvement of security. 
We introduce the quantized version of this attack, which is an application of Ambainis' celebrated quantum algorithm for the Element Distinctness problem. We then prove the optimality of the attack against two iterated ideal ciphers. An important consequence is that if a quantum adversary is capable of breaking a single encryption in time t, the time to break two iterated ideal ciphers is at least t^(4/3). In other words, the amplification of security provided by iteration is much larger against quantum adversaries than against classical ones.
We then investigate security amplification by composition further by examining the case of four iterations. We quantize a recent technique introduced by Dinur, Dunkelman, Keller and Shamir, called the dissection attack. In the classical case, this attack allows one to save a great amount of memory. Surprisingly, the quantized version also leads to savings in time, compared to a meet-in-the-middle approach. This seems to indicate that when the number of iterations grows, the resistance against quantum attacks decreases.
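For context, the classical meet-in-the-middle attack mentioned above can be sketched on a toy cipher (purely illustrative: 16-bit keys, not a real cipher): tabulate the encryption of the plaintext under every possible first key, then decrypt the ciphertext under every possible second key and look for collisions, for a cost of roughly 2 * 2^16 operations instead of 2^32.

{{{
# Classical meet-in-the-middle against double encryption, on a toy
# invertible 16-bit "cipher" (illustration only, not a real cipher).
import random

MASK = 0xFFFF

def enc(key, x):
    x = (x + key) & MASK
    x = ((x << 5) | (x >> 11)) & MASK      # rotate left by 5
    return x ^ key

def dec(key, x):                           # exact inverse of enc
    x ^= key
    x = ((x >> 5) | (x << 11)) & MASK      # rotate right by 5
    return (x - key) & MASK

random.seed(0)
k1, k2 = random.randrange(1 << 16), random.randrange(1 << 16)
plain = 0x1234
cipher = enc(k2, enc(k1, plain))           # double encryption

forward = {}                               # middle value -> first keys
for k in range(1 << 16):
    forward.setdefault(enc(k, plain), []).append(k)

candidates = []                            # matching (k1, k2) pairs
for k in range(1 << 16):
    for k_first in forward.get(dec(k, cipher), []):
        candidates.append((k_first, k))

print((k1, k2) in candidates)   # True (among false positives that a
                                # second plaintext/ciphertext pair removes)
}}}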

\\
''14h00, room B013''
/%


!Wednesday, April 1st
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Model Checking Parameterized Asynchronous Shared-Memory Systems.__ [Antoine Durand-Gasselin|https://www7.in.tum.de/~durandga/] (TU Munich)

This is joint work with Javier Esparza, Pierre Ganty and Rupak Majumdar.
These results have been submitted to CAV.

We characterize the complexity of liveness verification for
parameterized systems consisting of a leader process and arbitrarily
many anonymous and identical contributors.  Processes communicate
through a shared, bounded-value register. While each operation on the
register is atomic, there is no synchronization primitive to execute a
sequence of operations atomically.

We analyze the case in which processes are modeled by finite-state
machines or pushdown machines and the property is given by a Büchi
automaton over the alphabet of read and write actions of the leader.
We show that the problem is decidable, and has a surprisingly low
complexity: it is NP-complete when all processes are finite-state
machines, and is PSPACE-hard and in NEXPTIME when they are pushdown
machines. This complexity is lower than for the non-parameterized
case: liveness verification of finitely many finite-state machines is
PSPACE-complete, and undecidable for two pushdown machines.

For finite-state machines, we characterize infinite behaviors using
existential abstraction and semilinear constraints.  For pushdown
machines, we show how contributor computations of high stack height
can be simulated by computations of many contributors, each with low
stack height.  Together, our results characterize the complexity of
verification for parameterized systems under the assumptions of
anonymity and asynchrony.

\\
''14h00, room B013''
/%


!Tuesday, March 31
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Controller synthesis for multi-component stochastic games with multiple objectives.__ [Nicolas Basset|http://www.liafa.univ-paris-diderot.fr/~nbasset/] (University of Oxford)

I will present the results of two recent papers on the synthesis of controllers 
(also called strategies) for systems modelled by stochastic games with multiple winning conditions. 
In the first paper (CONCUR 2014), we introduce a parallel product 
of stochastic games. We show how the assume-guarantee compositional verification rules 
for probabilistic automata (one-player stochastic games) can be lifted to our setting. 
We explain how to compose winning strategies for the components 
into a global winning strategy for the composed game. 
In the second paper (TACAS 2015), we are interested in the almost-sure satisfaction of several mean-payoff objectives. 
We give a method for synthesising epsilon-optimal strategies for this type of winning condition: 
given a Pareto-optimal objective vector v, the produced strategy maintains the multi-dimensional mean payoff above the threshold vector v - epsilon. 
We use this synthesis method as a building block in the compositional method of the first paper (CONCUR 2014), 
which we illustrate on a case study from avionics. 
This is joint work with Clemens Wiltsche, Marta Kwiatkowska and Ufuk Topcu.

\\
''14h00, room B013''
/%


!Thursday, March 26
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Formal safety proofs in Coq: robot networks and compilation of synchronous languages.__ [Lionel Rieg|http://perso.ens-lyon.fr/lionel.rieg/] (Collège de France)

In the first part, I will present the Pactole project, which aims at 
building a formal framework in Coq for reasoning about networks of 
robots. This framework must be flexible enough to accommodate the 
many variants of the model (presence of Byzantine robots, sensing 
capabilities of the robots, ...) by changing its parameters. I will 
illustrate the interest of this framework on a fundamental problem: 
gathering (on a line), which asks the robots to gather at a single 
point. In full generality, this problem is known to be impossible, 
with a formal proof in Pactole to back it up. Nevertheless, I will 
give an algorithm that works except for a single initial position, 
precisely the one used to show impossibility in the general case. We 
will then see how to generalise these results to higher dimensions.

In the second part, I will focus on the compilation of the 
synchronous language Esterel. The goal of this work is to build a 
proved compiler from Esterel to circuits, as CompCert does from C to 
assembly. Unlike usual languages, synchronous languages speak about 
time in a precise way, which strongly influences their design. The 
difficulties of compilation are thus of a very different nature, in 
particular because the target is not a programming language. I will 
present the challenges of this compilation and some of the semantics 
that make it possible to relate the source language to its 
translation into circuits.

\\
''15h00, room B011''
/%

!Tuesday, March 24
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Optimal Non-Perfect Uniform Secret Sharing Schemes.__ [Tarik Kaced|http://home.ie.cuhk.edu.hk/~tarik/] (Univ. Hong Kong)

A secret sharing scheme is non-perfect if some subsets of participants 
cannot recover the secret value but have some information about it. This 
work is dedicated to the search for efficient non-perfect secret sharing 
schemes. The efficiency is measured by means of the information ratio, 
the ratio between the maximum length of the shares and the length of the 
secret value.

In order to study perfect and non-perfect secret sharing schemes in 
full generality, we describe the structure of the schemes through their 
access function, a real function that measures the amount of information 
that every subset of participants knows about the secret value. We 
present new tools for the construction of secret sharing schemes. In 
particular, we construct a secret sharing scheme for every access function.

We find a new lower bound on the information ratio that is better than 
the ones previously known. In particular, this bound is tight for 
uniform access functions. The access function of a secret sharing scheme 
is uniform if all participants play the same role in a scheme (e.g. ramp 
secret sharing schemes). Moreover, we construct a secret sharing scheme 
with optimal information ratio for every rational uniform access function.
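As background for readers new to the area (not part of the talk's contribution), here is a minimal Shamir-style threshold scheme over a prime field. It is perfect: fewer than t shares reveal nothing about the secret. The talk is about non-perfect schemes, which deliberately allow some partial information in exchange for shorter shares.

{{{
# Minimal Shamir (t, n) secret sharing over Z_p: any t shares
# reconstruct the secret, any t-1 shares give no information about it.
# (Illustration only; requires Python 3.8+ for pow(x, -1, p).)
import random

P = 2**61 - 1                      # a prime modulus

def share(secret, t, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):                      # evaluate the random polynomial at x
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):           # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=42, t=3, n=5)
print(reconstruct(shares[:3]))     # 42, recovered from any 3 of 5 shares
}}}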

\\
''11h00, room A006''
/%

%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Automatic verification of numerical program invariants by constrained optimization and policy iteration.__ [Assalé Adjé|https://sites.google.com/site/assaleadje/home] (ONERA Toulouse)

In general, the verification of a property of numerical programs is undecidable. However, the theory of abstract interpretation makes it possible to prove that a property holds by constructing a computable over-approximation of the possible values taken by the program variables, on which the property is true.

In this presentation, I will show how to construct such an over-approximation using constrained-optimization methods. I will then consider the following problem: how to obtain the most precise over-approximation possible. Indeed, when one is interested, for example, in proving that the reachable values are bounded, the most precise over-approximation possible is sought. I will show how a fixpoint-computation algorithm coming from game theory, namely policy iteration, makes it possible to obtain more precise bounds.

\\
''14h00, room A006''
/%


!Friday, March 13
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Implementability issues for real-time distributed systems.__ [Thomas Chatain|http://www.lsv.ens-cachan.fr/~chatain/] (LSV)

Formal models for real-time distributed systems, like time Petri nets and networks of timed automata, have proved their usefulness for the verification of real-time systems. On the other hand, the question of using these models as specifications for designing real-time systems raises several difficulties. Here we focus on those that are related to the distributed nature of the system. Implementing a model may be possible at the cost of some transformations, which make it suitable for the target device. In this talk, I present several results about the semantics of distributed real-time systems and provide methods for the design of such systems.

\\
''14h00, room A006''
/%

!Tuesday, March 10
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Formal Verification of (Electronic) Exams.__ [Jannik Dreier|http://people.inf.ethz.ch/jdreier/] (ETH Zürich)

Nowadays, students can be assessed not only by means of classical 
pencil-and-paper tests, but also using electronic exams. Such electronic 
exams can be taken in examination centers, or sometimes even from home 
via the internet. Electronic exams are appealing as they can reach 
larger audiences by simplifying the exam process, but they are exposed 
to new threats, including not only dishonest students, but also outside 
attackers, and misbehaving authorities. These threats are amplified by 
two issues: the lack of understanding of what security means for 
electronic exams (except the old concern about student cheating), and 
the absence of tools to verify whether an exam system is secure.
In this talk, I will present formal definitions of several fundamental 
authentication, privacy and verifiability properties for exam protocols.
As case studies, I will discuss three exam protocols: two recent 
electronic exam protocols, and the pencil-and-paper exam used by the 
University of Grenoble, showing that our approach can also cover 
classical exam systems. The automatic analysis using ProVerif highlights 
several weaknesses, underlining the need for formal verification.

\\
''11h00, room B013''
/%

%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__F*: Higher-Order Effectful Program Verification.__ [Chantal Keller|http://prosecco.gforge.inria.fr/personal/ckeller/] (Microsoft Research Cambridge & MSR-Inria Joint Centre)

F* is an ML-like language designed for program verification. It is based
on refinement types, a powerful extension of ML types to express
properties about programs. These properties are automatically checked
via the generation of verification conditions that are finally
discharged by SMT solvers. F* provides a uniform framework to deal both
with programs with effects and non-termination, for which one might want
to establish some properties, and pure and terminating programs, that
can also be used inside the logical refinements. To this end, it relies
on a fine-grained control of effects and a termination checker based on
a semantic criterion. F* finally generates code for various backends,
ranging from OCaml to JavaScript.

After giving an overview of this language, I will present some aspects
of its internals, in particular the treatment of effects and
termination, and the generation of verification conditions all the way
down to SMT solvers. I will finally discuss ongoing work on a full
certification of F*.

\\
''14h00, room B013''
/%


!Wednesday, March 4
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Information-flow Security: Definitions, Characterizations and Verification Complexity.__ [Henning Schnoor|https://www.theorie.informatik.uni-kiel.de/de/team/henning-schnoor] (Univ. Kiel)

We develop a theory for state-based noninterference in a
setting where different security policies apply in different parts of a
given system. Our theory comprises appropriate security definitions,
characterizations of these definitions, for instance in terms of
unwindings, algorithms for analyzing the security of systems with local
policies, and corresponding complexity results. Additionally, we study
the verification complexity for established notions of noninterference
such as IP-security and TA-security. In both of these cases, we obtain
NL algorithms, significantly improving on previously known bounds.

\\
''14h00, room A006''
/%

!Thursday, March 5
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Monitoring of Temporal First-order Properties with Aggregations.__ [Eugen Zalinescu|http://people.inf.ethz.ch/zeugen/] (ETH Zurich)

In system monitoring, one is often interested in checking properties
of aggregated data.  Current policy monitoring approaches are limited
in the kinds of aggregations they handle.  To rectify this, we extend
an expressive language, metric first-order temporal logic, with
aggregation operators.  Our extension is inspired by the aggregation
operators common in database query languages like SQL.  We provide a
monitoring algorithm for this enriched policy specification language.
We show that, in comparison to related data processing approaches, our
language is better suited for expressing policies, and our monitoring
algorithm has competitive performance.

\\
''11h00, room A006''
/%

!Tuesday, February 24
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Limit measures in cellular automata.__ [Benjamin Hellouin|http://mat-unab.cl/~hellouin/] (Univ. Andres Bello, Santiago)

I am interested in the behaviour of cellular automata on a random initial configuration, which amounts to studying their action on the space of probability measures. I will give an overview of my research on the limit measures of these systems, which can be seen as their typical asymptotic behaviour, or as their output when seen as probabilistic algorithms.

In dimension 1, the measures reachable from a simple initial measure, e.g. the uniform measure, can be completely described by computability conditions. Lifting this result to higher dimensions is work in progress that has already yielded partial results. However, this approach cannot produce limit measures with full support, which can only be reached by surjective dynamics.

Although surjective cellular automata are capable of computation, their action on measures exhibits a strong rigidity that is still poorly understood. Experiments suggest that certain subclasses make every simple initial measure converge to the uniform measure. We give the first complete proof of this phenomenon for a cellular automaton with particular algebraic properties, using self-similar structures in its time evolution.

\\
''14h00, room A006''
/%

!Tuesday, February 17
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Distributed algorithms for optimising the deployment of MEMS microrobots: are formal methods necessary?__ [Hicham Lakhlef|http://lifc.univ-fcomte.fr/page_personnelle/accueil/206] (Univ. Franche-Comté)


In this seminar I will present some of the work done during my PhD thesis, on distributed algorithms for the reconfiguration of MEMS microrobots.

MEMS microrobots are attracting growing attention since they can carry out many missions and tasks in a wide range of applications, including odour localisation, fire fighting, medical services, security and rescue. They can perform tasks in environments that are unstructured, complex, dynamic and unknown. To carry out these missions and tasks, MEMS microrobots must apply redeployment protocols in order to adapt to working conditions. These reconfiguration protocols must cope with the characteristics of MEMS nodes, in particular their limited resources (memory, energy). These algorithms should also be efficient, scalable, robust, and use only local information.
In this seminar, I present efficient reconfiguration algorithms for modular MEMS microrobots. The goal of these algorithms is to optimise the logical topology of the microrobots using distributed redeployment protocols. I present and compare two distributed algorithms for reconfiguring a chain into a square. The first algorithm is dynamic in a wake-up/sleep sense and keeps the network connected throughout the reconfiguration process, while the second one, which is faster, does not. I show how to exploit parallelism in the movements to optimise the running time of the algorithms and to minimise the number of node movements, while keeping the network connected and the algorithms robust and scalable. Finally, I present a reconfiguration algorithm that handles any initial topology.

As a perspective, I will argue that formal methods are needed to prove that microrobot systems programmed with distributed protocols will always behave as expected, under changing conditions and in a variety of uncertain environments.


\\
''14h00, room A006''
/%

%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__A formal proof framework suited to networks of mobile robots.__ [Xavier Urbain|http://www.ensiie.fr/~urbain/] (CNAM)

Networks of mobile robots have received growing attention from the 
distributed-computing community over the past few years. While the use 
of swarms of robots cooperating in the execution of various tasks is 
an appealing prospect, analysing the feasibility of certain tasks in 
this emerging setting is extremely difficult, in particular when some 
robots exhibit so-called Byzantine behaviours, that is, arbitrary or 
even hostile ones. To obtain formal guarantees in this context, we 
propose a mechanised formal framework based on the Coq proof assistant 
and suited to networks of robots. In this talk we are mainly 
interested in impossibility results, which are fundamental in that 
they establish what can or cannot be achieved and make it possible to 
derive bounds and, thereby, the optimality of certain solutions. Using 
a higher-order assistant such as Coq lets us easily express 
quantifications over algorithms, the algorithms being treated as 
abstract objects. We illustrate the possibilities offered by our 
development by presenting the first formal proofs (and thus 
certifications) of results such as the impossibility of convergence 
for oblivious robots when a third of them are Byzantine, or the 
impossibility of gathering for an even number of robots evolving in R.

\\
''15h00, room A006''
/%


/%
%%tab-2014
!Tuesday, June 12
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Reachability Analysis of Hybrid Systems.__ [Erika Abraham|http://www-i2.informatik.rwth-aachen.de/i2/eab/] (RWTH Aachen)

Hybrid systems exhibit both dynamic and discrete behavior. Typically,
the dynamic behavior stems from the continuous evolution of physical
systems, whereas the discrete behavior stems from the execution steps of
discrete controllers.

As a modeling language we consider hybrid automata, extending discrete
automata with continuous dynamics. Given a hybrid automaton model of a
hybrid system, the perhaps most basic question one could be interested
in is whether certain model states can be reached from a given initial
state. Though this reachability problem sounds simple, it is undecidable
for all but some very simple subclasses of hybrid automata.

Nevertheless, there are different techniques to solve it in an
incomplete manner. We discuss methods that use different geometric
objects like polytopes, zonotopes, ellipsoids etc. to represent state
sets, and apply a forward fixedpoint-based search to over-approximate
the set of reachable states. Such methods need to determine successor
sets under both discrete jumps and continuous evolution. The latter is
often done by flowpipe construction, paving the whole flow by a set of
geometric objects of the chosen type.
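As a toy illustration of flowpipe construction with the simplest geometric objects (intervals, i.e. one-dimensional boxes), here is a minimal Python sketch, not tied to any particular tool from the talk: for the scalar linear dynamics x' = -x starting from the interval [0.9, 1.1], it builds interval segments covering the flow over a time horizon and checks a simple safety property (x never exceeds 1.2).

{{{
# Illustrative interval-based flowpipe for the scalar linear system x' = -x.
# For this system the flow from x0 is x0 * exp(-t); on a nonnegative interval
# [lo, hi] the flow is decreasing, so all states visited during a time step of
# length h are contained in the box [lo * exp(-h), hi], and the state set at
# the end of the step is [lo * exp(-h), hi * exp(-h)].
import math

def flowpipe(lo, hi, horizon, step):
    """Return interval segments over-approximating the reachable states."""
    assert 0 <= lo <= hi
    segments = []
    shrink = math.exp(-step)
    t = 0.0
    while t < horizon:
        segments.append((lo * shrink, hi))   # covers times [t, t + step]
        lo, hi = lo * shrink, hi * shrink    # state set at time t + step
        t += step
    return segments

if __name__ == "__main__":
    segs = flowpipe(0.9, 1.1, horizon=5.0, step=0.1)
    # simple safety check on the over-approximation: x never exceeds 1.2
    print("safe:", all(hi <= 1.2 for (_, hi) in segs))
}}}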

\\
''14h00, room C005''
/%


!Tuesday, May 20
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__b2llvm: B developments onto the LLVM.__ David Deharbe (Univ. Rio Grande do Norte)

We present b2llvm, a code generator for the B method targeting the LLVM intermediate format. b2llvm currently handles the following elements of the B language: simple data types, imperative instructions, and component compositions. The translation rules from some essential constructs of B implementations to LLVM source code are presented and illustrated with examples. Some verification and validation strategies are proposed.

\\
''14h00, room B013''
/%

!Tuesday, April 29
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Proofs, Counterexamples, and Codatatypes for Isabelle/HOL.__ [Jasmin Blanchette|http://www4.in.tum.de/~blanchet/] (TU Munich)

Isabelle is a generic proof assistant that supports HOL (Higher-Order Logic), ZF (Zermelo-Fraenkel Set Theory), and TLA+ (Temporal Logic of Actions). My research so far has focused on Isabelle/HOL: (1) I enhanced the popular Sledgehammer proof tool, which is based on external automatic provers (e.g., E, SPASS, Vampire, and Z3); (2) I designed the SAT-based counterexample generator Nitpick; (3) together with a few colleagues, I am adding support for codatatypes and corecursive functions. In this talk, I will briefly summarize these strands of research and my future plans.

\\
''14h00, room B013''
/%
!Wednesday, March 19
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Some contributions of Linear Logic to complexity theory.__ [Clément Aubert|http://aubert.perso.math.cnrs.fr/#] (IML)

In proof theory, Linear Logic offers a formalization of proofs that places their dynamics at the centre of investigation and pays close attention to structural operations. Since the introduction of its so-called bounded and light variants, Linear Logic has contributed to complexity theory in several ways. We will present various advances, focusing on implicit characterizations of complexity classes, that is, characterizations that are independent of measures on abstract machine models. Using a range of syntactic and semantic tools, Linear Logic has managed to characterize parallel execution, non-determinism, and time- or space-efficient programs. We will give a quick overview of these methods, relating them to other implicit approaches to complexity and highlighting our contributions.

\\
''14h00, room B013''
/%

/%
%%tab-2013

!Friday, November 8
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Knowledge is Power, and so is Communication.__ [Rohit Parikh|http://www.sci.brooklyn.cuny.edu/cis/parikh/] (CUNY)

The BDI theory says that people's actions are influenced by two factors, what they believe and what they want. Thus we can influence people's actions by what we choose to tell them or by the knowledge that we withhold. Shakespeare's Beatrice-Benedick case in Much Ado about Nothing is an old example.  

Currently we often use Kripke structures to represent knowledge (and belief).  So we will address the following issues:

a) How can we bring about a state of knowledge, represented by a Kripke structure, not only about facts, but also about the knowledge of others, among a group of agents?  

b) What kind of a theory of action under uncertainty can we use to predict how people will act under various states of knowledge?

c) How can A say something credible to B when their interests (their payoff matrices) are in partial conflict?

\\
''15h00, room C005''
/%



!Tuesday, October 15
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Tutorial on Quantum Communication.__ [Iordanis Kerenidis|http://www.liafa.univ-paris-diderot.fr/~jkeren/jkeren/Iordanis_Kerenidis.html] (CNRS, LIAFA)

Part of the [Journées Informatique Quantique|http://www.loria.fr/~sperdrix/JIQ13.html] 2013, October 14 and 15.

\\
''14h00, room A008''
/%

!Tuesday, October 22
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__[Journée Doctorants|http://cassis.loria.fr/fm-departement/Wiki.jsp?page=FM-Day].__

\\
''room A008''
/%

!Wednesday, October 9
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Parameterized Model Checking of Fault-tolerant Distributed Algorithms by Abstraction.__ [Josef Widder|http://forsyte.at/people/widder/] (TU Wien)

Fault-tolerant distributed algorithms are central for building
reliable, spatially distributed systems. In order to ensure that
these algorithms actually make systems more reliable, we must
ensure that these algorithms are actually correct. Unfortunately,
model checking state-of-the-art fault-tolerant distributed
algorithms (such as Paxos) is currently out of reach except for
very small systems.  

In order to be eventually able to model check such fault-tolerant
distributed algorithms also for larger systems, several problems have
to be addressed. In this talk, I will present recent work that
considers modeling and verification of fault-tolerant algorithms that
basically only contain threshold guards to control the flow of the
algorithm. I will present an abstraction technique that allows
parameterized model checking of safety and liveness properties of such
algorithms. Our experimental data shows that our abstraction is
precise, and allows automated verification of simple, fault-tolerant
distributed algorithms for various fault models. As threshold guards
are widely used in fault-tolerant distributed algorithms (and also in
Paxos), efficient methods to handle them bring us closer to the above
mentioned goal.

\\
''14h00, room C005''
/%

!Tuesday, October 8
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__The philosopher diner revisited:  a new algebraic approach for an old modeling problem ?__ [David Janin|http://www.labri.fr/perso/janin/] (LaBRI)

In the field of distributed algorithms, the dining philosophers problem is a typical algorithmic and modeling problem. How do we model distributed states? Local transitions? Their combinations? The resulting global transitions? Correctness criteria? There are already many modeling proposals. In this talk, we will provide yet another one. We will show how inverse semigroup theory -- a mathematical theory developed since the 1950s yet largely unknown in computer science -- and the associated language theory provide robust concepts and tools for such modeling.

This is joint (exploratory) work with Anne Dicky, to be presented at MSR 2013 in Rennes. Oddly enough, this approach was inspired by recent experiments in music system modeling and programming.

\\
''15h00, room B013''
/%

!Tuesday, September 24
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Symbolic Probabilistic Analysis of Off-line Guessing.__ [Bruno Conchinha|http://www.infsec.ethz.ch/people/brunoco] (ETH Zurich)

We introduce a probabilistic framework for the automated analysis of
security protocols. Our framework provides a general method for
expressing properties of cryptographic primitives, modeling an attacker
who is more powerful than conventional Dolev-Yao attackers. It allows
modeling equational properties of cryptographic primitives as well as
property statements about their weaknesses, e.g. primitives leaking
partial information about messages or the use of weak algorithms for
random number generation. Moreover, we can use these properties to find
attacks and estimate their success probability. Existing symbolic
methods can neither model such properties nor find such attacks.

We show that the probability estimates we obtain are negligibly
different from those yielded by a generalized random oracle model based
on sampling terms into bitstrings while respecting the stipulated
properties of cryptographic primitives.

As case studies, we use a prototype implementation of our framework to
model non-trivial properties of RSA encryption and automatically
estimate the probability of off-line guessing attacks on the EKE protocol.

\\
''14h00, room C005''
/%

!Tuesday, July 23
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Malicious-but-cautious certificate authorities.__ [Mark Ryan|http://www.cs.bham.ac.uk/~mdr/] (University of Birmingham)

The "certificate authority" model for authenticating public keys of
websites has been attacked in recent years, and several proposals have
been made to reinforce it. We develop and extend a recent proposal
called "certificate transparency", so that it properly handles
certificate revocation. We show how extended certificate transparency
has applications beyond the web. In particular, we detail how it can be
used to build a secure end-to-end email or messaging system using PKI
with no trusted parties. We believe this finally makes end-to-end
encrypted email as usable and user-friendly as encrypted web browsing is
today. Underlying these ideas is a new attacker model appropriate for
cloud computing, which we call "malicious-but-cautious".

\\
''14h00, room C005''
/%

!Tuesday, June 25
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__The genus of regular languages.__ [Florian Deloup|http://www.math.univ-toulouse.fr/~deloup/] (Institut de Mathématiques de Toulouse, Univ. Toulouse III)

If we forget part of its structure, an automaton can be viewed as a graph, for which the notion of genus is well defined. Intuitively, a graph has genus at most g if it embeds into a sphere with g handles. Of course, from a computer-science point of view, automata are richer than graphs: they compute languages, and a given regular language is recognized by infinitely many automata. It is therefore natural to define the genus of a regular language L as the minimal genus of an automaton recognizing L. We will present some results and open questions about this new invariant of languages.
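For readers unfamiliar with the graph-theoretic notion, here is the standard definition and the classical lower bound obtained from Euler's formula (background material in LaTeX, not a result of the talk):

{{{
% Background (not from the talk): genus of a graph via Euler's formula.
% A connected graph with $V$ vertices, $E$ edges and $F$ faces, embedded on an
% orientable surface of genus $g$, satisfies
\[ V - E + F = 2 - 2g. \]
% For a simple connected graph with $V \ge 3$, each face is bounded by at least
% three edges and each edge borders at most two faces, so $3F \le 2E$, hence
\[ g \;\ge\; \Big\lceil \frac{E - 3V + 6}{6} \Big\rceil. \]
% The genus of a regular language $L$ is then the minimum of the genus over all
% finite automata recognizing $L$:
\[ g(L) \;=\; \min\{\, g(\mathcal{A}) \mid \mathcal{A} \text{ recognizes } L \,\}. \]
}}}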

\\
''14h00, room C005''
/%
\\

!Wednesday, May 15
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Evaluation is MSOL compatible.__ [Sylvain Salvati|http://www.labri.fr/perso/salvati/] (INRIA, LaBRI)

We consider simply-typed lambda calculus with fixpoint
operators. Evaluation of a term gives as a result the Bohm tree of the
term. We show that evaluation is compatible with monadic second-order
logic (MSOL). This means that for a fixed finite vocabulary of terms,
the MSOL properties of Bohm trees of terms are effectively MSOL
properties of terms themselves. Theorems of this kind have been known
for some graph operations: unfolding, and Muchnik iteration. Similarly
to those results, our main theorem has diverse applications. It can be
used to show decidability results, to construct classes of graphs with
decidable MSOL theory, or to obtain MSOL formulas expressing properties
of terms. Another application is decidability of a control-flow
synthesis problem.

It is a joint work with Igor Walukiewicz.

\\
''14h00, room C005''
/%

!Tuesday, April 16
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Analyzing Probabilistic Programs: Pushing the Limits of Automation.__ [Joost-Pieter Katoen|http://www-i2.informatik.rwth-aachen.de/~katoen/] (Aachen University)

Probabilistic programs are pivotal for cryptography, privacy, and quantum programs; they are typically only a few lines long, but hard to understand and analyse. The main challenge for their automated analysis is the infinite domain of the program variables combined with the occurrence of parameters. Parameters can range over concrete probabilities but can also capture various aspects of the program, such as the number of participants, the size of certain variables, etc.

McIver and Morgan have extended the seminal proof method of Floyd and Hoare for sequential programs to probabilistic programs by making the program annotations real- rather than Boolean-valued expressions in the program variables.  As in traditional program analysis, the crucial annotations are the loop invariants. The problem to synthesize loop invariants is very challenging for probabilistic programs where real-valued, quantitative invariants are needed.

In this talk, we present a novel approach to synthesize loop invariants for the class of linear probabilistic programs. In order to provide an operational interpretation to the quantitative annotations, we give a connection between McIver and Morgan's wp-semantics of probabilistic programs and parameterized Markov decision processes.
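To fix ideas, here is a textbook-style example of a quantitative (real-valued) loop invariant in the McIver-Morgan style; it is illustrative only and not taken from the talk.

{{{
% Illustrative example (not from the talk): a quantitative loop invariant.
% Program, with {..} [p] {..} denoting probabilistic choice (left branch with
% probability p, 0 < p <= 1):
%   while (c = 1) { {c := 0} [p] {n := n + 1} }
% Post-expectation: f = n (the value of n on termination). Claimed invariant:
\[ I \;=\; n + [c = 1]\cdot\frac{1-p}{p}, \]
% where $[c=1]$ is 1 if $c=1$ and 0 otherwise. The invariant condition
% $[c=1]\cdot wp(\mathit{body}, I) + [c \neq 1]\cdot f \;=\; I$ holds, since for $c = 1$:
\[ wp(\mathit{body}, I) \;=\; p\,(n + 0) + (1-p)\Big(n + 1 + \frac{1-p}{p}\Big)
   \;=\; n + \frac{1-p}{p}, \]
% so, the loop terminating almost surely, the expected final value of $n$
% from $c = 1$, $n = 0$ is $(1-p)/p$.
}}}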

\\
''14h00, room C005''
/%

!Thursday and Friday, April 11 and 12
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
[Journées Calculabilités 2013|http://www.loria.fr/~hoyrup/Computability/home.html]
/%

!Tuesday, April 9
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Sound Security Protocol Transformations.__ Binh Thanh Nguyen (ETH Zurich)

We propose a class of protocol transformations, which can be used to
# develop (families of) security protocols by refinement and
# abstract existing protocols to increase the efficiency of verification tools.
We prove the soundness of these transformations with respect to an expressive security property specification language covering secrecy and authentication properties.
Our work clarifies and significantly extends the scope of earlier work in this area.
We illustrate the usefulness of our approach on a family of key establishment protocols.

\\
''14h00, room C005''
/%

!Wednesday, March 27
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Induction in SMT: Seminar day__.

11:00 - 12:00 __Decidability of inductive theorems based on rewriting induction__. Takahito Aoto

Rewriting induction (Reddy, 1990) is a method for proving inductive 
theorems of term rewriting systems (TRSs). It is known that, in general, it 
is undecidable whether a given equation is an inductive theorem of a given TRS. 
Toyama (2002) and Falke and Kapur (2006) gave characterizations of equations 
and TRSs for which it is decidable whether the equation is an inductive theorem 
of the TRS, based on rewriting induction, that is, using rewriting induction 
as the underlying decision procedure. We give an abstract view of this 
approach and try to integrate some known decidable classes.
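As a concrete (and standard) illustration of the notion at stake here, not taken from the talk itself:

{{{
% Standard textbook example of an inductive theorem (illustrative only).
% TRS R for addition on Peano naturals:
\[ 0 + y \to y, \qquad s(x) + y \to s(x + y). \]
% The equation $x + 0 = x$ is not an equational consequence of R, but it holds
% in the initial model (for all ground instances), i.e. it is an inductive
% theorem of R. Rewriting induction proves it by induction on $x$, using the
% instances $0 + 0 = 0$ and $s(x) + 0 = s(x)$ together with the induction
% hypothesis $x + 0 = x$.
}}}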

15:00 - 16:00 __Rewriting Induction + Linear Arithmetic = Decision Procedure__. Stephan Falke.

This talk discusses new results on the decidability of inductive 
validity of conjectures. For these results, a class of term rewrite 
systems (TRSs) with built-in linear integer arithmetic is introduced and 
it is shown how these TRSs can be used in the context of inductive 
theorem proving. The proof method developed for inductive theorem 
proving couples (implicit) inductive reasoning with a decision procedure 
for the theory of linear integer arithmetic with (free) constructors. 
The effectiveness of the new decidability results on a large class of 
conjectures is demonstrated by an evaluation of the prototype 
implementation Sail2.

16:00 - 17:00 __Noetherian Induction for First-Order Reasoning__. Sorin Stratulat.

Noetherian induction is a powerful mathematical principle that allows one to reason about sets with an unbounded number of elements, provided there is a Noetherian ordering that forbids the construction of any infinite strictly decreasing sequence of elements. In first-order logic, we can distinguish two instantiations of it, depending on the kind of elements we reason about: terms or (first-order) formulas. The term-based principles rely on schemata. An induction schema attaches a set of formulas as induction hypotheses (IHs) to another formula, called the induction conclusion, with the intention that the IHs be used in the proof of the induction conclusion. Widely used in proof assistants, it can easily be integrated into sequent-based inference systems as a separate inference rule. The induction reasoning is local, at schema level, and the induction orderings may differ from one schema to another. However, it is not lazy, since the IHs have to be defined (sometimes long) before their use, or are not used at all. Moreover, it cannot naturally manage mutual induction with other formulas from the proof.

The formula-based principles originate from Musser's inductionless induction (or proof by consistency) method, based on the Knuth-Bendix completion algorithm. The reasoning methods that implement them are lazy and allow mutual induction. On the other hand, they are reductive, i.e., the formulas newly derived in the proof have to be smaller than (or equal to), at the ground level, the formula currently processed, and the induction ordering is unique for a given proof. In this talk, we sketch a cycle-based induction method that generalizes and keeps the best features of the above methods. The application of any IH is validated by a cycle of formulas governed by an induction ordering. The reasoning is local, at cycle level, reductive-free, and naturally performs mutual and lazy induction. From a theoretical point of view, our method synthesizes the overall usage of Noetherian induction reasoning in first-order logic. From a practical point of view, it allows for less restrictive specifications, more efficient implementations, and proof certification processes.

\\
''room B013''
/%

!Wednesday, March 27
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Obfuscation by Partial Evaluation of Distorted Interpreters__. Professor Neil Jones (University of Copenhagen, visiting UL - LORIA)


 (with Roberto Giacobazzi, Isabella Mastroeni, Verona; from PEPM 2012)

How to construct a general program obfuscator? We present a novel approach to automatically generating obfuscated code P' from any program P whose source code is given.

Start with a (program-executing) interpreter interp for the language in which P is written. Then “distort” interp so it is still correct, but its specialization P' with respect to P is transformed code that is equivalent  to the original program, but harder to understand or analyze.

Potency of the obfuscator is proved with respect to a general model of the attacker, modeled as an approximate (abstract) interpreter. A systematic approach to distortion is to make program P obscure by transforming it to P' on which (abstract) interpretation is incomplete. Interpreter distortion can be done by making residual in the specialization process sufficiently many interpreter operations to defeat an attacker in extracting sensible information from transformed code. Our method is applied to: code flattening, data-type obfuscation, and opaque predicate insertion. The technique is language independent and can be exploited for designing obfuscating compilers.
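The correctness of this construction can be summarized by the usual specializer (first Futamura projection) equation; the sketch below is a schematic paraphrase of the abstract, not additional material from the paper:

{{{
% Specialization-based obfuscation, schematically. Let $\mathit{spec}$ be a
% specializer (partial evaluator) and $\mathit{interp}_d$ a distorted but still
% correct interpreter, i.e. $[\![\mathit{interp}_d]\!](P, d) = [\![P]\!](d)$
% for all programs $P$ and inputs $d$. Defining the obfuscated program as
% $P' = \mathit{spec}(\mathit{interp}_d, P)$, the specializer equation gives
\[ [\![P']\!](d) \;=\; [\![\mathit{spec}(\mathit{interp}_d, P)]\!](d)
   \;=\; [\![\mathit{interp}_d]\!](P, d) \;=\; [\![P]\!](d), \]
% so $P'$ is functionally equivalent to $P$, while the interpreter operations
% left residual by the distortion make $P'$ harder to analyze.
}}}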

\\
''14h00, room A008''
/%

!Tuesday, March 26
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Multi-Focus Rewrite Rules__. [Christophe Calvès|http://calves.me/christophe/wiki/index.php?title=Main_Page] (UL - LORIA)

Rewrite rules are frequently used to perform program transformations. This includes compilation (optimisations, code generation, ...) and evaluation to specify program semantics, for state exploration or simply interpretation. This talk presents multi-focus rewrite rules and strategies, able to traverse, match and rewrite, in a single rule, several subterms of a subject. These rules are particularly well suited to the specification of programming-language semantics, since it often requires considering several elements (code fragment, memory, ...) to perform actions.

We will demonstrate the modularity and compositionality of our approach with the specification, in TOM (a compiler embedding algebraic terms and rewrite rules in many languages), of the semantics of an ML-like toy language.

\\
''14h00, room A008''
/%


!Thursday, March 21
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
Quantum Day. [Quantum Day|http://www.loria.fr/news/journee-du-quantique-au-loria-le-21-mars] (UL - LORIA)

On Thursday, March 21, a day devoted to quantum computing will take place in room C103.

----
10h00 - Welcome
10h30 - 11h15  Gilles Dowek - TBA
11h15 - 12h00  Pablo Arrighi - Quantum Cellular Automata: structure, universality

14h00 - 14h45  Maarten Van den Nest - Classical simulation complexity of quantum circuits.
14h45 - 15h30  Emmanuel Jeandel - Dynamics of models of computation

16h15 - 16h45  Benoit Valiron - Quipper: a scalable quantum programming language
16h45 - 17h15  Vincent Nesme - Quantum walks and filter banks
17h15 - 17h45  Alejandro Diaz-Caro - Vectorial types, non-determinism and probabilistic systems: towards a computational quantum logic


\\
''10h00-18h00, room C103''
/%


!Tuesday, March 12
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Russian Cards, communication protocols, and cryptography__. [Hans van Ditmarsch|http://personal.us.es/hvd/] (CNRS - LORIA)

Consider the following logic puzzle, known as 'Russian Cards':

 From a pack of seven known cards 0,1,2,3,4,5,6, Alice and Bob each draw
three cards and Eve gets the remaining card. How can Alice and Bob
openly (publicly) inform each other about their cards, without Eve
learning, for any of their cards, who holds it?

Clearly Alice and Bob are required to devise a secure protocol to
communicate their secrets to each other, and Eve is the eavesdropper.
Unlike traditional security settings, initially the three agents share
knowledge: how many cards there are, and that everybody knows his own
cards. This riddle, similar riddles, and things such as schemes for
bidding in Bridge, have played a role in the development of
information-based cryptography. One original publication (not treating
this riddle) is by Fischer & Wright: Bounds on Secret Key Exchange Using
a Random Deal of Cards, Journal of Cryptology, 1996.

I will present and discuss solutions to the riddle. There are several
solutions. I will explain its interest to the dynamic epistemic logic
community: various 'near solutions' can be described in terms of
knowledge, common knowledge, and preservation of knowledge properties
after update (typically, making ignorance public can leak information).
I will also summarily present some recent results, more or less in combinatorial
mathematics. This includes work with my - until recently - collaborators
in Sevilla. Some relevant publications:

* M.H. Albert, R.E.L. Aldred, M.D. Atkinson, H.P. van Ditmarsch, C.C. Handley, Safe communication for card players by combinatorial designs for two-step protocols. Australasian Journal of Combinatorics, 33:33-46, 2005.
* Andrés Cordón-Franco, Hans van Ditmarsch, David Fernández Duque, Joost J. Joosten, Fernando Soler-Toscano. A secure additive protocol for card players. Australasian Journal of Combinatorics 54: 163-175, 2012.
* Colleen Swanson, Douglas Stinson. Combinatorial Solutions Providing Improved Security for the Generalized Russian Cards Problem. Designs, Codes and Cryptography; Published online first November 2012.
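For the curious, here is a small self-contained Python check (illustrative, not part of the talk) of the classical "seven hands" announcement for the deal where Alice holds {0,1,2}: it verifies that Bob can always deduce Alice's hand and that Eve learns, for no card other than her own, who holds it.

{{{
# Illustrative check of a classical solution to the Russian Cards problem.
# Deck 0..6; Alice holds 3 cards, Bob 3, Eve 1. Alice publicly announces a set
# of possible hands containing her real hand. The announcement below (the
# lines of a Fano plane, one of which is Alice's hand {0,1,2}) is checked for:
#  (1) informativity: any possible Bob hand is disjoint from at most one
#      announced hand, so Bob always learns Alice's hand;
#  (2) safety: whatever single card Eve holds, every other card could still be
#      with Alice or with Bob, so Eve learns no card ownership.
from itertools import combinations

DECK = set(range(7))
ANNOUNCEMENT = [frozenset(h) for h in
                [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
                 {1, 4, 6}, {2, 4, 5}, {2, 3, 6}]]

def informative_for_bob():
    for bob in combinations(DECK, 3):
        consistent = [h for h in ANNOUNCEMENT if h.isdisjoint(bob)]
        if len(consistent) > 1:
            return False
    return True

def safe_against_eve():
    for eve in DECK:
        consistent = [h for h in ANNOUNCEMENT if eve not in h]
        for card in DECK - {eve}:
            could_be_alices = any(card in h for h in consistent)
            could_be_bobs = any(card not in h for h in consistent)
            if not (could_be_alices and could_be_bobs):
                return False
    return True

if __name__ == "__main__":
    print("Bob always learns Alice's hand:", informative_for_bob())
    print("Eve learns no card ownership:  ", safe_against_eve())
}}}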


\\
''14h, room A008''
/%
!Friday, March 1
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Deterministic regular expressions__. [Benoît Groz|http://www.cs.tau.ac.il/~groz/index.php] (Université de Tel-Aviv)

Deterministic regular expressions are used in several XML schema languages (DTDs, XML Schema). We present techniques for testing in linear time whether a regular expression is deterministic. We also show that membership of a word in the language of an expression can be decided in linear time for large families of deterministic expressions. Our algorithms build data structures over the syntax tree of the expression, whereas the classical algorithms for these problems rely on the construction of the Glushkov automaton, a quadratic construction when the alphabet size is not bounded.

\\
''14h, room A008''
/%

!Tuesday, February 12
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Parikh Matrices of Words and Their Properties__. Kumbakonam Govindarajan Subramanian (Prof. at Universiti Sains Malaysia)

Words and their combinatorial properties constitute a growing area of research that can be classified under discrete mathematics. A number of fundamental and significant mathematical results have been obtained in this study. The area can also be considered as a study of discrete sequences, or of combinatorial properties of free monoids. A word is a finite or infinite sequence of symbols taken from a finite set, called an alphabet. A subword of a given word w is a subsequence of w. More recently, the notion of a special matrix associated with a word, called the Parikh matrix, has been introduced and investigated. This matrix gives information on the number of certain subwords of a given word. This talk is intended to provide some background and highlight interesting results established by researchers on the topic of Parikh matrices. An extension of the notion of a Parikh matrix to picture arrays, which are rectangular arrangements of symbols, will also be indicated.
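As a minimal illustration of the definition (over the ordered two-letter alphabet a < b, an assumption made for this sketch, not material from the talk), the following Python snippet computes the Parikh matrix of a word as a product of elementary upper-triangular matrices; the entry in the upper-right corner counts the occurrences of "ab" as a scattered subword.

{{{
# Illustrative computation of Parikh matrices over the ordered alphabet a < b.
# The Parikh matrix of a word is the product of elementary (k+1)x(k+1) upper
# unitriangular matrices, one per letter; entry (0, 2) counts occurrences of
# the scattered subword "ab", entries (0, 1) and (1, 2) count |w|_a and |w|_b.
ALPHABET = "ab"
K = len(ALPHABET)

def letter_matrix(letter):
    q = ALPHABET.index(letter)
    m = [[1 if i == j else 0 for j in range(K + 1)] for i in range(K + 1)]
    m[q][q + 1] = 1
    return m

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(K + 1))
             for j in range(K + 1)] for i in range(K + 1)]

def parikh_matrix(word):
    m = [[1 if i == j else 0 for j in range(K + 1)] for i in range(K + 1)]
    for letter in word:
        m = mat_mul(m, letter_matrix(letter))
    return m

if __name__ == "__main__":
    m = parikh_matrix("abab")
    # "abab" has 2 a's, 2 b's and 3 occurrences of "ab" as a scattered subword
    assert (m[0][1], m[1][2], m[0][2]) == (2, 2, 3)
    print(m)
}}}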

\\
''14h, room A008''
/%


!Friday, February 15
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Proofs of copyless message-passing programs in Heap-Hop__. [Etienne Lozes|http://www.lsv.ens-cachan.fr/~lozes/?l=fr] (LSV - Univ. Cassel)

Designing message-passing programs can become difficult when the cost of copying a message from the sender's address space to the receiver's becomes a critical factor for the program's efficiency. One solution to this problem is to use an address space shared by the sender and the receiver. In this approach, copying can be avoided, but sender and receiver must agree on how memory is accessed. Jules Villard and I introduced an extension of separation logic that makes it possible to prove such programs correct with respect to memory accesses. Inspired by the Sing# language, our system introduces a notion of channel contract that guarantees stronger properties than those usually established by separation logic: proved programs are guaranteed free of communication errors (forgotten messages, unexpected messages) and of memory leaks. I will illustrate these ideas with the Heap-Hop tool and give an overview of our ongoing developments on the theory of channel contracts.

\\
''14h, room A008''
/%

!Tuesday, February 5
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Everlasting Security through Quantum Cryptography__. [Dominique Unruh|http://www.cs.ut.ee/~unruh/] (University of Tartu, Tallinn)

Cryptography is a powerful tool for processing confidential data.
Cryptographic protocols are, however, only as secure as the underlying
encryption schemes. And we do not know whether these might not be
broken at some point in the future. This leaves us in a difficult
situation: If we process highly sensitive data using cryptographic
protocols, an attacker might simply record all messages. Should the
underlying encryption scheme be broken in the future, the attacker
will then be able to decrypt all confidential data in retrospect. For
highly confidential data (such as, e.g., medical records) such a
situation is not acceptable.

A way out is "everlasting security". A protocol with everlasting
security guarantees that all data involved in the protocol stays
secure, even if at some point in the future all the underlying
encryption schemes are broken. Unfortunately, with traditional
cryptographic techniques, everlasting security can only be achieved in
very limited situations.

In this talk, we explain how everlasting security can be achieved for
a wide variety of tasks by using quantum cryptography, i.e., by making
use of quantum mechanical effects in the cryptographic protocol.

(No prior knowledge on quantum cryptography is required.)

\\
''14h, room A008''
/%

/%
%%tab-2012

!Tuesday, December 18
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Algorithmic Structuring and Compression of Proofs__. [Stefan Hetzl|http://www.lix.polytechnique.fr/~hetzl/] (INRIA Saclay)

This talk will describe a method for the compression of formal proofs by the
computation of new lemmas. We use tree grammars for an efficient treatment of
the term-structure of a first-order proof. Our algorithm works by first
minimizing a tree grammar which, in a second step, induces a formula that
serves as abbreviating lemma. Finally, I will outline an approach to
extending this work to a method for inductive theorem proving.

\\
''14h, room A008''
/%

!Friday, November 9
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Proving cryptographic schemes in Agda (Dependently typed functional programming for Alice and Bob)__. [Nicolas Pouillard|http://nicolaspouillard.fr/] (DemTech project, IT University of Copenhagen (ITU))

  In order to gain confidence in cryptographic schemes and primitive
    operations, cryptographers use a combination of theories. These
    theories enable us to mathematically prove security properties of
    these constructions. Probability theory, game theory, number theory
    and complexity theory are often all needed at the same time. We aim
    at realizing these proof arguments in Type Theory using Agda. Our
    current approach is to carefully use abstractions and dependently
    typed functional programming techniques to elegantly prove security
    properties. Finally we aim at reducing the usual gap between the
    formal description of the construction and its actual implementation.
    We do so by writing in a functional style supporting extraction into
    low level circuits.


\\
''11h, room A008''
/%

!Tuesday, October 23
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Bisimulations and congruence proofs in higher-order languages__. [Sergueï Lenglet|http://www.loria.fr/~slenglet/] (Université de Lorraine, LORIA)

Bisimilarities are equivalences that relate the terms of a language that have the same behaviour (for a suitable definition of "same behaviour"). To prove that a bisimilarity is sound, one must prove that it is a congruence, i.e., preserved by contexts: if two terms are bisimilar, they remain bisimilar once placed in any context. This is a particularly difficult problem in higher-order languages, such as the lambda-calculus or the higher-order pi-calculus (in which the exchanged messages contain executable processes). For languages derived from the lambda-calculus, there is a systematic method for proving congruence, known as Howe's method. We will see how to adapt Howe's method to the higher-order pi-calculus.

At the end of the talk I will also briefly mention my other work, to give you an idea of my research interests and possibly to initiate collaborations within the department.

\\
''14h, room A008''
/%

!Tuesday, October 16
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Unification in Blind Signatures__. [Serdar Erbatur|http://www.cs.albany.edu/~se/] (SUNY Albany)

Blind signatures are signature schemes that keep the content confidential and have applications in modern cryptography for electronic voting and for digital cash schemes. We study unification modulo axioms that originate from the RSA implementation of blind signatures.

In this talk, we will present one such axiom, called E. The axiom E has an undecidable unification problem. It also specifies that a binary operator distributes synchronously over another binary operator, hence the name ''synchronous distributivity''. Furthermore, it appears to be the weakest equational theory with an undecidable unification problem. Next, we briefly present the axiom F, which is similar to E, and show that it has a decidable unification problem. This last axiom is not related to blind signatures but reflects a property of El Gamal encryption. These are only some of many possible axiomatizations of algebraic properties of blinding schemes. The results we obtained will be useful for analysing protocols with blinding once the algorithms are integrated into protocol analysis tools.

\\
''14h, room A008''
/%

!Tuesday, July 10
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Corruption evidence in electronic voting__. [Mark D. Ryan|http://www.cs.bham.ac.uk/~mdr/] (University of Birmingham)

Coercion-resistance is a fundamental, and strong, property of electronic
voting systems. It states that a voter should be able to cast his vote
as intended, even in presence of a coercer that may try to force him to
cast a different vote. Especially in remote voting, coercion-resistance
is extremely hard to achieve and leads either to schemes of questionable
usability or to trust assumptions that are difficult to meet in practice.

We propose a change of perspective, replacing the requirement of
coercion-resistance with a requirement that we call
"corruption-evidence": there should be public evidence of the amount of
coercion and other corruption that has taken place during a particular
execution of the voting system. Depending on the significance of this
amount, election authorities can decide to consider the election valid
or not, leading to disincentivisation of coercion.

\\
''11h, room A008''
/%

!Tuesday, May 15
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Attack-Defense Tree Methodology for Security Assessment__. [Barbara Kordy|http://satoss.uni.lu/members/barbara/] (Université du Luxembourg)

Attack-defense trees are a novel methodology for graphical security modeling and assessment. They extend the well established formalism of attack trees, by allowing nodes that represent defensive measures to appear at any level of the tree. This extension enlarges the modeling capabilities of attack trees and makes the new formalism suitable for representing interactions between possible attacks and corresponding defensive measures.

This talk will give an overview of the attack-defense tree methodology. After introducing the syntax as well as possible semantics for attack-defense trees, a number of interesting questions, including equivalent representations of security scenarios, quantitative analysis using attack-defense trees and computational complexity of the considered model, will be discussed.
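As background, here is a minimal Python sketch of the kind of bottom-up evaluation used on plain attack trees, the formalism that attack-defense trees extend; the tree and the cost values are invented for illustration and are not taken from the talk.

{{{
# Illustrative bottom-up evaluation of a plain attack tree (no defense nodes):
# the minimal cost of an attack is the minimum over OR-children and the sum
# over AND-children. The tree and costs below are invented for illustration.
def min_attack_cost(node):
    kind = node[0]
    if kind == "leaf":
        return node[2]                                  # ("leaf", name, cost)
    children = node[2]
    costs = [min_attack_cost(c) for c in children]
    return min(costs) if kind == "or" else sum(costs)   # "or" / "and"

STEAL_DATA = ("or", "steal data", [
    ("and", "break in physically", [
        ("leaf", "pick lock", 100),
        ("leaf", "copy disk", 20),
    ]),
    ("or", "attack remotely", [
        ("leaf", "phishing", 30),
        ("leaf", "exploit server", 200),
    ]),
])

if __name__ == "__main__":
    print(min_attack_cost(STEAL_DATA))   # -> 30 (phishing is the cheapest attack)
}}}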

__Bio:__ Barbara Kordy is a postdoctoral researcher at the University of Luxembourg. She is currently leading the ATREES project, which focuses on modeling and evaluating vulnerability scenarios using the attack-defense tree methodology. Her research interests include formal methods for security assessment, privacy and anonymity issues, and the analysis of security protocols. She received her Ph.D. and master's degree in computer science from the University of Orléans in France. She also holds a master's degree in mathematics obtained at the University of Silesia in Poland.

\\
''14:00, amphi C''
/%

!Monday, April 23
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Bisimulations for delimited-control operators__. [Sergueï Lenglet|http://sardes.inrialpes.fr/~slenglet/] (Université Paris-Diderot, PPS)

We present the first results on the definition of a behavioural theory for a lambda-calculus extended with the delimited-control operators shift and reset. We first define a notion of behavioural equivalence, which we then seek to characterize using (applicative and normal-form) bisimulations. We also study the relationship between behavioural equivalence and another equivalence based on a CPS transformation.

\\
''14:00, room A008''
/%


!Thursday, April 5
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Malleability in Modern Cryptography__. [Markulf Kohlweiss|http://research.microsoft.com/en-us/people/markulf/] (Microsoft Research Cambridge)



In recent years, malleable cryptographic primitives have advanced from being seen as a weakness allowing for attacks, to being considered a potentially useful feature. Malleable primitives are cryptographic objects that allow for meaningful computations, as most notably in the example of fully homomorphic encryption. Malleability is, however, a notion that is difficult to capture both in the hand-written and the formal security analysis of protocols.

In my work, I look at malleability from both angles. On one hand, it is a source of worrying attacks that have, e.g., to be mitigated in a verified implementation of the transport layer security (TLS) standard used for securing the Internet. On the other hand, malleability is a feature that helps to build efficient protocols, such as delegatable anonymous credentials and fast and resource friendly proofs of computations for smart metering. We are building a zero-knowledge compiler for a high-level relational language (ZQL), that systematically optimizes and verifies the use of such cryptographic evidence.

We recently discovered that malleability is also applicable to verifiable shuffles, an important building block for universally verifiable, multi-authority election schemes. We construct a publicly verifiable shuffle that for the first time uses one compact proof to prove the correctness of an entire multi-step shuffle. In our work, we examine notions of malleability for non-interactive zero-knowledge (NIZK) proofs. We start by defining a malleable proof system, and then consider ways to meaningfully 'control' the malleability of the proof system. In our shuffle application controlled-malleable proofs allow each mixing authority to take as input a set of encrypted votes and a controlled-malleable NIZK proof that these are a shuffle of the original encrypted votes submitted by the voters; it then permutes and re-randomizes these votes and updates the proof by exploiting its controlled malleability.

__Short Bio:__ I am a postdoc at Microsoft Research Cambridge in the Programming Principles and Tools group. I did my PhD at the COSIC (Computer Security and Industrial Cryptography) group at the K.U.Leuven, and my master thesis at IBM Research Zurich. My research focus is on privacy-enhancing protocols and formal verification of cryptographic protocols.

\\
''14:00, room A006''
/%


!Friday, April 6
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Composing objects and services: a formal model driven development (MDD) approach__. [Paul Gibson|http://www-public.int-evry.fr/~gibson/] (Telecom & Management SudParis)

In this talk, we will present the problem of composing systems and reasoning about the correctness of such compositions. We will present semantics linking objects and services, and discuss the use of formal methods during MDD of systems of services. Different formalisms will be used to highlight the important issues, but we will focus on our ongoing research using Event-B to model requirements, refined into designs, that are to be transformed into OO code (with concrete examples in Java).

\\
''14:00, room A008''
/%


!Friday, March 23
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)

__Tableaux modulo theories using superdeduction: an application to the verification of B proof rules with the Zenon automated theorem prover__. [David Delahaye|http://cedric.cnam.fr/~delahaye/] (CNAM)

We propose a method for building tableaux modulo theories using the principles of superdeduction, in which the theory is used to enrich the deduction system with new deduction rules. This method is presented in the framework of the Zenon automated theorem prover, and we describe an example application to the set theory of the B method. This provides an alternative proof tool to that of Atelier B, which can be used in particular to verify B proof rules. We also present benchmarks in which this prover is able to automatically verify a part of the rules coming from the database of added rules maintained by Siemens IC-MOL.

\\
''14:00, room A008''
/%


!Tuesday, March 20
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Towards Automated Security Proofs of Block-Cipher Based Cryptographic Primitives__. [Martin Gagné|http://www-verimag.imag.fr/~gagne/] (VERIMAG, Université Joseph Fourier)

A block cipher algorithm (e.g. AES, Blowfish, DES, Serpent and  
Twofish) is a symmetric key algorithm that takes a key, a fixed size  
input message block and produces a fixed size output block. Block  
ciphers are among the most ubiquitous algorithms in symmetric  
cryptography, and can be used to construct encryption schemes, message  
authentication codes, collision-resistant hash functions, and many  
other primitives.

As with any other cryptographic construction, we require that these  
primitives be proven secure, usually through a reduction of the  
security of the primitive to the security of the block cipher.

Since block-cipher based primitives are usually constructed using a  
fairly small set of operations, we believe that Hoare logic is an  
ideal tool for automatically proving their security. We present a  
Hoare logic for proving the security of block cipher modes of  
operation. Our logic is fairly simple, requires only three invariants,  
and can be used to prove the security of all the common modes of  
operation.
We also present a second Hoare logic for proving the security of  
message authentication codes.  This logic is much more complicated  
than the first, and we discuss the different challenges posed by this  
primitive.

\\
''14:00, room A008''
/%

!Monday, March 19
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Uninitialized models of computation and dynamical systems__. [Emmanuel Jeandel|http://www.lif.univ-mrs.fr/~ejeandel/] (LIF, Université de Provence)

In this talk, we consider the usual models of computation (Turing machines, counter machines) as dynamical systems, corresponding to the application of their transition function to an arbitrary configuration. In particular, in this context, one must observe their behaviour starting from any configuration, rather than from a given initial configuration.

The problem then becomes entirely different. For instance, the proof of the undecidability of the existence of a configuration on which a Turing machine does not halt (Hooper, 1966) is much more delicate than the undecidability of the halting of a Turing machine from a given configuration (Turing, 1937), and this phenomenon recurs for most models of computation.

After a general presentation of the problem, we will try to illustrate the methods that allow one to solve or circumvent it in the setting of the model of computation given by tilings with Wang tiles, which lends itself very well to a dynamical treatment.

\\
''14:00, room A008''
/%

!Tuesday, March 13
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Specifying Kermeta through Rewriting Logic__. [Christophe Calvès|http://chercheurs.lille.inria.fr/~calves/] (INRIA Lille-Nord Europe)

Kermeta is a Domain-Specific Language designed as a kernel for
metamodel engineering. It unifies metamodeling, constraints,
semantics and transformations into a coherent and statically
typed language. In a nutshell, it has an object-oriented base
similar to Java but with handling of metamodeling aspects such
as attributes, associations, multiplicity, ... and a "design
by contract" approach in the style of Eiffel.

Kermeta enables users to give a semantics to their metamodel.
Unfortunately, even though Kermeta is mature software, it has
not yet been (formally) specified. This gap prevents any
(formal) verification of a metamodel's semantics.

In this talk we will present a specification of Kermeta's
semantics through the formal definition of a compiler from
Kermeta to an object-oriented stack-based abstract machine.
Both the compiler and the abstract machine are defined in
the specification framework K, based on rewriting logic.

We will show the problems raised by Kermeta's particularities
(such as attribute/reference handling, design by contract,
multiple inheritance, genericity, ...), how to formalize them, and
how to compile them correctly.

\\
''14:00, room A008''
/%

/%
%%tab-2011

!Thursday, December 8
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__A Bishop-style constructive theory of posets__. Marian Baroni, University ’Dunarea de Jos’ of Galati (Romania).

Partially ordered sets are examined from the point of view of Bishop’s constructive mathematics, which can be viewed as the constructive core of mathematics and whose theorems can be translated into many formal systems of computable mathematics. The features of Bishop’s constructivism are illustrated by a simple example, the least-upper-bound principle. Unlike the classical case, this statement does not hold constructively. However, the order completeness of the real number set can be expressed constructively by an equivalent condition for the existence of supremum, a condition of order locatedness which is vacuously true in the classical setting. A generalization of this condition leads to a reasonable definition of order completeness for an arbitrary partially ordered set.

\\
''16:00, room A006''
/%

!Tuesday, December 6
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Computability and effective mathematics__. [Antoine Taveneaux|http://www.liafa.jussieu.fr/~taveneaux/], Université Paris Diderot, LIAFA.

In many mathematical theorems, the existential quantifier is not "effective", in the sense that the object whose existence is proved is not computable.

Computability theory is mainly concerned with comparing sets of integers from the point of view of their mutual computability. We will see that one can also compare the effectiveness of theorems and, in some sense, do "computability on theorems".

This talk will be given in French and assumes almost no prerequisites in computability theory.

\\
''14:00, room A006''
/%

!Monday, November 28

%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Running Mixnet-Based Elections with Helios__. [Olivier Pereira|http://www.uclouvain.be/crypto/people/show/10], UCL Crypto Group (Louvain-la-Neuve).

The Helios voting system is an open-audit web-based voting system that 
has been used by various institutions in real-stake elections during 
the last few years. While targeting the simplicity of the election 
workflow, the homomorphic tallying process used in Helios limits its 
suitability for many elections (large number of candidates, specific 
ballot filling rules...). 

We present a variant of Helios that allows an efficient mixnet-based 
tallying procedure, and document the various choices we made in terms 
of election workflow and algorithm selection. In particular, we 
propose a modified version of the TDH2 scheme of Shoup and Gennaro that 
we found particularly suitable for the encryption of the ballots. 

Our Helios variant has been tested during two multi-thousand voter 
elections. The lessons taken from the first of these elections 
motivated some changes into our procedure, which have been successfully 
experimented during the second election. 

This is joint work with Philippe Bulens and Damien Giry.

\\
''10:00, room A008''
/%

!Tuesday, November 15
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Obfuscation by Partial Evaluation of Distorted Interpreters__. [Roberto Giacobazzi|http://profs.sci.univr.it/~giaco], Univ. Verona (Italy).

Joint work with Neil D. Jones and Isabella Mastroeni.

How to construct a general program obfuscator? We present a novel
approach to automatically generating obfuscated code P' from any
program P whose source code is given. Start with a (program-executing)
interpreter Interp for the language in which P is written. Then
“distort” Interp so it is still correct, but its specialization P' with
respect to P is transformed code that is equivalent to the original
program, but harder to understand or analyze. Potency of the obfuscator
is proved with respect to a general model of the attacker,
modeled as an approximate (abstract) interpreter. A systematic approach
to distortion is to make program P obscure by transforming
it to P' on which (abstract) interpretation is incomplete. Interpreter
distortion can be done by making residual in the specialization
process sufficiently many interpreter operations to defeat an
attacker in extracting sensible information from transformed code.
Our method is applied to: code flattening, data-type obfuscation,
and opaque predicate insertion. The technique is language independent
and can be exploited for designing obfuscating compilers.

\\
''10:30, room B011''
/%

%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__A decision procedure for trace equivalence__. [Vincent Cheval|http://www.lsv.ens-cachan.fr/~cheval/], LSV (Cachan).

We consider security properties of cryptographic protocols that can be modeled using the notion of trace equivalence. The notion of equivalence is crucial when specifying privacy-type properties, like anonymity, vote-privacy, and unlinkability.

In this paper, we give a calculus that is close to the applied pi calculus and that allows one to capture most existing protocols that rely on classical cryptographic primitives. First, we propose a symbolic semantics for our calculus relying on constraint systems to represent infinite sets of possible traces, and we reduce the decidability of trace equivalence to deciding a notion of symbolic equivalence between sets of constraint systems. Second, we develop an algorithm allowing us to decide whether two sets of constraint systems are in symbolic equivalence or not. Altogether, this yields the first decidability result of trace equivalence for a general class of processes that may involve else branches and/or private channels (for a bounded number of sessions).

\\
''14:00, room B011''
/%

!Tuesday, October 18
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Kolmogorov complexity and "no information" notion__. [Alexander Shen|http://www.lif.univ-mrs.fr/~ashen/], CNRS, LIRMM (Montpellier).

There are many ways to formalize the intuitive idea of "a message that has no information" or "a message has full information" about something. One can use logic of knowledge, Shannon's information theory, consider protocols (deterministic, non-deterministic, probabilistic etc.). We discuss these approaches and then switch to the algorithmic information theory approach and explain A. Muchnik's result about (kind of) cryptography framework in algorithmic information theory.

\\
''14:00, room C005 (salle du conseil)''
/%

!Friday, June 10
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Dynamic logic and its parents__. [Daniel Leivant|https://www.cs.indiana.edu/~leivant/], Indiana University.

  Modal deductive formalisms about imperative programs, such as
  dynamic logic, can be seen as syntactic sugar for more explicit
  forms of reasoning, such as first order theories and second order
  logic. This view suggests notions of completeness for dynamic logic,
  which we argue are more informative and appropriate than the traditional notion of relative completeness.

\\
''14:00, room C103''
/%

!Wednesday, May 11
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Dynamic security policies__. [Frederic Prost|http://membres-lig.imag.fr/prost/], LIG (Grenoble).

The non-interference property, which consists in proving that two parts of a program do not interact with each other, is the basis of many program analyses: parallelization, program slicing, dead-code elimination, etc. In the area of confidentiality, the goal is to show that there is no information flow from "private" to "public". This property is, however, too strict to account for real applications: for instance, when checking a password there is formally an interaction between the private domain (the value of the password) and the public domain (access is granted or not, depending on the code entered). Such information leaks are nevertheless considered acceptable. We introduce a notion of confidentiality policy in the form of associated rewrite rules (which can be seen as a finer-grained typing) to specify that such leaks are "allowed", and to verify that a program is correct with respect to a security policy. Likewise, just as a bank card is blocked after three unsuccessful attempts, we show how, by adding actions to the rules defining the security policy, the policy can be modified during the computation, and how to verify that the program remains valid with respect to its security policy.
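To make the password example concrete, here is a tiny Python illustration (mine, not the speaker's formalism): the function below violates strict non-interference, since its public result depends on the secret, yet this one-bit leak is exactly the kind a declassification policy would explicitly allow.

{{{
# Tiny illustration of the password example: strict non-interference forbids
# any flow from the secret to the public output, yet the boolean result of a
# password check necessarily depends on the secret. A declassification policy
# would explicitly allow this one-bit flow and nothing more.
SECRET_PASSWORD = "hunter2"          # high (private) input

def check_password(guess: str) -> bool:
    # public output depends on the secret: formally an interference
    return guess == SECRET_PASSWORD

if __name__ == "__main__":
    print(check_password("letmein"))   # False: the attacker learns one bit
    print(check_password("hunter2"))   # True
}}}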

\\
''15:30, room B013''
/%

!Tuesday, May 10
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__An Introduction to Program Equivalence__. [Vasileios Koutavas|http://www.scss.tcd.ie/Vasileios.Koutavas/], Trinity College (Dublin).

Contextual equivalence is a relation between program terms which guarantees that related terms can replace each other inside any system whatsoever, without changing the behaviour of that system. This relation reveals fundamental symmetries of program terms that abstract away from concrete syntax and illuminate the semantics of programming languages. Its implications span programming language design, software engineering, system security, and program verification.

To understand contextual equivalence, and to develop reasoning principles for it, we need a theory that can express the basic interactions between a program and its environment, and then express contextual equivalence within that theory. In this talk we will review the theories available today and then concentrate on those based on bisimulation, which we will examine for a variety of functional, imperative, and nondeterministic languages.
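
As a minimal illustration of the definition (a textbook-style example, not taken from the talk), consider a pure functional language:

{{{
\lambda x.\, x + x \;\;\simeq_{\mathrm{ctx}}\;\; \lambda x.\, 2 \times x
    % no context can distinguish them
\lambda x.\, x + x \;\;\not\simeq_{\mathrm{ctx}}\;\; \lambda x.\, x \times x
    % the context C[-] = [-]\,3 evaluates to 6 on the left and 9 on the right
}}}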

\\
''10:00, room A008''
/%

!Tuesday, April 5
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Constraint-based assistance for model design__. [Marie de Roquemaurel|http://www.irit.fr/~Marie.Deroquemaurel/index.html], IRIT, Université de Toulouse III.

Embedded computer systems are becoming ubiquitous in many domains, and more and more industrial users seek to model their real-world problems, to check the consistency of their models, and to use efficient tools to solve them.
In this work, we consider problems modelled with the tools currently used by the model-driven engineering community, such as the EMF framework (Eclipse Modeling Framework) to represent the embedded system, or the OCL constraint language (Object Constraint Language) to express business properties. To check these properties efficiently, the chosen approach transforms the problem into a constraint satisfaction problem and applies efficient Artificial Intelligence techniques to solve it. The first experimental results are promising.

In this talk, I will present this approach, and we will see that it offers the advantage of supporting constraint validation and the completion of partial models, while also being able to take a minimization criterion into account.

Keywords: system design, model-driven engineering, model transformation, synthesis, constraints
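
To make the encoding idea above concrete, here is a toy Python sketch (the attribute names, domains and invariants are hypothetical and purely illustrative; the talk's tool chain is EMF/OCL-based): unknown attributes of a partial model become CSP variables, OCL-style invariants become constraints, and an optional criterion is minimized over the consistent completions.

{{{
# Toy illustration of the encoding idea (hypothetical names and invariants, not the talk's tool).
from itertools import product

domains = {                        # finite domains for the unknown attributes of a partial model
    "taskA_period": [10, 20, 50],
    "taskB_period": [10, 20, 50],
    "bus_bandwidth": [1, 2, 4],
}

constraints = [                    # translated from OCL-like business invariants
    lambda m: m["taskA_period"] <= m["taskB_period"],
    lambda m: m["bus_bandwidth"] * m["taskA_period"] >= 40,
]

def complete_model(cost=lambda m: m["bus_bandwidth"]):
    """Brute-force search: return the cheapest completion satisfying every constraint."""
    names = list(domains)
    best = None
    for values in product(*(domains[n] for n in names)):
        model = dict(zip(names, values))
        if all(check(model) for check in constraints) and (best is None or cost(model) < cost(best)):
            best = model
    return best

print(complete_model())            # cheapest completion consistent with all invariants
}}}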

\\
''14:00, room A006''
/%

!Tuesday, February 15
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Decidable Logics Combining Heap Structures and Data__. [Gennaro Parlato|http://www.dia.unisa.it/~parlato/], CNRS, LIAFA (Paris). 

We define a new logic, STRAND, that allows reasoning with heap-manipulating programs using deductive verification and SMT solvers. STRAND logic (“STRucture ANd Data” logic) formulas express constraints involving heap structures and the data they contain; they are defined over a class of pointer-structures R defined using MSO-defined relations over trees, and are of the form ∃x∀y.φ(x, y), where φ is a monadic second-order logic (MSO) formula with additional quantification that combines structural constraints as well as data-constraints, but where the data-constraints are only allowed to refer to x and y.
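
To illustrate the ∃x∀y shape (an example in the spirit of the logic, not necessarily one from the talk), sortedness of a singly linked list pointed to by head can be written with an empty existential block and two universally quantified nodes:

{{{
\forall y_1\, y_2.\;\bigl( head \rightarrow^{*} y_1 \;\wedge\; y_1 \rightarrow^{*} y_2 \bigr)
      \;\Rightarrow\; data(y_1) \le data(y_2)
% structural part: reachability along the next pointer (handled by MSO over the heap structure)
% data part: the comparison data(y_1) <= data(y_2), which mentions only the quantified variables
}}}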

The salient aspects of the logic are: (a) the logic is powerful, allowing existential and universal quantification over the nodes, and complex combinations of data and structural constraints; (b) checking Hoare-triples for linear blocks of statements with pre-conditions and post-conditions expressed as Boolean combinations of the existential and universal STRAND formulas reduces to satisfiability of a STRAND formula; (c) there are powerful decidable fragments of STRAND, one semantically defined and one syntactically defined, where the decision procedure works by combining the theory of MSO over trees and the quantifier-free theory of the underlying data-logic.

We demonstrate the effectiveness and practicality of the logic by checking verification conditions generated in proving properties of several heap-manipulating programs, using a tool that combines an MSO decision procedure over trees (MONA) with an SMT solver for integer constraints (Z3). 

\\
''14:00, room A008'' 
/%


!Tuesday, February 8
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Linear Dependent Types for Certified Resource Consumption__. [Marco Gaboardi|http://www.cs.unibo.it/~gaboardi/], Università di Bologna.

I present the ideas underlying the design of an interactive programming framework for ensuring the reliability and correctness of programs with respect to their resource consumption. The key ingredient of this framework is a type system, dlPCF, mixing ideas from linear logic and dependent types. I introduce dlPCF and show that it is sound and relatively complete. Completeness holds in a strong sense: dlPCF is not only able to precisely capture the functional behaviour of programs (i.e. how the output relates to the input) but also some of their intensional properties, namely the complexity of their evaluation. Additionally, dlPCF is parametrized on the underlying language of index terms, which can be tuned so as to sacrifice completeness for tractability.

\\
''14:00, room A008''
/%

!Tuesday, January 25
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Formal analysis of security protocols: the case of electronic voting__. [Steve Kremer|http://www.loria.fr/~skremer/], INRIA, LSV (Cachan).

Security protocols are distributed programs which aim at establishing security guarantees, such as confidentiality, authentication or anonymity, while being executed over an untrusted network. Nowadays, security protocols are present in many aspects of our everyday life: electronic commerce, mobile phones, e-banking, etc. Symbolic methods, which often come with automated tool support, have proven extremely useful for finding structural flaws in such protocols or for proving their absence.

In this talk we report on our efforts over the last years to apply such techniques to electronic voting protocols. The specificities of electronic voting raise several challenges: these protocols make use of esoteric cryptographic primitives and need to guarantee complex security properties, such as receipt-freeness or verifiability. We have modelled and analyzed such protocols and their properties in the applied pi calculus, a formal language for expressing security protocols. Formally defining the protocols and properties allowed us to highlight many (hidden) assumptions and turned out to be a non-trivial task in itself. Regarding automated analysis, this line of work raised a number of challenges in the formal analysis of security protocols, some of which are still the subject of active research.

\\
''14:00, room A008''
/%

!Tuesday, January 18
%%(border: 1px solid #aaa; margin:1em 2.5%; padding:0.5em; background:#f9f9f9)
__Solving Simple Stochastic Tail Games__. [Florian Horn|http://liafa.jussieu.fr/~horn/], CNRS, LIAFA (Paris).

Stochastic games are a natural model for open reactive processes: one player represents the controller and his opponent represents a hostile environment. The evolution of the system depends on the decisions of the players, supplemented by random transitions. There are two main algorithmic problems on such games: computing the values (quantitative analysis) and deciding whether a player can win with probability 1 (qualitative analysis). In this talk we reduce the quantitative analysis to the qualitative one: we provide an algorithm for computing values which uses qualitative analysis as a sub-procedure. The correctness proof of this algorithm reveals several nice properties of perfect-information stochastic tail games, in particular the existence of optimal strategies.
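
For orientation, the two analyses refer to the following standard quantities (generic notation, which may differ from the talk's):

{{{
% Quantitative analysis: the value of vertex v for the protagonist
\mathrm{val}(v) \;=\; \sup_{\sigma}\;\inf_{\tau}\;\Pr^{\,v}_{\sigma,\tau}(\mathrm{Win})
% Qualitative analysis: v is almost-surely winning iff
\exists \sigma.\;\; \inf_{\tau}\;\Pr^{\,v}_{\sigma,\tau}(\mathrm{Win}) \;=\; 1
}}}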

\\
''14:00, room A008''
/%

/%

%%tab-Calendar

[{IFramePlugin url='https://zimbra.inria.fr/service/user/mathieu.hoyrup@inria.fr/FMSeminar.html' width='100%' height='800' border='1' scrolling='yes' align='center'}]

Here is the [link|https://zimbra.inria.fr/service/user/mathieu.hoyrup@inria.fr/FMSeminar.html] to the Calendar of the seminar.

To add this calendar to your Zimbra calendars, in your Zimbra account:
# right click on 'Calendars',
# select 'Link to shared calendar',
# enter <img src="http://www.loria.fr/~hoyrup/addr.png" align='top'> as the email address and 'FMSeminar' as the path (you can change the name and color).

You can also use the [ics file|https://zimbra.inria.fr/service/user/mathieu.hoyrup@inria.fr/FMSeminar].
/%

%%tab-Misc
__How to reach LORIA__.

See explanations [here|http://www.loria.fr/le-loria-1-en/infos-pratiques?set_language=en] and a map [there|http://maps.google.fr/maps/ms?msid=204772468194119636478.0004926725347aec99d8a&msa=0].

__Doctoral training__.

The FM Seminar is part of the doctoral training offered by the doctoral school ED IAEM.


Each PhD student must print [this form|http://www.loria.fr/~hoyrup/validation.pdf] once and bring it to the seminars to be validated by the organizer. 

Other seminars may be part of the doctoral training (ask the corresponding organizers).
/%

/%