WFLP 2016 Talk Schedule and Pre-Proceedings
Invited Talk
Tuesday, 10:20–11:20
Declarative Model Transformations with Triple Graph Grammars (slides)
Anthony Anjorin
Abstract:
Triple Graph Grammars (TGGs) provide a rule-based means of
specifying a consistency relation over two graph languages. TGG
rules are declarative in the sense that they are both
direction-agnostic and pattern-based, characterising all consistent pairs of
graphs without fixing the order in which rules are to be applied to
restore consistency.
A declarative specification of consistency is advantageous as very
different tools can be derived automatically from a TGG, including an
instance generator, consistent forward and backward transformations,
and incrementally working synchronisers, which are able to realise
forward and backward change propagation without incurring
unnecessary information loss.
This talk will present TGGs as a pragmatic implementation of the
more abstract symmetric delta lens framework for bidirectional
(model) transformations (bx), touching on the main strengths,
limitations, and future challenges of the approach in comparison to
other bx languages.
Contributed Papers
The electronic versions of accepted submissions below constitute the informal pre-proceedings of WFLP 2016.
Formal post-proceedings were published later in EPTCS.
Session 1
chair: Baltasar Trancón y Widemann
Tuesday, 11:40–12:10
A Typeful Integration of SQL into Curry (paper, slides)
Michael Hanus and Julia Krone
Abstract:
We present an extension of the declarative programming language
Curry to support access to data stored in relational databases
via SQL. Since Curry is statically typed, our emphasis in this SQL
integration is on type safety. Our extension respects the type
system of Curry so that run-time errors due to ill-typed data are
avoided. This is achieved by preprocessing SQL statements at compile
time and translating them into type-safe database access
operations. As a consequence, the type checker of the Curry system
can spot type errors in SQL statements at compile time. To generate
appropriately typed access operations, the preprocessor uses an
entity-relationship (ER) model describing the structure of the
relational data. In addition to standard SQL, SQL statements
embedded in Curry can include program expressions and also
relationships specified in the ER model. The latter feature is
useful to avoid the error-prone use of foreign keys. As a result,
our SQL integration supports a high-level and type-safe access to
databases in Curry programs.
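To give a flavour of the kind of operation such a preprocessor generates,
here is a minimal Haskell sketch (the paper works in Curry; the names
selectStudentsOlderThan and fetchStudentRows are invented for this
illustration): the embedded query becomes a typed access operation, so
ill-typed uses of columns are rejected by the host type checker.

-- Illustrative sketch only; not the paper's Curry syntax or generated code.
data Student = Student { name :: String, age :: Int } deriving Show

-- Hypothetical operation a preprocessor could generate for
-- "SELECT * FROM Student WHERE Age > x".
selectStudentsOlderThan :: Int -> IO [Student]
selectStudentsOlderThan x = do
  rows <- fetchStudentRows                 -- assumed low-level database access
  return [ s | s <- rows, age s > x ]

-- Stub standing in for the generated low-level access layer.
fetchStudentRows :: IO [Student]
fetchStudentRows = return [Student "Alice" 22, Student "Bob" 19]

-- A call like  selectStudentsOlderThan "20"  is a compile-time type error.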
Tuesday, 12:10–12:40
A simulation tool for tccp programs (paper, slides)
María-del-Mar Gallardo, Leticia Lavado and Laura Panizo
Abstract:
The Timed Concurrent Constraint Language tccp is a declarative
synchronous concurrent language, particularly suitable for modelling
reactive systems. In tccp, agents communicate and synchronise through
a global constraint store. The language supports a notion of discrete
time that allows all non-blocked agents to proceed with their
execution simultaneously.
In this paper, we present a modular architecture for the simulation
of tccp programs. The tool comprises three main components. First, a
set of basic abstract instructions that model the behaviour of tccp
agents, together with the memory model needed to manage the active
agents and the state of the store during execution. Second, the agent
interpreter, which executes the instructions of the current agent
iteratively and computes the new agents to be executed at the next
time instant. Finally, the constraint solver components, the modules
that deal with constraints. We describe the implementation of these
components and present an example of a real system modelled in tccp.
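As a rough illustration of the execution model only (not the tool's actual
architecture), the following Haskell sketch reduces the store to a set of
facts and lets every non-blocked agent proceed at each discrete time
instant; the Agent type and the step function are assumptions made for
this example.

import qualified Data.Set as Set

type Store = Set.Set String                -- the global store, reduced to facts

data Agent
  = Tell String                            -- add a constraint to the store
  | Ask String Agent                       -- block until entailed, then continue
  | Skip

-- One time instant: every non-blocked agent proceeds simultaneously;
-- asks that are not yet entailed stay blocked for the next instant.
step :: (Store, [Agent]) -> (Store, [Agent])
step (store, agents) = (Set.unions (store : news), concat rests)
  where
    (news, rests) = unzip (map stepAgent agents)
    stepAgent (Tell c) = (Set.singleton c, [])
    stepAgent (Ask c k)
      | c `Set.member` store = (Set.empty, [k])
      | otherwise            = (Set.empty, [Ask c k])
    stepAgent Skip = (Set.empty, [])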
Lunch
Session 2
chair: Michael Hanus
Tuesday, 14:00–14:30
Selene: A Generic Framework for Model Checking Concurrent Programs from Their Semantics in Maude (paper, slides)
Adrián Riesco and Gorka Suárez-García
Abstract:
Model checking is an automatic technique for verifying whether some
properties hold in a concurrent system. Maude is a high-performance
logical framework where other systems can be easily specified,
executed, and analyzed. Moreover, Maude includes a model checker for
checking properties expressed in Linear Temporal Logic. However,
when a property of a program written in a programming language
specified in Maude does not hold, the counterexample generated by
this system refers to the Maude semantics, which can be difficult to
follow.
In this paper we present Selene, a generic framework for dealing
with asynchronous concurrent systems that allows users to manipulate
the counterexample generated by the Maude model checker to relate it
to the program being analyzed. This is achieved by providing a
kernel for dealing with messages and memory, which are later handled
in the counterexample; the user can specify the details of his
semantics on top of this kernel.
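The kernel idea can be pictured with a small Haskell sketch (Selene itself
is written in Maude; all names below are invented for this illustration):
the framework fixes how memory and messages are represented and threaded
through the execution, while the user plugs in the semantics of individual
statements.

import qualified Data.Map as Map

type Memory  = Map.Map String Int          -- shared memory handled by the kernel
type Message = (String, Int)               -- (channel, payload)

data System stmt = System
  { memory   :: Memory
  , messages :: [Message]
  , threads  :: [[stmt]]                   -- remaining program of each process
  }

-- The user supplies the semantics of one statement; the kernel threads the
-- memory and the message pool through the execution.
type Semantics stmt = stmt -> Memory -> [Message] -> (Memory, [Message])

stepThread :: Semantics stmt -> System stmt -> Int -> System stmt
stepThread sem sys i =
  case splitAt i (threads sys) of
    (before, (s:rest) : after) ->
      let (mem', msgs') = sem s (memory sys) (messages sys)
      in  sys { memory = mem', messages = msgs', threads = before ++ rest : after }
    _ -> sys                               -- thread finished or index out of range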
Tuesday, 14:30–15:00
A Framework for Extending microKanren with Constraints (paper, slides)
Jason Hemann and Daniel P. Friedman
Abstract:
We present a framework for building CLP languages with symbolic
constraints based on microKanren, a domain-specific logic language
shallowly embedded in Racket. A language designer provides the names
and violation conditions of atomic constraints. We rely on Racket's
macro system to generate a black-box constraint solver and other
components of the microKanren embedding. The framework itself and
the implementation of common Kanren constraints amount to just over
100 lines of code. Our framework is both a teachable implementation
of constraint logic programming and a test-bed and prototyping tool
for constraint systems.
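To make the "names plus violation conditions" recipe concrete, here is a
small Haskell sketch (the actual framework consists of Racket macros;
Constraint, diseq and consistent are names invented for this illustration):
an atomic constraint pairs a name with a violation condition over the
current substitution, and the derived solver merely checks that no
violation condition holds.

import qualified Data.Map as Map

type Var   = String
data Term  = V Var | Atom String deriving (Eq, Show)
type Subst = Map.Map Var Term

-- Resolve a term as far as possible under the current substitution.
walk :: Subst -> Term -> Term
walk s (V v) = maybe (V v) (walk s) (Map.lookup v s)
walk _ t     = t

-- An atomic constraint is a name plus a violation condition on the store.
data Constraint = Constraint { cname :: String, violated :: Subst -> Bool }

-- Example: a disequality constraint t1 =/= t2 is violated once both terms
-- resolve to the same thing under the substitution.
diseq :: Term -> Term -> Constraint
diseq t1 t2 = Constraint "=/=" (\s -> walk s t1 == walk s t2)

-- The derived black-box check: a state is consistent if nothing is violated.
consistent :: Subst -> [Constraint] -> Bool
consistent s = not . any (`violated` s)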
Tuesday, 15:00–15:30
Using Haskell for a Declarative Implementation of System Z Inference (paper)
Steven Kutsch and Christoph Beierle
Abstract:
Qualitative conditionals of the form "if A then usually B" are a
powerful means in knowledge representation, establishing a plausible
relationship between A and B. When reasoning based on conditional
knowledge consisting of a set of conditionals, a rich structure
going beyond classical logic is required, e.g. ranking functions
that assign a degree of implausibility to each possible
world. System Z is a popular approach, using a unique partitioning
of the knowledge base to generate the Pareto-minimal ranking
function. This ranking function is used to answer questions
plausibly based on the conditionals in the knowledge base. In this
paper, we describe a Haskell implementation of System Z. To keep the
Haskell code as close as possible to the formal definition of System
Z, we make extensive use of language features such as list
comprehensions and higher-order functions. For example, these are
used to generate the required partition of the knowledge base or to
represent the induced ranking function. The described system is used
as a backend in the conditional reasoning tool InfOCF.
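The construction sketched in the abstract can be written down compactly;
the following Haskell fragment is a minimal sketch in the same
list-comprehension style, not the authors' code (worlds are simply truth
assignments over a finite list of atoms, and names such as zPartition and
kappaZ are chosen for this illustration).

import Data.List (partition)

type World       = [(String, Bool)]        -- a truth assignment for the atoms
type Formula     = World -> Bool
data Conditional = Conditional { ante, cons :: Formula }  -- "if A then usually B"

-- All worlds over a finite list of atoms.
worlds :: [String] -> [World]
worlds []     = [[]]
worlds (a:as) = [ (a, b) : w | b <- [True, False], w <- worlds as ]

verifies, falsifies :: World -> Conditional -> Bool
verifies  w (Conditional a b) = a w && b w
falsifies w (Conditional a b) = a w && not (b w)

-- A conditional is tolerated by a set of conditionals if some world
-- verifies it while falsifying none of the set.
tolerated :: [World] -> [Conditional] -> Conditional -> Bool
tolerated ws kb c = or [ verifies w c && all (not . falsifies w) kb | w <- ws ]

-- The ordered partition (Delta_0, Delta_1, ...) of a consistent knowledge base.
zPartition :: [World] -> [Conditional] -> [[Conditional]]
zPartition _  [] = []
zPartition ws kb
  | null tol  = [kb]                       -- inconsistent knowledge base
  | otherwise = tol : zPartition ws rest
  where (tol, rest) = partition (tolerated ws kb) kb

-- The induced ranking function: rank 0 for worlds falsifying nothing,
-- otherwise 1 + the highest partition index among the falsified conditionals.
kappaZ :: [World] -> [Conditional] -> World -> Int
kappaZ ws kb w
  | null hit  = 0
  | otherwise = 1 + maximum hit
  where
    hit = [ i | (i, layer) <- zip [0 ..] (zPartition ws kb)
              , any (falsifies w) layer ]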
Excursion and Dinner
Session 3
chair: Sebastiaan Joosten
Wednesday, 09:00–09:30
A Practical Study of Control in Object-Oriented–Functional–Logic Programming with Paisley (paper, slides)
Baltasar Trancón y Widemann and Markus Lepper
Abstract:
Paisley is an extensible lightweight embedded domain-specific
language for nondeterministic pattern matching in Java. Using simple
APIs and programming idioms, it brings the power of functional-logic
processing of arbitrary data objects to the Java platform, without
constraining the underlying object-oriented semantics. Here we
present an extension to the Paisley framework that adds
pattern-based control flow. It exploits recent additions to the Java
language, namely functional interfaces and lambda expressions, for
an explicit and transparent continuation-passing style approach to
control. We evaluate the practical impact of the novel features on a
real-world case study that reengineers a third-party open-source
project to use Paisley in place of conventional object-oriented data
query idioms. We find the approach viable for incremental
refactoring of legacy code, with significant qualitative
improvements regarding separation of concerns, clarity and
intentionality, thus making for easier code understanding, testing
and debugging.
Wednesday, 09:30–10:00
An Agglomeration Law for Sorting Networks and its Application in Functional Programming (paper)
Lukas Immanuel Schiller
Abstract:
In this paper we present a general agglomeration law for sorting
networks. Agglomeration is a common technique in the design of
parallel programs: it controls the granularity of the computation and
thereby finds a better fit between the algorithm and the machine on
which the algorithm runs. Usually this is done by grouping smaller
tasks and computing them en bloc within one parallel process. In the
case of sorting networks this could be done by computing bigger parts
of the network within one process. The agglomeration law in this
paper pursues a different strategy: the input data is grouped and the
algorithm is generalized to work on the agglomerated input while the
original structure of the algorithm remains. This results in a new
way of using sorting networks that is well suited for efficient
parallelization on modern multicore computers, computer networks or
GPGPU programming. Additionally, it enables us to use sorting
networks as (parallel or distributed) merging stages for arbitrary
sorting algorithms, and thereby to build new hybrid sorting
algorithms with ease. The expressiveness of functional programming
languages helps us to apply this law to systematically constructed
sorting networks, leading to efficient and easily adaptable sorting
algorithms. An application example, written in the Eden programming
language, shows the effectiveness of this law.
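The grouping idea can be illustrated with a short Haskell sketch (assumed
names, not the paper's Eden code): the two-element compare-exchange of a
sorting network generalises to sorted blocks by merging two blocks and
splitting the result into a lower and an upper half, so the structure of
the network stays untouched.

-- Ordinary comparator of a sorting network.
compareExchange :: Ord a => (a, a) -> (a, a)
compareExchange (x, y) = (min x y, max x y)

-- Agglomerated comparator: both inputs are sorted blocks; the outputs are
-- the lower and upper halves of their merge, so the same network now acts
-- as a merging stage for blocks.
blockCompareExchange :: Ord a => ([a], [a]) -> ([a], [a])
blockCompareExchange (xs, ys) = splitAt (length xs) (merge xs ys)
  where
    merge [] bs = bs
    merge as [] = as
    merge (a:as) (b:bs)
      | a <= b    = a : merge as (b:bs)
      | otherwise = b : merge (a:as) bs

-- blockCompareExchange ([1,4,7], [2,3,9])  ==  ([1,2,3], [4,7,9])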
Wednesday, 10:00–10:30
Proving Non-Deterministic Computations in Agda (paper, slides)
Sergio Antoy, Michael Hanus and Steven Libby
Abstract:
We investigate proving properties of Curry programs using
Agda. First, we address the functional correctness of Curry
functions that, apart from some syntactic and semantic differences,
are in the intersection of the two languages. Second, we use Agda to
model non-deterministic functions with two distinct, competing
approaches to incorporating the non-determinism. The first approach
eliminates non-determinism by considering the set of all
non-deterministic values produced by an application. The second
approach encodes every non-deterministic choice that the application
could perform. We consider our initial experiment a
success. Although proving properties of programs is a notoriously
difficult task, the functional logic paradigm does not seem to add
any significant layer of difficulty or complexity to the task.
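The two modelling styles can be summarised in a few lines of Haskell (the
paper itself works in Agda; the names below are illustrative), taking a
non-deterministic expression such as Curry's coin = 0 ? 1: in the first
approach it denotes the collection of all its values, in the second every
choice point is explicit and a value is obtained only once all choices
have been fixed.

-- Approach 1: a non-deterministic expression denotes all of its values.
coinValues :: [Int]
coinValues = [0, 1]

-- Approach 2: every choice is encoded explicitly; a value is obtained by
-- supplying the decision taken at each occurrence of (?).
data Choice = L | R

coinChoice :: Choice -> Int
coinChoice L = 0
coinChoice R = 1

-- A property can then be stated per interpretation, e.g. that every value
-- of coin is at most 1.
propValues :: Bool
propValues = all (<= 1) coinValues

propChoice :: Bool
propChoice = all ((<= 1) . coinChoice) [L, R]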