An important trade-off in the design of PPLs is between the expressiveness of the language of generative models and the power and efficiency of the available inference methods. This trade-off forces users to choose a different language, and associated toolchain, depending on the particular problem they are trying to solve. Moreover, even knowing where a particular problem lies on the continuum between expressiveness and efficiency, or where it is likely to lie as the problem evolves, can require non-trivial expertise in machine learning.

In this talk, we will describe ongoing work on GraPPa, the Galois Probabilistic Programming language, which addresses this concern. GraPPa, implemented as an embedded domain-specific language (EDSL) in Haskell, is a single PPL that lets users choose where each model lies on the continuum between expressiveness and efficiency, simply by choosing which sorts of random variables to use in a model. The key technical idea enabling this approach is an encoding, in the type of each model, of the set of random variables and associated distributions used in that model. This encoding is compositional: a model with random variables in one set can be combined with a model with random variables in another set, and the type of the resulting model contains the union of the two sets.
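To make the type-level encoding concrete, here is a minimal sketch of the idea in Haskell. The names below (`DistKind`, `Model`, `Union`, `combine`) are illustrative assumptions, not GraPPa's actual API: each primitive model records its distribution family in its type, and composing two models takes the union (here, a simple duplicate-preserving append) of their distribution sets.

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, KindSignatures #-}

module Main where

-- Type-level tags for the distribution families a model may use.
data DistKind = Normal | Categorical

-- Type-level union of two sets of distribution tags. For simplicity
-- this is a duplicate-preserving append, which is enough to show how
-- composition accumulates the distributions used.
type family Union (xs :: [DistKind]) (ys :: [DistKind]) :: [DistKind] where
  Union '[]       ys = ys
  Union (x ': xs) ys = x ': Union xs ys

-- A model indexed by the set of distributions it draws from. The
-- payload here is just a pure value; a real PPL would carry an actual
-- probabilistic computation.
newtype Model (ds :: [DistKind]) a = Model a

-- Primitive models each record their distribution family in the type.
normal :: Double -> Double -> Model '[ 'Normal ] Double
normal mu _sigma = Model mu  -- sketch only: returns the mean, no sampling

categorical :: [Double] -> Model '[ 'Categorical ] Int
categorical _ws = Model 0    -- sketch only: always returns index 0

-- Composing two models unions their distribution sets in the type.
combine :: Model ds a -> Model es b -> Model (Union ds es) (a, b)
combine (Model x) (Model y) = Model (x, y)

main :: IO ()
main =
  let Model p = combine (normal 0 1) (categorical [0.5, 0.5])
                  :: Model '[ 'Normal, 'Categorical ] (Double, Int)
  in print p  -- prints (0.0,0)
```

Note that the type annotation on the combined model spells out the unioned set `'[ 'Normal, 'Categorical ]`; an inference backend could dispatch on this index, e.g. requiring that every distribution in the set supports a particular inference method.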

See our full extended abstract for details: grappa-pps-2017