No Bug November, WILTW

WILTW #6

Hoof! Lots of papers, lots of writing!

Taking a peek into the Firefox verified crypto, and a bunch of Coq libraries and frameworks.

Firefox Crypto

I looked at the paper on HACL*, that fancy-pants verified cryptographic library in Firefox Quantum.

There were some concerns, like those voiced by Perry Metzger. As there should be! Paranoia is great!

Basically there was some conversation on the #coq IRC channel which led to a little bit of investigating. I don’t really remember how it started, but there were some questions about undefined behaviours in CompCert. Using my trusty (rusty?) memory of the DSSS17 lectures, I was pretty sure that Xavier Leroy mentioned that CompCert will make assumptions about undefined behaviour, which makes sense, because of course it does! Every compiler has to.

Anyway, in this particular case we were wondering about the semantics of CompCert with respect to signed integer overflow. I quickly pulled up the link to the manual, which happens to say “yep, CompCert wraps the result!”

http://compcert.inria.fr/man/manual004.html

Perry pointed out that LLVM definitely doesn’t make this same guarantee at higher optimization levels: signed overflow is undefined behaviour in C, so Clang is free to assume it never happens and, for instance, fold a check like x + 1 > x down to true for signed x. Since CompCert’s semantics can differ from another compiler’s with respect to undefined behaviour, you could verify code against CompCert’s semantics and completely ruin your guarantees by compiling it with another compiler. Obviously this is the case anyway, since Clang doesn’t necessarily produce correct code, but it’s a little scary to consider that differences in the treatment of undefined behaviour could cause issues even if Clang did produce correct code according to the spec.

So, who knows, maybe there’s a bug in HACL* because of this! Although I have heard tell that the developers avoided signed integers, there could be something similar lurking somewhere!

Additionally after going through the paper, I was a little bit surprised! After seeing claims of a “formally verified” cryptographic library in Firefox, I had initially made the assumption that this was an end-to-end proof of correctness, with a small trusted codebase (like the F* kernel, or whatever).

This isn’t quite the case, though! The paper itself is quick to point this out — the trusted codebase is still pretty large, at least compared to that ideal.

HACL* is implemented in Low*, a DSL embedded in F*. Low* is related to CompCert’s Clight by a manual proof, and extraction from Low* goes through a compiler called Kremlin, which is probably a pretty straightforward pass, but it’s not proven correct. There could be mistakes in that, and there could be mistakes in the manual proof linking Low* to the semantics of Clight.

This is still a great development. It’s certainly much MUCH better than trusting an entire large C codebase. There’s still lots of work that we can do to make this project even better, though!

Fiat / Using Coq to Write Fast Haskell

I reviewed a couple of papers this week: the original Fiat paper, and a paper on using Fiat to write fast Haskell code.

Fiat is a refinement framework in Coq. The initial paper documents its usage in verifying abstract data types, and in particular query structures, which may be used in an SQL-like way.

The basic idea of refinement is that you start out with a specification, and then through various small steps you refine that specification into an implementation by replacing subterms of the program. Each small step preserves the behaviour of the artifact from the previous step. The relation here is the usual one: the refined program is at least as deterministic, i.e., the set of possible behaviours of the newly refined program must be a subset of the behaviours allowed by the previous step. A refined program introduces no additional behaviours.
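
To make that concrete, here’s a tiny toy model in Haskell (my own sketch for this post, not Fiat’s actual definitions, which live in Coq): a computation is just the set of results the spec allows, and refinement is behaviour inclusion.

    -- Toy model: a "computation" is the set of results a spec allows,
    -- approximated here as a list. Not Fiat's real definitions.
    type Comp a = [a]

    -- c' refines c when every behaviour of c' is already allowed by c.
    refines :: Eq a => Comp a -> Comp a -> Bool
    refines c' c = all (`elem` c) c'

    -- Spec: "return any even number between 0 and 10".
    spec :: Comp Int
    spec = [n | n <- [0 .. 10], even n]

    -- Implementation: "return 4". Every behaviour it has is allowed by
    -- the spec, so it's a valid refinement; the reverse doesn't hold.
    impl :: Comp Int
    impl = [4]

    main :: IO ()
    main = do
      print (impl `refines` spec) -- True
      print (spec `refines` impl) -- False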

In Fiat, much of the refinement work can be automated, and since refinement is transitive, a proof that the final program implements the initial spec falls out naturally. This is what is meant by “correct by construction”.

Refinements interact really nicely with monads in Fiat. This is what you would expect. If you have a command a which refines to aᵣ and you have a sequence x ← a;b(x), then x ← aᵣ;b(x) is obviously a refinement of this as well.

Similarly, if you know that bᵣ(x) is a refinement of b(x), then x ← a;bᵣ(x) is a refinement of x ← a;b(x).

This is great, because it means you should be able to refine different portions of a monadic computation independently of one another, which should allow you to build up an imperative / procedural program, which is correct by construction, with as little effort as possible.
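
As a toy illustration of that independence (again my own sketch, not Fiat’s real machinery), the list monad’s bind can play the role of sequencing, and you can check on concrete examples that refining either half of a bind refines the whole thing:

    -- Same toy model as before, repeated so this compiles on its own.
    type Comp a = [a]

    refines :: Eq a => Comp a -> Comp a -> Bool
    refines c' c = all (`elem` c) c'

    a, aR :: Comp Int
    a  = [1, 2, 3] -- spec: pick 1, 2, or 3
    aR = [2]       -- refinement: always pick 2

    b, bR :: Int -> Comp Int
    b  x = [x, x + 10] -- spec: return x or x + 10
    bR x = [x + 10]    -- refinement: always return x + 10

    main :: IO ()
    main = do
      print ((aR >>= b)  `refines` (a >>= b)) -- refine the first part:  True
      print ((a  >>= bR) `refines` (a >>= b)) -- refine the second part: True
      print ((aR >>= bR) `refines` (a >>= b)) -- refine both:            True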

Refined computations are expressed as computations paired with proofs that they are in fact refinements of the spec; Fiat calls such a pair a SharpenedComputation. And again, since refinement is transitive, we can use these to make itty-bitty single steps towards a final implementation, and Fiat provides a bunch of so-called “honing” tactics for making these itty-bitty refinements.

One realization that I had was that refining is actually very similar to the “Type Driven Development” approach advocated in Idris. This approach in Idris puts a bit more emphasis on changing the type of a function as requirements change, or as you develop and realize you want more restrictions, and so on… But, ultimately, the approach of taking a specification (type), and then finding a program which matches the specification (type) through successive refinements is very similar. In Idris this is done interactively through some keyboard shortcuts to automatically case split variables, and run proof search on holes. Fiat, however, relies upon Coq tactics to do this kind of thing. What’s interesting to me is that the tactics are kind of a more powerful way to do this kind of development, since a developer can easily write their own little pieces of Ltac to change how a proof search is done.

Fiat also has a library for handling “query structures”, which are ADTs that support some SQL-like operations. Fiat lets you attach constraints, like say a foreign key constraint in SQL, and is actually able to automate these away, allowing a user to refine a program without having to consider the constraints.

Abstraction relations are used heavily for justifying refinements. My understanding is that this is just a mapping from the initial set-theoretic representation onto the representation used by the refined implementation?

The paper “Using Coq to Write Fast and Correct Haskell” extends Fiat in an interesting way, allowing the developer to reason about memory allocations on a heap. Within Fiat, all of the data types used to build up the data structures are managed by a garbage collector, which might not be good enough if you want to write something that needs to be lightning-fast, like Haskell’s ByteString data type.

“Using Coq to Write Fast and Correct Haskell” uses a heap data type within Fiat to let a developer reason about things which will ultimately not be managed by the garbage collector. The idea is to use a representation type for a heap, which exposes operations like allocating memory, peeking at memory, and poking memory. Then, when extracting the code to Haskell, these calls can be replaced with appropriate calls to Haskell’s heap-manipulation functions.
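
For a feel of what the unmanaged side looks like, here’s a tiny standalone Haskell example using the standard Foreign modules. I’m not claiming these are the exact calls the paper’s extraction emits; it’s just the general alloc / poke / peek flavour of memory the garbage collector doesn’t touch:

    import Foreign.Marshal.Alloc (mallocBytes, free)
    import Foreign.Ptr (Ptr, plusPtr)
    import Foreign.Storable (peek, poke)
    import Data.Word (Word8)

    main :: IO ()
    main = do
      buf <- mallocBytes 4 :: IO (Ptr Word8) -- allocate 4 unmanaged bytes
      poke buf 0x68                          -- write a byte at offset 0
      poke (buf `plusPtr` 1) (0x69 :: Word8) -- write a byte at offset 1
      b0 <- peek buf                         -- read them back
      b1 <- peek (buf `plusPtr` 1)
      print (b0 :: Word8, b1 :: Word8)
      free buf                               -- nothing here is GC-managed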

A problem that has to be addressed is how to keep track of this state, the different heaps and such. Initially the paper looks at incorporating the heap along with the ADTs, but this ends up causing problems if you have methods which interact with multiple ADTs — which heap do you choose? The solution was to just use a state monad to automatically weave the heap through the operations.
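
Here’s a little sketch of what “weave the heap through with a state monad” means, using a purely functional pretend heap of my own invention (definitely not the paper’s actual representation; it needs the mtl package for Control.Monad.State):

    import Control.Monad.State
    import Data.Word (Word8)

    type Addr = Int
    type Heap = (Addr, [(Addr, Word8)]) -- (next free address, contents)

    type HeapM a = State Heap a

    alloc :: HeapM Addr
    alloc = do
      (next, cells) <- get
      put (next + 1, cells)
      return next

    pokeB :: Addr -> Word8 -> HeapM ()
    pokeB a v = modify (\(next, cells) -> (next, (a, v) : cells))

    peekB :: Addr -> HeapM Word8
    peekB a = do
      (_, cells) <- get
      return (maybe 0 id (lookup a cells))

    -- A "method" that uses the heap without mentioning it in its
    -- arguments; the state monad threads the heap for us.
    demo :: HeapM Word8
    demo = do
      p <- alloc
      pokeB p 42
      peekB p

    main :: IO ()
    main = print (evalState demo (0, [])) -- 42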

This approach does not currently make any static guarantees with respect to the heap, and assumes the client is well behaved and doesn’t do things like free memory which it shouldn’t.

Bedrock

Read a bit about Bedrock as well, which is a framework for verifying low-level systems code in Coq. It’s actually a pretty cute DSL, and it looks like a nice and simple way to play around with verified low-level code. There’s a strong emphasis on high levels of automation — as is standard with Dr. Chlipala’s projects. It seems chock-full of separation logic goodness. Functions are specified using preconditions and postconditions.

This was a fairly quick glance at Bedrock, so I’m not familiar with its handling of global mutable state or anything of the sort.

One aspect of Bedrock which I might revisit in the near future is that it actually has its own intermediate language. Might be interesting to look more deeply into how that’s designed.

I am hoping to look into VST in the near future, and it may be worth seeing how Bedrock compares to VST. I imagine it may actually be quite a lot more difficult to produce a verified program with VST, since it has to deal with a lot of C baggage.

Fearing the Automated Proof

I have had some fears about the automation of proofs in the past. I really enjoy writing proofs, so I find it kind of sad that at some point this will be a joy taken away from me by the fact that computers will probably be much better at proofs than most people in the future. It gives me some existential dread to have no reason to do this, and to think of the fact that we may start to make “human thinking” obsolete. Whether or not this will actually happen is a matter of speculation, but if nothing else a computer can already try a large number of proof strategies a heck of a lot faster than I can with a pencil and paper!

But here’s the thing… I don’t know that I think it’s sad if this happens anymore. Why? Because formal verification is such a niche and expensive thing right now that it’s essentially not really done anyway. Having computers be able to automatically prove theorems quickly could really change that, and give us better software much faster. To me it’s sadder to not have any verified software at all.