Sunday, December 8, 2013

Better living through tools

In an earlier post I spoke about architecture versus implementation, and the process of successively refining an architecture to some implementation artifact. I didn't elaborate upon some of the techniques to demonstrate correspondence between the design goals and the implementation behavior. In the industry this can be naively thought of as just a simple matter of testing, but I believe it goes further than that, namely the broader concern of representing architectural intent in a product.

With respect to software development and testing, though, there are many different approaches. Recent efforts like Test Driven Development (TDD) have shown promise in practice, and I enjoyed Grenning's Test Driven Development for Embedded C on the same. Similarly, books like James Whittaker's How Google Tests Software and its associated blog provide a very pragmatic approach to the problem, namely that all developers write tests, while a centralized test organization takes on both a consultative and an automation/infrastructure role. In our world of UEFI and EDK2 we have the Self-Certification Tests (SCTs).
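
To give a flavor of that test-first rhythm, here is a minimal sketch in C against a hypothetical ring-buffer module, using a bare assert() harness rather than a real framework like Unity or CppUTest; the module and all names are invented purely for illustration.

    /* Minimal test-first sketch: the checks in main() capture the desired
       behavior and drive the implementation, in the spirit of Grenning's
       embedded TDD cycle.  A real project would use a harness such as
       Unity or CppUTest rather than raw assert(). */
    #include <assert.h>
    #include <stdint.h>

    typedef struct {
        uint8_t  data[8];
        unsigned head;
        unsigned tail;
        unsigned count;
    } RingBuffer;

    static void RingBuffer_Init(RingBuffer *rb) {
        rb->head = rb->tail = rb->count = 0;
    }

    static int RingBuffer_Put(RingBuffer *rb, uint8_t byte) {
        if (rb->count == sizeof rb->data) return 0;      /* full */
        rb->data[rb->head] = byte;
        rb->head = (rb->head + 1) % sizeof rb->data;
        rb->count++;
        return 1;
    }

    static int RingBuffer_Get(RingBuffer *rb, uint8_t *byte) {
        if (rb->count == 0) return 0;                     /* empty */
        *byte = rb->data[rb->tail];
        rb->tail = (rb->tail + 1) % sizeof rb->data;
        rb->count--;
        return 1;
    }

    /* The "test list" below is written one behavior at a time. */
    int main(void) {
        RingBuffer rb;
        uint8_t out;

        RingBuffer_Init(&rb);
        assert(RingBuffer_Get(&rb, &out) == 0);               /* empty buffer yields nothing */
        assert(RingBuffer_Put(&rb, 42) == 1);                  /* a put succeeds ...          */
        assert(RingBuffer_Get(&rb, &out) == 1 && out == 42);   /* ... and round-trips         */
        return 0;
    }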

Now let's look at the problem from a broader level, namely expressing architectural intent. To me this is nowhere more important than in the world of building trustworthy systems, especially the "Defender's Dilemma" that Jeremiah reminds us of in slide 7 of his presentation: the attacker only has to discover one flaw, whereas the defender has to leave no gaps. And it is the 'gaps' that are important in this post, since flaws can span from the architecture down into the implementation.

To that end of mapping architectural intent directly to code, I have continually been intrigued by the work of Gernot Heiser at NICTA on Trustworthy Systems. The trophy piece of that effort is seL4, a formally verified microkernel with proven correspondence between the C code, a machine model, a Haskell specification, and a theorem prover. This is undoubtedly a BHAG to scale more generally, but it does serve as a beacon to show what can be done given sufficient focus and incentives.
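
seL4's proofs are machine-checked in Isabelle/HOL against an executable Haskell specification and the C sources, which nothing this small can reproduce; but purely as a toy flavor of what "a specification refined to code" means, consider a contract stated alongside a hypothetical C function and checked with runtime asserts:

    /* Toy illustration only: a specification captured as a contract, with the
       implementation checked against it at runtime.  This is *not* how seL4's
       Isabelle/HOL proofs work; it merely hints at the idea of code that must
       demonstrably refine its specification. */
    #include <assert.h>

    /* Specification (informal):
       requires: prio <= 255
       ensures:  result == prio when prio <= MAX_PRIO, else result == MAX_PRIO */
    #define MAX_PRIO 63

    static unsigned clamp_priority(unsigned prio) {
        assert(prio <= 255);                       /* precondition  */
        unsigned result = (prio > MAX_PRIO) ? MAX_PRIO : prio;
        assert(result <= MAX_PRIO);                /* postcondition */
        assert(prio > MAX_PRIO || result == prio);
        return result;
    }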

Gernot's effort is not alone, of course. There is the verification of the CMU SecVisor, the UTexas hypervisor verification work, and applications of formal methods to other industrial problems.

Beyond seL4, though, there are other efforts that NICTA incubates under the banner of Trustworthy Systems, as best described in their overview papers. One of the authors of the latter paper is Leonid Ryzhyk, and in section 4.2 the paper references work on the long-term goal of device driver synthesis, or correct-by-construction development for this class of system software.

And it is this holy grail of 'correct-by-construction' systems code that I want to mention next in a little more detail. Intel recently published a paper, Device Driver Synthesis, in the Intel Technology Journal, Volume 17, Issue 2, December 2013 (the issue titled Simics Unleashed - Applications of Virtual Platforms), that goes into some detail on a real instance of code synthesis.

Regarding driver synthesis, an overview of the effort is perhaps best conveyed by the picture above. The idea entails taking a model of the hardware to be managed by the driver, plus a formal specification of how the driver interacts with the system software environment, and then synthesizing the reactive code for the driver. The ideal would be automation that simply emits code, but given the human aspects of software development, such as maintenance, review, and evolution, the process can instead act as an interactive session in which the user adds code as part of synthesis, with the tool ensuring those additions are correct. The effort also focuses on making the resultant code readable, with sensible names, and on meeting other psychological constraints in working with code, such as cyclomatic complexity.
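
Purely as an invented illustration of the shape of those inputs and outputs (the real toolchain consumes a device model, e.g., written in DML, plus a formal OS-interface specification), consider a hypothetical UART-like device and the sort of reactive driver code a synthesizer might emit; every name here is made up for the example:

    /* Hand-written, hypothetical sketch of synthesis inputs and outputs. */
    #include <stdint.h>

    /* --- Device model (what the hardware does) -------------------------- */
    #define UART_STATUS   0x00   /* bit 0: TX ready                         */
    #define UART_TXDATA   0x04   /* writing a byte here transmits it        */
    #define UART_CTRL     0x08   /* bit 0: enable                           */

    static volatile uint32_t *uart_base;          /* mapped registers       */

    static uint32_t reg_read(uint32_t off)             { return uart_base[off / 4]; }
    static void     reg_write(uint32_t off, uint32_t v){ uart_base[off / 4] = v; }

    /* --- OS-facing interface (what system software expects) ------------- */
    /*   uart_init(base): device enabled and ready to transmit             */
    /*   uart_send(b):    byte b eventually appears on the wire            */

    /* --- Reactive driver code (sketch of what a tool might emit) -------- */
    void uart_init(volatile uint32_t *base) {
        uart_base = base;
        reg_write(UART_CTRL, 1);                  /* enable, per device model */
    }

    void uart_send(uint8_t b) {
        while ((reg_read(UART_STATUS) & 1) == 0)  /* wait until TX ready      */
            ;
        reg_write(UART_TXDATA, b);
    }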

Within Intel, I had the pleasure of engaging with Mona Vij, who has led the team in Intel Labs evolving this technology since the summer of 2012. She and the Intel and external university researchers have proven to be valuable, innovative parties with whom to engage. You can see the elements of our collaboration via the UEFI aspects of the effort in the paper. I believe realizing a vision of this type of work-flow would complement other efforts in the UEFI community.

For additional details, the Termite page calls out the collaboration, along with more on the engagement between Intel and the university.

From the perspective of evolving my skills as a technologist, the engagement offered an interesting view into another approach to system software: better living through tools. It also opened my eyes to look beyond my old friend C to a world of functional languages like Haskell, DSL creation, the use of Binary Decision Diagrams (BDDs), SAT solvers, and hardware modeling languages like DML and SystemC, too.
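
For the curious, a Binary Decision Diagram is just a directed acyclic graph of if-then-else decisions over boolean variables. A toy sketch of the node structure and its evaluation, far simpler than the reduced, ordered, hash-consed packages (such as CUDD) that real tools rely on, might look like this:

    /* Toy BDD: each node tests one variable and branches to 'lo' (variable = 0)
       or 'hi' (variable = 1); the constant nodes 0 and 1 are the leaves. */
    #include <stdio.h>

    typedef struct BddNode {
        int var;                        /* variable index; -1 for a leaf      */
        int value;                      /* leaf value, 0 or 1                 */
        const struct BddNode *lo, *hi;  /* children for var = 0 / var = 1     */
    } BddNode;

    static const BddNode FALSE_NODE = { -1, 0, 0, 0 };
    static const BddNode TRUE_NODE  = { -1, 1, 0, 0 };

    /* Evaluate the boolean function the BDD represents under an assignment. */
    static int bdd_eval(const BddNode *n, const int assignment[]) {
        while (n->var >= 0)
            n = assignment[n->var] ? n->hi : n->lo;
        return n->value;
    }

    int main(void) {
        /* f(x0, x1) = x0 AND x1, built by hand. */
        BddNode x1 = { 1, 0, &FALSE_NODE, &TRUE_NODE };
        BddNode x0 = { 0, 0, &FALSE_NODE, &x1 };

        int a[2] = { 1, 1 };
        printf("f(1,1) = %d\n", bdd_eval(&x0, a));   /* prints 1 */
        a[1] = 0;
        printf("f(1,0) = %d\n", bdd_eval(&x0, a));   /* prints 0 */
        return 0;
    }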

The industrial advantages of functional languages, albeit Lisp rather than Haskell, find an interesting discussion in the writings of Paul Graham. I recommend reading his essays, including the book version, Hackers and Painters.

The above paper will give you a feel for the effort, but if you are hungry for more details on the underlying mechanics, I recommend visiting the project pages, too.

So again, these are my thoughts and not a plan-of-record of my employer, as my obligatory blog bio reminds people. But what I did want to do with this post is engage system software engineers in a conversation about thinking differently about how we write specifications, the process by which we refine those specifications to code, how we ensure that the code matches the specifications, and finally, how we evolve code + spec over the life of our technologies.

September 2014 update: Termite is now open source.