Friday, December 13, 2019

rust, oxide, corrosion,....

Miguel de Icaza (@migueldeicaza)
All of us writing C and C++ are living on borrowed time.

The only safe future is Rust.

Prepare your code to go out of scope.
This tweet inspired me last night to share some of the EDKII Rust work underway and to motivate the posting of the code.

Within this package you can find an example of using Rust to build an EDKII capability. The idea is not to fork-lift upgrade the entire 1 million LOC EDKII upstream to Rust but instead to migrate critical flows and libraries to a safer language. The capsule example is especially important since the capsule is an attacker-controlled data object and the parsing flows are quite complicated. The UEFI PI modularity of PEIM's and DXE drivers, along with Rust's FFI interop with other languages, naturally lends itself to this evolutionary approach.
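To make that migration idea concrete, here is a hedged sketch of how a capsule-header parser written in safe Rust could be exposed to existing C flows over the FFI. The field layout and function names below are illustrative assumptions, not the actual EFI_CAPSULE_HEADER contract or the EDKII code; the point is that all bounds checks on the attacker-controlled bytes live in the safe Rust core, with only a thin unsafe shim at the C boundary.

```rust
use std::mem::size_of;

// Illustrative header layout (NOT the real EFI_CAPSULE_HEADER definition).
#[repr(C)]
#[derive(Clone, Copy, Debug)]
pub struct CapsuleHeader {
    pub guid: [u8; 16],
    pub header_size: u32,
    pub flags: u32,
    pub image_size: u32,
}

/// Safe-core parser: every bounds check on the untrusted input happens here,
/// on a byte slice, so there is no way to read out of range.
pub fn parse_capsule(data: &[u8]) -> Option<CapsuleHeader> {
    if data.len() < size_of::<CapsuleHeader>() {
        return None; // truncated attacker-controlled input
    }
    let header_size = u32::from_le_bytes(data[16..20].try_into().ok()?);
    let flags = u32::from_le_bytes(data[20..24].try_into().ok()?);
    let image_size = u32::from_le_bytes(data[24..28].try_into().ok()?);
    // Internal consistency: declared sizes must fit the buffer we were given.
    if (header_size as usize) < size_of::<CapsuleHeader>()
        || (image_size as usize) > data.len()
        || header_size > image_size
    {
        return None;
    }
    let mut guid = [0u8; 16];
    guid.copy_from_slice(&data[..16]);
    Some(CapsuleHeader { guid, header_size, flags, image_size })
}

/// Thin C-callable shim for existing C flows; returns 0 on success, -1 on
/// any malformed input. The caller must pass a valid pointer/length pair.
#[no_mangle]
pub unsafe extern "C" fn validate_capsule(data: *const u8, len: usize) -> i32 {
    if data.is_null() {
        return -1;
    }
    match parse_capsule(std::slice::from_raw_parts(data, len)) {
        Some(_) => 0,
        None => -1,
    }
}
```

A C driver would declare `int validate_capsule(const unsigned char *data, size_t len);` and link against the Rust staticlib, so the complicated parsing flow migrates without disturbing the surrounding C module structure.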

And listening to folks like Alex reminds me of the value of assurance. Thanks, Alex, for the discussion on language-based security, including the Ada/SPARK discussion. It also reminded me of my conversation with Aucsmith in DC.

Also interesting to see other plays on "Rust" in the market, such as 'oxide'
or my favorite 'corrode' for converting C code to Rust. It'll be interesting to see usage of Rust for Oxide's firmware endeavors.

When I look at the above EDKII Rust work, I ask myself if it's a 'sustaining evolution' or a 'disruptive innovation'.
Specifically, do examples of disruptive innovation regarding Rust and firmware include things like firmware with formal verification, or with all Rust and no blobs?

Interesting times.

I try not to rewind the history machine, but I invariably find myself assessing my engagement with system software during pivots like this. Specifically, when I started working full time I joined a team chartered with writing firmware for a remote telemetry unit (RTU). It was essentially an embedded computer that would be strapped to a gas pipeline to control valves, measure gas flow, etc. It was IOT before IOT was in vogue.

My first team requested that I write the code for an 8051 microcontroller in assembly. I asked for requirements and then requested purchase of a C compiler. I was able to produce the features in C and evolve quite readily with the ever changing requirements. Over time I bounced back to assembly for some severely resource constrained usages, but C has become my language of choice.

And worse, I was afflicted by large legacy assembly code bases with rich algorithmic capability. For example, there was a PCI resource allocator written in assembly that did bottom-to-top allocation based upon a specific hardware requirement. When the next generation design required top-to-bottom, the schedule became imperiled by having to rework thousands of lines of assembly to unwind this assumption. Although C doesn't necessarily yield 'better code', it proved much easier to refactor than assembly.

So C has made sense for development efficiency, but the last couple of decades have posed other challenges, such as making assurance claims. Modula-2 never made it mainstream, nor has C# for OS's. And I've already mentioned my own less-than-successful earlier investigations into type safety for firmware.

So nearly 30 years onward I'm happy to see another language transition, namely an 'assembly-to-C' moment in the form of the C-to-Rust change.

Thursday, December 5, 2019

10 years on and a small exercise in package dependencies

This post marks the 10th anniversary of my first blog posting. It also represents the 80th post.

Since a blog entry whose sole purpose would be to designate this milestone might not be so, er, 'interesting', let's add a little meat to this posting. Specifically, let's talk about package dependencies in the EDKII project. This is an interesting exercise since a large, package-based project might be daunting to newcomers trying to discern the relationships among components. Correspondingly, long-term participants in the project working on refining the implementation, such as cleaning up packages to be more independent and maintainable (aka paying down technical debt), might find a higher-level view interesting.

To help in these various causes, below is a recipe for generating dependency graphs across the packages. The steps include:

0. Clone locally
1. Install graphviz from
2. Go to the edk2 root
3. Run the following command in Windows shell:
     findstr /S /R /N /C:"^ *[^ #]*\.dec$" *.inf >
4. Open the file with an editor that has regular expression support and perform a text replacement like below:
    Replace: ^([^\\]+)\\.+: *(.+\.dec)
    With: "\1/\1.dec" -> "\2"
5. Add the following text around the replaced text in the file (the header before the edge lines, and a closing brace after them):

      strict digraph PackageDependency {
           graph [rankdir=TB];
           node [fontsize=9,shape=box];
           edge [arrowhead=vee,arrowsize=0.5,style=dashed];
           ...replaced edge lines...
      }

6. Run the following command in Windows shell:
      graphviz\bin\dot.exe -Tsvg -Oall_packages.svg
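For non-Windows environments, the findstr/regex steps above can be approximated in one small program. Below is a hedged sketch, not a supported EDKII tool; it assumes the same convention the regex in step 4 relies upon, namely that the top-level directory name is the package name, and that dependency lines in .inf files consist of a lone path ending in ".dec":

```rust
use std::fs;
use std::path::Path;

/// Extract the .dec package names listed in a single .inf file's text.
/// Mirrors the findstr pattern: a line that is just a path ending in ".dec".
fn dec_deps(inf_text: &str) -> Vec<String> {
    inf_text
        .lines()
        .map(str::trim)
        .filter(|l| !l.starts_with('#') && l.ends_with(".dec") && !l.contains(' '))
        .map(|l| l.replace('\\', "/"))
        .collect()
}

/// Recursively walk the tree rooted at `root`, collecting (package, dependency)
/// edges from every .inf file found.
fn collect_edges(root: &Path, dir: &Path, edges: &mut Vec<(String, String)>) {
    if let Ok(entries) = fs::read_dir(dir) {
        for entry in entries.flatten() {
            let path = entry.path();
            if path.is_dir() {
                collect_edges(root, &path, edges);
            } else if path.extension().map_or(false, |e| e == "inf") {
                // The top-level directory relative to root names the package.
                let pkg = path
                    .strip_prefix(root)
                    .ok()
                    .and_then(|p| p.components().next())
                    .map(|c| c.as_os_str().to_string_lossy().into_owned());
                if let (Some(pkg), Ok(text)) = (pkg, fs::read_to_string(&path)) {
                    for dep in dec_deps(&text) {
                        edges.push((format!("{pkg}/{pkg}.dec"), dep));
                    }
                }
            }
        }
    }
}

/// Render the edges as a Graphviz digraph, matching the layout in step 5.
fn to_dot(edges: &[(String, String)]) -> String {
    let mut out = String::from(
        "strict digraph PackageDependency {\n  graph [rankdir=TB];\n  \
         node [fontsize=9,shape=box];\n  \
         edge [arrowhead=vee,arrowsize=0.5,style=dashed];\n",
    );
    for (from, to) in edges {
        out.push_str(&format!("  \"{from}\" -> \"{to}\"\n"));
    }
    out.push_str("}\n");
    out
}
```

Running `collect_edges` from the edk2 root and piping `to_dot`'s output through `dot -Tsvg` should yield roughly the same graph as the Windows recipe.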

Below is a sample output of this exercise.

So what?

This is an interesting exercise since it shows the strong dependency most EDKII packages have on the MdePkg at the bottom, which is the package containing generic interfaces and library classes. Not surprisingly, the second most leveraged package is the MdeModulePkg, which is the generic set of implementations based upon interfaces in the MdePkg. It's also useful to discover if there are any circular dependencies, such as between the ArmPkg and the EmbeddedPkg. And finally, it shows opportunities for clean up, such as removing the dependency of the NetworkPkg upon the ShellPkg. The latter is based upon the existence of some network-related shell commands that should really rely upon API definitions of the MdePkg instead of instances in the NetworkPkg.

Saturday, November 30, 2019

Beware Experts, Beware Disruptors

This blog entry hearkens back to a tale of expertise versus disruption. The scene is DuPont, WA in the late 1990's. Prior to Tiano (2000+) there was an effort to write a BIOS from scratch in the erstwhile workstation product group (WPG) at Intel. The code name of the effort was "Kittyhawk" (1998). Like all boot firmware, the initialization was broken up into phases. Instead of the tuple of {SEC, PEI, DXE, BDS} of UEFI PI or {BOOTBLOCK, ROMSTAGE, RAMSTAGE, PAYLOAD} of coreboot, Kittyhawk had {VM0, VM1, VM2, BOOTLOAD} phases.

Although the platform initialization of Kittyhawk was native protected-mode C code (and cross-compiled to Itanium for the latter boot variant), this was prior to the emergence of IA-32 EFI and its OS's, which was a parallel effort in DuPont. Instead, the 32-bit version of Kittyhawk relied upon a PC/AT BIOS boot interface for MBR-based operating systems. To make this happen an engineer decomposed an existing PC/AT BIOS into its POST (Power-On Self Test) component and its 'runtime', or the 16-bit code that provided the Int-callable BIOS API's. The 32-bit Kittyhawk code did device discovery and initialization, and then provided the device information in a data block to the generic 16-bit BIOS 'runtime.' This 16-bit code was called 'the furball' by management in anticipation of a day when it could be 'coughed up' in lieu of a native 32-bit operating system load.

This Kittyhawk effort on IA-32 was definitely deemed disruptive at the time. The DuPont Intel site was more of a rebellious effort, with the larger product teams in Oregon constituting the existing 'experts'. There were cries from Oregon that the disruptors in Washington would never boot Windows, and if they did, they'd never pass WHQL, the suite of compliance tests for Windows. The furball booted OS's and passed compliance tests. It turned out that having a non-BIOS engineer look at the problem didn't entail the prejudices of what's possible, and the work on having a clean interface into a BIOS runtime was used in the subsequent Tiano Framework work, such as the Compatibility Support Module (CSM) API design.

So this part of the story entailed providing some caution in listening to the experts, but....

You sometimes need to beware disruptors. Specifically, riding high upon the success of building a common C code base to support IA-32 and Itanium, with the furball providing 32-bit OS support and the Intel Boot Initiative (IBI)/EFI 0.92 sample providing 64-bit Itanium OS support, the next challenge was scaling to other aspects of the ecosystem. Part of this scaling entailed support of code provided by other business entities. The EFI work at the time had the EFI images based upon PE/COFF, so the Kittyhawk team decided to decompose the statically linked Kittyhawk image into a set of Plug In Modules (PIM's).

After the Kittyhawk features were decomposed into PIM's, I remember standing in the cubicle of one of the Kittyhawk engineers with a BIOS guru visiting from Oregon to help with the 64-bit bring-up, when the topic ranged over to option ROM's. The Kittyhawk engineer said "let's just put our PIM's into option ROM containers." I was a bit surprised, since to me the problem with option ROM's wasn't carving out code to run in the container but more entailed 'how' to have the option ROM's interoperate with the system BIOS. That latter point is where standards like EFI came into play by having a 'contract' between the Independent Hardware Vendors (IHV's) that built the adapter cards and the OEM's building the system board BIOS.

So the work of the disruptors was laudable in showing that you could have shared, common code to initialize a machine, but to scale to new paradigms like OS and OpROM interfaces you really needed a broader paradigm switch. And as would be shown, it took another decade to solve this interoperability issue.

As such, sometimes you have to be a bit wary of the disruptors, although the disruptive Kittyhawk did provide many learnings beyond the furball/CSM concept, including how up to today EDKII builds its ACPI tables via extracting data tables from C code files. Alas, the Kittyhawk effort was shut down as part of the shuttering of the Workstation Product Group, and efforts to build a native code BIOS solution fell into the hands of the EFI team as part of the emergent (1999+) Tiano effort. At this time there was the EFI sample that eventually grew into the EFI Development Kit (aka the first Tiano implementation), now referred to as EDKI versus today's EDKII. EDK leveraged a lot of the learnings of EFI to inform the Intel Framework specifications, including the phases of SEC, PEI, DXE, and BDS we know today.

This original PEI phase differed somewhat from the PEI you can find in the UEFI PI specification, though. Instead of the concept of C-based PEIM's and a PEI core, the original instantiation of the PEI Core Interface Specification (PEI-CIS) was based upon processor registers. There was still the business requirement of separate binary executables, and the solution of GUID's for naming interfaces was used. But instead of a PPI database, the PEIM's exposed exported interfaces through a Produced GUID List (PGL) and imported interfaces via a Consumed GUID List (CGL). During firmware construction and update the CGL and PGL were reconciled and linked together. And in order to have services that interoperate, the concept of a 'call level' was introduced. The call level detailed which registers could be used by services in a PEIM since there was no memory for a call stack with this early PEI-CIS 0.3.
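As a toy illustration of that build-time reconciliation step, here is a hedged sketch. The type names and GUID strings below are hypothetical stand-ins, not the actual PEI-CIS 0.3 data structures: each module exports a Produced GUID List and imports a Consumed GUID List, and the firmware construction tool checks that every consumed interface is produced by some module before linking.

```rust
use std::collections::HashSet;

/// Hypothetical model of a PEIM under the early PEI-CIS 0.3 scheme: the
/// interfaces it exports (Produced GUID List) and imports (Consumed GUID
/// List). GUIDs are plain strings here for brevity.
struct Peim {
    name: &'static str,
    pgl: Vec<&'static str>,
    cgl: Vec<&'static str>,
}

/// Build-time reconciliation: every consumed GUID must appear in some
/// module's produced list. Returns the unresolved (module, guid) pairs,
/// which would fail the firmware construction step.
fn reconcile(peims: &[Peim]) -> Vec<(&'static str, &'static str)> {
    let produced: HashSet<&str> =
        peims.iter().flat_map(|p| p.pgl.iter().copied()).collect();
    peims
        .iter()
        .flat_map(|p| p.cgl.iter().map(move |g| (p.name, *g)))
        .filter(|(_, g)| !produced.contains(g))
        .collect()
}
```

An empty result means the CGL/PGL sets link cleanly; any entries name the module and the missing interface, analogous to an unresolved external at link time.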

At the same time we were defining PEI 0.3, there was another approach to writing pre-DRAM code without memory, namely the Intel® Applied Computing System Firmware Library V1.0 (hereafter referred to as the Intel® ACSF Library). ACSFL was described in an Intel Developer Update article and provided 32-bit callable libraries for boot-loaders. This effort came from the embedded products team and entailed delivering a series of libraries built as .a files. This simplified initialization code addressed the lack of a memory call stack by using the MMX registers to emulate one, which differed from the PEI 0.3 model of reserving a set of registers for each call level. The ACSFL concept was smarter, in that constraining PEIM's to a given call level really impacted the composition of those modules into different platform builds.

I do enjoy the quotation 'requires only 15KB of flash' when I wander over to today's silicon initialization binary with its size of 592KB, a 39x size dilation over the last 20 years. This is similar to the 29x scaling in the number of transistors between the 7.5 million in a Pentium II and the 217 million in a Kaby Lake. The same article provides the software layering. One can see some similarity of the ACSF Library to the Intel Firmware Support Package (FSP). This is no accident since the Intel FSP was originally intended to target the same embedded market, although by 2012 the embedded mantle for Intel FSP was being carried by the Intel Internet of Things Group. As another point of irony, the original FSP was built with the Consumer Electronics Firmware Development Kit (CEFDK). The CEFDK was in fact the evolution of the ACSF library, going through various instances, like FDK. The latter were used to enable phones and tablets beyond just embedded.

So the ACSF Library provided some learnings, and at the same time LinuxBIOS (the prior name of coreboot) solved this issue of supporting C code without initialized DRAM via the ROMCC tool. ROMCC was a custom compiler that used processor registers to emulate a stack so that you could write things like DRAM initialization in C code.

The initial implementations of PEI-CIS 0.3 with just registers proved difficult to deploy, and since part of the Tiano effort entailed a requirement to use standard tools, techniques like ROMCC were deemed not tenable. As such, as the PEI CIS went from 0.3 to 0.5, we moved PEI to the concept of temporary memory, using the processor cache as RAM (CAR) as the 'temp RAM.' Regrettably, we were asked not to disclose how we implemented temp RAM, and the technique and initialization code were not made open source or published (a modern instance of this reluctance can be found in the FSP-T abstraction of FSP 2.0). The coreboot community, though, didn't have this constraint and provided details on how to use this technique.

As time progresses, I am amused about the various recollections. During the late 2000's someone told me 'you rewrote PEI many times' when in fact the only substantive change was going from registers in 0.3 to temporary memory in 0.5. Although not necessarily a full re-write, the EFI implementations did have a long history:

IBI Sample -> EFI Sample -> Tiano Release 1..8 -> Tiano Release 8.1..8.7 -> EDK1117 -> ... -> UDK2015 -> UDK2016 -> ...

Also, some fellow travelers mentioned to me their fond recollections of EFI discussions in 1996. I smile because the first discussions of EFI (in the form of IBI) weren't until 1998. But I suspect everyone's memory gets hazy over time, with this blog post having its own degree of fog and haze.

And these efforts also remind me that changes in this space take time. A recent reminder that API support takes time is the discussion of the EFI Random Number protocol in Linux. This is an API that was defined in 2010 by Microsoft for Windows 8 and then introduced into the UEFI standard in 2013. Or features like UEFI Secure Boot, from UEFI 2.3.1 in 2011, showing up in Debian Buster only in mid-2019.

The other interesting perspective is survivorship, namely that you often hear about the success stories, not the failures. Only for extreme cases do you find a rich journal of failures. So in portions of this blog, including the EFI 1.2 discussion, I have tried to shed light upon some of the not-so-successful paths.

Saturday, September 28, 2019

Formal, Erdős, Rings, and SMM

This blog is a mix of a few topics. To begin, I have always been on the lookout for how to scale quality via tools, if possible. I continually hope that there are techniques to help close that crazy semantic gap between documentation and code. To that end I enjoyed a recent talk on Everest, which provides a practical realization of generating usable C code from a specification. I especially enjoyed the portion of the talk where the developer had to integrate feedback from the Firefox team on how to make the code 'look better.' I saw the same thing in UEFI code generation, where one of the primary challenges involved producing readable source code.

This does not mean there is no place for formal methods in the firmware space, though. For example, the formalization of EFI FAT32 provides confidence in the design of the structures, although it doesn't necessarily lead to formally validated software objects. And the space of employing computers for maths continues to get more exciting.

In general, math can be your friend. And speaking of mathematicians, I was always intrigued by folks who mentioned their Erdős number. Viewing that specific wikipedia entry I noticed Leslie Lamport in the '3' category. This reminded me of the heady days of DEC research and my former Intel colleague Mark Tuttle, who had worked there with Lamport. Not surprisingly, Mark co-authored a paper with Leslie, giving him an Erdős number of '4'. And since I co-authored a paper with Mark, that gives me an Erdős number of '5'. As wikipedia only mentions the cohort class up to 3, I suspect some exponential blow-up of the cohort sizes beyond that. Nevertheless I still find it to be a pretty cool detail.

[1/11/2022] correction: Erdős co-authored 4 papers with Noga Alon, who co-authored 1 paper with an intermediate author, who in turn co-authored 1 paper with Mark R. Tuttle, giving Mark a distance = 3.
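Conceptually, an Erdős number is just the shortest-path distance to Erdős in the co-authorship graph. Below is a minimal breadth-first search sketch over a made-up graph; the intermediate names are placeholders, not real co-author data.

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// Breadth-first search from "Erdős" over an undirected co-authorship graph;
/// returns the collaboration distance (Erdős number) of `who`, if connected.
fn erdos_number(edges: &[(&str, &str)], who: &str) -> Option<usize> {
    // Build an undirected adjacency list from the co-authorship pairs.
    let mut adj: HashMap<&str, Vec<&str>> = HashMap::new();
    for &(a, b) in edges {
        adj.entry(a).or_default().push(b);
        adj.entry(b).or_default().push(a);
    }
    let mut seen: HashSet<&str> = HashSet::from(["Erdős"]);
    let mut queue = VecDeque::from([("Erdős", 0usize)]);
    while let Some((name, dist)) = queue.pop_front() {
        if name == who {
            return Some(dist); // first BFS visit is the shortest path
        }
        for &next in adj.get(name).into_iter().flatten() {
            if seen.insert(next) {
                queue.push_back((next, dist + 1));
            }
        }
    }
    None // not connected to Erdős in this graph
}
```

With a chain Erdős -> A -> B -> Tuttle -> Me, the function reports distances 3 and 4 for the last two, matching the corrected chain above.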

And on the topic of cool details, it is always exciting to see the evolution of UEFI security in the market, including work done by Apple for driver isolation. The UEFI specification has API's to abstract access to resources, and we even modeled said resources via a Clark-Wilson analysis (slides 73+).

The slides commenced with a summary of the isolation rules, and then a mapping of the rules to the important boot flows of host firmware.

The flows begin with the normal boot, or S5,

and continue with the S3 wake-from-sleep event (eschewed these days in favor of S0ix)

and culminate with a boot flow for a flash update. This is typically the boot response to an UpdateCapsule invocation wherein an update-across-reset (versus a runtime update in SMM or a BMC) is employed.

With these rules, the OEM-only extensible compartment should be isolated from the 3rd party pre-OS extensible compartment (e.g., option ROM's) and the extensible 3rd party runtime (e.g., OS). This analysis was used to inform work in the standards body and open source on what defenses we should erect, and we refreshed some of this type of analysis recently. Regrettably, we added a code signing guard (e.g., UEFI Secure Boot) but we didn't provide inter-agent isolation.

As a historical note, we talked about isolation, including rings, for SMM using user mode and paging (page 10) in 2015, with an earlier mention of pushing EFI drivers into ring 3 in a now-expired filing from back in 2002 (17 years ago, gasp).

Given the 1999 inception of EFI and 2001 for the EFI driver model, the challenge has been application compatibility and delivering these features to market given their later addition. To that end I must give credit to Apple for their work in this space, especially as true innovation, in my view, is delivering the solution to market.

On other oddities from the past and SMM, I was curious about the first mention of System Management Mode (SMM). This archaeology was also motivated by testing the claim that a technology is most fully described in its initial product introduction, with further evolution having successively fewer details given industry practice in the domain. Since the CPU mode was introduced in the 386SL, I found the datasheet in which the feature is first described, although the acronym "SMM" was never used. I especially enjoyed this quote from the datasheet:

"Since system management software always runs in the same mode, OEM firmware only needs to provide a single set of SMI service routines. Since real mode is essentially a subset of each of the other modes, it is generally the one for which software development is most straight-forward. SMI firmware developers therefore need not be concerned with the virtual memory system, page translation tables initialized by other tasks, interprocess protection mechanisms, and so forth. "
(page 58).

Especially ironic is the mention of paging, given the earlier topic in this blog on isolation. Some of the venerable collateral does describe the SMI# pin, though. And a later book on the 486SL has source code for an assembly-language 'kernel' to handle dispatching of event handlers. In that latter book, it was nice to see a mention of my former colleague and co-author Suresh, too.

Well, so much for September 2019 blogging. I did my usual wandering across topics. I should probably produce more bite-sized blogs, one per topic, but what would be the fun in that.

Saturday, July 13, 2019

Evolving infrastructure takes time

There are many truisms you learn after working a while, such as the reality of meetings (although I hear some people perennially relish meetings in order to not feel 'lonely' at work). There are other facts, such as 'nothing significant can be done quickly', especially in evolving a technology. I don't believe it's just a matter of the 99% rule. Instead, it entails a process of successively building something and learning from usage and feedback. This takes time. Also, it needs to be done in an open, transparent fashion with the stakeholders so that the 'tech transfer' doesn't hit a valley of death between R&D and deployment. Maybe this latter sentiment is an instance of my musings from years ago.

A couple of 'recent' examples of this arc of evolution include the dynamics of the "Min Platform", which really started out as an element of the Min-Tree, or "Minimal Tree", effort described on slide 16 of a 2015 presentation. Namely, move the EDKII code base from being a Hummer to a Yugo. I learned thereafter that a Yugo might not be the best example of a smaller end-state analogy.

Regarding moving toward the Yugo, construction had its first milestone in March 2016 via a 'code first' approach. This 2016 work, in turn, expanded and added server-class systems two years later, in March of 2018. Now many of the elements of this work appear in the May 2019 Min Platform Architecture specification.

Going back to the overall goal of a Min-Tree, the idea was to segregate silicon-critical initialization code into the Intel Firmware Support Package (FSP) (described more below), minimize the platform code, and then right-size the generic 'core' code. Efforts to the latter end can be found where the packages needed to build a minimal platform's core can be derived. Another use case is an "Intel FSP SDK" where the minimal code to 'create' an Intel FSP could be derived, enabling a future where more of the FSP elements could be shared. Other advantages of 'less code' include lower cognitive complexity, fewer attack surfaces (and less time for 1st and 3rd party code reviews), easier maintenance, easier integration, and potentially easier updates/servicing (more on that later).

Although the Min-Core mentioned above has yet to be upstreamed, many of the packages have been deleted in the last few months or migrated elsewhere. Other projects also provide a minimized UEFI core, in addition to Rust-based virtual firmware. The UBoot and hypervisor VMM approaches only provide enough compatibility for the OS but avoid providing any PI capability; others provide both UEFI and elements of DXE PI, such as dispatching drivers from FV's. Going forward there is a venue to explore some of these directions in smaller-profile cores and language-based security.

Again, to build a full platform, you need platform + core + FSP in this model. A nice embodiment of bringing all three together is described in the Apollo Lake example. Another high-level view of bringing all of these components together can be found in figure 6-19.

Speaking of ModernFW and FSP's, I'm pretty excited by some of the examples of alternate boots. The latter is again an arc of evolution from primarily coreboot-based solutions to both coreboot and Slim Bootloader. Leveraging the FSP's across these different 'consumers' demonstrates the scalability of the technology. Since Slim Bootloader and coreboot take 'payloads', which can include UEFI or Linux or..., does that mean the X64 UEFI variant is a "CSM64", as a dual to the CSM16 (just kidding)?

Telescoping into the Intel FSP, its development followed a similar arc. The scaling of FSP commenced with codifying existing practices as the 1.0 specification in April 2014, and then point evolutions in 1.1 and 1.1a in April/November 2015. The 1.0 was really capturing the then-current practices and separating the generic, class-like API's from the SOC-specific "Integration Guide" dictums. The 1.1/1.1a changes were derived from learnings with different flash layouts and root-of-trust designs. These were still monolithic FSP's. The 2.0 evolution in May 2016 was based upon SOC boot flows with non-memory-mapped flash, think 'boot from eMMC/NAND/UFS', thus creating the FSP-T, M, and S modules that could comprehend these boot flows. The Apollo Lake usage described above was one of the driving factors for this change.

Why the FSP 2.1 evolution? In May of this year the FSP 2.1 specification was released. It was created in response to the overhead of creating 'wrappers' to invoke FSP 2.0 from a native EDKII firmware. These wrappers are defined in the 'consuming FSP' document. The design of 2.1 maintains FSP 2.0 interface compatibility via "API Mode" usage and extends the design to include "Dispatch Mode." The latter entails guidance on how to use the FSP 2.1 binary as a well-formed UEFI PI Firmware Volume that includes a PEI core, thus allowing for dropping the binary directly into a firmware device layout containing other FV's with PEI, DXE, and UEFI images. This 2.1 change was based upon learnings from scaling FSP 2.0 over the last 3 years.

And in the spirit of evolving code with specifications, there are FSP 2.1 binaries available, along with platform code demonstrating "dispatch mode." This is in contrast to the Apollo Lake "FSP 2.0" example mentioned earlier.

In addition, there are more opportunities for streamlining platform construction with art like MinPlatform, thinner core code, ModernFW, and Intel FSP's. These include optimizing the servicing of the platform. Work is available on microcode and monolithic firmware updates and their code embodiment. Moving forward, the separate elements are being made serviceable via the Firmware Management Protocol, and it makes sense to enable separate servicing of components like Intel FSP's and other elements of the system based upon provenance. This will potentially help remove some of the friction in delivering on such requirements.

And speaking of security, there are nice call-outs for some of the updates to guidance on securing the firmware and having a shareable threat model with the community. This is especially important as we triage submitted issues. As described earlier, minimal* (core, platform, etc.) eases the assurance analysis given there is less complexity, but such analysis still requires some base erudition upon which to lead the design and code assessments.

I'll close with a bit of irony and humor. I mention above the use of safer languages like Rust, in addition to extolling the virtues of openness, but there really is no silver bullet. Similarly, I mentioned smaller, simpler, and less complex, but the product still needs to be useful.
And on those parting thoughts I'll close this blog.


Saturday, May 25, 2019

modern, red, rust, retire

I have been on the road for a few weeks, but I'm happily back in town for the Memorial Day weekend. Some of the notable stops on my trek included a nearby visit to the UEFI Plugfest to talk about how to accelerate pre-OS networking. This included a reference to an open source implementation of the work.

In the spirit of open source, my next stop was still relatively local, namely Bellingham, WA, to deliver a talk at LinuxFest Northwest.
Great community and interaction from people truly engaged on open source. Also, some interesting sightings on the way out

After the Saturday Bellingham event I hopped a plane Sunday morning to commence a two-week trek across various stops in North America

and Taiwan

Upon return from Taiwan I wandered to southern WA to an open source conference

where the topic of ModernFW was introduced

As part of those discussions on modernizing firmware, the coreboot community mentioned RedLeaf and an approach to elide C from coreboot with oreboot. The latter mentions a first target of RISC-V. It's pretty exciting to see discussions on many fronts, along with code artifacts, to advance the state of the art in host firmware.

On my last leg of the journey in the last week, I visited the Intel Oregon location. This sojourn included my colleague Lee Rosenbaum's retirement lunch.

I enjoyed many interactions with Lee in his 13 years at Intel, including some public artifacts like
 "A Tour Beyond BIOS into UEFI Secure Boot" and later work on testing

Ironically, of the 5 authors of the WOOT paper, Alex and John have moved on and are doing interesting work, Mark headed over to Amazon where he has done some interesting work as well, and of course now Lee is leaving. Or as I like to say about retirement, Lee had a sharp enough spoon to tunnel out of the Shawshank of corporate America. Looks like I'm the last of the original 5 authors still with an active Intel address.

Beyond the WOOT paper bench clearing out, I was not too surprised that Lee didn't want to retire with the three books

The third one is definitely nostalgic since it has one of the first overviews of EFI in print beyond the de jure specification. It also treats the PAL and SAL firmware architectures, where the latter, with its SAL_PROC mapping of the legacy BIOS API's (e.g., SAL_PROC(0x13) to read the disk), pre-dated EFI (e.g., EFI_BLOCK_IO_PROTOCOL). I onboarded with Intel in early 1997 to lead the firmware for the first Itanium platform, Merced.

Good stuff.

I mentioned RISC-V in the context of oreboot above. This architecture proposes a lowest software layer called the supervisor binary interface (SBI), essentially the equivalent of the Itanium PAL (which in turn resembled the DEC Alpha PALcode layer a bit). It's fascinating to watch the deliberations around this code, especially as they drive an open source variant. The wheel of history continues to turn, and sometimes repeat.

I guess this type of posting will continue over time, with the maudlin aspects resembling earlier farewells a bit, too. Safe retirement travels to another Lee, and enough blogging for the holiday weekend.....