Saturday, October 14, 2017

The march of time

As I read, I couldn't help but think of the progression of IPv6 support in UEFI.
We compared the adoption arc of the two efforts previously.

Interesting milestones on both the technology front and in specification and codebase curation are shown below.

In 2010 my colleague Bob Hale and I noted the following about Tiano being in its 9th generation in 2009. Oh my. Good thing we chose a different designation for the points at which we snap the EDKII trunk into the validated UDK releases.

Lots of changes these last few years. Another milestone on this trip was meeting Paul Otellini in 2005. I was definitely saddened by the news of his passing.

But change is part of the trip. I recently said goodbye to my colleague Lee Leahy. He and I collaborated for several years and had the opportunity to co-present.

To remind us that he has retired to Hawaii, he left the following picture in the cubicle behind mine, taken from his new backyard.

A final marker of the change occurred when cleaning out some data books. The right-hand side (RHS) is the programmer's reference manual (PRM) for the 386SX, whereas the left-hand side (LHS) stack includes the reference tomes for Intel-based CPUs from a couple of years ago (prior to all documents going online-only).

An example of the change includes the RHS PRM description of an instruction such as SCAS. The description even listed the number of clocks to retire the instruction, which was possible in the days of simpler memory hierarchies, in-order execution, etc.

The following shows the same instruction from the LHS.

More modes.  No clocks.  As times change, instructions change.

I look forward to chatting with people on Sunday about UEFI and security. It's interesting to see this talk having been referenced in a few places, too.

Speaking of change, time to break away from the PC.

Update from 10/18/2017
Thanks to the Lodge for great questions and attendance this weekend.

Wednesday, August 9, 2017

Accessing UEFI UpdateCapsule from the operating system runtime

"Accessing UEFI from the operating system runtime" represents my most frequently accessed blog posting. In fact, I scrawled this quick posting in response to an engineer who recently sent me a mail referencing the above posting and decrying the lack of information on access to the UpdateCapsule interface from the various OS's.

To begin, the API exposed by the UEFI firmware is defined as follows:
The capsule in memory follows:

From my perspective as a 'builder' of firmware I often focus on the underlying constituent elements, but that's a smaller audience than the consumers of the firmware. At the time of the posting, the UEFI Variable interface was the more important interface in order to access both UEFI specification-defined variables, namely those {GUID, Unicode string} named pairs codified in the UEFI specification, and vendor-defined variable GUIDs and names.

In the five years that have followed that posting, there's another important extensible run time interface that has been exposed to the operating system run time, namely the UpdateCapsule interface. The Capsule infrastructure began as part of the Intel Framework corpus, but was eventually donated into the UEFI Forum in a similar specification arc as HII. Recall that much of the Intel Framework specifications, such as PEI and DXE, became pillars of the UEFI Platform Initialization (PI) specifications, but when an interface needs interoperability between the pre-OS ISVs and OS runtimes, that is the purview of the UEFI (and ACPI) specifications. Microsoft complemented this Framework-era capsule infrastructure with the ESRT, or a list of updatable elements in the platform defined by a list of GUIDs.

Although the UpdateCapsule API can be used to convey any information from the runtime into the pre-OS, including crash-dumps, management information, etc., the 'firmware update' usage is the most important from a business perspective.

And regarding the API, having a definition of the interface and the data enveloping mechanism are necessary but not sufficient. You also need producers of the update interface on system boards and infrastructure software to invoke the interface. To that end, the EDKII community has published a rich set of infrastructure code to provide the interface, along with a detailed code explication. On the operating system side, there is infrastructure to support invoking the interface for both Linux and Microsoft Windows.

The Linux kernel exposes a capsule loader interface in a similar fashion to how the UEFI variable interfaces are exposed. The Windows implementation, though, doesn't expose the direct interface but instead layers the issuing of capsules on top of the infrastructure for installing drivers. This is where capsules as a mechanism to pass a GUID-named data payload, with a scatter-gather list in memory, back to the firmware compares to usage of this interface to pass payloads that are a firmware update. On the latter point of updates, the Linux community has built out the fwupd service to facilitate pushing out updates in a similar fashion to Windows Update. There is an interesting view into the steps involved in plumbing a Linux distribution for this end-to-end use case, too.

You can think of the UpdateCapsule invocation as a syscall back to the firmware. This is different than UEFI Variables, where the expectation is that the 'set' call persists immediately without an intervening platform restart. Instead, by having the UpdateCapsule take effect (typically) across a restart, the update of the underlying firmware can occur in the early boot of the firmware Trusted Computing Base (TCB) prior to running third party code. Or a capsule can just be passed through, such as the case of the OS runtime sending its panic screen to be displayed across a restart to its UEFI OS loader.

Philosophical postlude -
The difference between UpdateCapsule and the Get/Set Variable interface is that the latter has been available in EFI (and then UEFI) OS's since 1999. UpdateCapsule, and the corresponding ESRT, have only appeared more recently. If I had a chance to invoke George Cox's "I could do it better the 2nd time" penchant of engineering, I would have argued that art such as UEFI Authenticated Variables would have been better built as signed UEFI Capsules rather than UEFI Variables, since authentication-at-reset in the PI phase (BIOS TCB) is much easier to build than an authentication agent in the firmware that is isolated from the OS or hypervisor run time, as needed by UEFI Authenticated Variables.
Sigh. Hindsight is 20/20.

Tuesday, August 1, 2017

Black Hat USA 2017 - Firmware is the new black?

Happy to be back from Black Hat in Las Vegas. I usually capture photos of my journey, but I must have lost my head on this trek

since I only captured a couple notable shots, including


Regarding the event itself, our presentation has been posted. I can now safely hang my badge

among my dog pile of other badges.

In that archaeological pile I can find residue of preceding security conference presentations - ToorCamp 2012, BSides 2013, ToorCamp (again) 2014, and CanSecWest 2015.

I was honored to be among the list of other speakers.

Surprisingly, I wasn't the last name on the list.

My Intel colleagues included Rodrigo from the offense side, I treated defense, and Bruce talked about response.
The talk began with an overview of the ecosystem, including the supply chain that often begins with the open source upstream. Within that upstream, many of the core protection, detection, and recovery UEFI-based EDKII features were reviewed. This section of the talk culminated in a review of the many open source EDKII platforms upon which these protect/detect/recover features can be integrated.

These platforms allow for marrying the rich set of core components with representative hardware. The most evolved includes the first Intel(R) Core-based open source platform using EDKII platform code. The chipsec project was also reviewed as one means by which to assess whether the platform was configured correctly.

After the ecosystem and defense intro, the talk moved into the data set of issues and a proposed methodology. This portion of the talk generated the most interest, at least as evidenced by the number of people taking screen shots of the content.  This taxonomy included:

and a histogram of issue appearances

This class of information can help inform test strategies and investigation into new defenses.

After the data review, a cursory discussion of threat modeling was presented. This class of erudition also informs what type of defenses and testing needs to occur. Like the former topics, this portion of the talk wasn't intended to be complete so much as argue for the need to have this type of review with the broader research and platform building community.

And for any large effort, the collaborators for the deck and our colleagues are the most important part of the adventure.

The talk was picked up by the press ahead of time. We were approached by other publications to give an interview, but interleaving the day-job and the conference ate up the available hours, I fear. Thanks to a Charlie Miller tweet, at least there's some evidence that we made it to the stage.

To close today's blog, this should not be the end of this material for 2017. I promised to reprise this talk for DC206 at the lodge in my backyard here in WA.

    The TENTATIVE schedule for DC206 Meetings:
    Sep: Josh, Coffee Roasting
    Oct: Taylor, intro to Bash
    Nov: Vincent Zimmer of Intel, UEFI security
    Dec+: CfP open

Hopefully I can recruit my co-presenters to trek up I-5 to help out, too.

Tuesday, May 30, 2017

UEFI and Security postings

I was pleased to see this training material appear a few days ago, along with the associated GitHub repo. The authors include several familiar names.

I'm glad to see this material, including contributions from a couple of my former co-authors.

Hopefully I will get a chance to talk with some of these ex-Intel, now-McAfee engineers at Black Hat. I see they are giving a talk there. A couple of colleagues and I are giving a separate talk at the event.

Interesting timing, since it was in 2007 that I sat in the front row watching John Heasman talk about "Hacking the Extensible Firmware Interface." This was my first Black Hat trek, a decade ago. I had coffee with John a few weeks afterward in Tacoma and he exhorted me to push the signing of UEFI drivers and applications. He also mentioned the venerable TCG specs on page 37.

Good times. Fast forward to today, UEFI seems to be in the news and conference scene quite a bit of late.

Beyond the training material and upcoming conference talks, though, I am especially happy to see the NIST Special Publication (SP) 800-193 on "Platform Firmware Resiliency Guidelines" get posted for public comment review. This work complements the UEFI standards in providing a robust platform implementation. As always, the people involved in the process are as important as the output of the process itself.

So much for May blogging, and here's looking forward to a WA summer. Speaking of summer, it is interesting that the picture of Stevenson, WA from last week
looks quite similar to a picture from 2005, too.
The passage of time......

Friday, February 24, 2017

This one is for 20, or Anniversary.Next^5

As I reach my 20 year anniversary with Intel today, I reflect upon advice that resonates with me. I especially like the posting, including the admonition to "Play the Long Game."

20 years. And all doing firmware. Several different firmware architectures and many instances of EFI-style firmware (e.g., Release 1-Release 8.1/2/3/4/5/6, Release 9 "aka EDKII").....

Hopefully this won't encourage me to abuse logical fallacies like argument from authority, saying 'In my 20 years at Intel we.....' Instead you're only as good as the last game you've played, not your record of games.

Or having a Whiggish view of tech history. Instead it's more Kolmogorov probability than monotonically increasing (or decreasing) progress and determinism.

Speaking of history, my original badge from February 24, 1997 can be found below, with the drop-e logo and, gasp, a suit and tie.

And now

Ah, the thick head of hair that I had in the 90's. And my Harry Potter glasses. I recall visiting Shanghai and Suzhou in '01. In the latter city the locals pointed at me in those crazy glasses, with a scratch on my forehead from my two-year-old daughter (that resembled the lightning bolt), reinforcing the Potter doppelganger experience. This was pre-SARS Shanghai, so it was still possible to eat snake and drunken shrimp, dining with a colleague from a southern China province whose restaurant jaunts truly lived up to the saying "... the Chinese eat anything with four legs except a table, and anything that flies that isn't an airplane..."

So my journey at Intel started in 1996 after contact from an Intel recruiter while I lived in Houston, TX. He exhorted me to join Intel, especially given the 'imminent' Merced CPU development. I interviewed in Hillsboro, OR in October 1996 and was told that I could go to Oregon for IA32 Xeon, or DuPont, WA for IA-64 Merced. Having grown up in Houston, Texas and not realizing that the Pacific Northwest even existed prior to this conversation, I naturally chose DuPont in order to be part of the 64-bit revolution.

Fast forward to February 1997. My wife and I moved to Olympia, WA. Given some of the, er, delay in Merced, I had the opportunity to pick up a Masters at the University of Washington

and at the same time work on getting our Itanium firmware ready. This included working on the System Abstraction Layer (SAL) with my BIOS hero/guru Sham D. in Hillsboro, along with Mani and Suresh in Santa Clara. The original boot flow entailed SAL-A for memory initialization, SAL-B for platform initialization, and the "SAL_PROC" for the OS-visible APIs to enable boot-loaders. The loader API into the firmware was a direct mapping of the PC/AT BIOS calls, with examples including instances like SAL_PROC 0x13 having a similar command set to int13h.

As an arbitrary pedantic sidebar, you definitely see a pattern in firmware of 'phases' that typically include 'turn on memory,' 'turn on platform,' and 'provide the boot loader environment.' Itanium had SAL-A, SAL-B, EFI. UEFI PI has SEC, PEI, DXE, BDS/TSL/UEFI APIs. coreboot has bootblock, romstage, ramstage, payload (including SeaBIOS or UEFI or Depthcharge or ...). Power8 has hostboot, skiboot, and Petitboot (or EDKII UEFI). The workstation BIOS for IA-32 below had VM0, VM1, VM2, Furball. PC/AT BIOS has bootblock, POST, BIOS runtime. You see a pattern here?

Writing SAL_PROC code was pretty exciting. It could be invoked in virtual or physical mode. With hand-crafted Itanium assembly it was pretty reasonable to write position-independent code (PIC) and use the GP register to discern where to find global data. But in moving to C, writing portable C code to abstract the SAL services was quite a feat. This is distinct from the UEFI runtime, where we are callable with a 1:1 mapping and then a non-1:1 mapping after the invocation of the SetVirtualAddressMap call by the OS kernel.

Regarding gaps with SAL_PROC as a boot firmware interface, as chronicled elsewhere, Intel created the Intel Boot Initiative (IBI) as a C-callable alternative. The original IBI specification looked a lot like ARC. Ken R., a recent join to Intel from Microsoft (where he had a lot of DNA for ACPI), helped turn IBI into what we know as EFI 1.02, namely evolving IBI to have discoverable interfaces like protocols (think COM IUnknown::QueryInterface) and Task Priority Levels (think NT IRQLs), and of course the CamelCase coding style and use of the CONTAINING_RECORD macro for information hiding of private data in our public protocol interface C structures. Many thanks to Ken.

Building out EFI was definitely evolutionary. It started from the 'top down' with EFI acting as that final phase/payload in the first instances, with alternative platform initialization instances underneath. This view even informed the theme of 'booting from the top down' that shaped how we sequenced the chapters in the 2006 Beyond BIOS book, for example. The initial usage of EFI was the 'sample implementation' built on top of the reference SAL code and a PC/AT BIOS invoked by the EFI 'thunk' drivers.

As we moved into the 2000's, the Intel Framework Specifications were defined in order to replace the SAL for Itanium and the PC/AT BIOS for Itanium and IA-32, respectively. We internally referred to things like SAL + BIOS + EFI Sample as a "Franken-BIOS." The associated code base moved from the EFI Sample to the EFI Developer Kit, or EDKI, to distinguish it from the EDKII done in the later 2000's. This internal code-base was called 'Tiano', thus the name of the community sites. Someone said the name came from the sailor with Columbus who first noticed America, but the only citation I could find publicly is the "Taino" tribe with whom Columbus engaged.

As a funny sidebar, I do recall the meeting where someone found "Tiano Island" on the web. At the time it cost some number of millions of dollars. The original director of our team, with just a few engineers in the room, said 'let's each pool a couple percent of our stock options and buy the island.' I guess Stu had a much more significant equity position than I did, as a lowly grade-7 engineer.

In the late 90's at DuPont, SAL and the EFI sample were not the only code base activities. While in DuPont the erstwhile workstation group also created a clean-room replacement for the early boot flow. This started on IA32 and the OS interface was the PC/AT BIOS. For this effort we didn't have an image loader and instead just used non-1:1 GDT settings in order to run the protected-mode code. For booting, the protected-mode code provisioned the 16-bit BIOS blob with information like the disk parameters, etc., so that the 16-bit code was just the 'runtime interface.' The 16-bit BIOS blob was called the 'furball' since we hoped to 'cough it up' once the industry had transitioned to a modern boot-loader environment, such as EFI.

I still recall colleagues in the traditional business units yelling 'you'll never pass WHQL' with the above solution, but it did work. In fact, the work informed the subsequent interfaces and development with the Intel Framework Compatibility Support Module

We then ported the workstation code to boot the first Itanium workstation. I left that effort and joined the full EFI effort afterward. I recall the specific event which precipitated the decision. I was chatting with Sham and the workstation BIOS lead in the latter's cube. The lead said 'Now that we have our BIOS in modular code called "Plug-In Modules" (PIMs) we can tackle the option ROM problem.' I thought to myself that just refactoring code into separate entities isn't the challenge in moving from PC/AT 16-bit option ROMs to a native format; it's all about the interfaces, namely how a 'new' option ROM would snap into a modern firmware infrastructure. IBI (now called EFI) was on that path to a solution, whereas a chunk of 'yet another codebase with PIMs' wasn't. Thus I was off to chatting with my friend Andrew, then lunch with Mark D., and onto the EFI quest in 1999. Quite the firmware long-game.

Next we're off finishing the first EFI, going from IBI to EFI 0.98 to EFI 1.02.

Next we're off on a cross-divisional team to create the '20 year BIOS replacement' called Tiano and the Intel Framework Specifications are born.

Next we solve the option ROM and driver problem with EFI 1.10. Along the way between 1.10 and UEFI 2.0 we incubate a lot of future technology with the never-released 'EFI 1.20' work.

Next Andrew Fish and I ported EDK to Intel 64. And I had fun with a port to XScale back in 2001. I have always enjoyed firmware bring-up on new CPU's.

Fast forward to 2005. The EFI specification became the UEFI 2.0 specification, and many of the Intel Framework Specifications became the UEFI Platform Initialization specification. I wrote the first EFI interface and platform spec for TPM measured boot.

Fast forward to the 2010's. More open source. More device types. More CPU ports. Continue to evolve network booting, such as IPv6 and HTTP. Good stuff. Helped deliver UEFI Secure Boot.

In parallel, I often had side firmware engagements, including a fun tour of duty helping out the solid state disk (SSD) team on firmware.

I still believe in better living through tools, too, whether they have landed in the community, almost made it, or are in incubation

Fast forward to 2017. Year 20. It's still a lot of fun solving crossword puzzles with hardware and firmware.

During my time at Intel I've also appreciated the wisdom of others, whether through the mentoring of direct interaction or the written word. For the latter I heartily recommend keeping the following close at hand.

So am I done this morning? Let's do a final rewind to February 1992 when I jumped into industry in Houston. First I wrote firmware for embedded systems attached to natural gas pipelines - sensors, serial protocols with radio interfaces to SCADA hosts, control algorithms, I2C pluggable expansion cards, loaders in microcontroller mask ROMs, porting a lot of evil assembly to C code... fun stuff. The flow computer/Remote Telemetry Unit (RTU) work was an instance of the Internet of Things before the IoT was invented. Then on to industrial PC BIOS and management controller firmware. Then on to hardware RAID controllers and server BIOS. And then Intel in February 1997. Five years of excitement in Houston prior to my Intel journey.

So I guess that sums out to 25. Now I feel tired. Time to stop blogging and playing the rewinding history game. Here's looking to the next 25.


Thursday, February 9, 2017

Specifications and a New Book

I recently came across which reminded me of the world of firmware. Specifically, there is an interplay of de jure standards, such as the UEFI 2.6 specification, and then de facto standards, including open and closed source behaviors.

I'll give a quick example where these two venues collided. Specifically, during the drafting of the UEFI 2.5 specification, there was an operating system request to make the UEFI run time code produced in a way such that the hypervisor or OS could apply page protection. Recall that UEFI runtime code and data are co-located in ring 0 with the OS kernel. This change entailed several things, including the OS making the UEFI run time code read-only and the data pages non-executable. To that end, the EDKII was updated to align the UEFI runtime driver sections on a 4KiB boundary and not merge the code and data pages. In addition, the UEFI memory map was updated to have a memory descriptor for each code and data page, creating several descriptors for each UEFI runtime image, versus the former behavior of having one memory descriptor for the entire set of PE images.

We codified this behavior in UEFI 2.5 with the memory properties table

This bit lets the OS know that the code was factored into these separate pages and validated by the firmware producer to be truly pure code and data (e.g., no self-modifying code). This was a de jure UEFI 2.5 specification addition.

What happened?  Namely, why did we move to the EFI_MEMORY_ATTRIBUTES_TABLE in UEFI 2.6 and add language to the specification?

After publishing the 2.5 specification and upstreaming patches responsive to this properties table, many OS kernels started to crash. Uh oh.

What we learned was that when OS kernels invoke SetVirtualAddressMap to map the UEFI runtime entries from a 1:1 pre-OS setting to a non-1:1 OS kernel mapping, the relative distance between entries was not preserved. This didn't appear in earlier implementations since one memory descriptor covered a single image. In fracturing the single descriptor covering the PE image into multiple entries, the undocumented requirement to keep relative offsets between sections of a PE/COFF image across the SetVa call was surfaced. We essentially discovered a de facto requirement to have a single descriptor covering a single PE/COFF image.

Thus the change in the UEFI 2.6 de jure specification to have an 'alternate' table to the UEFI memory map (e.g.,  EFI_MEMORY_ATTRIBUTES_TABLE) and maintain the single descriptor per image given the circa 1999 and beyond OS's and their SetVa expectations.

This new attributes table is also called out in some OS requirements

This doesn't moot the value of the de jure specification, of course. OS and device vendors appreciate standards so that long-term support (LTS) variants of the OS can have an expectation that platforms produced during the support lifetime, such as 10 years, will be compatible. Given the complexity of modern systems, the de jure specification cannot always cover all of the system details. Thus the value of open source and products providing some de facto standardization, too, to complement the formal standard.

Speaking of industry standard firmware and code, I'd also like to let people know that the "Beyond BIOS" book is now available. Since the original publication in 2006, many things have changed, including the scaling of the industry standards efforts, but the basics remain the same. And those areas that have evolved are deftly treated in the updated text.

This book serves as a good launching point for someone just diving into the world of industry standard firmware. I was happy to have the opportunity to work with my old friends and co-authors Mike Rothman and Suresh, along with new friends like Jeffrey, Megan, and others from De Gruyter. De Gruyter also allowed for me to share a sample chapter, too.

If you have the time please take a look.

And interactions of modern systems don't always behave as expected, too.

You can learn more about the UEFI Shell, which is nicely staged in the picture above, in an update of the UEFI Shell book later this year.

Thursday, January 12, 2017

Whose bug is it?

My favorite quote, from chapter 1, page 1, includes "'If you can fix a hardware bug in firmware, it’s not a bug but a documentation issue.' —An anonymous hardware manager." I still recall being aghast upon hearing that utterance early in my career, but over time I grew to understand the wisdom. In fact, this inaugural chapter from the 2015 book further elaborates on work-arounds of hardware concerns that can be implemented in firmware.

This same sentiment was echoed in the position paper and presentation for the 2010 IEEE International High-Level Design Validation and Test Workshop. Specifically, the firmware can be modified at the 23rd hour to work around an issue in hardware, leading to a potentially long list of 'firmware changes' late in a product's development. This sentiment is expressed in the slide 'Incentives to fix issues in firmware cause root cause to be commonly incorrectly assigned to firmware' of the presentation and the following section of the paper:

      "There could be confusion between the root cause and the fix
      for issues. In particular, there is great pressure to resolve
      hardware issues in firmware due to the order-of-magnitudes
      difference in the cost of resolution. A “spin” of a chip may
      take many weeks and cost millions of dollars whereas a
      firmware fix may cost a few thousand dollars and a day or two
      of total work. In fact, many modern chips are designed so the
      firmware can configure the chips to work around issues in the
      field rather than having the hardware recalled. The public is
      simply told that there is a firmware issue when, in fact, the
      firmware resolved what was a hardware issue. "

I also recall being given a harried call on a Friday night about the number of 'firmware bugs' on a product. I replied to the caller with some of the sentiments above, including the caveat 'I think that we can get the firmware update out by Monday, but I don't believe we can get a new stepping of the SOC by then.' Regrettably, programs don't roll up the reasons for the firmware changes, and people are just shocked by the cardinality.

And often these bugs are not easy-to-find Bohrbugs, but Heisenbugs or Hindenbugs where there is a subtle interaction between host firmware and opaque hardware state machines. A week of investigation may conclude with one line of code changing one bit in a register access. A few months ago a firmware manager at a conference related to me that the promotion process in his company entailed upper management reviewing the Git commits of the engineers. The firmware manager had to defend the smaller number of code changes versus the OS kernel guys as each delivering similar business value. But I still recall the final quote from the manager when he smiled and said 'The coolest part is that they are reviewing code changes, right?'

Sometimes these work-arounds mitigate errata in the hardware (board, silicon, etc.), and sometimes the changes are for cost savings. An example of the latter I recall includes a board design where the integrated Super I/O could be decoded at I/O addresses 0x2E/0x2F or 0x4E/0x4F by application of a pull-down or pull-up resistor, respectively. The hardware design engineer omitted the resistor in order to save costs on the Bill of Materials (BOM), so the boot firmware had to probe for which port was being decoded on each machine restart. And this was a cost savings of a single surface-mount resistor.

The above discourse isn't meant to be an apologist view about firmware bugs, though. In general there are many bugs based upon programming flaws, not just hardware work-arounds. If one is interested in the latter, there is more reading available. But hopefully this posting will provide an alternate view into firmware and firmware changes.

Sunday, January 1, 2017

Saying good bye to 2016

I meant to do a final blog of '16 but I instead opted to catch the fireworks at the Seattle Space Needle

I like the end of the year as it hosts the Chaos Communication Congress (33c3). I recall seeing Trammell Hudson's Thunderstrike 2 over the '15 holiday, and for this '16 holiday I watched his talk on Heads, variously described in several write-ups.

I like Trammell's threat model write-up, too; today we only have higher-level public treatments of that topic.

It was interesting to see his mention of Intel (R) FSP and also reference our work on pre-OS DMA protection

Another talk of interest for the pre-OS was the review of porting UEFI Secure Boot to virtual machines. This entails the Open Virtual Machine Firmware (OVMF) variant of EDKII that executes upon QEMU and is used as the guest firmware in projects like KVM, VirtualBox, etc.

The latter talk included a reference to the EDKII lock box work and to emulating the full System Management Mode (SMM) infrastructure. The addition of more of the SMM infrastructure was positively mentioned, too.

Speaking of security and 33c3, an interesting read about researchers and industry was posted. As long as the flaws are responsibly disclosed such that the conference presentations aren't zero-day events, I cannot argue with their sentiment.

One common element discussed in the Heads and virtual Secure Boot topics entailed the availability of full platforms. In that area there is great progress in having a set of full EDKII platform code in source that works with an Intel(R) FSP for the embedded Apollo Lake (APL) SOC (formerly known as "Broxton") in the public repository.

Regarding security and the treatment of EDKII issues, we have moved our advisory update to GitBook from the former two PDF postings. These recent postings represent fixes that honored the industry request for a six-month embargo of the project updates. Going forward we'd like to auto-generate the advisory from Bugzilla, but for now the document is manually curated. There have also been discussions of moving from the advisory document's issue enumeration to things like CVEs, which is an investigation in progress, too.

Moving into 2017, maybe I'll catch up to George Westinghouse's number of issued US Patents. I left 2016 with 354 issued, whereas George has 361

2017 should also feature an update to a couple of UEFI books, including Beyond BIOS and Harnessing the UEFI Shell. Beyond BIOS was originally published in 2006, so this update will mark over a decade since its first appearance.

It has been an interesting run on this project, with over 17 years on the EFI team and nearly 20 years at Intel. I look forward to what the next wave of technology will bring in '17 and beyond.