Tuesday, June 10, 2014

300, networking, FSP, books, and good-byes

It was October of 2012 when I posted http://vzimmer.blogspot.com/2012/10/250.html.  Over a year and a half later I have finally hit 300 issued US patents. In a later post I talked about innovation and invention http://vzimmer.blogspot.com/2013/12/invention-and-innovation.html, so I don't have much to add on that topic.

Closer to my mind, though, are more recent events. Since my last posting I have had the opportunity to talk with the industry about UEFI and testing. With Ricardo Neri from Intel's Open Source Technology Center I spoke at the UEFI Plugfest in Seattle last month. Our presentation and video on "Open Source Test Tools for UEFI" are located at http://www.uefi.org/sites/default/files/resources/2014_UEFI_Plugfest_04_Intel.pdf and https://www.youtube.com/watch?v=aV1DSF4cwGw, respectively. I will pick up on some of these topics of open source security and tools at the upcoming ToorCamp, too: http://toorcamp.toorcon.net/talks/#16.
Other events that have occurred since my last posting include publication of the UEFI 2.4 specification. Items of note there include removal of the need to time stamp each UEFI network packet and to manage volatile variables for the network stack. This is important because the former entails a call to the real-time clock (RTC), which is an expensive I/O operation on some systems and may entail a system management mode interrupt (SMI) trap, which has non-zero overhead to effect the world switch. The volatile variables are a similar story: many UEFI implementations service all variables in SMM in order to protect authenticated variables, so even an update to a volatile variable in the performance path of the network stack can entail significant overhead. My colleagues posted a paper on these PXE performance issues at https://uefidk.com/sites/default/files/Intel_UEFI_PXE_Boot_Performance_Analysis.pdf.

Speaking of the UEFIDK.COM site, I have been encouraged to blog there, but I have yet to wean myself off of this site. Hopefully my next blogs on UEFI will migrate to that location. From that site I'd also like to note the existence of the Intel® Firmware Support Package (Intel® FSP) for MinnowBoard (Intel® Atom™ processor E6xx series with Intel® Platform Controller Hub EG20T), available from the Intel® Embedded Design Center via http://www.uefidk.com/content/minnowboard-uefi-firmware-download, under http://www.uefidk.com/content/minnowboard-uefi-firmware.  This shows how to adapt the Intel Firmware Support Package, or 'consume' that binary, in an edk2 http://edk2.sourceforge.net tree. Similar infrastructure code to do the same integration, or 'consume' the FSP, can be found in the coreboot upstream at http://www.coreboot.org, e.g., http://review.coreboot.org/#/c/4018/.  These are two examples of open source firmware ecosystem workflows that allow for building platforms with the FSP.

To provide guidance around FSP construction, on both the 'producer' and 'consumer' side, the Firmware Support Package (FSP) External Architecture Specification (EAS) has been posted at
http://www.intel.com/content/www/us/en/intelligent-systems/intel-firmware-support-package/fsp-architecture-spec.html. This helps to lock down the interfaces so that the above-listed firmware ecosystems can align their 'consumer' source infrastructure code.

Why FSP?  Even UEFI/edk2 stalwarts like my good friend Tim Lewis, CTO at Insyde Software, have expressed thoughts on using edk2 for embedded, including anecdotal commentary such as http://uefi.blogspot.com/2014/04/the-tale-of-three-conferences.html.

Many discussions around UEFI have to do with complexity. And there is something to these discussions, since the very power and flexibility of UEFI has led to implementations (like that on tianocore.org) which are broken into hundreds of pieces, where assembling the right one requires the right recipes. Most embedded vendors don't need their firmware distribution to be as complicated as their Linux distribution (see yoctoproject.org).

So you can think of FSP + edk2 as one 'recipe', among many, to ease the embedded workflow in various open source firmware ecosystems. FSP helps preserve the boundary of critical silicon initialization code, with some examples found in PI Silicon Init. This isn't the only mechanism, of course. But given the end goal of booting an operating system: Mille viae ducunt homines per saecula Romam ('a thousand roads forever lead men to Rome').

So enough of UEFI and FSP; an additional thought on this 'milestone day' of 300 includes a mention of a book that I enjoyed reading over the last few weeks, namely Ben Horowitz's The Hard Thing About Hard Things http://www.amazon.com/Hard-Thing-About-Things-Building-ebook/dp/B00DQ845EA/. Ben was a leader at Netscape during the startup days, and his book ranks alongside Andy Grove's High Output Management and Only the Paranoid Survive among the more practical business books on managing a business in a crisis. Specifically, Horowitz speaks of the distinction between the "Wartime" versus "Peacetime" CEO. He notes how Eric Schmidt was a peacetime CEO of Google, with features like 20% time, versus Larry Page becoming a wartime CEO, with a narrower but more aggressive product focus.

After reading Ben's book, the question I would pose is whether the same analogy works for engineers, or more generally, employees. Namely, are some employees better in peacetime, with less stress and the ability to explore more things, than in wartime, with its enhanced focus?  I personally find that constraints help with focus, and Page's 'scarcity brings clarity' comes to mind. Food for thought.

Speaking of thoughts, I'll close with thoughts on co-workers who have departed. Over the last day I learned that a manager with whom I had worked since the late '90s has passed away. I learned a lot from his mentoring, and I especially recall a quote of his about project development at a technology company: "If upper management knew the true cost of an R&D project, they would never fund it." Fare well, Doug.

A couple of additional thoughts are for technical colleagues still with the world, but who have recently retired from Intel. One is a Senior PE from Folsom on the CPU development team. I learned many things from Steve about the trade-off between hardware and software, especially disabusing me of the notion that 'the other guy's job is easier.' I relent, and the CPU microarchitects win for complexity. Talking to Steve was like reading Colwell's tale
http://www.amazon.com/Pentium-Chronicles-Politics-Landmark-Practitioners-ebook/dp/B001CBCRCA/. A parting quote from Steve that sticks with me is "You will not have all the answers, but in working with the right people, you can find them."  So true. And thanks to the 'right people' who still answer my emails, pick up the phone, and look up from their monitors when I invade their cubicles.

And last but not least, I have to comment on the retirement of George Cox. George had a rich Intel career, including setting up the first intel.com website, CDSA, iWARP, the Intel 432, and, as one of his last achievements, the digital random number generator (DRNG) http://spectrum.ieee.org/computing/hardware/behind-intels-new-randomnumber-generator/0. On the latter I still recall George quoting John von Neumann, in George's distinct West Texas accent, when discussing the DRNG work with me over lunch: 'Anyone who uses software to produce random numbers is in a "state of sin."'  That work, and the 432 http://en.wikipedia.org/wiki/Intel_iAPX_432, count as epic events in just one career, and I'm still a fan of hardware capabilities http://www.google.com/patents/US8312509, especially as it's tough for 'software to protect software.' We all need a little help from our hardware friends.

Since I started the thread with patents, I'll end with patents. Intel has been a great place to work. In response to the patent award photo at https://plus.google.com/+VincentZimmer/posts/R764RmhX7tg, I joked on my Google+ channel that this is the closest I'll ever get to the C-suite. In reality, though, Intel has offered me an opportunity to work with some of the smartest people in the industry and learn from them. I only hope that I can contribute back a small percentage of what they have provided to me over the years.

And with that, off to work.

Tuesday, February 25, 2014

Anniversary Day .Next .Next

Yesterday marked seventeen years since I started work at Intel. I broke my tradition of blogging on the anniversary day itself given my advancing age (i.e., I fell asleep before hitting the 'post' button). Given my last blog's retrospective view of firmware, I don't want to make yet another trek down memory lane.

For this quick blog, I want to begin with a recollection of how interesting the trip has been, including being a recent hire when Andy Grove was named Time's "Man of the Year." We were all given copies of the magazine article, and I naively put mine into an inter-office envelope with a message asking Grove to 'please autograph this copy for me.' To my surprise, I received the below back in the mail a few weeks later. Even at a company of 70,000 employees at the time, the CEO took the time to make this small gesture for an employee. Quite exciting.


I hired into the DuPont, Washington Intel site on February 24, 1997. Fast forward from 1997 to 2014, and I am still saying 'good night' to the same campus.
Regrettably, the site is being sold and my software group has to relocate. So if I am still posting this same blog in late February on 'anniversary day,' I'll be at an alternate Intel site, maybe South Seattle?


Every year provides surprises like this, so I look forward to posting next year, whatever surprises the intervening twelve months may present.

Cheers

Friday, January 24, 2014

"Advances in Platform Firmware 'Beyond BIOS'" - 10 years later.....

The last ten years have witnessed remarkable progress in the evolution of standards-based firmware. In this blog posting I review some of the events and changes that have occurred since the paper Advances in Platform Firmware Beyond BIOS and Across All Intel® Silicon was published in the Technology @ Intel online magazine in January 2004.

Although back issues are no longer posted on Intel's website, a copy can be found at https://github.com/vincentjzimmer/Documents/blob/master/it01043.pdf

For this blog in January 2014, let's discuss what has changed since the paper appeared a decade earlier. The first change at the Intel level is the logo. Instead of the drop-e in the logo, viz.,
2006 and onward featured the new Intel logo,

as seen in a recent shot of the logo at the company headquarters.

At a personal level, the bio on the paper lists me as a 7-year Intel veteran with "100 US patents pending or issued." Of those 100, probably 10 were issued at that time. Today, at the front door of my 17-year anniversary, my patent count is nearly 300 issued and 500 pending. Also, my job description has changed a few times, as has my business unit name (though not my team and its charter), since 2004.

Another interesting aspect of this paper is the number of translations. Works like Beyond BIOS http://www.amazon.com/Beyond-BIOS-Developing-Extensible-Interface/dp/1934053295/ are only available in English, as are the Intel Tech Journal papers http://www.intel.com/content/www/us/en/research/intel-technology-journal/2011-volume-15-issue-01-intel-technology-journal.html, whereas this 2004 paper was translated into Japanese, Portuguese, Russian, and Spanish, and even showed up in translation on a Chinese website.

In addition to the translations, an Intel Press editor noticed this article and offered me the chance to write Beyond BIOS back in 2004, too, after several others had passed on a request to pen a book on the same topic. In the ensuing decade since this paper's publication I've used the term "Beyond BIOS" a few times, as can be seen in a search of my CV for that 2-tuple of words. The dialectic becomes especially interesting when people classify UEFI as a type of BIOS, viz., "UEFI BIOS." So how does one go 'Beyond BIOS' in that case?  Another logical antinomy, like Russell's Paradox http://en.wikipedia.org/wiki/Russell's_paradox, for firmware, I suppose.

Beyond these personal details, though, the evolution of the firmware technology has experienced the most pronounced changes. Specifically, in 2004 we referred to the Framework specifications as "The Intel(R) Platform Innovation Framework for the Extensible Firmware Interface." We internally joked at the time that reading the title aloud consumed more time than the firmware needed to boot a system. In 2004 the "Tiano" code base was an internal project formatted as a monolithic entity, namely "Tiano release X," where 'X' varied as we evolved the implementation to support the 30 Framework specifications and the EFI 1.10 specification. At that time, these were the only public firmware specifications, and they were hosted at intel.com.

After 2005 the EFI, and then Framework, specifications evolved into the UEFI and PI specifications, respectively, as shown in the timeline below.

And during the ensuing decade after this paper's publication, the Tiano implementation went open source at http://www.tianocore.org, the UEFI specification evolved from UEFI 2.0 to the most recent UEFI 2.4 on http://www.uefi.org, and the UEFI Platform Initialization (PI) specifications evolved from PI 1.0 to PI 1.3. The specification timeline appears in the top portion of the figure, too. The bottom portion of the figure shows the evolution of the code base, namely from the monolithic EFI Developer Kit I (edk1) to the package-based, modular, cross-platform-buildable EFI Developer Kit II (edk2). The UDK is a validated, supported snapshot of a subset of the edk2 project.

And as far as hardware support is concerned, the 2004 paper discusses Itanium, IA32, and XScale as supported by the code base. Since then, Intel divested itself of its XScale product line, ARM added its 32-bit and 64-bit bindings to the UEFI specification and the edk2, and Intel evolved to 64-bit with x64.

Regarding security, the paper also talks about modules being 'cryptographically validated' before use. Since then, the industry has evolved Secure Boot technologies spanning from the hardware, through the firmware phase, and culminating in the hand-off to the OS loader with UEFI Secure Boot.

The venerable boot flow of Figure 1 in the paper is largely the same today, including liberal re-use of that figure across many subsequent publications. And the 'design-by-interface' nature of EFI/Framework, through today's UEFI/PI, still applies. With codified APIs in the specification and GUID-managed namespaces to avoid collision between third-party innovations and APIs managed by the standards group, UEFI/PI has been holding its own quite well.

As I pause and reflect, I recall having several volumes of the Win32 API in hard copy in the 1990's. At the time, I thought that APIs came down from the deity in a fully-formed state.  The late 90's coincided with my joining the EFI team that ultimately delivered the EFI 1.02 and EFI 1.10 specs, the UEFI 2.0-2.4 specs, the Framework 0.1-0.9x documents, and the PI 1.0-1.3 documents. In the ensuing decade and a half I realized that infrastructure doesn't just appear overnight, fully formed like Athena springing from Zeus's head.

From my vantage point today, I realize that infrastructure evolves in an organic fashion over time: specifications grow from a couple hundred to several thousand pages, and a reference implementation from tens of KLOCs to millions of lines of code. That's where the design principles and the underlying conceptual integrity really matter. And for this posting, the durability of the assertions in the above-cited white paper, together with today's deployed UEFI + PI art and its corresponding open source implementation, attests to that fact.

Beyond this memory-lane trek for the UEFI and PI specs, the most exciting recent culmination of this effort involves Galileo and Quark http://www.intel.com/content/www/us/en/processors/quark/intel-quark-technologies.html and their recent open source release. Specifically, the board support package (BSP) zip and documents for the Quark can be found at
https://downloadcenter.intel.com/Detail_Desc.aspx?DwnldID=23197. This package contains the file Quark_EDKII_v0.9.0.tar.gz, which includes UEFI PI packages with full source code that build in an edk2 tree https://svn.code.sf.net/p/edk2/code/trunk/edk2/. This allows for using open source tools like GCC, edk2, and this .7z file to provide a full bootable solution for Quark. The package includes the memory reference code PEIM, SMM infrastructure and sample drivers, and other silicon initialization code. Historically much of this code has been withheld from open source for Intel hardware.

A new processor family, with all of the edk2 source code needed to build the platform, available in open source. Things don't get more exciting than this. Here's a picture of the one I purchased from Amazon http://www.amazon.com/Intel-Galileo1-DDR2-1066-Motherboard/dp/B00GGM6KJQ/



So has the journey ended with this decade-later milestone? In my opinion, 'no.' I view the evolution of technology in spatial terms, as shown in the figure below. 



The Left-Right, or East-West, headings include scaling and broad adoption of the standards and source technology, which has been happening over the last decade, as with the broad adoption of UEFI in the release of Microsoft Windows(R) 8 and its requirements around UEFI for booting. East-West also includes adoption of UEFI and PI by new device segments, such as the Quark device mentioned above.

In my mind, north includes accretion of additional functionality on top of UEFI, such as more networking capabilities like the HTTP boot mentioned in http://tools.ietf.org/rfc/rfc5970.txt, richer usages, etc. Moving south involves leveraging the PI infrastructure for new distribution mechanisms of code, like FSP, and hardware/firmware co-design http://www.google.com/patents/US8522066, etc.

I try to be consistent in my view of life, so hopefully this 10-year retrospective exemplifies my sentiment of "I would argue that the landscape for invention, innovation, and creation has never been more fertile" from http://vzimmer.blogspot.com/2013/03/a-technical-career-path.html, too.

Another ironic aspect of taking the long, retrospective view of the project involves the response of others. During the earlier portion of the decade I spent quite a bit of time engaging with both internal and external teams on adoption. Prior to the broad industry embrace of the technology, many teams were wary of engaging. This reminds me of the John F. Kennedy quotation "Victory has a thousand fathers, but defeat is an orphan." In the early 2000's, I sometimes felt we were orphans, whereas today our work has many fathers.

As noted in the logo iconography above, Intel as a company has been around longer than I've been alive, so I am honored to have been given the opportunity to play a small role in the broad, exciting endeavor of riding Moore's Law and adding computation capabilities to the daily lives of so many around the world. 

And in the long tradition of Intel and open, programmable platforms, working with the Galileo board above reminds me of my first Intel platform, an Intel SDK-85 https://archive.org/details/bitsavers_intel80859nualJul77_5544585 (and with that familiar drop-e on the cover) that my father gave me back in those early days in Houston. This allowed me to experiment with coding and hardware interfacing, albeit in 8-bit assembly, viz.,



The ones on the web seem to have aged better than mine http://oldcomputers.net/intel-mcs-85.html

With that I should close this, my inaugural blog for 2014, tonight. Barring any life changes, the next blog should be "Anniversary Day.Next.Next," consistent w/ my February 24 postings of '12 and '13. 

Cheers



Tuesday, December 24, 2013

Technical Communications

My last post talked about invention and innovation http://vzimmer.blogspot.com/2013/12/invention-and-innovation.html, and in that post I mentioned an article from the Harvard Business Review. Touching on such business aspects in a blog with the theme of 'musing on technology' may appear to be a category error, but for me, business underlies many of our technical endeavors.

Specifically, I believe that technology is really about people, and people interact with the macro-economy via business, so they are all related in my eyes. A strict logician could argue via parody of my assertion with the following: 'technology is made of atoms, and atoms interact via the laws of quantum mechanics, so where are your QM postings?' My reply to such a syllogism would be 'the post is upcoming.'

So enough preamble; let me explore what I mean by technical communications in this post. A few recent events inspired this posting. The first was a discussion at the Sea-Tac airport http://www.portseattle.org/sea-tac/Pages/default.aspx with a former Intel manager. We were both waiting to fly to San Francisco, and our flight was delayed by a couple of hours. Fog in San Francisco, who would have thought it possible? This manager now works for the hardware division of Amazon, namely Lab 126 http://www.lab126.com, or "A to Z." I asked him about the admonition by Amazon's CEO Jeff Bezos against misusing PowerPoint that I had read about on the web http://conorneill.com/2012/11/30/amazon-staff-meetings-no-powerpoint. He told me that the practice of writing white papers ahead of the meeting and using PowerPoint only as 'speaker notes' finds practice throughout all of Amazon's groups. The mantra of "think complex, speak simple" summarizes the intent of the behavior.

The ex-Intel manager brought the Amazon culture close to home for me when he said, "Don't you realize how many presentations get stuck on discussing a single bullet for the whole meeting? At Amazon, we can deflect such delays by referencing the white paper, for example." In my professional career, where white papers are not fully embraced, I try to avoid the single-bullet rat hole with 'lap rules.' The terms 'rat hole' and 'rat-holing' are common in the tech industry and described well in Johnson's book Absolute Honesty http://www.amazon.com/Absolute-Honesty-Building-Corporate-Integrity/dp/0814407811. To avoid a rat hole via lap rules I advocate the following: 'Lap #1' allows the speaker to present all of his material without interruption. 'Lap #2' is a re-review of the same slide deck, which allows for questions. Regrettably, many senior people cannot restrain themselves and will camp on a bullet, or even the title, during 'Lap #1.'

When it comes to rat holes, the "Highest Paid Person's Opinion" (HIPPO) http://www.forbes.com/sites/derosetichy/2013/04/15/what-happens-when-a-hippo-runs-your-company/ is especially prone to this behavior of jumping to conclusions prior to hearing a full review. I once asked a HIPPO after a meeting if he/she really thought that being the most senior person justified enforcing an opinion without having all of the data, and the reply was "of course, or else the company wouldn't pay me this much." An interesting observation, and maybe sour grapes on my sub-HIPPO status, but I encourage such parties to temper that alacrity with the possibility of succumbing to the logical fallacy of confusing correlation ("I get paid a lot, so I am right") with causality ("I reviewed the data, and with my experience I assert that I am right"). To combat the attraction of correlation-based reasoning I advocate a bit more Socratic questioning in the venues where seniority provides access.

From my last posting I talked about the 'exit champion,' or the pejorative characterization of such parties as "Dr. No" or the "No-Bots", so a cocktail of a 'HIPPO plus No-Bot' may engage the most vigorously in the rat-holing. Or even worse, the trifecta of 'HIPPO + No-Bot + Architecture Astronaut http://www.joelonsoftware.com/articles/fog0000000018.html.'

Maybe some of this crazed behavior within companies can be explained by the theory of canine-stacking that a colleague recently described to me. As Mike Rothman noted:

"I suppose the theory is simple:
     Top Dog
     Middle Dog
     Lower Dog
     Lowest Dog
Lowest dog works like crazy, but Lower dog adds a little something and communicates up what he did (inclusive of lowest dog's work) - and each layer above adds a touch of something and fronts for all the work below....You just hope that each layer had added something useful other than passing the word around...."

On a personal note, I appreciate the Amazon-esque white-paper sentiment and try to use the written word as a way to scale and convey complex thoughts, strategies, and technical designs. And on the topic of scaling my impact, I hearken back to a quote from a manager a decade ago who told me "you should produce the output of as many people as your grade level." Taken literally this can become the IT equivalent of John Henry http://en.wikipedia.org/wiki/John_Henry_(folklore), I fear. It's tough to write specifications and code at the level of ten people, but working through others, collaboration, training, writing things down, etc., are some of the tools by which I can scale my efforts.

The next event that started me down the path of thinking about communications was a presentation that Yuriy Bulygin, Intel's Chief Threat Researcher, John Loucaides, and I gave at Cisco SecCon http://www.cisco.com/web/about/security/cspo/csdl/seccon-overview.html. The narrative proceeded from attacks (or offense), then into technology countermeasures (or defense), and finally answered the question of what to do in case of a vulnerability (or response).

One comment from an attendee was "Quite impressive. You told an epic tale in less than an hour." To me this was a reminder that the tens of slides and the constrained time frame for explanation plus demonstration exceeded the medium of PowerPoint. This class of erudition needs a complementary discourse mechanism, such as the written word.

Another recent event, though, that reminded me I may not be following my own advice was my presentation on "Platform Firmware Security" at Seattle BSides http://www.securitybsides.com/w/page/57847942/BsidesSeattle in Redmond, WA a couple of Saturdays past. Specifically, I cribbed a long PowerPoint deck https://docs.google.com/file/d/0BxgB4JDywk3MSncxUHlIN0tYdms/edit but didn't produce an updated white paper for offline reading. I was told that as RSA becomes more professional, Black Hat is the new RSA, Defcon is becoming the new Black Hat, and the local BSides are becoming the new Defcon.




I posted these slides during one of the subsequent talks, based upon the exhortations on Twitter:
"we want the slides!" "we want the slides!" chants are heard in the twitterverse ;)

After my talk, I sat in on a talk by Jack Daniel https://twitter.com/jack_daniel on presentation techniques, including the use of more graphics than text and engaging the audience. Jack was in attendance during my talk in the morning, but luckily I preceded his presentation, else I would have been especially shame-faced at having delivered a verbose deck that didn't follow his guidance.

I recently realized the impact of a white paper when I saw that my 2009 IBM and Intel white paper http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.161.7603 has been included into university curricula http://www-inst.eecs.berkeley.edu/~cs194-24/sp13/index_handouts.html. That shows scaling of the written word in one instance.

On a side note, I couldn't resist Tweeting a quote from Jack Daniel #bsidesseattle on Saturday.
"I am from Texas. It is an excuse for aberrant behavior for life."
Having grown up in Houston, TX, I have to agree w/ Jack on that point.

A final event that helped inform this posting was reading Joan Magretta's book What Management Is: How It Works and Why It's Everyone's Business http://www.amazon.com/dp/B00AQKIOFC/ref=r_ea_s_t during the round trip on Amtrak to Leavenworth last weekend. This was more of a meta-business book that provides an alternate way to think about modern management and leadership, versus today's books of recipes, sound bites, and aphorisms. I especially liked the sentiment that today's manager is possibly the last domain of the generalist, since the modern worker is a super-specialist who necessarily 'knows more' than his manager. This leaves the manager the role of inspiring, leading, coordinating, and synthesizing the efforts of this global workforce.

Pretty interesting read.

Well, I should summarize this post by reminding myself that the written word provides scale. Conference talks are great for networking and gaining new insights, but for purposes of information dissemination they have understandable limits. In 2014 I need to convert more of my slides and talks into more deliberate writing activities. When I revisit this post in December 2014, we'll see what progress I have made on this sentiment.

With that I will close today and wish everyone a Merry Christmas.


Sunday, December 15, 2013

Invention and Innovation

I recently returned from Shanghai. I was greeted by the following when I boarded the plane from LAX to Shanghai, though.
There was little hyperbole in that headline, I'm afraid. Since my first trip to Shanghai to work with our team members at the Cao He Jing site in 2001, then Shanghai Mart in the early 2000's, and finally the Zizhu campus, the pollution and traffic have increased.

But what has also increased is the team's experience and capabilities. The energy of the Intel team always recharges me, and this trip was no exception.

These trips include working sessions, open forums, talks, and other instances of collaboration. One talk I always offer the site is patent training from an engineer's perspective. I want to make sure that the audience doesn't believe that I can provide legal advice; what I can do is provide guidance on the process within the company.

Since the process by which engineers capture an idea and propose it for inclusion in a potential patent filing varies per company, what I'll discuss below is more of a high-level view of how I think about the process, and innovation in general.

For me I see a serial relationship between Invention and Innovation, as follows:

INVENTION
Company feeds $’s into Engineer + Engineer’s insight in response to a Problem Statement
Results == Patents
INNOVATION
Patents + Engineering + Marketing leads to product development
Company sells products
Results == earnings of $$$$’s (or RMB, YEN, Euro...)

-- Repeat this loop of INVENTION -> INNOVATION




So this frames the action of Invention versus the broader goals of the company, such as creating products which delight and fulfill end-customer needs.  The process begins with invention, with a small 'i'. Therein an engineer is faced with a problem the market has today, or may have in the future. The engineer devises a solution, and if it is 'novel,' or distinct from prior practice, it may be a candidate for formally pursuing a patent, or Invention with a capital 'I.' Of course most of our engineering problem solving is invention, but it is important to note the creation of patents because of the present nature of cross-licensing, preserving the company's investment in its Research and Development (R&D) spend, etc. This means evolving the small 'i' into the big 'I,' namely transforming invention into Invention and the associated patent applications for the latter.

With invention in hand, whether small 'i' or capital 'I', the long path of Innovation begins: working with a team within, across, and possibly beyond the company to deliver product. If the engineering concept was truly born of a problem statement held by the market, and the team can execute on reducing the design to a shipping product, and the timing of the market is right, and... (including many other factors, such as the phase of the moon), resultant revenue should be born of the activity. And as each company reinvests some revenue into the R&D from whence invention is incubated, the virtuous cycle continues.

And note the emphasis on 'problem statement.' For me a good problem statement that is relevant to the business can be the source of unbounded invention, and hopefully, resultant innovation.

So invention is the idea pump that can inform product design, but Innovation is the mapping of invention into the creation of products that you ship to customers and for which you get paid.

This is how I think of invention in the context of business-driven-innovation ("BDI," oh no, not another acronym...). It is perhaps a bit narrow-minded and parochial, but just as I cannot see myself studying theoretical physics or maths, I feel most comfortable operating in a space where the business imperatives are manifest. Sort of like how, after the Cultural Revolution in China, I heard that all maths were "Maoist Mathematics," intended to support manufacturing and civil engineering, not pure maths. Maybe that explains why the alma mater of many of my Shanghai colleagues, Shanghai Jiao Tong University, literally translates to "Traffic School" (or at least that's what they've told me)? To my mother's chagrin, this is also the reason that my CV doesn't include the PhD moniker, too.

The cycle between Invention and Innovation at a large company is often tempered by the role of "Exit Champions," as well described in the Harvard Business Review article http://hbr.org/2003/02/why-bad-projects-are-so-hard-to-kill/. It continually proves a delicate balancing act to ensure that Invention and Innovation are congruent with the business exigencies, especially given the finite resources for R&D budget allocation. But with our ever-changing market and internet-time pressure, I always feel the need to co-equally role model the 'entrance champion,' or the party who delivers Invention+Innovation, as much as to provide the guard rails of the 'exit champion.'

Given my love of business-driven-innovation, maybe I'll miss the next Kuhn-style paradigm shift and scientific revolution http://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions, but this is where I feel I fit into the flow of this dynamic.  Enough said for Sunday blogging. And thanks for reading if you made it this far.

Sunday, December 8, 2013

Better living through tools

In an earlier post http://vzimmer.blogspot.com/2013/09/end-of-summer-13.html I spoke about architecture versus implementation, and the process of successively refining an architecture to some implementation artifact. I didn't elaborate upon some of the techniques to demonstrate correspondence between the design goals and the implementation behavior. In the industry this can be naively thought of as just a simple matter of testing, but I believe it goes further than that, namely the broader concern of representing architectural intent in a product.

With respect to software development and testing, though, there are many different approaches. Recent efforts like Test Driven Development (TDD) have shown promise in practice, and I enjoyed Grenning's Test Driven Development for Embedded C on the same. Similarly, books like James Whittaker's How Google Tests Software and its associated blog http://googletesting.blogspot.com/2011/01/how-google-tests-software.html provide a very pragmatic approach to the problem, namely all developers write tests, and a centralized test organization takes on both a consultative role and an automation/infrastructure role. In our world of UEFI and EDK2 we have the Self-Certification Tests (SCTs) http://www.uefi.org/sites/default/files/resources/UPFS11_P3_UEFI_SCT_HP_Intel.pdf.

Now let's look at the problem from a broader level, namely expressing architectural intent. To me this is nowhere more important than in the world of building trustworthy systems, especially the "Defender's Dilemma" that Jeremiah reminds us of in slide 7 of http://www.uefi.org/sites/default/files/resources/UEFI_Summerfest_2013_-_Microsoft_Hardware_Security_Test_Interface.pdf. Namely, the attacker only has to discover one flaw, whereas the defender has to leave no gaps. And it is the 'gaps' that are important in this post, since flaws can span from the architecture down into the implementation.

To that end of mapping architectural intent directly to code, I have continually been intrigued by the work of Gernot Heiser http://www.cse.unsw.edu.au/~gernot/ at NICTA on Trustworthy Systems http://www.ssrg.nicta.com.au/projects/TS/. The trophy piece of that effort is seL4, a formally verified microkernel with machine-checked correspondence between the C code, a machine model, a Haskell specification, and a theorem prover. This is undoubtedly a BHAG http://en.wikipedia.org/wiki/Big_Hairy_Audacious_Goal to scale more generally, but it does serve as a beacon to show what can be done given sufficient focus and incentives.

Gernot's effort is not alone, of course. There is the verification of the CMU SecVisor http://www.cs.cmu.edu/~jfrankli/tr/franklin_secvisor_verification.pdf, UTexas Hypervisor verification http://arxiv.org/pdf/1110.4672.pdf, and application of formal methods to industrial problems like http://swtv.kaist.ac.kr/courses/cs350-08/ase08_submitted.pdf.

Beyond seL4, though, there are other efforts that NICTA incubates under the banner of Trustworthy Systems, as best described in http://www.nicta.com.au/pub?doc=4163. One of the authors of the latter paper is Leonid Ryzhyk, and in section 4.2 the paper references work on the long-term goal of device driver synthesis, or correct-by-construction for this class of system software.

And it is the holy grail of 'correct-by-construction' for systems code that I want to mention next in a little more detail. Intel recently published a paper, Device Driver Synthesis http://noggin.intel.com/content/device-driver-synthesis, in the Intel Technology Journal, Volume 17, Issue 2, December 2013 (the issue titled Simics Unleashed - Applications of Virtual Platforms http://www.intel.com/content/www/us/en/research/intel-technology-journal/2013-volume-17-issue-02-intel-technology-journal.html), that goes into some detail on a real instance of code synthesis.

Regarding driver synthesis, an overview of the effort may best be described in a picture. The idea entails taking a model of the hardware to be managed by a driver plus a formal interface of how the driver interacts with the system software environment, and then synthesizing the reactive code for the driver. The ideal would be automation that simply emits code, but given the human aspects of software development, such as maintenance, review, and evolution, the process can act as an interactive session that has the user add code as part of synthesis and ensures those additions are correct. The effort also focuses on making the resultant code something that has seemly names and meets other psychological constraints in working with code, such as cyclomatic complexity http://en.wikipedia.org/wiki/Cyclomatic_complexity.
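To make that last constraint concrete, here is a rough sketch of cyclomatic complexity as "decision points plus one." This is my own illustration, not the metric the synthesis tool uses; real tools count edges and nodes in the control-flow graph, and counting branch constructs in the AST is a common approximation.

```python
# Approximate McCabe cyclomatic complexity of a Python snippet.
# My simplification for illustration: decisions + 1, where a decision
# is any branching AST node (if/for/while/except/boolean operator).
import ast

def cyclomatic_complexity(source: str) -> int:
    """Return an approximate McCabe complexity for a Python source string."""
    decisions = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp))
        for node in ast.walk(ast.parse(source))
    )
    return decisions + 1

straight_line = "x = 1\ny = x + 2\n"
branchy = "if a:\n    pass\nelif b:\n    pass\n"   # elif parses as a nested If
print(cyclomatic_complexity(straight_line))  # -> 1
print(cyclomatic_complexity(branchy))        # -> 3
```

A synthesis tool that caps this number per generated function keeps the emitted driver code reviewable by humans, which is the psychological point being made above.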

Within Intel, I had the pleasure of engaging with Mona Vij, who has led the team in Intel Labs evolving this technology since the summer of 2012. She and the Intel and external university researchers have proven valuable, innovative parties with whom to engage. You can see the elements of our collaboration via the UEFI aspects of the effort in the paper. I believe realizing a vision of this type of work-flow would complement other efforts for the UEFI community, such as http://sourceforge.net/apps/mediawiki/tianocore/index.php?title=UEFI_Driver_Wizard.

For additional details, the Termite page http://www.ertos.nicta.com.au/research/drivers/synthesis/home.pml calls out the collaboration. More details on the engagement with Intel and the university can be found at  

From the perspective of evolving my skills as a technologist, the engagement offered an interesting view into another approach to system software, namely "better living through tools." It also opened my eyes to look beyond my old friend C to the world of functional languages like Haskell, DSL creation, the use of Binary Decision Diagrams (BDDs), SAT solvers, and hardware modeling languages like DML and SystemC, too.

The industrial advantages of functional languages, albeit Lisp rather than Haskell, find an interesting discussion in the writings of Paul Graham http://www.paulgraham.com/avg.html. I recommend reading his essays, including the book version, Hackers and Painters http://www.amazon.com/Hackers-Painters-Big-Ideas-Computer/dp/1449389554.

The above paper will give you a feel for the effort, but if you are hungry for more details on the underlying mechanics, I recommend visiting http://www.ssrg.nicta.com.au/projects/TS/drivers/synthesis/, too.

So again, these are my thoughts and not a plan-of-record of my employer, as my obligatory blog bio reminds people. But what I did want to do with this post is engage system software engineers in a conversation to think differently about how we write specifications, the process by which we refine these specifications to code, how we ensure that the code matches the specifications, and finally, how we evolve code + spec over the life of our technologies.


PS
September 2014 update.  Termite is now open source https://github.com/termite2 

Saturday, October 19, 2013

Configuring an IPV6 network boot

Earlier blogs have described the UEFI stack and network booting. This entry will talk about configuration of the boot environment.

Specifically, how do you configure a server to provide a netboot6-based image?  SUSE has written a helpful document on configuring a Linux server to support this usage at http://www.novell.com/docrep/2012/12/sles_11_sp2_for_uefi_client_best_practices_white_paper.pdf.

Recall that Netboot6 is a combination of the wire protocol defined in RFC 5970 http://tools.ietf.org/html/rfc5970 and chapter 21.3.1 of the Unified Extensible Firmware Interface 2.4 specification http://www.uefi.org. The UEFI client machine uses DHCP as a control channel to expose its machine type and other parameters as it attempts to initiate a network boot. This is referred to as 'client-initiated' network boot, as opposed to 'server-initiated.' An example of the latter is Intel(R) Active Management Technology (AMT) Integrated Drive Electronics Redirection (IDE-R), which exposes the local hardware network disk interface to the management console so that the management control can provision a disk image http://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/default.htm?turl=WordDocuments%2Fsetsoliderandotherbootoptions.htm. An implementation of Netboot6 that demonstrates a client-initiated download can be found at https://svn.code.sf.net/p/edk2/code/trunk/edk2/NetworkPkg/UefiPxeBcDxe/.

For client-initiated network bootstrap art like Netboot6, what are the details of the parameters?  The most important parameter entails the architecture type of the .efi image that the boot server needs to provide. The client machine that has initiated the network boot needs to expose its execution mode to the boot server so that the appropriate boot image can be returned. Recall that UEFI supports EBC, Itanium, ARM 32-bit, ARM 64-bit, Intel 32-bit, and Intel 64-bit. This list may grow over time with corresponding updates to the UEFI Specification's machine bindings.  Beyond a UEFI-style boot, some of my co-authors on RFC 5970 worked for IBM and wanted to network boot a system software image 1) over HTTP and 2) not based upon UEFI technology. As such, the parameters at http://www.iana.org/assignments/dhcpv6-parameters/dhcpv6-parameters.xml cover both UEFI and non-UEFI, with the latter class including PC/AT BIOS, PowerPC Open Firmware, and PowerPC ePAPR.

So RFC 5970 can be used in scenarios beyond Netboot6's TFTP-based download. This is enabled by the architecture type field extensibility, and also by the fact that the boot image is described by a URI, not a simple name with an implied download wire application protocol of TFTP as found in PXE2.1 IPV4 usages.
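The URI form is what buys this flexibility: scheme, server address, and file path all travel in one string. As a small sketch, the hypothetical helper below (my own illustration, not part of any boot server or UEFI codebase) pulls those pieces apart using standard URL parsing, and handles a TFTP-based Netboot6 URL and an HTTP one identically:

```python
# Split an RFC 5970-style bootfile-url into (scheme, server, path).
# The bracketed-IPv6-address syntax is standard URI notation, which
# urllib.parse understands for any scheme.
from urllib.parse import urlparse

def parse_bootfile_url(url: str):
    """Return (download scheme, server address, file path) for a bootfile-url."""
    parts = urlparse(url)
    if not parts.scheme or not parts.hostname:
        raise ValueError("bootfile-url must carry a scheme and a server address")
    # .hostname strips the IPv6 brackets for us.
    return parts.scheme, parts.hostname, parts.path

print(parse_bootfile_url("tftp://[fc00:ba49:1625:fb0f::137]/bootx64.efi"))
# -> ('tftp', 'fc00:ba49:1625:fb0f::137', '/bootx64.efi')
print(parse_bootfile_url("http://[fc00:ba49:1625:fb0f::137]/linux-powerpc-kernel.bin")[0])
# -> 'http'
```

Contrast this with PXE 2.1 over IPv4, where only a file name is conveyed and TFTP is implied; here the scheme field selects the wire application protocol explicitly.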

A way to explain this further is to examine our Linux configuration use case. In Linux, the DHCP server actions are performed by dhcpd, the "Dynamic Host Configuration Protocol daemon." The daemon is parameterized by the file dhcpd.conf.

Within dhcpd.conf we enable Netboot6 by way of the following lines:

option dhcp6.client-arch-type code 61 = array of unsigned integer 16;

if option dhcp6.client-arch-type = 00:07 {
  option dhcp6.bootfile-url "tftp://[fc00:ba49:1625:fb0f::137]/bootx64.efi";
} else {
  option dhcp6.bootfile-url "tftp://[fc00:ba49:1625:fb0f::137]/bootia32.efi";
}

The notable aspects are the 'arch type' field and the 'tftp' term. The bootx64.efi or bootia32.efi program, also known as the Network Boot Program (NBP), when executed on the local client (hopefully with UEFI Secure Boot logic applied prior to passing control into the image), can use any of the UEFI networking APIs in the protocols defined in the UEFI Specification to download further .efi images, data files, or the operating system kernel. The NBP code can also use the device path protocol on its loaded image protocol to find the network address of the boot server from which the NBP was loaded.

As mentioned earlier, this technology isn't limited to a UEFI-style boot, though. A Linux PowerPC Open Firmware boot could be done with the same dhcpd.conf by adding

if option dhcp6.client-arch-type = 00:0c {
  option dhcp6.bootfile-url "http://[fc00:ba49:1625:fb0f::137]/linux-powerpc-kernel.bin";
}

to enable booting a PowerPC based native binary of Linux from a web server.
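The comparisons against 00:07 and 00:0c above reflect the option's wire format: per the "array of unsigned integer 16" declaration, the client sends one or more 16-bit architecture codes in network byte order. A small Python sketch (my own illustration, not dhcpd code) of how a server-side decoder would unpack that payload:

```python
# Decode the DHCPv6 client-arch-type option (option 61) payload.
# The payload is an array of big-endian unsigned 16-bit integers;
# a client may advertise several architecture types at once.
import struct

def decode_arch_types(option_data: bytes):
    """Return the list of architecture type codes carried in option 61."""
    if len(option_data) % 2 != 0:
        raise ValueError("option 61 payload must be a whole number of uint16s")
    count = len(option_data) // 2
    return list(struct.unpack(f"!{count}H", option_data))

# 00:07 is the x64 UEFI client from the dhcpd.conf example above;
# 00:0c is the PowerPC Open Firmware case.
print(decode_arch_types(bytes([0x00, 0x07])))              # -> [7]
print(decode_arch_types(bytes([0x00, 0x07, 0x00, 0x0c])))  # -> [7, 12]
```

A boot server would walk this list and return the first bootfile-url whose image matches an architecture it can serve, which is exactly what the if/else chains in the dhcpd.conf snippets express.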

If you want to take advantage of the exciting world of network boot and have a new architecture type, let me know since I'm the expert reviewer who provides the IETF with additional types, too.

Processor Architecture Types

Registration Procedure(s): Expert Review
Expert(s): Vincent Zimmer
Reference: [RFC5970]



That's all for today. My Saturday blogging time budget is up. Back to work.