A Time to Patch

A few months back, while researching a Microsoft patch from way back in 2003, I began to wonder whether anyone had ever conducted a longitudinal study of Redmond's patch process to see whether the company was indeed getting more nimble at fixing security problems.

For many years, Microsoft has been criticized for taking too long to issue patches, especially when compared with patch releases for flaws found in operating systems or software applications maintained by the open source community, such as Linux or Mozilla's Firefox browser. But I wanted to find out for myself just how long Microsoft takes on average to issue fixes for known software flaws.

Finding no such comprehensive research, Security Fix set about digging through the publicly available data for each patch Microsoft issued over the past three years that earned a "critical" rating. Microsoft considers a patch "critical" if it fixes a security hole that attackers could use to break into and take control of vulnerable Windows computers.

For each patch, Security Fix looked at the date Microsoft Corp. was notified about a problem and then how long it took the company to issue a fix for said problem. In most cases, information about who discovered the vulnerability and when they reported it to Microsoft or disclosed it in public was readily available through various citations by Mitre, which maintains much of that data on the Common Vulnerabilities and Exposures (CVE) list.

In some cases, however, that submission or disclosure date was not publicly available, and required Security Fix to contact the individual discoverer and get the dates directly from them. In about a dozen cases, the discoverer of a vulnerability did not respond to information requests or the flaw appeared to have been found internally at Redmond, and in those instances Microsoft filled in the blanks.

Here's what we found: Over the past three years, Microsoft has actually taken longer to issue critical fixes for flaws reported privately -- that is, when researchers withheld their findings until after the company issued a patch. In 2003, Microsoft took an average of three months to issue patches for problems reported to it. In 2004, that time frame shot up to 134.5 days, a number that remained virtually unchanged in 2005.
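To make the arithmetic behind those figures concrete, here is a minimal sketch of the kind of calculation involved. It is illustrative only: the bulletin IDs, dates, and disclosure labels below are invented placeholders, not rows from the actual spreadsheets linked below.

```python
from datetime import date

# Invented sample records: (bulletin ID, how Microsoft learned of the flaw,
# date reported or disclosed, date the patch shipped).
patches = [
    ("MS0X-001", "private report",  date(2004, 10, 15), date(2005, 1, 11)),
    ("MS0X-002", "private report",  date(2004, 8, 20),  date(2005, 1, 11)),
    ("MS0X-039", "full disclosure", date(2005, 6, 14),  date(2005, 8, 9)),
]

# Bucket time-to-patch (in days) by release year and disclosure channel,
# mirroring the per-year averages quoted above.
buckets = {}
for bulletin, channel, reported, fixed in patches:
    buckets.setdefault((fixed.year, channel), []).append((fixed - reported).days)

for (year, channel), days in sorted(buckets.items()):
    print(year, channel, f"{sum(days) / len(days):.1f} days on average")
```

Binning by the year the patch shipped matches how the averages in this post were computed; that is also why the WMF fix discussed further down, which shipped in 2006, falls outside the 2005 bucket.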

Below are three spreadsheets detailing our findings for the past three years. The documents are downloadable either as Microsoft Excel files or regular HTML files:

Download 2005patchlist.xls
Download 2005patchlist.htm

Download 2004patchlist.xls
Download 2004patchlist.htm

Download 2003patchlist.xls
Download 2003patchlist.htm

In the first column of each spreadsheet, you should see a hyperlinked MS number that will take you to the Microsoft advisory for that patch. Next to that column is a link to the CVE entry, which contains quite a bit more information about how each flaw was discovered and by whom.

One area where the data show Microsoft fixing problems more quickly is when the company learns of security holes in its products at the same time as everyone else. Advocates of this controversial "full disclosure" approach believe companies tend to fix security flaws more quickly when their dirty laundry is aired for all the world to see, and at least on the surface that appears to be the case with Microsoft.

It is important to note, however, that in nearly all full-disclosure cases cited here, news of the vulnerability was also issued alongside computer code demonstrating how attackers might exploit the flaw.

In cases where Microsoft learned of a flaw in its products through full disclosure, the company has indeed gotten speedier. In 2003, it took an average of 71 days to release a fix for one of these flaws. In 2004 that time frame decreased to 55 days, and in 2005 shrank further to 46 days.

The company also seems to have done a better job convincing security researchers to give it time to develop a patch before going public with their vulnerability findings. In 2003, Microsoft learned of at least eight critical Windows vulnerabilities through full disclosure. Last year, this happened half as many times.

I spoke at length about this project with Stephen Toulouse, a security program manager at Microsoft. (Toulouse's team also verified the data in the Excel spreadsheets that accompany this post). Toulouse said that if Microsoft is taking longer to release patches for known vulnerabilities, it is because the company has placed a renewed focus on ensuring that each patch comprehensively fixes the problem throughout the Windows operating system and that each fix does not introduce new glitches in the process.

Toulouse said developing a patch to mend a security hole is usually the easiest part. Things get more problematic, he said, during the testing process. If testers find a bug, the patch developers incorporate the fix into all relevant portions of the patch and the testing process is reset, forcing the testers to start from scratch. What's more, Microsoft also has been more willing of late to give the discoverers of each vulnerability the opportunity to vet the patches before they are released to the public.

Toulouse pointed to one particularly problematic patch that took the company 200 days to fix: a vulnerability in a component of Windows (and many other networking applications) known as ASN.1, at the time considered the largest vulnerability in the history of the Windows operating system. In the course of testing the patch for that flaw -- reported by security researchers at Aliso Viejo, Calif.-based eEye Digital Security -- Microsoft was forced to reset the process at least twice as internal developers found additional problems that were being masked by previously unknown glitches in the fix.

"We learned that it's far better for us to find those issues than for customers to run into them," Toulouse said.

Some of those lessons Microsoft learned when it tried to fix a critical flaw in Windows that was later exploited by the infamous Blaster worm. Microsoft turned around a patch for that vulnerability, reported by researchers in the hacker group The Last Stage of Delirium, in just 38 days because, even though the company recognized the initial fix might not have eradicated the flaw, there was a great deal of concern within Microsoft "about the breadth and depth of the vulnerability."

Two days after Microsoft released the patch, researchers alerted Microsoft that the flaw was present in three other areas of the operating system that the initial fix did not address. Roughly two weeks after that, the Blaster worm would infect millions of Windows PCs worldwide. Some security experts believe the worm may have been developed with the help of the initial Microsoft patch, which could have given the worm's authors a better idea of how to exploit the flaw.

"It was a conscious decision at the time to release that patch so quickly, but we later looked back and decided we really should have conducted a more thorough review process," Toulouse said.

According to Toulouse, Blaster resulted in two key changes at Microsoft. For starters, the company instituted a more thorough patch-review process across all company product teams that had a hand in developing the original vulnerable code. Microsoft also "retasked" its Secure Windows Initiative Team to research and attack each vulnerability the way a malicious hacker might.

"That team's job is to take the vulnerability, turn it sideways and upside down and to think 'Is there any other way to exploit this?'" Toulouse said.

I shared some of this data with a few of the security researchers and organizations credited with discovering flaws in the above-mentioned advisories, and got mixed responses to Microsoft's claims.

Pete Allor, manager of the X-Force vulnerability research division at Atlanta-based Internet Security Systems, praised Microsoft for "doing a fantastic job over the past year and a half on the [quality assurance] side of patching. We're not seeing the recalls and reissues that we used to. What we're hearing in today's corporate environment is, 'Make sure you get it right the first time. We don't want to hear how a patch is broken because you didn't take the time.'"

Not everyone sees Microsoft's recent patch efforts in such glowing light. Marc Maiffret, "chief hacking officer" for the aforementioned eEye, noted that the longer a patch is in the works, the longer customers remain unprotected. Maiffret said it is not uncommon for exploits to circulate in the malicious hacker underground for vulnerabilities that well-meaning security researchers have reported but that remain unpatched.

"You'd think that by taking that much longer on patches Microsoft is being more thorough, but that's not always the case as we've seen," Maiffret said. "The truth is that unpatched Windows flaws have a value to the underground community, and it is not at all uncommon to see these things sold or traded among certain groups who use them by quietly attacking just a few key targets. So, the longer Microsoft takes to patch vulnerabilities the longer they are leaving customers exposed."

Last Thursday, Microsoft released a patch to fix a critical flaw in the way Windows renders certain image files. That update, which mended a 0day ("zero day") vulnerability for which an exploit was publicly disclosed and quickly put to use by attackers, took Microsoft just 10 days to produce, though the company was able to take some pointers from an unofficial patch released by an independent security researcher. Because the patch was issued in 2006, however, Security Fix did not include those 10 days in the 2005 time-to-patch averages.

I mention the WMF patch because earlier this week security researchers posted to the public Bugtraq software vulnerability list exploit code for at least two more security flaws in the same WMF engine Microsoft patched last week. While those flaws are (at least for now) considered less dangerous than the problem Redmond fixed, their discovery does raise questions about the Microsoft team charged with finding these problems: the vulnerabilities have apparently been present in Windows operating system code dating back to Windows 3.0. Toulouse maintains that Microsoft had already flagged those glitches before the exploit code was posted to Bugtraq, but because the company didn't see them as a big security threat, it did not hold up the WMF patch to include fixes for them.

One final note: Security Fix did not attempt to determine whether there was a correlation between the speed with which Microsoft issues patches and the quality or effectiveness of those updates. A real glutton for punishment might be able to learn just how many Windows patches were later updated with subsequent fixes -- either because the initial patch failed to fully fix the problem or introduced new troubles. I purposely did not undertake that task, in part because I figured I'd still be working on the project this time next year if I did.
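For anyone tempted to try anyway, a crude starting point would be the version numbers Microsoft stamps on each security bulletin; a bulletin that is revised or re-released after its initial publication gets its version bumped. Here is a minimal sketch, again with invented data rather than actual bulletin histories:

```python
from collections import Counter

# Invented revision history: (bulletin ID, bulletin version). A bulletin
# that shows more than one version was revised after its initial release.
revisions = [
    ("MS0X-011", "1.0"), ("MS0X-011", "2.0"),
    ("MS0X-028", "1.0"), ("MS0X-028", "2.0"), ("MS0X-028", "3.0"),
    ("MS0X-039", "1.0"),
]

versions_seen = Counter(bulletin for bulletin, _ in revisions)
reissued = sorted(b for b, n in versions_seen.items() if n > 1)

print(f"{len(reissued)} of {len(versions_seen)} bulletins were revised after "
      f"initial release: {', '.join(reissued)}")
```

A version bump alone would not prove the original patch was broken -- bulletins are also revised for documentation changes -- which is part of why the question is harder than it looks.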

I'd like to thank everyone who helped me assemble the data in the above spreadsheets -- including (but certainly not limited to): Cesar Cerrudo, "Fozzy," Joao Gouveia, Maolin Gu, Kostya Kortchinsky, Marc Maiffret, Brett Moore, and Peter Winter-Smith. Please forgive me if I have forgotten to name anyone; if I did, just send me an e-mail and I'll update this post.

By Brian Krebs  |  January 11, 2006; 6:30 AM ET
 

Comments

"...Microsoft had already flagged those glitches prior to the exploit code posting on Bugtraq, but because the company didn't see them as a big security threat, it did not hold up the WMF patch to include fixes for them."

Consider this scenario...MS gets notification of an issue, and then says, "well, we aren't receiving any reports of an exploit actually being in the wild, so we'll just turn down the priority on this one."

So, is it the case that there are no exploits, or is it more likely the case that (a) there are exploits actively being used, and (b) the vast majority of Windows admins are wiping and reinstalling the boxes at any sign of trouble?

How does a company like MS expect anyone using their product to report something like this? The vast majority of tools used to respond to and analyze incidents come from outside of MS. O/S-specific training in how to recognize and respond to incidents is limited.

While I believe that the business decision for allotting already limited resources is sound, in reality it's not working. One can't expect someone to detect an incident when they have neither the tools nor the training to do so.

H. Carvey
"Windows Forensics and Incident Recovery"
http://www.windows-ir.com
http://windowsir.blogspot.com

Posted by: H. Carvey | January 11, 2006 9:39 AM | Report abuse

So what the article says is -
1. Over the past 2 years Microsoft has improved patch reliability & testing & is doing deeper looks at the security flaws that are reported.

2. When a disclosure is made public, the risk is immediately raised and Microsoft responds by cutting testing, reliability, and deeper looks and pushing a patch out sooner.

3. The difference in these two scenarios is about 90 days (some 15 days of this difference could also be influenced by the fact that during your research they went from weekly releases to monthly releases - and patches like WMF are released as needed)

1 & 2 seem like the best of both worlds. Are you unsatisfied with 3?

Posted by: slightly confused | January 11, 2006 11:57 AM | Report abuse

To Slightly Confused -- I didn't set out to give Microsoft a report card on their patching process. The post stated some trends that became apparent when I looked at the data, which by the way I hadn't seen compiled by anyone in one place like that before. I tried to be fair in the post and let the reader draw his/her conclusion about whether these are positive developments. Personally, I think they are, but of course Microsoft could always do better and I think they know that as well.

Posted by: Bk | January 11, 2006 12:02 PM | Report abuse

Hi Brian, being gluttons for punishment, Steve Beattie, Crispin Cowan and I asked the patch recall question in our paper "Timing the Application of Security Patches for Optimal Uptime" (http://www.homeport.org/~adam/time-to-patch-usenix-lisa02.pdf)

Posted by: Adam | January 11, 2006 1:29 PM | Report abuse

I think a major missing piece of information is how many users were infected during the timeframes.

It would be far more interesting to see if the number of compromises went up after "full disclosure".

Obviously, ignorance of a problem won't limit the compromises, only ensure that users aren't looking for them. "Full disclosure" benefits users by alerting them to issues and forces companies to act.

Crackers don't benefit from "full disclosure" as that community is well informed and most likely already aware of the exploit.

Posted by: Eric Hanson | January 11, 2006 1:50 PM | Report abuse

I think the only way to fix windows, is to release a patch that completely removes it from your computer.

Posted by: John | January 11, 2006 2:40 PM | Report abuse

Man. It must be a drag to constantly fret with viruses, patches, adware, spyware, crashes. Microsoft users must get an awful lot of pleasure out of their platform choice to compensate what sounds like a constant drone of annoyance. Kudos. I would have jumped ship long ago.

Oh, wait. I did. In 1997. Haven't looked back since. And I hardly remember what computer viruses are like.

Posted by: Mac Happy | January 11, 2006 3:15 PM | Report abuse

I like big butts.

Posted by: J Arthur Rank | January 11, 2006 3:32 PM | Report abuse

Microsoft: Where would you like to crash today?

Posted by: Arthur C. Korn | January 11, 2006 3:44 PM | Report abuse

The key to secure systems is to partition the operating system from applications, and applications from each other. MSFT's strategy has been to blur the lines between OS and applications (making it difficult to run Windows applications under other OSes) and to have applications share resources, allowing them to fully integrate and invoke each other. Until MSFT abandons this strategy, it will be unable to produce secure and stable systems, and the patch cycle will continue to spiral and grow. Unfortunately for MSFT, it would then have to build software that could compete on merit, rather than monopoly, and historically MSFT never had the ability to do so.

Posted by: N. Meyer | January 11, 2006 4:27 PM | Report abuse

"... We're not seeing the recalls and reissues that we used to. ..."

I was feeling bad about my 52nd Birthday (in a few weeks).

No More.

I wrote my first Fortran program when I was 15. Obviously these Security Dudes are *really* old, because I never remember a Microsoft recall. For details one might refer to the "Blue Screen of Death," which I read not so much due to my age but to frequency of occurrence -- it's all the users' fault anyway!!!

Posted by: GTexas | January 11, 2006 4:32 PM | Report abuse

I suspect the main problem with the whole process is that at places like Microsoft, bug fixing, i.e. maintenance work, is not considered sexy work and is usually assigned to the less talented and/or less experienced people. And that's a mistake in itself. Software code is more often than not very convoluted, and fixing one piece of code usually affects and breaks other pieces of code, especially if you are not the original developer and are unfamiliar with the code. It would be interesting to ask Microsoft how many of, say, its top 100 programmers work in the bug-fixing group. Also ask what the average/median years of experience of the people working in that group are. On the other hand, the opposite is probably true of the hacking/code-breaking group. They are usually more motivated enthusiasts.

It may come as a surprise to outsiders that software development, despite its hi-tech glamour, is a very ad hoc and sloppy process throughout and everywhere.

Posted by: Tom | January 11, 2006 4:41 PM | Report abuse

MICROSOFT IS STUPID!

Posted by: ajldfaldfaljfd` | January 11, 2006 5:22 PM | Report abuse

I find it very interesting that Microsoft will not allow you to download a fix for a virus or worm or other assault on their operating system(s) unless you use Internet Explorer 5.0 or above. What does repairing an operating system have to do with a requirement that you use I.E.? Unless, of course, it's pure monopolistic greed.

Posted by: Bill Myers | January 11, 2006 5:47 PM | Report abuse

Keep in mind that, per the comment below, 25% of the people in Europe will NOT be able to download a patch for WMF from Microsoft because they use Mozilla Firefox instead of I.E. 5.0. And 10% of the people in the U.S.A. won't be able to download the patch. But, to hell with them if they don't use Microsoft web software. Right, Bill?

Posted by: Bill Myers | January 11, 2006 5:50 PM | Report abuse

http://www.microsoft.com/technet/security/Bulletin/MS03-008.mspx

I show this patch released in March 2003, yet your report states it was released in November 2003.

Posted by: Indy | January 11, 2006 6:34 PM | Report abuse

I agree with the comment about the OS partitioning SYSTEM and USER space like OpenVMS. Curious why a system that was created by an ex-Digital employee (Cutler), who authored VMS, didn't make NT as good. Maybe we should all go back to OpenVMS. Personally, I like the BSD-based Apple products I have been buying. Didn't renew my NortonAV after running it one year on the Mac and never found one virus or intrusion. Microsoft makes users wait for a new OS only because it takes Microsoft that long to include features and new interfaces for programmers to keep its monopoly on the desktop with applications that users really don't need.

Posted by: B. Ferjulian | January 11, 2006 6:34 PM | Report abuse

Indy -- Not sure which graf you're looking at, but the 2003 graf above indeed states that MS03-008 was released in March. Are you sure you didn't mean another patch?

Posted by: Bk | January 11, 2006 7:26 PM | Report abuse

Does Microsoft have anything to do with national or banking security? That would explain a lot...

Posted by: klone | January 11, 2006 8:07 PM | Report abuse

Here's a windows patch for you "http://www.ubuntu.com/download" make sure you install it over the windows partition

Posted by: Wayne | January 11, 2006 8:25 PM | Report abuse

Mac Happy - you might want to stop reading this & go patch. OS X just today patched 5+ remote execution flaws in a bunch of image formats (TGA, TIFF, GIF).

http://www.us-cert.gov/cas/techalerts/TA06-011A.html

Posted by: Mike B. | January 11, 2006 8:46 PM | Report abuse

Microsoft XP
X=? P=patch

Posted by: Tom K | January 11, 2006 9:02 PM | Report abuse


Think "SMP/E" !! A truly automated zap/patch software maintenance solution. By, none other than, I-B-M!

Remember, Mainframes Run The World! PCs are nice for games and mid-sized businesses, but Mainframes still do run Fortune 100 Co.s.

Go T-Rex DB2,

Rick Molera
20+ year mainframer @ 43 years young, today - 01/11/1963!

Posted by: Rick Molera | January 11, 2006 9:30 PM | Report abuse

Good call, Mike B. Although this morning my Software Update check flagged me immediately and took care of it. Thanks for looking out.

Posted by: Mac Happy | January 12, 2006 7:40 AM | Report abuse

On the subject of our election system and what has happened to it in recent years, here is a magnificent article by Cheryl Gerber that shows what we are facing:

Arlene Montemarano
Silver Spring, Maryland
===============

Imagine this: a Trojan Horse unleashes thousands of illegitimate votes and disappears without a trace; election commissioners bypass laws; computer glitches go uninvestigated and voting-system locks are easily picked; no federal oversight holds e-voting vendors accountable -- yes folks, elections can be stolen.
Since the 2000 Presidential election, problems stemming from the use of electronic voting machines have called into question the foundation of American democracy--the US voting system. At the forefront of concerns are security issues surrounding the use of Direct Recording Electronics [DREs], better known as touch screen computer voting machines, and their lack of a paper trail in the form of an auditable paper ballot. Widely reported irregularities from voting districts around the US have alarmed many and opened claims of stolen elections. Some even doubt the legitimacy of the outcome of recent US elections. A team of top computer scientists has been working diligently to resolve the many underlying design problems in the e-voting system that leave it open to cheating. Stalled by the federal government, and with doubts about e-voting continuing to spread, these scientists have instead turned to state governments and the National Science Foundation for help.

"Maryland, where I live, uses Diebold DREs, which are an ideal opportunity for cheating," said Dr. Avi Rubin, Technical Director, Information Security Institute, Johns Hopkins University. "In fact, you couldn't come up with a better opportunity for cheating. There's no ability to audit or recount, and the entire process takes place inside the computer, which is not transparent."

In May 2004, Rubin co-authored an analysis of electronic voting systems, raising concerns about lack of security, for the Institute of Electrical and Electronics Engineers (IEEE), the world's largest professional organization for technical standards. He also served in 2004 as a poll worker and election judge in Baltimore County, Maryland, where he lives. These and other experiences have only served to raise his concerns about the possibility for cheating via the use of electronic voting machines.

Efforts to Secure E-voting Stalled
Apprehension about the lack of security in Diebold's DREs and other touch screen computer voting machines spurred David Dill, a Stanford University computer science professor, to establish the Verified Voting Foundation in November 2004. According to Dill, when federal legislators tried to create a law that would address e-voting security problems, it was "blocked by a committee chairman, so we focused on state legislation."

Since then, the group has been advising states on e-voting security problems and the need, at a bare minimum, for a verified voting paper audit trail.

Earlier this year, Congressman Rush Holt (D-NJ) submitted a bill, The Voter Confidence and Increased Accessibility Act of 2005 (HR 550), to the House Administration Committee. The bill requires a paper audit trail at the federal level. But Holt has not been able to get the chairman of the committee, Congressman Robert Ney (R-OH), to schedule a hearing on it all year long.

"Congressman Ney will not schedule a hearing on the bill, so it remains in limbo," confirmed Pat Eddington, Holt's press secretary.

Even the bi-partisan federal Carter-Baker Commission Report could not nudge Ney. Set up to review the entire electoral process and co-chaired by former president Jimmy Carter and former Secretary of State James Baker, the report strongly endorses the need for a paper audit trail. (Congressman Ney's office did not return repeated calls.)

In light of the refusal of some at the federal level of government to address the issues surrounding the legitimacy of electronic voting procedures and work toward safeguarding American elections, Verified Voting turned to state governments. Since its founding, Verified Voting has helped 26 states establish state legislation that requires a paper audit trail in e-voting machines, and 14 states have requirements pending, according to verifiedvoting.org.

However, paper receipts only begin to address the complexity of electronic voting problems. The most serious concern among computer scientists studying the problems is the "Trojan Horse," computer code that can be programmed to hide inside voting software, emerge in less than one second to change an election, then destroy itself immediately afterwards, going undetected.

"Anyone who has access to the software--an insider--could easily insert a Trojan Horse into the software," said Barbara Simons, a past president of the Association for Computing Machinery and a retired IBM researcher who is co-authoring a book on the risks of computerized voting. The problem is that the Trojan Horse cannot be detected unless the software is inspected continuously--as in every second--for its presence.

No Oversight of E-voting Legitimacy
Three e-voting vendors--Diebold, Election Systems and Software (ESS), and Sequoia--dominate the market. Since e-voting is unprecedented in the history of elections and law tends to lag behind technology development, there is no federal oversight body holding these companies accountable for the security and reliability of their electronic voting systems. Their machines are supposedly tested by independent testing authorities. "But it turns out that the vendors pay the independent testing authorities and the vendors keep the results confidential," said Simons. "So you have a huge conflict of interest right there."

In addition, said Simons, "There is no requirement to make any problems public or even to reveal them to election officials because this information is proprietary for the vendors. Also, the testers are only required to test for things on a list and aren't required to test for things that aren't on the list. If you are going to subvert software, you are not going to do something that will be found by a checklist. So it's easy to insert a Trojan Horse into the software because the testing won't find it. And even if they did find it, there are no requirements to report it." Vendors are the ones who decide what goes on the list and what doesn't.

The privatization of the US voting process means the public lacks access to, or the ability to inspect, election software, as well as information about or even the names of the computer programmers who created it. Private companies and e-voting vendors flatly state that their election systems must be kept confidential as proprietary products, and therefore refuse to release their software source code for inspection by independent third parties. They claim that to do so would violate their intellectual property rights and would open the door to rivals who could steal their products. But some wonder what else vendors might be trying to hide. For instance, according to information reported on www.blackboxvoting.org, a non-partisan, nonprofit consumer protection group that is conducting fraud audits on the 2004 elections, Diebold, one of the e-voting vendors, hired ex-felons, who were convicted in Canada of computer fraud, to program election systems software.

"I don't want to malign ex-felons," said Simons, "but you want to know the names of the people who are programming the machines that will be recording and counting our votes." On the other hand, it is not uncommon for major companies to hire, as programmers, former hackers who have proven themselves to be advanced enough to hack into even the most sophisticated and safeguarded systems. In some cases, to successfully gain entry into an ultra-secured system can guarantee a hacker a job.

E-voting machine companies like Diebold are, in essence, funded to the tune of $3.9 billion by a 2002 federal law, the Help America Vote Act (HAVA), which appropriates these funds as only an initial amount to the states to purchase e-voting systems for all national elections. States are required to phase out punch-card ballots and other systems that proved problematic in the 2000 presidential election in Florida and to standardize on electronic voting systems for national elections by January 1, 2006. The problem is that this does not give the states enough time to deal with the complexity of electronic voting systems. And HAVA does not require e-voting companies to provide the kind of security in those systems that would prevent cheating.

Concerns about the many anomalies in the November 2004 election and about the gross lack of security in touch screen computer voting machines, spurred Dr. Rubin to apply for funding from the National Science Foundation to research solutions to the problems. In August 2005, the NSF's Cyber Trust program responded by awarding Rubin and his team of computer science researchers $7.5 million to investigate ways to build trustworthy e-voting systems. Rubin is now the director of the NSF project ACCURATE (A Center for Correct, Usable, Reliable, Auditable and Transparent Elections). ACCURATE involves six institutions that will collaborate to investigate how public policy and technology can safeguard e-voting nationwide.

"The NSF recognized that this is a problem of tremendous significance to the country," said Rubin. "It's a deep-rooted, scientific problem."

The funded researchers are Prof. Avi Rubin, Drs. Drew Dean and Peter Neumann of SRI International; Prof. Doug Jones of the University of Iowa; Profs. Dan Wallach and Michael Byrne of Rice University; Profs. Deirdre Mulligan and David Wagner of the University of California at Berkeley; and Profs. Dan Boneh and David Dill at Stanford University, along with numerous affiliates.

However, scientists and academics can only partly address the complexity of e-voting problems, leaving many of the battles to be fought at the state legislative level.

Bypassing the Law
One especially salient example (as recorded on www.verifiedvoting.org), shows that in response to numerous and varied voting system malfunctions that occurred in the November 2004 elections, North Carolina passed tougher requirements for election systems in its Public Confidence in Elections Act in early 2005. Under the new law, manufacturers must place in escrow the source code, the blueprint that runs the software, and "all software that is relevant to functionality, setup, configuration, and operation of the voting system" as well as a list of all computer programmers responsible for creating the software.

However, implementation of this law has been stymied by an interesting turn of events, fueling the belief of some e-voting critics that Board of Elections officials are too partisan for a job that requires objectivity, or that election commissioners' relationships with e-voting vendors are far too cozy. The events in North Carolina involve Diebold--the e-voting vendor whose bid was selected by North Carolina's Board of Elections--and the very same Board of Elections.

Diebold responded to the new requirements by asking to be exempt from them, but a North Carolina Superior Court judge refused to grant the exemption. After losing in court, Diebold withdrew from their bid to provide elections systems in November 2005. However, in a surprising turnaround in December 2005, the North Carolina Board of Elections certified Diebold Elections Systems to sell electronic voting equipment in the state, despite Diebold's admissions that it could not comply with the state's election law.

The Board was able to do so because its election commissioners--not judges or computer science experts--are the ones who have the ultimate authority to certify election systems in the state. Instead of rejecting the vendors' applications and issuing a new call for bids that complied with the law, the Board of Elections certified all of the vendors' systems. The Electronic Frontier Foundation (EFF), a nonprofit consumer advocacy group of technologists and lawyers formed in 1990 to protect digital rights in our increasingly networked world, took issue with the North Carolina Board of Elections, which certified the three elections systems companies: Diebold, Election Systems and Software, and Sequoia Voting Systems. Citing the Board's action as an example of election commissioners having too much authority, Keith Long, EFF advisor to the Board, who was formerly employed by both Diebold and Sequoia, stated that none of the vendors meet the statutory requirement to place their system code in escrow.

"The Board of Elections has simply flouted the law," said EFF staff attorney Matt Zimmerman in a release he issued on December 2, 2005. "In August, the state passed new rules that were designed to ensure transparency in the election process and the Board simply decided to take it upon itself to overrule the legislature. The Board's job is to protect voters, not corporations who want to obtain multi-million dollar contracts with the state."

An ESS spokeswoman stated that ESS computer systems are secure, owing to a back-up system. However, as Simons pointed out, that does not address the problem. "If the machine doesn't record the votes correctly to begin with, it does not matter how many copies of that original incorrect recording you have." ESS' spokeswoman countered by assuring that the company's systems are accurate.

How New York Measures Up
New York State amended its Election Reform and Modernization Act of 2005 to include a provision for escrow requirements, which all election systems vendors must comply with in order to have an e-voting system certified in the state. The provision requires programming, source code, and voting machine software to be placed in escrow with the state Board of Elections, and requires the election systems vendors to waive all rights to assert intellectual property or trade secret rights. The amendment also requires that elections systems be tested by independent experts under court supervision.

Putting software source code in escrow provides an opportunity to inspect the code when there are anomalies in the election. It is already difficult to track down malicious code like a Trojan Horse; however, as researcher Simons pointed out, "there's no chance you will find it if you can't look at it."

New York also passed a series of bills, including a voter verified paper trail requirement that is an addition to HAVA, since the federal law does not require it.

But New York's election law omits the requirement to turn over the names of all computer programmers who are responsible for creating the software code. Since programmers are the ones who would be able to create and insert a Trojan Horse code, they are the ones who could ultimately rig a national election. If you don't know who the programmers are, you can't find out who created the problem, or who asked them to do it. Not to mention that a Trojan Horse program is set up to erase evidence of itself once it has done its job.

"Having the software source code doesn't guarantee that you will detect critical software bugs or malicious code," said Simons. "Anyone with access to the election software of a major voting machine vendor can change the outcome of a national election and determine which party will control Congress. Election fraud can now be committed on a national, not just a local, basis."

Yes Folks, the Election Can Be Stolen
With the old lever machine method of voting, election fraud could only be committed on a local, or possibly a regional basis without high risk of getting caught. But now it would take only one well-placed programmer creating malicious code to rig a national election. "How do you know what software is running on Election Day?" asked Simons. "You could easily add a last-minute software patch to do something on Election Day, [and that would] then immediately erase itself."

Software bugs can also be programmed undetected. "Buggy software is an important problem in computer security," said Stanford University's Dill. "A huge number of problems we have are due to computer software buffer overflows, which overwrite computer functions to get control of the machine." Computer buffer overflows are a standard way for Trojan Horses to take control of a computer and make changes to it, while leaving no evidence behind.

Reiterating the reality that there is no such thing as software without bugs, Dill explains, "Eliminating bugs from programs has been an unsolved problem since computers were invented. The problem grows harder every year, as the systems get more complicated. Anyone who says they can generate large software without bugs is not telling the truth. We don't know yet how to make computer programs perfectly secure. That is why you always have to have independent reliable ways to check the results. The election can be stolen, nobody can tell, and it's easy to do."

Another opportunity for election fraud is in software patches, which are the routine fixes to software bugs that work the same way a repair patch is put on a flat tire. A programmer can deliver a patch to a bug that is an election rig instead of a fix and, again, it would not be detected unless it was inspected.

"There's a tendency for people to regard computers as the epitome of accuracy," said Dill, highlighting the fact that the lack of security in the source code is fundamentally a human problem. "This is why computer scientists have gotten involved--because they understand the limitations of technology."

Dill and other computer science professionals have been trying to educate people about the current, serious limitations of using computers for voting. "People just don't believe it when we say computer voting machines are insecure since they don't understand how deeply complicated software can be. Because these are computers, you need much more security with them than you do with old-fashioned paper-based systems," he explained.

"The hardest people to convince are those who have signed multi-million dollar contracts to buy e-voting machines before they were made secure," added Dill, alluding to election officials who thought they were buying the latest, greatest technology in the DRE or touch screen machines and therefore later become defensive when computer scientists inform them that their purchase is unreliable and insecure. "They are understandably reluctant to admit that they made a mistake."

And some complain that the January 1, 2006 HAVA standardization requirement, and the vagaries within the law that omit major areas of concern, have set unrealistic goals for election officials and backed them into a corner. Given the complexity of these machines, it can be argued that officials need more time for discovery and resolution of the problems.

"If we find out after the purchase of these machines that they are not secure and Congress is given evidence that they are not secure, will they make a new set of regulations, which will cost X millions of dollars?" asked Lee Daghlian, public information officer of the NYS Board of Elections.

Cozy Relationships and Huge Profits
However, zooming in on the election commission business also reveals a close-knit community. As in the example mentioned earlier in which North Carolina's Board of Elections went ahead and certified Diebold systems despite the Superior Court judge's ruling, many see the close relationships between election commissioners and election systems vendors as overstepping certain ethical boundary lines. Huge profits are to be made by election-system vendors and they court election officials accordingly. "They wine them and dine them," said Dill. "Election officials have known the election systems vendors longer than they've known the computer scientists. And there's a revolving door. A good career path for an election official is to go work for a vendor."

In October 2005, the Government Accountability Office (GAO), the nonpartisan independent investigative arm of the federal government, issued an illuminating report that raised a multitude of concerns about electronic voting security and reliability. The report found that cast ballots, ballot definition files in the voting software, memory cards, and computer audit files all could be modified. Election systems had easily picked locks and power switches that were exposed and unprotected.

The GAO report showed that voting-machine vendors have weak security practices, including the failure to conduct background checks on programmers and system developers and a failure to establish clear chain-of-custody procedures for handling voting software. It also found that voting system failures have already occurred during elections: in California, a county presented voters with an incorrect electronic ballot, which meant they could not vote in certain races; in Pennsylvania, a county made a ballot error on an electronic voting system that pushed the undervote percentage--that is, when a candidate is given fewer votes than he or she actually won--to 80 percent in some precincts; and in North Carolina, electronic voting machines continued to accept votes after their memories were full, causing more than 4,000 votes to be lost.

And these are only a few examples out of thousands that were reported but not investigated.

In addition, the GAO discovered that standards for electronic voting adopted in 2002 by the Federal Election Commission contain vague and incomplete security provisions for commercial products and inadequate documentation requirements; and that tests currently performed by independent testing authorities and state and local election officials do not adequately assess electronic voting system security and reliability.

The GAO report concluded that national initiatives to improve voting systems lack plans for implementation or are not expected to be completed until after the 2006 election, stating: "Until these efforts are completed, there is a risk that many state and local jurisdictions will rely on voting systems that were not developed, operated, or managed in accordance with rigorous security and reliability standards--potentially affecting the reliability of future elections and voter confidence in the accuracy of the vote count."

In response to the release of the GAO report, members of the House Committee on Government Reform issued a statement that highlighted a long list of voting system vulnerabilities, also reported by Dill's Verified Voting Foundation. But the reality behind the GAO laundry list is that electronic election systems are grossly inadequate and that vendors are not being held accountable by election commissioners to provide security in their election systems or, as in the case of the North Carolina Board of Elections, even to comply with the law.

Not to mention, "They have none of the security levels that computer scientists have been asking for," added Simons.

If election systems vendors are not required both by law and by state election commissioners to place their software source code in escrow, then voters will have no way of knowing whether the software contains malicious, election-rigging code or not.

But as the technical director of Johns Hopkins' Information Security Institute, Dr. Avi Rubin believes it is only a matter of time before the vendors are forced by legislators to give it up. "I think they will be forced by law to share their source code. But they will do it kicking and screaming."

Despite the steadfast work of the leading computer science experts and grassroots activists, it seems the problem of election rigging is still not taken seriously enough. That means it is still easy to rig an election via e-voting in the United States, and it will continue to be easy until election fraud is considered a priority.
==

Posted by: Arlene Montemarano | January 12, 2006 8:08 AM | Report abuse

Bill Myers, your comments are just plain wrong.

Yes, Windows Update will only work in Internet Explorer. However, users can still get updates from the Microsoft Download Center using any browser they want. They can click a link in Microsoft's security bulletin to get a download using any browser, and Automatic Updates doesn't require any browser at all.

Your argument seems to say that because Windows Update only works with IE, non-IE customers can't get updates from Microsoft. Nothing could be further from the truth.

Posted by: MM | January 12, 2006 9:13 AM | Report abuse

Arlene,

> On the subject of our election system...

Any chance that we could see you post a comment that's on-topic?

Posted by: keydet89 | January 13, 2006 7:12 AM | Report abuse

I double-checked Brian's calculations and they appear to be wrong. Please correct me if I am wrong, but I published the inaccuracies and corrections at:
http://singe.rucus.net/blog/archives/687-Microsoft-Patch-Speed-Inconsistencies.html

Posted by: Dominic White | January 13, 2006 8:32 AM | Report abuse

Excellent job, Brian :)

This is probably one of the most interesting pieces of IT research I've read lately.

Keep up the good work. Regards,

Sergio

Posted by: Sergio Hernando | January 13, 2006 9:13 AM | Report abuse

Perhaps a 'two step' patch process is in order--a 'pre-patch' and a 'post-patch'. The pre-patch would be for people who need protection now and can handle issues (pretty much everyone, given history), while the post-patch would be the follow-up, if necessary. I realize they often do something similar to this, but they do so more as "oops!" rather than "here is a quick fix; the 'proper' fix is in testing."

Posted by: Justin | January 13, 2006 7:02 PM | Report abuse

How long will it take until Microsoft "patches" the fundamental design flaw of its systems?
The basic concepts of the Windows architecture are faulty, as they mix user-level and system-level functionality (the latest example being the WMF hack). The software-engineering community and Microsoft itself have known this for decades. Yet since Microsoft's business model relies on bundling operating software with application software -- like Office etc. -- this flaw is intentionally part of the business model.
Microsoft should be obliged by law and courts to produce consumer-safe products - as any other producer of technology for mass consumption, like cars, freezers or air-conditioners.

Posted by: Cornelio Hopmann | January 20, 2006 10:19 AM | Report abuse

The comments to this entry are closed.

 
 