Threat Modeling using Microsoft’s TAM tool? Visit the new online forum

I’ve been working with Microsoft’s free Threat Analysis and Modeling Tool for most of the year, and I’ve lamented the fact that there’s no online forum/group where I can share questions, ideas or custom templates with other users of the tool.

Well screw that – now there is just such a group (’cause I just created it).

If you’ve used this tool for developing/documenting your Threat Models, or if you’re considering it, then please feel free to lurk or even participate in this online community.  I’ll be posting what I’ve learned so far in using this tool, making available some reusable templates and reports, and generally giving the new group the care & feeding these things usually require at the outset.

Hope to see you Threat Analysts there!

XSLT 1.0 defies the laws of physics – sucks AND blows simultaneously…

Holy crap, whoever came up with XSLT 1.0 must’ve really wanted me to suffer 🙂

I have spent the better part of two weeks fighting with a simple piece of XSLT code, trying to generate some short, organized reports from the Microsoft Threat Analysis and Modeling tool (currently at v2.1.2).

It doesn’t help that the data model used by this tool has made a really poor choice in the way it classifies Threats:

  • The logical model would be to define an XPath like /ThreatModel/Threats/Threat[x], and then use an attribute or sub-element to assign the category of Threat
  • Instead, MS TAM v2.1.2 defines an XPath like this for each threat: /ThreatModel/Threats/$ThreatCategorys/Threats/$ThreatCategory
  • Thus, for a Confidentiality Threat, you get /ThreatModel/Threats/ConfidentialityThreats/Threats/ConfidentialityThreat
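To make the pain concrete, here’s a sketch in Python (using the standard library’s ElementTree) of what this design forces every report to do.  The XML below is a cut-down, invented document in the shape described above – the threat names are mine, for illustration only:

```python
import xml.etree.ElementTree as ET

# Cut-down document in the shape TAM v2.1.2 produces; the threat names
# and attributes here are invented for illustration.
doc = ET.fromstring("""\
<ThreatModel>
  <Threats>
    <ConfidentialityThreats>
      <Threats>
        <ConfidentialityThreat Name="Credential disclosure"/>
      </Threats>
    </ConfidentialityThreats>
    <IntegrityThreats>
      <Threats>
        <IntegrityThreat Name="Data tampering"/>
      </Threats>
    </IntegrityThreats>
  </Threats>
</ThreatModel>
""")

# A sane model would need one path, e.g. .//Threat[@Category='Confidentiality'].
# This model instead forces every report to enumerate the category-specific
# element names, one pair of names per category:
found = []
for cat in ("Confidentiality", "Integrity"):
    path = f"./Threats/{cat}Threats/Threats/{cat}Threat"
    for threat in doc.findall(path):
        found.append((cat, threat.get("Name")))

print(found)  # [('Confidentiality', 'Credential disclosure'), ('Integrity', 'Data tampering')]
```

Every new threat category means another hard-coded pair of element names in every stylesheet or report that consumes the file.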

It certainly also doesn’t help that Microsoft has fallen behind all the other major XML engine vendors in implementing XSLT 2.0.  This article here indicates that not only did they wait to start any of the work until after the Recommendation was completed, but that they have NO planned ship vehicle or release date (despite the fact that XSLT 2.0 has been in the works for five years).

But really, the fundamental problem that I (and about a million other people out there, over the last 7-8 years) am up against is that you can’t evaluate an XPath 1.0 expression against what’s called an RTF (“Result Tree Fragment”) – the XSLT 1.0 standard just doesn’t allow dynamic evaluation of such an expression, AND it doesn’t provide any reasonable way to get around the limitation.  That means all the vendors providing engines to process XSL had to come up with their own extensions to handle it (e.g. [1], [2], [3]), and many people have also come up with creative (but horribly obtuse) ways to work around the problem.
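For completeness, here’s what the vendor escape hatch looks like in MSXML’s dialect (msxsl:node-set; other engines spell it exsl:node-set or use their own equivalent).  This fragment is illustrative only – it is not taken from my actual report stylesheet:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:msxsl="urn:schemas-microsoft-com:xslt">

  <!-- Build up a Result Tree Fragment in a variable... -->
  <xsl:variable name="rtf">
    <item>one</item>
    <item>two</item>
  </xsl:variable>

  <xsl:template match="/">
    <!-- ...XSLT 1.0 forbids applying an XPath to $rtf directly; you must
         first convert it to a real node-set via the vendor extension. -->
    <xsl:value-of select="count(msxsl:node-set($rtf)/item)"/>
  </xsl:template>
</xsl:stylesheet>
```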

So it goes – I’m stuck with (a) XSLT 1.0 & XPath 1.0 + proprietary “extension functions” [1], [2] in MSXML, because (b) the Microsoft TAM tool only uses the MSXML engine (which is fair – it’s gotta default to something).

What’s REALLY painful is learning that not only did I spend weeks banging my head against a wall, learning some very obtuse and shouldn’t-be-necessary coding hacks for problems that are fairly trivial in other languages – but that it wasn’t even a question of RTFs at all.  XSLT just ends up taking what I think is a reasonably well-thought-out design and dumping it on the floor:

http://groups.google.com/group/microsoft.public.xsl/browse_thread/thread/f3af4340991740e5

Oh, and how did the XSLT overlords solve this problem in XSLT 2.0?  They just eliminated the limitation on RTFs in XPath expressions.  Done.  And done.

Ugh – that’ll teach me to ever get lured into using a [functional?] programming language again.  Back to C# – that seems positively trivial by comparison…

SO: if you happen to have a masochistic streak in you, or you find that you absolutely must use XSLT 1.0 rather than either XSLT 2.0 or System.Xml.Xsl, then (a) you have my sympathies and (b) here are some resources that I recommend you consult sooner rather than later:

You ask: why did Microsoft train ALL developers on Security?

One of my readers asked me to investigate why Microsoft decided to train all developers on Security, rather than targeting either (a) those developers who touch security-related features or (b) one designated “security expert” on each development team.

You asked; I answer with a collection of quotes from various sources, basically all from the horse’s mouth (yes Michael, that makes *you* the horse in this analogy).  Please enjoy, and feel free to link to others you might stumble across…

http://web.archive.org/web/20031202020539/http://blogs.gotdotnet.com/mikehow/PermaLink.aspx/1c7eb862-aec9-475e-bff3-c32bb3f063f5
“We need to teach more people about security. Now, you’re probably a geek, or a geek-wanna-be, and I bet you’re thinking, “ah, he’s trying to sell more copies of his book, he wants to teach people about writing secure code.” Ok, that’s true, I think software designers, developers & testers need to understand what it takes to build secure software; the threats have changed, and security no longer resides in the realm of the Security High Priesthood nor the Security Learned Few. Building secure software is simply part of getting the job done. Just like we learned the basics of optimal algorithms in school, kids coming out of school need to know the basics of building code that will run in that most hostile of environments – The Internet.”

http://blogs.msdn.com/sdl/archive/2007/05/02/security-education-v-security-training.aspx
“We require our SDL training to emphasize the basics of secure design, development and test – then allow employees and their management to select the training that meets the needs of their particular product or service.  There is one other point that bears mentioning – our training is constantly being reviewed or embellished to make sure that emerging security or privacy issues are being addressed. ”

http://msdn.microsoft.com/msdnmag/issues/05/11/SDL/
“If your engineers know nothing about the basic security tenets, common security defect types, basic secure design, or security testing, there really is no reasonable chance they could produce secure software. I say this because, on the average, software engineers don’t pay enough attention to security. They may know quite a lot about security features, but they need to have a better understanding of what it takes to build and deliver secure features. It’s unfortunate that the term security can imply both meanings, because these are two very different security realms. Security features looks at how stuff works, for example the inner operations of the Java or common language runtime (CLR) sandbox, or how encryption algorithms such as DES or RSA work. While these are all interesting and useful topics, knowing that the DES encryption algorithm is a 16-round Feistel network isn’t going to help people build more secure software. Knowing the limitations of DES, and the fact that its key size is woefully small for today’s threats, is very useful, and this kind of detail is the core tenet of how to build secure features.

“The real concern is that most schools, universities, and technical colleges teach security features, and not how to build secure software. This means there are legions of software engineers being churned out by these schools year after year who believe they know how to build secure software because they know how a firewall works. In short, you cannot rely on anyone you hire necessarily understanding how to build security defenses into your software unless you specifically ask about their background and knowledge on the subject.”

http://msdn2.microsoft.com/en-us/library/ms995349.aspx
(a) “But it is important to note that an education program is critical to the success of the SDL. New college and university graduates in computer science and related disciplines generally lack the training necessary to join the workforce ready and able to design, develop, or test secure software. Even those who have completed course work in security are more likely to have encountered cryptographic algorithms or access control models than buffer overruns or canonicalization flaws. In general, software designers, engineers and testers from industry also lack appropriate security skills.

“Under those circumstances, an organization that seeks to develop secure software must take responsibility for ensuring that its engineering population is appropriately educated. Specific ways of meeting this challenge will vary depending on the size of the organization and the resources available. An organization with a large engineering population may be able to commit to building an in-house program to deliver ongoing security training to its engineers, while a smaller organization may need to rely on external training. At Microsoft, all personnel involved in developing software must go through yearly “security refresher” training.”

(b) “One key aspect of the security pushes of early 2002 was product group team-wide training for all developers, testers, program managers, and documentation personnel. Microsoft has formalized a requirement for annual security education for engineers in organizations whose software is subject to the SDL. The need for an annual update is driven by the fact that security is not a static domain: threats, attacks and defenses evolve. As a result, even engineers who have been fully competent and qualified on the aspects of security that affect their software must have additional training as the threat landscape changes. For example, the importance of integer overflow vulnerabilities has increased dramatically in the last four years, and it has been demonstrated recently that some cryptographic algorithms have previously unrecognized vulnerabilities.

“Microsoft has developed a common introduction and update on security that is presented to engineers in both “live training” and digital media form. Microsoft has used this course as the basis for specialized training by software technology and by engineer role. Microsoft is in the process of building a security education curriculum that will feature further specialization by technology, role, and level of student experience.”

http://msdn.microsoft.com/msdnmag/issues/03/11/SecurityCodeReview/default.aspx
“Hopefully, you realize that reviewing other people’s code, while a good thing to do, is not how you create secure software. You produce secure software by having a process to design, write, test, and document secure systems, and by building time into the schedule to allow for security review, training, and use of tools. Simply designing, writing, testing, and documenting a project, and then looking for security bugs doesn’t create secure software. Code reviewing is just one part of the process, but by itself does not create secure code.”

The Security Development Lifecycle Chapter 5

“If your engineers know nothing about basic security tenets, common security bug types, basic secure design, or security testing, there really is no reasonable chance that they will produce secure software. We say this because, on average, software engineers know very little about software security. By security, we don’t mean understanding security features; we mean understanding what it takes to build and deliver secure features.”

Patenting security patches? Slimy, greedy, sad

Ugh.  As in ug-ly.  This is get-rich-“quick” parasitism at its finest.  I really wish bottom-feeders like this would find a way to use their obviously-untapped energies to contribute something constructive to the economy, society or culture.

How does it work?  “…a new firm is offering to work with you on a vulnerability patch that they will then patent and go to court to defend. You’ll split the profits with the firm, Intellectual Weapons, if they manage to sell the patch to the vendor. The firm may also try to patent any adaptations to an intrusion detection system or any other third-party software aimed at dealing with the vulnerability, so rest assured, there are many parties from which to potentially squeeze payoff.”

And how will they get around the lengthy patent application process?  “The company says that it may try to use a Petition to Make Special in order to speed up the examination process when filing a U.S. patent. Another strategy the firm proposes using is to go after a utility model rather than a patent (a utility model being similar to a patent but easier to obtain and of shorter duration, typically six to 10 years).”

“In most countries where utility model protection is available, patent offices do not examine applications as to substance prior to registration,” the company says. “This means that the registration process is often significantly simpler, cheaper and faster. The requirements for acquiring a utility model are less stringent than for patents.”

Patents and copyright in their current form have outlived their usefulness.  I can’t remember the last time I read a story about a “little guy” who actually benefited from the patent or copyright protections for whom they were originally meant.  Now it all seems to be about providing a stable base of income for multinationals to leverage when they can no longer actually contribute something genuinely new and useful to the planet.

How Do I Reduce the [Security] Defects in my Software?

I’ve spent a little time here and there trying to find the “best” tool to do static analysis of some software written in a language other than C/C++/Java.  Foolishly, I figured it would be an easy task – find an authoritative site/wiki on such tools, skim for the one(s) referencing the language of interest, and browse to the download page.

Learn something new every day…

I started with my boss, who pointed out that our group (and most of Intel – at least those who’ve made their opinions known) has settled on one commercial tool, and that’s the answer we’re giving everyone who inquires about static analysis for security.  [I won’t name the product here – you don’t think I’d be that stupid do you?  That’d just be a huge invitation to the hacker community…]

Here at Intel, there’s an internal wiki where much of this “tribal knowledge” has been consolidated.  However, Intel’s development community is heavily invested in C, C++ and Java, so other languages don’t get a whole lot of attention (for good or ill, I can’t say…yet).  There are a few pointers to public web pages, including one to the List of tools for static code analysis.  OK, so scanning that page should yield the results I’m after, right?  Wrong.

The deeper I look into this, the more complex the question becomes.  Am I interested in just security defect identification, or in identifying defects overall?  Am I only interested in static analysis approaches, or should I also consider dynamic analysis tools and (whatever else I’m inferring is beyond my comprehension, based on the wealth of ways this information can be categorized)?

And on a philosophical level, should I focus my customers’ attention on a single-tool approach, or give them a Chinese menu from which to make their own selection?

I know that my team has Security deeply embedded in its genes, but I’m of the continuing philosophy that security isn’t an end unto itself; security is just one means to a greater set of ends.  Why make something more secure?  To make it more

  • (a) available
  • (b) reliable
  • (c) trustworthy
  • (d) all of the above
  • (e) none of the above

?

Personally, my bias leads me to believe that (c) begets (b) begets (a).  However, despite six years of Microsoft indoctrination, “Trustworthy” still feels too much like a buzzword to me – so I’m inclined to choose (b).  If my software is more secure, I’m likely to rely on it more readily (and advocate that others rely on it more heavily) for my critical activities.

Now, if it’s really secure [whatever that means] but horribly unreliable for non-security reasons, I’m still unlikely to bother with that software for any great length of time.  (e.g. Google Desktop crashes rather frequently on my current PC; while I’m fairly convinced it’s not hackers causing it to fail, but rather the “security software suite” I’m forced to run intercepting some low-level driver or filter, I’ve abandoned it anyway – and am looking for the next-sexiest desktop search software.)

Back to my original train of thought: All other things equal, I’d prefer to have developers using static analysis tools for overall defect reduction, rather than recommend security defect reduction tools.  This would make the developers happier too: instead of having to remember to “run that security tool” too, they’d just get the benefit of overall quality improvements (of which security should just be one component).

As for the static/dynamic/other analysis angle, I feel like I’m just learning to doggie-paddle in this area.  I’m not even qualified to discuss the difference between these realms yet.

However, on the “single tool” vs. “Chinese menu” question, I have a very clear opinion: neither and both.  Really, what my customers are asking from me is to reduce the burden of research and analysis.  Ideally they’d like me to give them the answer they’d have come to themselves (given enough time), but it’s usually acceptable to provide a shorter, more organized list than they’d get out of Google.  I can usually be a big hero if I can:

  • do the research with their inquiry in mind (e.g. “What tool or approach should I use to identify and eliminate the greatest number of significant security issues in my code with the least amount of effort?”)
  • eliminate the tools that obviously aren’t intended for their language/job role
  • write up a prioritized list of tools for them to review/try, ordered by which is most likely to meet their needs

Am I the only one out here who thinks this way?  If not, I certainly haven’t found such a list from anyone who shares my point of view.  I’ve got the “single approved tool” on the one hand, and the “canonical lists of tools” on the other.  NIST has made a good start by categorizing tools based on semi-abstract goals for their use (e.g. “safer languages”, “dynamic analysis”), but I have to wade through multiple lists and descriptions to figure out if each tool analyses code in my sought-after language.

Is there anywhere I can go to find a list of all tools that address a specific development language, and sort them at least by age, number of versions, number of users or number of features?

Security scrubbing of Python code – PyChecker or nothing?

I’m hardly versed in the history or design of the Python programming language (I just started reading up on it this week), but I know this much already: Python is intended to be a very easy-to-use scripting language, minimizing the burden of silly things like strongly typing your data (not to mention skipping the arguable burden of compiling your code).

Most developers don’t have two spare seconds to rub together, and are hardly excited at the prospect of taking code that they finally stabilized and having to review/revisit it to find and fix potential security bugs.  Manually droning through code has to be about the most mind-numbing work that most of us can think of eh?

On the other hand, static analysis tools are hardly an adequate substitute for good security design, threat modelling and code reviews.

Still, static analysis tools seem to me a great way to reduce the workload of secure code reviews and let the developer/tester/reviewer focus on more interesting and challenging work.

Is it really practical to expect to be able to perform complex, comprehensive static analysis of code developed in a scripting language?  I mean, theoretically speaking, anyone can build a rules engine and write rules that test how code could instruct a CPU to manipulate bits.  It’s not that this is impossible – I’m just wondering how practical it is at our current level of sophistication in developing software languages, scripting runtimes and modelling environments.  Can we realistically expect to have it all – easy development, ease of maintenance (since the code isn’t compiled), and robust software quality/security/reliability?
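The rules-engine idea above can at least be sketched in a few lines of pure Python, using the standard library’s ast module.  The “risky calls” rule below is my own invention for illustration – real checkers like PyChecker apply far richer rule sets – but it shows the basic shape of pattern-matching rules over a syntax tree:

```python
import ast

# Toy rule set, invented for illustration -- a real tool ships far richer rules.
RISKY_CALLS = {"eval", "exec", "compile"}

def find_risky_calls(source: str):
    """Return (line, function-name) for each call to a flagged builtin."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Flag direct calls by name, e.g. eval(...); attribute calls and
        # aliases would need extra rules -- exactly why real tools are hard.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(find_risky_calls(sample))  # [(1, 'eval')]
```

Of course, the dynamic nature of the language is the rub: rename eval, stuff it in a dict, or build the call at runtime, and a rule like this goes blind.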

I’m certainly not trying to disparage the incredible work that’s gone into PyChecker already – anything but.  However, when a colleague asks me if there are any other static analysis tools in addition to PyChecker, I have to imagine (a) that he has some basis for comparison among static analysis tools, and (b) that PyChecker doesn’t quite meet the needs he’s come to expect of checkers targeted at other languages.

How to get a Process’ current security context – mystery and teaser…

…so I’ve crossed the threshold, and now I’m writing VB code in .NET 2.0.  It’s been a fascinating experience – going through fits and starts of trying to find a project motivating enough to keep me working on it through the inevitable “slump”.

For anyone who’s new to coding, and self-taught (like me), there’s the initial rush of being able to construct whatever it is your favoured “for morons” teaching book walks you through.  Then there’s the first tentative steps into adding something for which you don’t have stepwise instructions – which is just about anything else that might be useful – which is quickly followed by the frustration of knowing that you *should* be able to construct that next section of code, but having no idea why it doesn’t work the way you want it to work.

I’ve done this probably a half-dozen times, and every time I get discouraged that the damned code just doesn’t flow from my fingers.  I’ve been stymied by, in no particular order:

  • How to cast an Object to (as? into?) an Interface
  • How to use a GetEnumerator method
  • What the hell goes into a DataGrid
  • How to Dim an Object as something other than a String
  • When and where to define and instantiate an Object (e.g. inside the For loop?  outside the Private Sub?  Inside a Public Sub?)
  • How to write code in separate classes and still be able to take advantage of variables defined in the “other” Class

However, I think I’ve come up with sufficiently self-interested projects to complete at least ONE of them before I let myself fail at this AGAIN.

The latest fiasco covers my last three attempts, in which I’ve been trying to filter out only those processes that were launched in my user context (e.g. Run key, Startup folder, Start menu).  I’ve been failing to (a) identify an actual username from the info supplied in the System.Diagnostics.Process class, (b) construct an equivalent username to what comes from the My.User.Name property, and most recently (c) actually filter out the processes started in other users’ contexts (e.g. svchost.exe, wininet.exe, csrss.exe).

Here’s the current code mess I’ve constructed:

Dim process As New System.Diagnostics.Process
Dim dictionary As New System.Collections.Specialized.StringDictionary
Dim entry As New System.Collections.DictionaryEntry
Dim UsernameFromProcess As String = ""
Dim DomainFromProcess As String = ""
Dim Username As String = My.User.Name
Dim MyApplications As New Collection

dictionary = process.StartInfo.EnvironmentVariables

For Each entry In dictionary
    If entry.Key.ToString = "username" Then
        UsernameFromProcess = entry.Value.ToString
    End If

    If entry.Key.ToString = "userdomain" Then
        DomainFromProcess = entry.Value.ToString
    End If
Next entry

Dim QualifiedUserName As String = ""
QualifiedUserName = DomainFromProcess + "\" + UsernameFromProcess

If QualifiedUserName = Username Then
    MyApplications.Add(process)
End If

So why does this always result in adding the process to the MyApplications collection?  I woulda figured that the environment variables for processes started in other users’ contexts would reflect that user’s environment.  E.g. if csrss.exe starts in the SYSTEM context, then it should have USERDOMAIN = [nul] and USERNAME = SYSTEM; whereas, when I launch Word from the Start Menu, its environment will include USERDOMAIN = REDMOND and USERNAME = mikesl.
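My best guess at the culprit: StartInfo describes a process you are *about to* launch, so its environment block is seeded from the launching process – i.e. mine – not from whatever process I’m inspecting.  The inheritance behaviour is easy to demonstrate by analogy in Python (analogy only; this is not the .NET API):

```python
import os
import subprocess
import sys

# Set a marker variable in THIS (the launching) process's environment.
os.environ["DEMO_MARKER"] = "parent-value"

# Launch a child without specifying env=: it inherits the parent's
# environment block wholesale -- it doesn't magically reflect some
# other running process's environment.
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_MARKER'])"],
    capture_output=True, text=True,
).stdout.strip()

print(out)  # prints "parent-value"
```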

If you’d like to see how I finally solved/worked around this little mystery, check out the CacheMyWork project on Codeplex.

Windows OneCare + VPN connections: manual configuration, with no warning?

I thought I was going nuts I tell ya. I’d been a Microsoft VPN end-user for years, and had even administered an MS VPN infrastructure back in the dark ages of NT4. I’d used the MS VPN client (aka “Connection Manager”) in all kinds of network environments and under the whole spectrum of security conditions, and I’d never been denied like I was denied this weekend.

Blame it on Windows OneCare I say – no, wait, that’s not fair – can’t blame it on a beta product. Heck, I guess it was my own fault for putting a beta product in production, eh? Live and learn. Hopefully this tale will help you avoid the same hair-pulling foolishness.

So: Windows XP Professional SP2, Toshiba Tecra M2 notebook, MN-700 802.11b/g wireless router, Comcast broadband service. I’d configured the MS VPN client connectoid for default settings, filled in the appropriate authentication details, and couldn’t complete the connection. The client would connect to the VPN server, and would count approx. 33 seconds while attempting to authenticate my credentials, and just kicked me out.

According to all the googl’ing I did, all suggested solutions revolved around configuring port forwarding on my wireless router. I hadn’t had to configure the router’s network settings for a year or so, and I’d had to reset the firmware once this summer, so while I didn’t think this was the problem, I certainly wasn’t sure. I certainly did know for sure that the Windows XP SP2 firewall would allow any outbound communications, and would allow back any responses to requests initiated from the computer, so I really didn’t think about it any further.

I diddled with the router’s configuration a few different ways:

  • I tried to find the setting in the Connection Manager software that would allow me to override the automatic protocol selection, but despite my best efforts, it’s been well-hidden by the good folks in our IT department who set up this well-designed end-user configuration.
  • I forwarded 1723/tcp, 1723/udp, 1721/tcp, 1721/udp, thinking each time I added one, “Well maybe I’ve just forgotten my protocol settings – I’ll just try one more”.
  • I forwarded 500/udp, since one article reminded me that IPSec NAT-T (NAT Traversal) worked over 500/udp.  I used dynamic forwarding; then I used persistent forwarding (I figured dynamic was sufficient, since the router would detect my requests, but after that failed I figured persistent *had* to work.  Nope.)
  • I finally configured the virtual DMZ to point to my computer’s IP address.  I’d avoided it to this point, since it would remove most of the protections the router afforded my PC, but by now I was getting desperate.

No dice. That’s when I finally gave in, and despite my better judgment (I’d NEVER had to do this before), disconnected the wireless router and connected the computer directly to the broadband “modem”. When I couldn’t make the connection even then, I knew the problem wasn’t with the port forwarding…

I finally had another look at the Windows Firewall configuration, and this time I really wondered why it continually reported that the firewall was “Off”, even though it also said that “For your security, some settings are controlled by Group Policy”. Did our IT group really disable the Windows Firewall on us through GPO? If so, what was it they were using to secure our systems? I knew I hadn’t installed any third party firewall like BlackIce… [oh hell. That’s right.]

That’s when it finally dawned on me to dig into the Windows OneCare software. Now, when I look at the client, there’s nothing that jumps out at me related to Windows Firewall – the three main blocks of reported info in the main window are “Protection Plus”, “Performance Plus” and “Backup and Restore”. Buried in the middle of the Protection Plus category is a single line simply labelled “Firewall: Auto”, which had until now escaped my attention.

I engaged my brain and chose the “View or change settings” selection, then grabbed the Firewall tab and hit the “Advanced settings…” button. While you can choose either “Program List” or “Ports and Protocols” to enable new exceptions in the OneCare firewall, I knew that there was no typical executable that uniquely identifies the VPN client connectoid, and thus it’d be difficult to nail down an .exe to add to the “Program List”.

Turning to the “Ports and Protocols” list, I finally had a stroke of luck. There appears to be a default configuration already set up for the “GRE” protocol – IP protocol 47, which PPTP uses to carry the tunnelled data (the control channel runs over TCP port 1723). I simply added another exception that I named “PPTP”: Protocol TCP, Port range 1723 to 1723, and retried the VPN client.

Of course it went through immediately.
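In hindsight, a quick probe of the control channel would have told me whether anything was answering on TCP 1723 before I started flailing at router settings.  A minimal sketch in Python – note this only exercises the TCP control channel, since GRE can’t be probed with a plain socket, and the host name shown is a placeholder:

```python
import socket

def pptp_control_reachable(host: str, port: int = 1723, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the PPTP control port succeeds.

    This only tests the TCP 1723 control channel; GRE (IP protocol 47),
    which carries the tunnelled data, can't be probed with a plain socket,
    so True here doesn't guarantee the full tunnel will come up.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (hypothetical host name -- substitute your own VPN server):
# pptp_control_reachable("vpn.example.com")
```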

I assume this’ll help any of those of you who are also running the beta of Windows OneCare Live, but I hope this’ll be made easier for folks by the time this releases. I’ll file a bug on this and see if the OneCare Live folks can’t help automate this somehow – if I got tripped up by it, I’m sure there must be others who’ll also be stumped.

Epilogue: I haven’t bothered to check which of the router configurations are still necessary once the OneCare firewall was properly configured. It may be that the DMZ setting is still needed, or perhaps the MN-700 actually does transparently forward MS VPN traffic correctly (as I’d originally expected). Let’s leave that as an exercise for the class, shall we? Until next time…

[category: general security]

Digital Cameras being called a "hacker tool" now?

This article focuses on the use of the camera as a “digital storage device”, as if the camera were somehow a “more surreptitious” way to copy data off a computer than any other USB or similar storage device (flash drive/thumb drive/memory stick/MMC/SD card).

I really hope that the author of the article was the only one surprised by this “unexpected” use of a digital camera as a way to slurp data off a computer. I also hope that we don’t see a wave of specific “no digital cameras allowed” security policies spring up in response to this. I would think any reasonably well thought out security policy would either (a) forbid the use of all portable storage devices, or (b) accept the risk of any and all such devices equally (since they all have the potential of being used maliciously).

I really thought I misread the title of the article – I had to read it three times to make sure I wasn’t the one with the big misunderstanding.

I figured they must be talking about the use of digital cameras to take pictures of the screen (a totally unpreventable vector), or they were talking about camera-enabled cell phones (which at least are more difficult to separate from “legitimate use” than a simple camera).

Big deal.

So you can use yet another bulky USB-enabled device to copy data from a computer and take it off-premises. If there’s ANY organization left out there that still hasn’t thought through the threat of the use of portable storage media to copy large quantities of data off-premises, I doubt they’re going to finally say “oh crap!” when they read this.

It’s far cheaper and easier to hide the use of a tiny little USB drive from prying eyes (most are as small as a digit on your hand) – far less likely to draw attention than plugging a fist-sized (or larger) camera into a work computer.

To steal a phrase from Bruce Schneier, this is yet another example of a “movie plot threat” that has little relation to any reasonable assessment of overall security risk to most any organization.

[category: general security]

Agree with Keith Brown’s “do not display last user name” rant

I’m with Keith here [note: in the interests of minimizing duplication, I’ve hacked his post down to the most stinging statements. Go read it yourself if you’re interested in a good discussion of the problem.]


A security countermeasure that isn’t all that

The password that you just entered went into the user name text box of the login dialog. When you hit enter, you attempted to log into your workstation using your password as the user name and a blank password. Because this login failed it’s logged in the Event Log. Guess what’s in there? Yep, it’s your password!
So in the interest of making your machine more secure, it is actually compromised…

… As Schneier constantly reminds us, security is all about tradeoffs. What do you gain by turning on the DontDisplayLastUserName feature? Given that it only takes effect when you’re logged out, not when your workstation is simply locked, not much! There are an awful lot of people who rarely log out of their machines (me included), and rather lock their workstations instead.
… If a countermeasure makes things harder (and more risky) for legitimate users, and doesn’t provide any real impediment for an attacker, it’s a bad tradeoff.
… I’d suggest picking up a copy of Jesper & Steve’s book, which provides really practical advice for securing Windows. It’ll help prevent these sorts of mistakes in the future!


This kind of blind use of security “countermeasures” really bothers me. I used to be a blind follower of security checklists in my early career too, so I can’t say I don’t understand the impulse that drives this sort of behaviour.

Still, I can’t believe that after all these years of people publishing these checklists, and lots of other people using them and seeing the consequences of their use, they still get published and used like this – i.e. ignorant of the consequences.

I get pretty frustrated when I see people take security measures like this and end up shooting themselves in the foot. At best, they’re no further ahead overall. At worst, they’ve taken a giant leap backwards, and made it even *easier* for an attacker to escalate themselves and do some *real* damage to your computing assets.

Damn. I really want this setting to be discarded, just like I want to see the “account lockout” setting retired in favour of a more sophisticated, goal-oriented, actually-accomplishes-what-it-sets-out-to-do countermeasure. I am all in favour of more configurability in a system, to give people more options so they can accommodate special circumstances when required – BUT – when a “special purpose” setting like this ends up being used blindly by everyone in unsuitable circumstances, and ends up making things WORSE, well that’s when it’s time to seriously reconsider.

Creating the Saved Password

How often does “DontDisplayLastUserName” actually do something security-useful? Times when it helps:

  1. Computer boots up
  2. Computer is restarted
  3. User logs off

vs. times when it can potentially hurt:

  1. User locks computer
  2. User places computer on Standby (and computer is set to lock on resume)
  3. User places computer in Hibernate mode (and computer is set to lock on resume)
  4. Computer goes into Standby or Hibernate according to Power Management configuration (and computer is set to lock on resume)

I don’t have any statistics to back up the opinion I’m about to assert, so I’ll just have to use my own user behaviour as a model and let you decide how often it happens from there:

  • I rarely power down my computer:
    • perhaps once a week or so because something has leaked too many resources over time (e.g. Virtual Memory, GDI Objects, Handles) and I need to free them up
    • perhaps once every couple of weeks because I’ve installed something that includes a kernel-level driver (display, network) or because I’ve installed an update that replaces an in-use system-level file
  • I almost never log off my computer – why bother? It’s a single-user machine almost all the time:
    • My home desktop is used by my wife or houseguests maybe once a month
    • My work notebook is almost never used by anyone else, and if I do let someone use it, I’ll usually just fire up a fresh browser instance (or RDP client) and let them borrow it while I’m there – I just don’t let people log on to my work computer; I’ve found no reason to
  • I very frequently (e.g. a dozen times a day or more) end up with my work notebook locked:
    • anytime I move from the house to the office, I’ll put it in Standby or Hibernate
    • I’ll pull it open for a while on the bus to or from work and then Hibernate when I walk off
    • anytime I go from my office to a meeting (usually 1-3 per day), I’ll put it in Standby or Hibernate while I carry it around
    • anytime I walk away from my notebook, I’ll lock it (Windows-L was a wonderful addition to XP)

Under such circumstances, how often do you think I’d accidentally enter my password in a blanked-out username field? Thankfully, I haven’t had that setting forced on me since I forced it on the domains which I administered in my old job as a sysadmin (i.e. 6+ years ago, before I “saw the light”). So I don’t know how often that’d actually happen now – I have no immediate experience to back it up. But if a smart guy like James gets tripped up by it once in a while, then I’m sure I’m no smarter/more attentive than he is.
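The failure mode Keith describes can be sketched as a toy simulation. This is purely illustrative – the log format below is invented for the example and is not the real Windows Security Event Log schema, and the password shown is obviously made up:

```python
# Toy simulation of the "DontDisplayLastUserName" failure mode: the user
# name field is blank, muscle memory makes the user type their password
# first, and the failed-logon audit record then names the "account" that
# failed -- which is actually the password, in cleartext.

def attempt_logon(username: str, password: str, audit_log: list) -> bool:
    """Pretend logon that always fails and audits the attempted account name.

    (Hypothetical stand-in for the Windows logon + audit pipeline.)
    """
    audit_log.append(f"Logon failure: account name: {username}")
    return False

audit_log = []

# The user expects their name to be pre-filled, so the first thing they
# type -- their real password -- lands in the (blank) username box.
typed_into_username_box = "Hunter2!"
attempt_logon(typed_into_username_box, "", audit_log)

# The cleartext password is now sitting in a log readable by admins.
assert "Hunter2!" in audit_log[0]
```

Trivial as the sketch is, it captures why the setting backfires: the audit trail that is supposed to help defenders ends up persisting the one secret it should never contain.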

Exploiting the Saved Password

OK, so let’s assume that for a significant number of computers configured to not display the last username, the user’s password ends up saved in a Security Event Log entry. That log is only readable by members of BUILTIN\Administrators and any process in the LOCALSYSTEM context on Windows up to and including XP (but can be modified on Windows Server 2003, as per Eric Fitzgerald’s article here).

So what’s the big deal? On systems where both (a) physical access is unavailable (e.g. servers) and (b) all patches have been applied, the risk that a random attacker who doesn’t already have an admin-level account can obtain one is usually pretty small (let’s hope – okay, this is probably asking too much, but let’s just assume for the moment, okay?).

However, on systems where either (a) or (b) is FALSE (e.g. (a) on a desktop or especially a notebook computer – physically accessible to many classes of attacker; or (b) on a computer where root-level exploits have not been patched), I caution you strongly that “Do not display last user name” may end up giving an attacker a means to retrieve the user’s logon password IN CLEARTEXT and to access any resources to which that user account has been granted access.

EFS/RMS Alert!

If you are using a Windows logon-based encryption technology (e.g. EFS, RMS), then you should be doing everything in your power to make it difficult for a physical attacker to discover or guess the user’s logon password – right?!? So my advice: along with all the other things that I’ve recommended in the past (and continue to recommend), I strongly urge you to NEVER set the “Interactive logon: Do not display last user name” setting on any client PC (desktop or notebook running Windows 2000 or Windows XP) where you believe Windows logon password-based encryption is being used.
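If you want to audit whether this policy is enabled on a given machine, the setting maps to a registry value that can be read programmatically. Here’s a minimal sketch – the key path below is the standard policy location as I understand it, but verify it against your own Windows version; the function simply returns None on non-Windows hosts or when the value is absent:

```python
import sys

# Standard policy location for "Interactive logon: Do not display last
# user name" (verify against your Windows version before relying on it).
POLICY_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System"
POLICY_VALUE = "DontDisplayLastUserName"

def dont_display_last_user_name():
    """Return True/False if the policy value can be read, else None.

    None means: not a Windows host, key/value missing, or access denied.
    """
    if sys.platform != "win32":
        return None
    import winreg  # only available on Windows
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            value, _type = winreg.QueryValueEx(key, POLICY_VALUE)
            return bool(value)
    except OSError:
        return None

print(dont_display_last_user_name())
```

A check like this is handy in an inventory script: if it prints True on a notebook that also uses EFS or RMS, that machine is a candidate for having the policy turned back off.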

Note: I am NOT trying to steer you away from these technologies. What I AM attempting to do is to (a) illustrate one cogent, real-world example of why this “Do not display last user name” setting can be more harm than good to your overall security posture, and (b) emphasize yet another way that attackers could be “assisted” in attacking EFS- or RMS-protected data – and what you can do to prevent that.

So there.

[category: general security]