News (?): Americans willing to pay (a little) more for privacy

I don’t know if I should be surprised at this or smug – right now I’m leaning towards pleasantly surprised.  I guess I’m surprised that (at least according to this study’s methodology) there isn’t more of a price differential for good privacy, but hey, I can certainly understand people being a little skeptical that any privacy can be protected in this day and age.

They found that people will, in fact, pay more to purchase from sites with a solid privacy policy, but only if that policy is easy to see and understand.

For those of you developing products and who wonder whether it’s worth the effort to spend time on “privacy issues”, take heart:

It could also be good news for retailers, who can use robust privacy policies as a selling point…

See the full article here: Americans willing to pay (a little) more for privacy

Patenting security patches? Slimy, greedy, sad

Ugh.  As in ug-ly.  This is get-rich-“quick” parasitism at its finest.  I really wish bottom-feeders like this would find a way to use their obviously-untapped energies to contribute something constructive to the economy, society or culture.

How does it work?  “…a new firm is offering to work with you on a vulnerability patch that they will then patent and go to court to defend. You’ll split the profits with the firm, Intellectual Weapons, if they manage to sell the patch to the vendor. The firm may also try to patent any adaptations to an intrusion detection system or any other third-party software aimed at dealing with the vulnerability, so rest assured, there are many parties from which to potentially squeeze payoff.”

And how will they get around the lengthy patent application process?  “The company says that it may try to use a Petition to Make Special in order to speed up the examination process when filing a U.S. patent. Another strategy the firm proposes using is to go after a utility model rather than a patent (a utility model being similar to a patent but easier to obtain and of shorter duration, typically six to 10 years).”

“In most countries where utility model protection is available, patent offices do not examine applications as to substance prior to registration,” the company says. “This means that the registration process is often significantly simpler, cheaper and faster. The requirements for acquiring a utility model are less stringent than for patents.”

Patents and copyright in their current form have outlived their usefulness.  I can’t remember the last time I read a story about a “little guy” who actually benefited from the patent or copyright protections for whom they were originally meant.  Now it all seems to be about providing a stable base of income for multinationals to leverage when they can no longer actually contribute something genuinely new and useful to the planet.

Encrypting %TEMP% with EFS: software installation concerns

One of the biggest bones of contention in the use of EFS is whether to encrypt the user’s %TEMP% folder or not.  It starts off pretty innocuously: many applications create temporary files in the user’s %TEMP% directory, and often these files can contain the same sensitive data that is contained in the original data files the users are opening.  That means the %TEMP% folder should be encrypted, right?
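To make the risk concrete, here’s a tiny Python sketch (purely illustrative – no particular app works exactly this way) of the pattern that causes the trouble: scratch copies of sensitive data quietly landing in the user’s temp directory:

```python
import os
import tempfile

# Sketch: like many applications, write a scratch copy of a document into
# the user's temp directory (%TEMP% on Windows, per tempfile.gettempdir()).
# The function name and suffix are illustrative, not from any real app.
def save_scratch_copy(sensitive_text):
    fd, path = tempfile.mkstemp(suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        f.write(sensitive_text)  # the sensitive data now lives outside the original file
    return path
```

If that temp directory isn’t encrypted, the scratch copy sits there in plaintext even when the original document is EFS-protected.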

Microsoft originally recommended that %TEMP% be encrypted when using EFS.  Then reports of application compatibility issues came in, prompting new “don’t encrypt %TEMP%” advice that has lingered long after those issues stopped being a real problem for most customers.  And yet there are still varying opinions on this (e.g. here and here).

However, there’s one case that continues to dog those of us trying to enforce protection of sensitive data using EFS: software installation.  If I encrypt my %TEMP% folder and then try to install a bunch of applications myself (e.g. download and run the install files through the Windows UI), chances are I’ll find a few applications that either (a) won’t install (e.g. an older version of MSN Messenger had this problem) or (b) won’t work correctly after install (see this KB article for example).

While at Microsoft, I doggedly reported these app compat issues every time I ran into one, getting them fixed one by one (at least in MS apps).  Then I heard that the Windows Installer team had implemented a fix around the time that Vista shipped, and I figured we’d finally licked the problem.

However, there are recent KB articles (here and here) indicating this is still a problem with Windows Vista and Office 2007.

So here’s one more attempt to clear up the confusion this issue creates, and provide definitive guidance on how to avoid problems with encrypted %TEMP%.  [John Morello got it right in a recent Technet article – but I suspect he may have cribbed this tip from some of the talks I’ve given over the years. ;)]

The only scenario in which installing software could fail due to encrypting the user’s %TEMP% folder is when:

  1. The software is being interactively installed by the user, not by a software distribution package (e.g. SMS, Tivoli, Altiris, etc.).
  2. The installer doesn’t understand EFS.  (e.g. the version of Windows Installer that shipped with Windows Vista knows to decrypt any encrypted folders it creates before handing off to the Windows Installer service running as SYSTEM.)
  3. The installer moves (rather than copies) the files that it unpacks into the %TEMP% directory.  (Moving encrypted files to an unencrypted directory will leave the files encrypted.)
  4. The %TEMP% folder is left encrypted while the install takes place.  You could avoid this by distributing software installs with pre- and post-install actions that run simple command-line scripts to decrypt and then re-encrypt the %TEMP% folder, e.g.:
         cipher.exe /D %TEMP%
         cipher.exe /E %TEMP%
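For the curious, here’s a rough Python sketch of such a pre/post-install wrapper – the installer command and the dry-run switch are my own illustrative inventions, not part of any shipped tool:

```python
import os
import subprocess

# Build the command sequence: decrypt %TEMP%, run the installer, re-encrypt.
# cipher.exe is the real Windows EFS tool; everything else here is a sketch.
def build_install_steps(installer_cmd, temp_dir=None):
    temp_dir = temp_dir or os.environ.get("TEMP", r"C:\Temp")
    return [
        ["cipher.exe", "/D", temp_dir],  # pre-install: decrypt the folder
        list(installer_cmd),             # the actual install
        ["cipher.exe", "/E", temp_dir],  # post-install: re-encrypt it
    ]

def run_install(installer_cmd, dry_run=True):
    steps = build_install_steps(installer_cmd)
    for step in steps:
        if dry_run:
            print(" ".join(step))             # just show what would run
        else:
            subprocess.run(step, check=True)  # only meaningful on Windows
    return steps
```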

So:

  • If all software installs are performed by a software distribution system such as SMS, Tivoli or Altiris, then you should be safe encrypting %TEMP%.
  • If your users are on Windows Vista, and
    • If the software being installed is packaged with MSI or other EFS-aware installers, then
    • You should be safe encrypting %TEMP%
  • If your users aren’t on Windows Vista, and
    • If your users install software themselves (e.g. download and run MSI install files), and
      • You can’t edit the install packages for the software that your users need to install, then
      • You should not encrypt %TEMP%.
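If it helps, here’s my reading of those rules boiled down into a little Python function – a paraphrase of the bullets above, not official guidance:

```python
# My paraphrase of the decision rules above. Returns True/False, or None
# for combinations the rules above don't address.
def safe_to_encrypt_temp(managed_distribution, on_vista,
                         efs_aware_installer, can_edit_packages):
    if managed_distribution:
        return True   # SMS/Tivoli/Altiris do the installs
    if on_vista and efs_aware_installer:
        return True   # Vista's Windows Installer handles encrypted %TEMP%
    if not on_vista and not can_edit_packages:
        return False  # users self-install and the packages can't be fixed
    return None       # the rules above don't say
```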

Hey, in the long term I hope this issue gets buried once and for all: either EFS will become so ubiquitous that customers report these issues in droves and all the installer ISVs finally fix their apps (including backports to earlier versions of Windows), or EFS will be supplanted by some future implementation of ubiquitous encryption, making file-based encryption a moot point.  [I don’t see that happening in the next few years, but never say never.]

Something broke my CacheMyWork app!

Ever since I joined up with my current employer, I’ve been unable to get consistent results out of my CacheMyWork application. It wasn’t exactly professional quality when I released it, but it did what I wanted nicely on some XP & Vista systems I’d been using.

Since getting my IT-issued notebook, however, I’ve been unable to get the darned thing to work consistently. When I “cache” a half-dozen or more apps, I’ve never yet seen *all* of them start up at my next logon; sometimes I’ve even seen that NONE of them run. And yes, I’m quite certain that the Registry entries are getting successfully created (under HKCU\…\RunOnce) – which means that something is interrupting the execution of these once I’ve logged on.
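For anyone unfamiliar with RunOnce semantics, here’s a toy Python model (emphatically not the actual Explorer implementation) of what happens at logon. Note that each value is deleted before its command is launched, which is exactly why a blocked launch leaves no trace behind:

```python
# Toy model of HKCU\...\RunOnce processing at logon. runonce_values is a
# plain dict standing in for the registry key; launch is a callable that
# returns True if process creation succeeded (or False if, say, a HIPS
# product vetoed it).
def process_runonce(runonce_values, launch):
    started, blocked = [], []
    for name in list(runonce_values):
        cmd = runonce_values.pop(name)  # the value is removed before the run
        if launch(cmd):
            started.append(cmd)
        else:
            blocked.append(cmd)         # a vetoed launch is simply lost
    return started, blocked
```

If a host-intrusion product silently fails some of those launches, you’d see exactly the symptom above: the registry writes succeed, the values disappear, and only some of the apps come back.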

My suspicions are heavily weighted towards the McAfee suite of security apps, especially AntiSpyware and HIPS (Host Intrusion Prevention Service aka “entercept”). I’ve been trying to figure out how to block their activity, even temporarily (which admittedly is pretty much what I’m sure they were built to withstand), but no luck.

I’ve tried escalating through the IT service desk folks, but they are pretty much lacking in cluefulness – all I get is pointers to a couple of web pages and a vague escalation process (which seems to terminate in “single-app exceptions” that *might* be added to the configuration). I need to be able to unblock whatever it is that’s intercepting CreateProcess (presumably) by the shell when it’s iterating through the HKCU\…\RunOnce values – the whole point of my app is to let me restart *any* user app, so one-by-one exception allowances are hardly an efficient solution.

I’ll keep digging, but if anyone has ever seen this kind of behaviour and/or has any pearls of wisdom on how to log/troubleshoot RunOnce activity, I’d sure appreciate a nice smack upside the head. [figuratively speaking – I’ll provide other rewards for anyone that actually helps me make progress…]

Twits

http://www.macnn.com/articles/07/05/24/microsoft.ipod.amnesty.bin/

Lighthearted competitive morale booster? Nope, the culmination of years of fear infused in MSFT culture over any technology that employees happen to use that doesn’t spring forth from the innards of campus (even when there are no good MSFT alternatives). Ugh, I don’t miss that childish fear of competitive pressure and/or groupthink-driven ostrich behaviour. Not one bit.

How Do I Reduce the [Security] Defects in my Software?

I’ve spent a little time here and there trying to find the “best” tool to do static analysis of some software written in a language other than C/C++/Java.  Foolishly, I figured it would be an easy task – find an authoritative site/wiki on such tools, skim for the one(s) referencing the language of interest, and browse to the download page.

Learn something new every day…

I started with my boss, who pointed out that our group (and most of Intel – at least those who’ve made their opinions known) has settled on one commercial tool, and that’s the answer we’re giving everyone who inquires about static analysis for security.  [I won’t name the product here – you don’t think I’d be that stupid do you?  That’d just be a huge invitation to the hacker community…]

Here at Intel, there’s an internal wiki where much of this “tribal knowledge” has been consolidated.  However, Intel’s development community is heavily invested in C, C++ and Java, so other languages don’t get a whole lot of attention (for good or ill, I can’t say…yet).  There are a few pointers to public web pages, including one to the List of tools for static code analysis.  OK, so scanning that page should yield the results I’m after, right?  Wrong.

The deeper I look into this, the more complex the question becomes.  Am I interested in just security defect identification, or in identifying defects overall?  Am I only interested in static analysis approaches, or should I also consider dynamic analysis tools and (whatever else I’m inferring is beyond my comprehension, based on the wealth of ways this information can be categorized)?

And on a philosophical level, should I focus my customers’ attention on a single-tool approach, or give them a Chinese menu from which to make their own selection?

I know that my team has Security deeply embedded in its genes, but I’m of the continuing philosophy that security isn’t an end unto itself; security is just one means to a greater set of ends.  Why make something more secure?  To make it more

  (a) available
  (b) reliable
  (c) trustworthy
  (d) all of the above
  (e) none of the above

?

Personally, my bias leads me to believe that (c) begets (b) begets (a).  However, despite six years of Microsoft indoctrination, “Trustworthy” still feels too much like a buzzword to me – so I’m inclined to choose (b).  If my software is more secure, I’m likely to rely on it more readily (and advocate that others rely on it more heavily) for my critical activities.

Now, if it’s really secure [whatever that means] but horribly unreliable for non-security reasons, I’m still unlikely to bother with that software for any great length of time.  (e.g. Google Desktop crashes rather frequently on my current PC, and while I’m fairly convinced it’s not hackers causing it to fail but rather the “security software suite” I’m forced to run intercepting some low-level driver or filter, I’ve still abandoned it – and am looking for the next-sexiest desktop search software.)

Back to my original train of thought: All other things equal, I’d prefer to have developers using static analysis tools for overall defect reduction, rather than recommend security defect reduction tools.  This would make the developers happier too: instead of having to remember to “run that security tool” too, they’d just get the benefit of overall quality improvements (of which security should just be one component).

As for the static/dynamic/other analysis angle, I feel like I’m just learning to doggie-paddle in this area.  I’m not even qualified to discuss the difference between these realms yet.

However, on the “single tool” vs. “Chinese menu” question, I have a very clear opinion: neither and both.  Really, what my customers are asking from me is to reduce the burden of research and analysis.  Ideally they’d like me to give them the answer they’d have come to themselves (given enough time), but it’s usually acceptable to provide a shorter, more organized list than they’d get out of Google.  I can usually be a big hero if I can:

  • do the research with their inquiry in mind (e.g. “What tool or approach should I use to identify and eliminate the greatest number of significant security issues in my code with the least amount of effort?”)
  • eliminate the tools that obviously aren’t intended for their language/job role
  • write up a prioritized list of tools for them to review/try, ordered by which is most likely to meet their needs

Am I the only one out here that thinks this way?  If not, I certainly haven’t found such a list from anyone who shares my point of view.  I’ve got the “single approved tool” on the one hand, and the “canonical lists of tools” on the other.  NIST has made a good start by categorizing tools based on semi-abstract goals for their use (e.g. “safer languages”, “dynamic analysis”), but I have to wade through multiple lists and descriptions to figure out if each tool analyses code in my sought-after language.

Is there anywhere I can go to find a list of all tools that address a specific development language, and sort them at least by age, number of versions, number of users or number of features?

Security scrubbing of Python code – PyChecker or nothing?

I’m hardly versed in the history or design of the Python programming language (I just started reading up on it this week), but I know this much already: Python is intended to be a very easy-to-use scripting language, minimizing the burden of silly things like strongly typing your data (not to mention skipping the arguable burden of compiling your code).

Most developers don’t have two spare seconds to rub together, and are hardly excited at the prospect of taking code that they finally stabilized and having to review/revisit it to find and fix potential security bugs.  Manually droning through code has to be about the most mind-numbing work that most of us can think of eh?

On the other hand, static analysis tools are hardly an adequate substitute for good security design, threat modelling and code reviews.

Still, static analysis tools seem to me a great way to reduce the workload of secure code reviews and let the developer/tester/reviewer focus on more interesting and challenging work.

Is it really practical to expect to be able to perform complex, comprehensive static analysis of code developed in a scripting language?  I mean, theoretically speaking anyone can build a rules engine and write rules that are meant to test how code could instruct a CPU to manipulate bits.  It’s not that this is impossible – I’m just wondering how practical it is at our current level of sophistication in terms of developing software languages, scripting runtimes and modelling environments.  Can we realistically expect to get away with easy development, ease of maintenance (since the code isn’t compiled) and robust software quality/security/reliability all at once?
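That said, even a rudimentary static check is easy to sketch with Python’s own ast module – here, flagging calls to eval/exec-style builtins (the list of risky names is purely illustrative, and a real tool does vastly more than this):

```python
import ast

RISKY = {"eval", "exec", "compile"}  # illustrative, not a complete list

# Walk the parse tree and report calls to names commonly flagged as risky.
def find_risky_calls(source):
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY):
            findings.append((node.lineno, node.func.id))
    return findings
```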

I’m certainly not trying to disparage the incredible work that’s gone into PyChecker already – anything but.  However, when a colleague asks me if there are any other static analysis tools in addition to PyChecker, I have to imagine that (a) he has some basis for comparison among static analysis tools and (b) that PyChecker doesn’t quite meet the needs he’s come to expect from checkers targeted at other languages.

Slashdot | Is Assembly Programming Still Relevant, Today?

Seemed an especially relevant discussion, given my recent immersion in Intel (where they build chips that run the assembly code spat out by compilers – or that generated by really hardcore software engineers).

Every once in a while I get lured in by the taunts from these demi-gods: until I learn the depths of what’s going on behind the scenes, having a true understanding of how the software works (and how to make it work/work better) will continue to elude me.

I pride myself on being able to dig deep to troubleshoot weird/opaque issues, and to come up with better ways to make existing software work. No matter how much I already know about TCP/IP headers, registry structures, sysinternals tools, APIs, and DLL dependency mapping, my natural curiosity always gets the better of me. I want to be able to answer the next layer of questions, and to do that it seems I have to keep peeling away the layers of how my computers work.

I must’ve hit a natural inflection point recently, as I finally made the leap from skimming other people’s code to writing my own. I can’t exactly say whether it was just time + frustration that finally drove me over the edge, or if I recently reached some level of expertise in my problem-solving skills. One way or another, something finally let me articulate problems (and the possible steps to solving them) clearly enough to both spell out achievable software requirements and write code that might actually approximate fulfilling them.

All that said, I think it’s hard to imagine going from VB.NET to Assembler without envisioning a cliff-face learning curve. Good freakin luck making that leap. I’ll be lucky if I get my brain wrapped around C++ and Java in the next year or two, let alone non-OO languages or their machine-level equivalents. Heh.

So what does this thread on Slashdot tell me, exactly? It’s a typical array of folks like me just trying to earn a living, along with the requisite grumpy old farts and disrespectful kiddies. Seems to me that there are three main bodies of opinion:

  • The largest number of folks are fine having some level of understanding of what goes on between the CPU microcode and the high-level programming languages – i.e. the actual compiled code that’s run by the CPU. If you don’t understand what’s going on under the hood, then you’re likely to make some pretty classic mistakes that lead to less-than-stellar performance, security or both.
    • So long as you can *read* some assembler and maybe add a dash of ability to predict how your own code might behave at this level, then you’re worthy of being called a professional coder.
    • One poster commented, “It’s good to know what goes on under the hood, sure. But in many, many software developer tasks, early optimization is the root of all evil.” I’m in full agreement with this philosophy – diving into the code head first (or head-up-arse first) does no good if the purpose for writing the code isn’t understood, and the major functionality hasn’t been demonstrated to be possible (let alone workable as a whole).
  • Multiple posters made the point that, while a really skilled developer can write really efficient and elegant assembly code, 99% of developers are (a) hacks or (b) wasting time doing something that quality, modern compilers do really well these days. “It’s no use being a mediocre assembly programmer.”
  • The folks I really love are the hardcore “That’s no’ oatmeal!” types – until you can write your code directly in assembler, don’t bother me kid, and I hope your code never tries to infiltrate my box. Keep your crap code out of my way, ’cause obviously you suck until you’ve mastered byte-by-byte electron pushing.

However, there was one insight that really caught my eye: [among the reasons you might need proficiency in assembly language] “You’re writing malware to exploit a buffer overflow vulnerability.” Those writing the really quality malware these days are taking advantage of any and every layer of the software stack – that’s where the money is, and it’s where you can stay as far ahead of the originators of the software you’re exploiting as possible. [Given the increased entry of organized criminals into this arena, I expect the incentives for skilled coders here to only increase.]

And I can’t wait for the hackers to focus their attention on exploiting implementation idiosyncrasies in the virtualization stacks and the underlying hardware – CPUs, chipsets, video, I/O, storage/mobo/video/networking firmware, etc. etc. etc. Imagine trying to roll out fixes for hardware… (rolled eyes)

Link to Slashdot Is Assembly Programming Still Relevant, Today?

Week One at Intel

Feels like a comfortable pair of shoes… Strangely, despite all the advice I’ve heard from friends and colleagues that Intel’s corporate culture is very different from Microsoft’s, I felt pretty relaxed with the information that’s been thrown at me so far, and I feel confident I’ll be able to take on the responsibilities that are thrown my way.

I also feel welcomed at Intel. Obviously there were a few folks who wondered who “the new guy” taking over the empty cubicle was, but everyone I’ve met so far has made me feel welcome and respected.

Respected? What the h*** does he mean by that? Well, I must confess I worried that folks would think that a “software guy” from Microsoft wouldn’t have much to contribute at a hardware company like Intel. And in the first few hours of being there, I got a really overwhelming sense that Intel is incredibly “engineering-friendly”. [Hell, the maps on the walls of the buildings that tell you how to get around look like they’re straight out of AutoCAD.] Not unlike Microsoft, where I always felt a little “outside” because I didn’t know how to code, I get this sense from Intel that if you don’t grok hardware, and aren’t an engineer, then you’re second-class and will always be climbing uphill to prove yourself.

The jury’s still out on whether a non-engineer can really earn “first-class citizen” status at Intel, but given the number of times I’ve heard my security colleagues here reference Microsoft as an organization that’s well ahead on the security front, I feel like my credentials should be reasonably intact for now.

Tidbits that I didn’t know until I got here:

  • Intel is a cubicle farm – everyone has a cubicle here, allegedly up to the executive class. It’ll take me a while to get used to the cacophony of shared conversations and random noises, but I really hope I adjust soon.
  • The cafeteria here is even nicer than the ones on Microsoft campus – I had a Tempeh curry dish yesterday (yum!), and today I discovered the self-serve sandwich bar – take your bread, load up whatever fillings and toppings you like, and pay by the ounce. [I had inch-thick tuna salad, a muffin and an apple for $5.31 – darned reasonable.]
  • These guys seem to have standardized exclusively on Thinkpads – it’s amazing after seeing all the wild variations of hardware at Microsoft to see just *one* OEM’s PCs everywhere. It’s almost…cultish. Still, if you’re going to choose only one notebook, I can think of much worse choices than these.
  • Microsoft’s concept of “long-timer” is pretty paltry compared to Intel’s. Ten years at MS is an accomplishment, and anyone with 15+ years at MS is considered a “volunteer” (i.e. part of the generation who earned enough cheap stock options to not need the paycheque, but still comes to work for some reason). I’m working with a guy who’s been with Intel for 27 years, and I’ll meet another on Friday who’s been with Intel for 30. Holy crap – thirty years ago, I was mastering finger-painting, while these guys were pioneering circuit designs.

I’m still figuring out what I’ll be doing around here, but so far it looks pretty exciting. I’ll tell you more about it in the near future.