Just one of the many reasons why Vista pisses me off…

I’ve spent the better part of three nights a week, for at least a month, trying to figure out how to reinstall my Linksys WUSB54G USB Network Adapter.  I’d bought this nice little device a little while ago, and I was foolish (!?!) enough to think that I could disconnect it and plug it into any old USB port on my Vista PC, and have it work again.  [After this many years of working with USB devices in this manner, what was I thinking?!?]

Instead, I found out when I plugged it back in that its attempts to “reinstall the driver” (during creation of the “new” device — oops, I guess plugging it into a different USB port was NOT to Vista’s liking) were stymied by one of the most impenetrable errors I’ve ever encountered: ERROR_DUPLICATE_SERVICE_NAME.  Oh sure, you’d think this would be an easy one to resolve, eh?  Sure – just try to find the duplicated name anywhere in the Services hive of the Registry.  There was nothing with “Linksys” in the name, and simply deleting anything with “Linksys” or “WUSB54G” in any key, value or data didn’t cut it.  Vista still bitched about the duplicate name.

The error has plenty of references online (e.g. peruse here or here), but no one seemed to have a decent solution for any of the Linksys network devices at all similar to mine.  Plenty of speculation, just no good results.

Yes, I tried KB 823771, I’ve tried crawling through the SETUPAPI.LOG file, and I’ve tried a number of other brick walls to bang my head against.  The closest I got with the SETUPAPI.LOG was to look for references to “xxxxx” (can’t recall what that said exactly anymore), as in:

#E279 Add Service: Failed to create service “xxxxxx”. Error 1078: The name is already in use as either a service name or a service display name.
#E033 Error 1078: The name is already in use as either a service name or a service display name.
#E275 Error while installing services. Error 1078: The name is already in use as either a service name or a service display name.
#E122 Device install failed. Error 1078: The name is already in use as either a service name or a service display name.
#E154 Class installer failed. Error 1078: The name is already in use as either a service name or a service display name.
#I060 Set selected driver.

Aside: Why I Hate Vista

I’m having a bitch of a time trying to get Vista to preserve a network connection through its Sleep & Resume states.  I know that part of it is the fact that the networking hardware vendors haven’t written solid, stable drivers for Vista, but considering how widespread this issue is (even to this day — what, almost a year since release?), it’s really making me more frustrated with Vista [or perhaps it’s really that I’m just pissed off at myself for having bought into the hype around Vista, when all it’s given me since I brought it home has been needless hardware replacement and constant crashes, freezes, and troubleshooting].

This is the third network device I’ve purchased for my Vista box, and the third one that has had driver issues.  The first one just didn’t have a Vista driver, and the claimed “should be compatible” XP driver just gave Vista too many bluescreens.  The second one had a Vista driver and really good reviews on newegg.com, but the device would lose its driver as soon as Vista went to Sleep (and then resumed), and wouldn’t reload until I rebooted the box.  I’m not kidding — I spent a month trying to get that one to work like it should’ve.

I’ve been a Windows bigot for most of my adult life, and I even spent six years working for Microsoft, every day spent trying to make sure that Windows would work reliably and securely for my customers.  If *I* have this much trouble with Vista, my sympathies to those of you who’ve been trying to get by on just being a *part*-time Windows geek.  [And my sarcasm should be apparent, as I am firmly of the belief that *no* one should have to learn the ins-and-outs of a computer, just to be able to operate it.  If you *want* to geek out, by all means c’mon aboard.  But if you have *other* interests, then the device should be your servant — not the other freakin’ way around.]

Resolution (?)

What did I finally do that did (or seems to have done) the trick?

I finally went through the Registry and deleted every key that in any way, shape or form referred to “USB\VID_13B1”.  The HARDWAREID for the Linksys WUSB54G USB Network Adapter is USB\VID_13B1&PID_000D (or some derivative thereof), and while this was never mentioned as the source of the error in any of the logs I crawled through, it finally struck me as the most likely commonality among all the “duplicate names” that Vista must have detected during the attempted install of the device.  I only found a few such entries, but obviously they were the underlying showstopper for re-introducing this wireless device into my setup.
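If you’re stuck in the same place, a Registry-wide search for the hardware ID is the quickest way I know to enumerate the suspect keys.  Treat this as a sketch of the approach, not gospel: the hardware ID below is specific to my adapter, and some of the keys it turns up (particularly under Enum) may need their permissions loosened before they’ll let you delete them:

    reg query HKLM /s /f "VID_13B1"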

Grrr…

Memory leaks, GDI Objects and Desktop Heap – Windows registry changes for high-memory systems

In case I haven’t blogged about it this year, I wanted to share the usual fix-up that needs to be done to make full use of more than, say, 512 MB of RAM:
http://www.blogcastrepository.com/blogs/mattbro/archive/2006/08/21/2013.aspx

I had to swap “shells” recently, dropping my laptop’s hard drive into a replacement chassis. I realized later that the replacement had half the usual RAM, and it took a few weeks to get back up to the 2 GB I was supposed to have.

On the suspicion that Windows might readjust its memory allocation parameters if it detects less memory than it started with, I figured I’d check on it after getting the RAM upgraded back to 2 GB. Sure enough, things are back to the defaults:

  • the “Windows SharedSection” portion of the SubSystems\Windows Registry setting was back at 1024,3072,512, and like Matt I boosted it to 1024,8192,2048
  • the “SessionViewSize” Registry setting was back at 48 MB, and I boosted it to 64 MB (just another multiple of 16; I figured a little more probably goes a long way).  The locations of both settings, as I understand them, are sketched below.
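For reference, here are a couple of reg.exe one-liners for checking the current state of those values (the paths are as I understand them from Matt’s post and my own poking around):

    reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems" /v Windows
    reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v SessionViewSize

The SharedSection triplet is buried inside that long SubSystems\Windows string, and SessionViewSize is a REG_DWORD expressed in MB.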

Now go and do likewise.

Intriguing question: What standards does Authenticode use?

A colleague of mine just asked a very interesting if potentially misleading question: what standards are used/implemented by Microsoft Authenticode?

I felt pretty dumb at first, because I couldn’t even grok the question.  Authenticode is essentially a set of APIs, usually shipped with IE and its dependencies – that’s the nearest I can come to a reasonably relevant answer to the question.

When pressed for details, it turns out the context was a security investigation of a particular set of software being developed by a non-security group.  The security auditor was looking for answers to questions like which digital signature standards are implemented by their code, what crypto algorithms, etc, and the responses from the developers were of the form “don’t worry, we’re using crypto, it’s all well-understood”.

I’ve been in this situation many times, and I have a permanent forehead scar from the amount of time I’ve beaten my head against a wall trying to get answers to such questions out of Developers.  I have learned (the hard way) that this is a fruitless exercise – it’s like asking my mom whether her favourite game is written in managed or unmanaged code.  Either way, the only response you should expect is a blank stare.  [Yes, there is a small minority of developers who actually understand what is going on deep beneath the APIs of their code, but with the growth of 3rd & 4th-generation languages, that’s a rapidly dying breed.]

Advice: Look for Code, not Standards

My advice to my colleague, which I’m sharing with you as well, is this: don’t pursue the security constructs implemented (or not) by their code.  If you don’t get an immediate answer to such questions, then switch as fast as possible to unearthing where in their code they’re “using crypto”:

  • What component/library/assembly, or which source file(s), are “doing the crypto operations”?
  • Which specific API calls/classes are they using that they believe are implementing the crypto?

With a narrowed-down list of candidate APIs, we can quickly search the SDKs & other documentation for those APIs and find out everything that’s publicly known or available about that functionality.  This is a key point:

  • once the developers have implemented a piece of code that they believe meets their security requirements, often they cannot advance the discussion any further
  • once you’ve found the official documentation (and any good presentations/discussions/reverse-engineering) for that API, there’s usually no further you can take the investigation either.
  • If you’re lucky, they’re using an open-source development language and you can then inspect the source code for the language implementation itself.  However, I’ve usually found that this doesn’t give you much more information about the intended/expected behaviour of the code (though sometimes that’s the only way to supplement poorly-documented APIs), and a security evaluation at this level is more typically focused on the design than on finding implementation flaws.  [That’s the realm of such techniques as source code analysis, fuzzing & pen testing, and those aren’t usually activities that are conducted by interviewing the developers.]

Specific Case: Authenticode

Let’s take the Authenticode discussion as one example:

  • the developers are almost certainly using Win32 APIs, not managed code, since managed-code developers more often refer to the System.Security namespace & classes – however, ShawnFa makes it clear that Authenticode also plays in the managed-code space, so watch out.
  • Authenticode is implemented by a number of cryptographic APIs in the Win32 space
  • This page leads one to think they ought to read works from such esteemed authorities as CCITT, RSA Labs and Bruce Schneier, but as with most Microsoft technology you’re better off looking first at how Microsoft understands and has interpreted the subject.
  • My understanding of Authenticode is that it’s more or less a set of tools and common approaches for creating and validating digital signatures for a wide array of binary files
  • However, its most common (or perhaps I should say most attention-generating) usage is for digitally signing ActiveX controls, so let’s pursue that angle
  • A search of MSDN Library for “activex authenticode” leads to an array of articles (including some great historical fiction – “We are hard at work taking ActiveX to the Macintosh® and UNIX“)
  • One of the earliest (and still one of the easiest to follow) was an article written in 1996 (!) entitled “Signing and Marking ActiveX Controls“.  This article states:
    • Once you obtain the certificate, use the SIGNCODE program provided with the ActiveX software development kit (SDK) to sign your code. Note that you’ll have to re-sign code if you modify it (such as to mark it safe for initializing and scripting). Note also that signatures are only checked when the control is first installed—the signature is not checked every time Internet Explorer uses the control.
  • Another article indicates “For details on how to sign code, check the documentation on Authenticode™ in the ActiveX SDK and see Signing a CAB File.”  The latter also says to use SIGNCODE; the former wasn’t linked anywhere I looked on the related pages.

Further searches for the ActiveX SDK led to many pages that mention but do not provide a link to this mysterious SDK. [sigh…]  However, I think we can safely assume that all APIs in use are those implemented by SIGNCODE and its brethren.  [If you’re curious which ones specifically, you could use Dependency Walker (depends.exe) to make that determination.]

  • However, one of the articles I found has led me to this, which I think provides the answers we’re after: Signing and Checking Code with Authenticode
    • “The final step is to actually sign a file using the SignCode program. This program will:
      • 1. Create a Cryptographic Digest of the file.
        2. Sign the digest with your private key.
        3. Copy the X.509 certificates from the SPC into a new PKCS #7 signed-data object. The PKCS #7 object contains the serial numbers and issuers of the certificates used to create the signature, the certificates, and the signed digest information.
        4. Embed the object into the file.
        5. Optionally, it can add a time stamp to the file. A time stamp should always be added when signing a file. However, SignCode also has the ability to add a time stamp to a previously signed file subject to some restrictions (see the examples that follow the options table).”
    • -a = “The hashing algorithm to use. Must be set to either SHA1 or MD5. The default is MD5.”  (A sample command line using SignCode and ChkTrust follows this list.)
    • -ky = “Indicates the key specification, which must be one of three possible values:

      1. Signature, which stands for AT_SIGNATURE key specification.
      2. Exchange, which stands for AT_KEYEXCHANGE key specification.
      3. An integer, such as 3.
      See notes on key specifications below.”

    • “The ChkTrust program checks the validity of a signed file by:
      1. Extracting the PKCS #7 signed-data object.
      2. Extracting the X.509 certificates from the PKCS #7 signed-data object.
      3. Computing a new hash of the file and comparing it with the signed hash in the PKCS #7 object.

      If the hashes agree, ChkTrust then verifies that the signer’s X.509 certificate is traceable back to the root certificate and that the correct root key was used.

      If all these steps are successful, it means that the file has not been tampered with, and that the vendor who signed the file was authenticated by the root authority.”

    • “The MakeCTL utility creates a certificate trust list (CTL) and outputs the encoded CTL to a file. MakeCTL is supported in Internet Explorer 4.0 and later.

      The input to MakeCTL is an array of certificate stores. MakeCTL will build a CTL which includes the SHA1 hash of all of the certificates in the certificate stores. A certificate store can be one of the following:

      • A serialized store file
      • A PKCS #7
      • An encoded certificate file
      • A system store”
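Pulling the SignCode and ChkTrust pieces together, here’s roughly what the old-school signing workflow looks like from the command line.  This is a hedged sketch: the SPC/PVK file names, the control name and the timestamp URL are placeholders of mine, and the switches are as I remember them from the SignCode documentation quoted above:

    rem Sign the control, hash with SHA-1, and timestamp it (all file names are placeholders)
    signcode -spc mycredentials.spc -v myprivatekey.pvk -a sha1 -t http://timestamp.verisign.com/scripts/timstamp.dll mycontrol.ocx

    rem Verify the signature the same way IE's checks would
    chktrust mycontrol.ocx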

 

For those who still want more detail, I’d recommend digging into CryptoAPI, and especially reviewing the FIPS submissions Microsoft has made for the Windows components that have been FIPS-evaluated.

 

Aside: Here’s a really neat easter egg I stumbled on: the Internet Explorer Application Compatibility VPC Image.  You can download a Virtual PC image pre-installed with the stuff you need to troubleshoot compatibility issues for IE apps, add-ins etc.  Very helpful – it’ll save you a few hours of setting up a clean testing environment every time you run into a problem (or, if you’re like me, weeks of hair-pulling when trying to troubleshoot such issues using your own over-polluted browser).

Porting Word2MediaWikiPlus to VB.NET: Part 7

Previous articles in this series: Prologue, Part 1, Part 2, Part 3, Part 4 [no Part 5 – apparently I lost the ability to count], Part 6.

Troubleshooting ThisAddIn.Startup() continued…

Still struggling with getting the CommandBar and CommandBarButton instantiated in the ThisAddIn.Startup() Sub.  I’m finding that the initial exploration of the CommandBar to see if there is a pre-existing instance of the “W2MWPP Convert” button is not working.  The code starts off like this:

        ' Look on the add-in's toolbar for a button we created on a previous run
        Dim MyControl As Microsoft.Office.Core.CommandBarButton
        MyControl = Application.CommandBars("W2MWPPBar").FindControl(Tag:="W2MWPP Convert")

Then, when I debug (F5) this add-in, Word reports an unhandled exception with the error “Value does not fall within the expected range”.  I seem to recall having this same problem with my previous VSTO Word add-in until the button had been created once — after that, the next time I ran the add-in, FindControl() had something to find.  At present, since the button doesn’t exist, it appears that FindControl() is getting “jammed” and I’m never going to get anywhere (kind of a chicken-and-egg problem).

It will be easy to get around this problem on my computer, but I’m afraid that when I build and release this add-in for others to install, if the code starts with a FindControl() call when there’s no button to find, no one else will be able to use it either.

Alternative approach to creating the CommandBarButton?

I have to imagine that there’s another way to skin the cat: if we need to determine if the button exists before attempting to create it, but trying to find it by name isn’t working, then perhaps there’s some CommandBar control collection that we could iterate through, and compare the Tag value for each (if any) to find the one we want.  That should go something like this:

Dim commandBarControlsCollection As Office.CommandBarControls = W2MWPPBar.Controls
Dim buttonExists As Boolean

' Walk every control on the bar and match on the Tag assigned at creation time
For Each control As Microsoft.Office.Core.CommandBarControl In commandBarControlsCollection
    If control.Tag = "W2MWPP Convert" Then
        MyControl = control
        buttonExists = True
    End If
Next

If buttonExists = False Then
    'Create a new ControlButton, since the loop above didn't find an existing one
    MyControl = Application.CommandBars("W2MWPPBar").Controls.Add(Type:=Microsoft.Office.Core.MsoControlType.msoControlButton)
End If

Is it a Variable Scope issue?

This still doesn’t resolve the error, so I’m continuing to search for good example code from folks who should know how to construct VSTO code.  This blog entry from the VSTO team has an interesting thing to say:

You should declare your variables for the command bar and buttons at the class level so that your buttons don’t suddenly stop working.

The referenced article (which I’ve linked via Archive.org — the Internet Wayback Machine) says:

The solution is to always declare your toolbar/menu/form variables at the class level instead of inside the method where they’re called. This ensures that they will remain in scope as long as the application is running.

I wonder whether this advice was more relevant to document-based VSTO projects rather than the application-level add-ins that are possible today — but something tells me it can’t hurt either way, and it’s worth trying to see if it changes anything about the errors above.
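For reference, here’s roughly what that advice looks like in my project (a minimal sketch, with the handler body elided, using the variable names from the code above):

    Public Class ThisAddIn

        ' Declared at class level, per the VSTO team's advice, so the COM objects
        ' stay referenced (and any event hookups stay alive) for the life of the add-in
        Private W2MWPPBar As Microsoft.Office.Core.CommandBar
        Private MyControl As Microsoft.Office.Core.CommandBarButton

        Private Sub ThisAddIn_Startup(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Startup
            ' ...CommandBar and CommandBarButton creation goes here, assigning to the
            ' class-level variables above rather than to locals...
        End Sub

    End Class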

Result: unfortunately, this isn’t a problem of variable scope.  In taking a closer look at the exception, here’s the first-level exception:

System.ArgumentException: Value does not fall within the expected range.
   at Microsoft.Office.Core.CommandBarsClass.get_Item(Object Index)

Am I looking at the wrong object?

What exactly is the problem?  Is this saying that get_Item() is failing to get the CommandBar, or the CommandBarButton?  I’ve assumed up to now that it’s a problem referencing the CommandBarButton, since the CommandBar is getting created in Word each time I Debug this add-in.  However, now that I’m looking at it, CommandBarsClass.get_Item() seems more likely to be acting on the CommandBar than the button (or else it’d refer to something like CommandBarButtonsClass.get_Item(), no?).

What’s odd, however, is that the VS Object Browser doesn’t even have an entry for CommandBarsClass — when I search for that term, no results come up, and when I search on “CommandBars”, the closest thing I can find is the “Class CommandBars” entry, which doesn’t have a get_Item() method.

Searching in MSDN, I found the entry for CommandBarsClass Members, which doesn’t reference the get_Item() method but does mention an Item Property.  That page says a very curious thing:

This property supports the .NET Framework infrastructure and is not intended to be used directly from your code.

I wonder what that’s all about then?  In fact, the documentation for the CommandBarsClass Class also says the same thing.  I can understand that there are some “internal functions” generated by the compiler that aren’t really meant for use in my code, but it’s really tough to debug a problem when these constructs barely get a stub page and there’s no information to explain what I should think when one of these things pops up in my day-to-day work.

I feel like I’m chasing my tail here — now I’m back on the Members page, hoping that one of the Properties or Methods that are documented will help me deduce whether this class references the CommandBar or the CommandBarButton when it calls get_Item() [and maybe even help me figure out why a just-created CommandBar object can’t be referenced in code].

The best clue I’ve come up with so far: the page documenting the Parent Property shows that under J# there’s what appears to be a get_Parent() method, which doesn’t exist in the other languages mentioned.  That leads me to believe the get_Item() method is something generated by the compiler when it needs to get the value of the Item Property.  [At least I’m learning something for all my trouble…]

The only other tantalizing tidbit so far is that the CommandBarsClass page indicates that this class implements interfaces that all refer to CommandBar, not to any controls associated with the CommandBar: _CommandBars, CommandBars, _CommandBarsEvents_Event.  I can’t tell the difference between the first two (at least from the docs), but obviously the Event interface is its own beast.

Success: it’s the CommandBar!

I think I have confirmation, finally: the docs for _CommandBars.Item state that the Item Property “Returns a CommandBar object from the CommandBars collection.”  OK, so now I finally know: my code is barfing on trying to access the just-created CommandBar, not the CommandBarButton as I thought all along.  Whew!
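With that figured out, here’s a minimal sketch of a guard that should sidestep the lookup problem (untested as of this writing, and the Temporary:=True parameter on Add() is just my assumption about how the bar ought to be created):

    ' Assumes W2MWPPBar is the class-level CommandBar variable declared earlier
    W2MWPPBar = Nothing

    ' Walk the CommandBars collection instead of indexing it by name, so a
    ' missing bar doesn't throw before we've had a chance to create it
    For Each bar As Microsoft.Office.Core.CommandBar In Application.CommandBars
        If bar.Name = "W2MWPPBar" Then
            W2MWPPBar = bar
            Exit For
        End If
    Next

    If W2MWPPBar Is Nothing Then
        W2MWPPBar = Application.CommandBars.Add(Name:="W2MWPPBar", Temporary:=True)
    End If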

Aside: Re-Using Variables

I’m not much of a code snob yet — there are very few things I know how to do “right”.  However, I’ve found something in the original VBA code that just doesn’t seem right, and I’m going to change it.

The original code sets up three separate buttons on the CommandBar, and each time it looks for a button, creates it and configures it, it uses the exact same variable: MyControl.  I know this obviously worked (at least in the VBA code), so it’s hardly illegal, but it seems much safer to create three variables and instantiate them separately.  Maybe it’s just so I can follow the code more easily, I don’t know.  In any case, I’m having a hard time with it, so I’m going to call each one something different.

However, I’m not so much of a snob that I’ll create three boolean variables to track whether I’ve found an existing instance of the button, so I’m going to re-use the buttonExists variable.

 

Keep tuning in… someday I’ll actually start getting into Wiki-related code (I swear!)

Troubleshooting RunOnce entries: Part One

I’ve been investigating the root cause of a critical issue affecting my CacheMyWork app (for those of you paying attention, it has come up in the past in this column). Ever since I received my (heavily-managed) corporate laptop at work, I’ve been unable to get Windows XP to launch any of the entries that CacheMyWork populates in RunOnce.

Here’s what I knew up front

  • On other Windows XP and Windows Vista systems, the same version of CacheMyWork will result in RunOnce entries that all launch at next logon
  • On the failing system, the entries are still being properly populated into the Registry – after running the app, I’ve checked and the entries under RunOnce are there as expected and appear to be well-formatted
  • The Event Log (System and Application) doesn’t log any records that refer even peripherally to RunOnce, let alone that there are any problems or what might be causing them
  • The entries are still present as late as just before I log off (i.e. they’re not being deleted during my pre-reboot session).

Here’s what I tried

UserEnv logging
  • I added HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon\UserEnvDebugLevel = 30002 (hex); a reg.exe one-liner for this follows below.
  • This is enough to show that the processes I’m observing are firing up correctly, but nothing in the log contains “runonce” or the names of the missing processes, and I haven’t spotted any entries that point to problems with the RunOnce processing.
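For anyone following along, this is the sort of command that flips that value on (the path and value are straight from the bullet above; if memory serves, the verbose log then lands in %windir%\Debug\UserMode\userenv.log):

    reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v UserEnvDebugLevel /t REG_DWORD /d 0x30002 /f
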
ProcMon boot-time logging
  • I’ve got over 3.3 million records to scan through, so while I haven’t found anything really damning, I may never be 100% sure there wasn’t something useful.
  • After a lot of analysis, I found a few interesting entries in the ProcMon logs:
Process | Request | Path | Data
mcshield.exe | RegQueryValue | HKLM\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\BehaviourBlocking\FileBlockRuleName_2 | Prevent Outlook from launching anything from the Temp folder
mcshield.exe | RegQueryValue | HKLM\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\BehaviourBlocking\FileBlockRuleName_10 | Prevent access to suspicious startup items (.exe)
mcshield.exe | RegQueryValue | HKLM\SOFTWARE\Network Associates\TVD\Shared Components\On Access Scanner\BehaviourBlocking\FileBlockWildcard_10 | **\startup\**\*.exe
BESClient.exe | RegOpenKey | HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce | Query Value
Explorer.exe | RegEnumValue | HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce | Index: 0, Length: 220
waatservice.exe | RegOpenKey | HKLM\Software\Microsoft\Windows\CurrentVersion\RunOnce | Desired Access: Maximum Allowed

Windows Auditing

I finally got the bright idea to put a SACL (audit entry) on the HKCU\…\RunOnce registry key (auditing any of the Successful “Full Control” access attempts for the Everyone special ID). After rebooting, I finally got a hit on the HKCU\…\RunOnce key:

Event Log data

First log entry
  Category: Object Access
  Event ID: 560
  User: (my logon ID)

Second log entry
  Category: Object Access
  Event ID: 567
  User: (my logon ID)

Third log entry
  Category: Object Access
  Event ID: 567
  User: (my logon ID)

And here are the interesting bits of Description data for each:

First log entry (Event ID 560)
  Title: Object Open
  Object Type: Key
  Object Name: \REGISTRY\USER\S-1-5-21-725345543-602162358-527237240-793951\Software\Microsoft\Windows\CurrentVersion\RunOnce
  Image File Name: C:\WINDOWS\explorer.exe
  Accesses: DELETE, READ_CONTROL, WRITE_DAC, WRITE_OWNER, Query key value, Set key value, Create sub-key, Enumerate sub-keys, Notify about changes to keys, Create Link
  Access Mask: [n/a]

Second log entry (Event ID 567)
  Title: Object Access Attempt
  Object Type: Key
  Object Name: [n/a]
  Image File Name: C:\WINDOWS\explorer.exe
  Accesses: [n/a]
  Access Mask: Query key value

Third log entry (Event ID 567)
  Title: Object Access Attempt
  Object Type: Key
  Object Name: [n/a]
  Image File Name: C:\WINDOWS\explorer.exe
  Accesses: [n/a]
  Access Mask: Set key value

Not that I’ve ever looked this deep into RunOnce behaviour (nor can I find any documentation to confirm), but this seems like the expected behaviour for Windows. Except for the fact that something is preventing the RunOnce commands from executing, of course.

Blocking the Mystery App?

Then I thought of something bizarre: maybe Explorer is checking for RunOnce entries to run during logon, and it isn’t finding any. Is it possible some process has deleted them during boot-up or logon, but before Explorer gets to them?

This flies in the face of my previous theory – that the entries were still there when Windows attempted to execute them, but something was blocking their execution. Now I wonder whether the entries are even there to find – whether some earlier component has already deleted them (to “secure” the system).

If so, the only way to confirm my theory (and catch this component “in the act”) is if the component performs its actions on the Registry AFTER the LSA has initialized and is protecting the contents of the Registry. [It’s been too long since I read Inside Windows NT, so I don’t recall whether access to securable objects is by definition blocked until the LSA is up and ready.]

Hoping this would work, I enabled “Deny” permission for Everyone on the HKCU\…\RunOnce key for both “Set Value” and “Delete” (not knowing which one controls the deletion of Registry values in the key). This also meant that I had to enable Failure “Full Control” auditing for the Everyone group on this key as well.

However, while I’ve now firmly confirmed that the deletion takes place once I remove this Deny ACE, I can’t get Windows to log any information indicating which process or driver is deleting the registry entries (and thus preventing Windows from executing them). It looks like – beyond what I’ve already found – there’s nothing else for which the LSA is asked to make an access-control decision on the HKCU\…\RunOnce key.

“Run Away!”

That’s all for now – I’m beat and need to regroup. If anyone has any bright ideas on ways to try to dig deeper into this system and figure out what’s missing, I’d love to hear it.

To be continued…

Encrypting %TEMP% with EFS: software installation concerns

One of the biggest bones of contention in the use of EFS is whether to encrypt the user’s %TEMP% folder or not.  It starts off pretty innocuously: many applications create temporary files in the user’s %TEMP% directory, and often these files can contain the same sensitive data that is contained in the original data files the users are opening.  That means the %TEMP% folder should be encrypted, right?

Microsoft originally recommended that %TEMP% be encrypted when using EFS.  Then reports of application-compatibility issues came in, which spawned new “don’t encrypt %TEMP%” advice that has lingered long after those issues stopped being a real problem for most customers.  And yet there are still varying opinions on this (e.g. here and here).

However, there’s one case that continues to dog those of us trying to enforce protection of sensitive data using EFS: software installation.  If I encrypt my %TEMP% folder and then try to install a bunch of applications myself (e.g. download and run the install files through the Windows UI), chances are I’ll find a few applications that either (a) won’t install (e.g. an older version of MSN Messenger had this problem) or (b) won’t work correctly after install (see this KB article for example).

While at Microsoft, I doggedly reported these app compat issues every time I ran into one, getting them fixed one by one (at least in MS apps).  Then I heard that the Windows Installer team had implemented a fix around the time that Vista shipped, and I figured we’d finally licked the problem.

However, there are recent KB articles (here and here) indicating this is still a problem with Windows Vista and Office 2007.

So here’s one more attempt to clear up the confusion this issue creates, and provide definitive guidance on how to avoid problems with encrypted %TEMP%.  [John Morello got it right in a recent Technet article – but I suspect he may have cribbed this tip from some of the talks I’ve given over the years. ;)]

The only scenario in which installing software could fail due to encrypting the user’s %TEMP% folder is when all of the following hold:

  1. The software is being installed interactively by the user, not by a software distribution package (e.g. SMS, Tivoli, Altiris, etc.).
  2. The installer doesn’t understand EFS.  (For example, the version of Windows Installer that shipped with Windows Vista knows to decrypt any encrypted folders it creates before handing off to the Windows Installer service running as SYSTEM.)
  3. The installer moves (rather than copies) the files that it unpacks into the %TEMP% directory.  (Moving encrypted files to an unencrypted directory will leave the files encrypted.)
  4. The %TEMP% folder is left encrypted while the install takes place.  (You could distribute software installs with pre- and post-install actions that run simple command-line scripts to decrypt and re-encrypt the %TEMP% folder; a fuller sketch follows this list.)  e.g.
         cipher.exe /D %TEMP%
         cipher.exe /E %TEMP%
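To make item 4 concrete, here’s a hedged sketch of what such a wrapper might look like (the package name and the msiexec switches are placeholders of mine, not anything tied to a particular distribution system):

    rem Pre-install action: temporarily decrypt the user's %TEMP% folder
    cipher.exe /D "%TEMP%"

    rem Run the install itself (placeholder package name)
    msiexec /i MyApp.msi /qn

    rem Post-install action: re-encrypt %TEMP%
    cipher.exe /E "%TEMP%"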

So:

  • If all software installs are performed by a software distribution system such as SMS, Tivoli or Altiris, then you should be safe encrypting %TEMP%.
  • If your users are on Windows Vista, and the software being installed is packaged with MSI or another EFS-aware installer, then you should be safe encrypting %TEMP%.
  • If your users aren’t on Windows Vista, your users install software themselves (e.g. download and run MSI install files), and you can’t edit the install packages for the software they need to install, then you should not encrypt %TEMP%.

Hey, in the long term I hope this issue gets buried once and for all – either EFS will become so ubiquitous that customers will report these issues in droves and all the installer ISVs will finally fix their apps (including backports to earlier versions of Windows), or EFS will be supplanted by some future implementation of ubiquitous encryption, making file-based encryption a moot point.  [I don’t see that happening in the next few years, but never say never.]

How to get a Process’ current security context – mystery and teaser…

…so I’ve crossed the threshold, and now I’m writing VB code in .NET 2.0.  It’s been a fascinating experience – going through fits and starts of trying to find a project motivating enough to keep me working on it through the inevitable “slump”.

For anyone who’s new to coding, and self-taught (like me), there’s the initial rush of being able to construct whatever it is your favoured “for morons” teaching book walks you through.  Then there are the first tentative steps into adding something for which you don’t have stepwise instructions – which is just about anything else that might be useful – quickly followed by the frustration of knowing that you *should* be able to construct that next section of code, but having no idea why it doesn’t work the way you want.

I’ve done this probably a half-dozen times, and every time I get discouraged that the damned code just doesn’t flow from my fingers.  I’ve been stymied by, in no particular order:

  • How to cast an Object to (as? into?) an Interface
  • How to use a GetEnumerator method
  • What the hell goes into a DataGrid
  • How to Dim an Object as something other than a String
  • When and where to define and instantiate an Object (e.g. inside the For loop?  outside the Private Sub?  Inside a Public Sub?)
  • How to write code in separate classes and still be able to take advantage of variables defined in the “other” Class

However, I think I’ve come up with sufficiently self-interested projects to complete at least ONE of them before I let myself fail at this AGAIN.

The latest fiasco spans my last three attempts, in which I’ve been trying to filter out only those processes that were launched in my user context (e.g. via the Run key, the Startup folder, the Start menu).  I’ve been failing to (a) identify an actual username from the info supplied in the System.Diagnostics.Process class, (b) construct an equivalent username to what comes from the My.User.Name property, and most recently (c) actually filter out the processes started in other users’ contexts (e.g. svchost.exe, wininet.exe, csrss.exe).

Here’s the current code mess I’ve constructed:

Dim process As New System.Diagnostics.Process
Dim dictionary As New System.Collections.Specialized.StringDictionary
Dim entry As New System.Collections.DictionaryEntry
Dim UsernameFromProcess As String = ""
Dim DomainFromProcess As String = ""
Dim Username As String = My.User.Name
Dim MyApplications As New Collection

' Grab the environment variables attached to the process' StartInfo
dictionary = process.StartInfo.EnvironmentVariables

' Pull out the USERNAME and USERDOMAIN values
For Each entry In dictionary
    If entry.Key.ToString = "username" Then
        UsernameFromProcess = entry.Value.ToString
    End If

    If entry.Key.ToString = "userdomain" Then
        DomainFromProcess = entry.Value.ToString
    End If
Next entry

Dim QualifiedUserName As String = ""
QualifiedUserName = DomainFromProcess + "\" + UsernameFromProcess

' Keep the process only if it appears to be running as me
If QualifiedUserName = Username Then
    MyApplications.Add(process)
End If

So why does this always result in adding the process to the MyApplications collection?  I woulda figured that the environment variables for processes started in other users’ contexts would reflect that user’s environment – e.g. if csrss.exe starts in the SYSTEM context, then it should have USERDOMAIN = [nul] and USERNAME = SYSTEM.  Whereas, when I launch Word from the Start Menu, its environment will include USERDOMAIN = REDMOND and USERNAME = mikesl.
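[For what it’s worth, one alternate avenue is to ask WMI for each process’ owner rather than poking at environment variables.  The following is only a sketch of that idea – it assumes a project reference to System.Management, and I’m not claiming it’s what CacheMyWork ended up doing:]

    ' Ask WMI for every running process, then call GetOwner on each one
    Dim searcher As New System.Management.ManagementObjectSearcher("SELECT * FROM Win32_Process")

    For Each wmiProcess As System.Management.ManagementObject In searcher.Get()
        Dim ownerInfo(1) As Object   ' GetOwner returns User in element 0, Domain in element 1
        Dim returnCode As Object = wmiProcess.InvokeMethod("GetOwner", ownerInfo)

        If CInt(returnCode) = 0 Then
            Dim owner As String = CStr(ownerInfo(1)) & "\" & CStr(ownerInfo(0))
            ' ...compare owner against My.User.Name here...
        End If
    Next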

If you’d like to see how I finally solved/worked around this little mystery, check out the CacheMyWork project on Codeplex.

Vista’s IPv6 & IPv4 will flood Internet DNS servers?

I’m surprised Steve Gibson isn’t leading the charge on this one:
http://slashdot.org/articles/06/09/07/1441204.shtml

I can’t believe that Vista’s side-by-side IPv4 & IPv6 stacks would cause the entire Internet’s DNS infrastructure to fall to its knees in agony. Were this true, I gotta figure the whole lofty goal of converting the Internet as a whole to IPv6 is a doomed prospect.

To me, the addition of the IPv6 stack to Vista will create the most likely opportunity for smooth transition over to IPv6. If the DNS infrastructure – which must’ve anticipated IPv6 for years now – can’t handle a slow, gradual influx of Vista clients making queries and submitting update requests, then we’re all in a lot more trouble than people are letting on.

Windows Vista’s Full Volume Encryption & TPM, part 6: more oddball TPM 1.2 links

Semi-random links to information I’ve used as reference for some of my rambling thoughts…

Whew! Now back to your regularly scheduled surfing.

Windows Vista FVE in the news

http://archives.seattletimes.nwsource.com/cgi-bin/texis.cgi/web/vortex/display?slug=bizbriefs20&date=20051220

The enterprise edition of Vista will have a feature called “BitLocker” that can encrypt systems that have an optional security chip.

The feature debuted Monday on a test version of Vista that Microsoft released to get feedback from software developers and customers.

“So essentially if a machine is lost … it renders it useless to whoever steals it or takes it from them,” said Shanen Boettcher, a senior director in the Windows group.

Commentary: This further supports the idea that FVE will only be available to those customers who license the Enterprise edition of Windows Vista. Will this be available to the consumer? I suspect not, based on Microsoft’s history and its planned set of SKUs:

  • the Enterprise editions of Windows (2000, 2003) in the past haven’t shown up on the shelves of retail stores
  • With plans for SKUs such as Windows Vista Home Basic, Windows Vista Home Premium and Windows Vista Ultimate – all presumably oriented toward the consumer market – I personally doubt there’ll be room in the OEM lineups for a fourth SKU directed at that market.
  • Previous rumours indicated that the Vista Enterprise edition will only be available to Microsoft customers who have signed up for (the not inexpensive) Software Assurance plan, which is definitely not something consumers (or even small/medium-sized businesses) can usually afford.

However, I feel obligated to point out that the (obviously out-of-context) quote from Shanen Boettcher seems pretty misleading/overreaching in its current form. If I’m interpreting correctly, the “BitLocker” feature is nothing more than Secure Startup (SSU)/Full Volume Encryption (FVE).

While SSU does make it more difficult to discover on-disk secrets and sensitive data files, its mere presence or default configuration hardly makes the machine or its data “useless to whoever steals it”. So long as the disk contents remain undisturbed, the simple configuration of SSU will allow Windows to boot up and allow an attacker to attempt to access its data (e.g. via console logon, network logon, shares access, unpatched vulnerabilities, previously-installed malware, or other as-yet-unimagined attack techniques).

Seems it’s time to discuss the Full Volume Encryption technical whitepaper that’s available for download – to make sure we’re all understanding it the same way (or not), and to raise the obvious questions worth asking.