The Realm of the Verbal Processor

Jarvis's Ramblings

Image Build–Manual or Build & Capture?

WHY Series #2

Late last week I got the following email via my contact form. It seemed like the ideal topic for the next post in the series. (Thanks Matt for the message!)

I have a question for your WHY series. I was debating with a co-worker yesterday why you would use the "Build and Capture" task sequence for OSD instead of capturing a system that you already have or have built with another method. I have a few ideas on advantages and disadvantages, but I would like to hear your opinion.

I am going to make a couple of assumptions based on what I read in the question. I interpret “a system that you already have” to mean an existing physical machine that would be captured to create an image. This might not be what the reader intended, but it should be addressed in this post regardless. Best practice is to create a hardware-independent image on a virtual machine. (Need to address reasons why for that one in a future post.) I also see the phrase “built with another method”…which I interpret to be essentially a manually built image (as opposed to one using a B&C task sequence).

At the core, those are your options for image creation…automated with a Build & Capture task sequence or build it manually. A slight variation is to use the “Pause task sequence” step in an MDT task sequence to perform a step that can’t be automated…essentially automate all of it except for this one step.

Factors Impacting the Image Creation Process

When looking at the question of whether to manually build the image or use a Build and Capture task sequence, there are several key components that should be considered:

  • Image updates. Don’t consider an image to be “golden”…think of it as “current”. This can be a key distinction. Gold implies that it will never change. Current deals with the reality that an image is going to need to be updated. (Let’s not even get into the Thick/Thin/Hybrid image scenario…that’s a discussion for another day…perhaps another “WHY” post.) With that said, unless you are the most hardcore of “thin image” proponents, your image will at least have the OS and updates. Which means that within a month of image creation (Patch Tuesday), the image will be missing necessary updates. How often do you update it? Remember, anything that isn’t in your image has to be installed after the image is laid down…which adds time. I know of a very major company (if you live in the US, you have their products in your home) that had not updated their XP image in several years. The post image update process took a couple of hours to deploy somewhere around 200 updates that were not included in the image. Application updates/upgrades are also part of this equation. Basic gist is that images MUST be updated…ideally on a regular basis.
  • If applications are included in the image, are the applications packaged and able to be installed silently? If so, then that process can be automated. If not, then it has to be a manual step. Same goes for image tweaks.
  • Ideally you would like to use the same processes for managing apps and updates that go in your image that you use for managing the existing systems in your environment. You already have a “Patch Tuesday” process. Use the same process when building the image. You already have a process for pushing out application upgrades/updates. Use the same process in your image build.
  • In the end, you MUST have consistent repeatable results. You need a process that produces a reliable image every single time.
  • Lastly, you are busy. I’ve never met an IT person who had too much time on their hands. You need this process to take as little time out of your day/week as possible.

With those factors in mind…let’s run them through the grid of our methods for image creation and see how things shake out.

Build and Capture Image Creation Process:

If your core applications that will go in the image can be installed silently…and if you are using either WSUS or SCCM for deploying updates, then this is the ideal situation. Your B&C task sequence could be as simple as “Click Next” and come back later to see your shiny new WIM file. Once you’ve got it working (which I won’t deny could be challenging) it couldn’t be any easier. Once it is going, you will never look back. I know of at least one company that has a recurring Task Sequence deployment to a virtual machine…to create a new image the day after Patch Tuesday each month. Completely automated. Score!

Because the task sequence is automated, there is very little time involved. Just click next and check on it later. Because all of the tasks are automated, there isn’t any room for admin error. Because it is automated, you are more likely to update your image on a regular basis. The process IS standardized and repeatable. Oh…and if a step does have to be performed manually, use an MDT task sequence with the “Pause” step to automate as much as possible…and only do the non-automatable tasks manually.
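
The “day after Patch Tuesday” recurrence mentioned above is easy to pin down programmatically if you ever want to script around the schedule. Here is a minimal Python sketch; the function names are mine, not part of any ConfigMgr tooling:

```python
import datetime

def patch_tuesday(year: int, month: int) -> datetime.date:
    """Return the second Tuesday of the month (Microsoft's Patch Tuesday)."""
    first = datetime.date(year, month, 1)
    # weekday(): Monday=0, Tuesday=1, ... Sunday=6
    days_to_first_tuesday = (1 - first.weekday()) % 7
    return first + datetime.timedelta(days=days_to_first_tuesday + 7)

def image_rebuild_date(year: int, month: int) -> datetime.date:
    """The day after Patch Tuesday: when the recurring B&C deployment fires."""
    return patch_tuesday(year, month) + datetime.timedelta(days=1)
```

For April 2013, patch_tuesday(2013, 4) falls on April 9, so the automated rebuild would kick off on April 10.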

Manual Image Creation:

Manual is…well…manual. You install the OS from DVD/ISO. You install each app. You apply all the updates. You run Sysprep. You capture the image. All manually. Hopefully you are following a checklist. Hopefully you don’t forget a step. Good luck with that.

The manual image creation process is characterized by the following:

  • Slow. All those manual steps take time.
  • Updated infrequently. Because it is slow and time consuming, realistically you will not update the image as often as you should.
  • Open for admin error (e.g. forgetting a step or installing a component slightly differently upon image rebuild)
  • Not standardized/repeatable

Overall…friends don’t let friends use a manual image creation process. You might wish it on your enemies though! ;-) However…see my conclusion below for one instance where you might use an existing image.


If you’ve followed my blog for long or have seen my presentations at MMS or TechEd, then you should have known I was going to land on the side of using the Build and Capture Task Sequence before you even started this article. In my opinion (that I think I’ve adequately backed up with solid logic), using a B&C task sequence to create your image is the only way to go. It just makes sense from a time/automation/repeatability/manageability standpoint.

The ONLY exception that I see to this is if you are migrating from an old technology (e.g. Ghost) to SCCM, AND you are migrating from XP to Windows 7 / Windows 8. In that instance…would I recommend going through the process of recreating all of your Windows XP images…that you are going to be getting rid of soon anyway? No. In that instance I would say go ahead and capture that existing image (or if it is already a WIM file…see if you can deploy it as-is). Don’t spend the time recreating the image that you are going to be dumping (since XP EOL is coming up very soon!).

Would love your comments and feedback. Keep the ideas for future posts coming!

Until next time…keep asking the right questions.

April 29, 2013 Posted by | ConfigMgr, ConfigMgr 2012, MDT 2012, WHY Series, Windows 7, Windows 8 | 4 Comments

The “WHY” Series

One of my sessions at MMS this year was titled “The WHY of Configuration Manager”. It focused on why you would choose to do things a particular way in SCCM. There are many tasks that can be performed multiple ways in SCCM…and plenty of resources to tell you how to do those things. But there aren’t many resources to answer the question of “Why”. Why would I choose to do a task (or configure a setting…or design a hierarchy…etc) one way instead of another? The session took on several of these questions and attempted to answer the question of “Why?”.

With that in mind, my plan is to start a series of blog posts that I’m calling “The WHY Series”. The plan is to think through the options of a task/setting/design/etc and lay out the reasons why you might choose to implement things one way or another. At this point I don’t foresee a specific outline for the topics to be covered. I also don’t know that it will be solely limited to SCCM questions…although that is where many of the initial posts in the series will come from.

Also…I would love some feedback. Is this something you are interested in? If so…what topics would you like to see covered? Either leave a comment on this post, send me a message via my contact form, or ping me on Twitter.

Check back soon…I hope to have the first post up this week.

April 21, 2013 Posted by | ConfigMgr, ConfigMgr 2012, MMS | 1 Comment

Speaking at MMS 2013

A little over a week ago I found out that I get to speak at MMS again this year…and this year I get to speak twice! My sessions will be:

The WHY of Configuration Manager

There are plenty of resources to tell you HOW to perform various tasks with Configuration Manager. For that matter, there are multiple ways of doing many tasks. This session will use lessons learned from numerous Configuration Manager deployments to teach you WHY you would choose one method over another. This will be a broad, fast-paced session that digs into the questions you should ask to ensure you implement Configuration Manager the right way for your company.

Microsoft System Center: I’m "All In" (Co-present with Phil Pritchett)

Ever wondered what impact deploying all of System Center could have on your business? Join us for a look at a real world example of a company who did just that. We will look at the impact of deploying SCCM, SCOM, SCSM, and Orchestrator all in one environment.

So, if you are going to be in Vegas for the Management Summit, come on by…would love to meet you out there!

February 14, 2013 Posted by | ConfigMgr, ConfigMgr 2012, MMS | 1 Comment

SQL Server Version Numbers (Updated)

A couple of years ago I created a post with the major SQL version numbers. While working with a client this morning, I realized that I had not updated it to reflect several updates that have been released since the original post. Here is an updated table of major version numbers. To see all major and minor version numbers (i.e. version numbers for cumulative updates), see this post. I’m also using this post to clean up some inconsistency in how the version numbers were listed in my previous post.

SQL Version                    Version Number
SQL Server 2012 RTM            11.0.2100.6
SQL Server 2012 SP1            11.0.3000.0
SQL Server 2008 R2 RTM         10.50.1600.1
SQL Server 2008 R2 SP1         10.50.2500.0
SQL Server 2008 R2 SP2         10.50.4000
SQL Server 2008 RTM            10.0.1600.0
SQL Server 2008 SP1            10.0.2531.0
SQL Server 2008 SP2            10.0.4000.0
SQL Server 2008 SP3            10.0.5500.0
SQL Server 2005 RTM            9.00.1399
SQL Server 2005 SP1            9.00.2047
SQL Server 2005 SP2            9.00.3042.01
SQL Server 2005 SP3            9.00.4035
SQL Server 2000 RTM
SQL Server 2000 SP1            8.00.384.0
SQL Server 2000 SP2            8.00.534.0
SQL Server 2000 SP3            8.00.760
SQL Server 2000 SP3a           8.00.760
SQL Server 2000 SP4            8.00.2039
SQL Server 7.0 RTM             7.00.623
SQL Server 7.0 SP1             7.00.699
SQL Server 7.0 SP2             7.00.842
SQL Server 7.0 SP3             7.00.961
SQL Server 7.0 SP4             7.00.1063
SQL Server 6.5 RTM             6.50.201
SQL Server 6.5 SP1             6.50.213
SQL Server 6.5 SP2             6.50.240
SQL Server 6.5 SP3             6.50.258
SQL Server 6.5 SP4             6.50.281
SQL Server 6.5 SP5             6.50.415
SQL Server 6.5 SP5a            6.50.416
SQL Server 6.5 SP5a Update     6.50.479
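
If you ever need to translate a raw version string back into a release in a script, the leading major/minor components are enough to identify the product. A small Python sketch (the structure and names here are mine):

```python
# Leading (major, minor) version components -> product, per the table above.
RELEASES = {
    (11, 0): "SQL Server 2012",
    (10, 50): "SQL Server 2008 R2",
    (10, 0): "SQL Server 2008",
    (9, 0): "SQL Server 2005",
    (8, 0): "SQL Server 2000",
    (7, 0): "SQL Server 7.0",
    (6, 50): "SQL Server 6.5",
}

def identify_release(version: str) -> str:
    """Map a version string like '10.50.2500.0' to its SQL Server release."""
    # Integer comparison sidesteps the '0' vs '00' formatting inconsistency.
    major, minor = (int(part) for part in version.split(".")[:2])
    return RELEASES.get((major, minor), "Unknown")
```

The service pack level still has to come from the build number itself, but this is handy for a quick first pass over inventory data.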

December 10, 2012 Posted by | ConfigMgr, ConfigMgr 2012, SQL | 2 Comments

Shared SQL with Configuration Manager?

Over time I have talked with numerous people about where the Configuration Manager database should be hosted. This conversation typically comes up when a company has a DBA team that is demanding that all SQL databases be hosted on dedicated (and super powerful) database servers. These servers will predominantly host numerous SQL databases for a variety of applications. The reasoning typically falls into the following arguments:

  1. Licensing – We don’t want to have to pay for another SQL license, so all DBs will be on our dedicated SQL servers.
  2. Performance – Our crazy powerful DB servers will give better performance than what you would install locally.
  3. Security – We need to maintain control over the content of the DB, and the DB integrity in general. Having them on a dedicated SQL server allows us to do that in the best way.

Sounds like some good arguments, right? Well…not so much. Let’s take a look at each of the three.

  1. Licensing – Not an issue at all. Configuration Manager 2012 licensing includes the ability to install SQL Standard…at no additional charge.
  2. Performance – There have been arguments for years about whether Configuration Manager performed better with remote or on-box SQL. I’ve seen people give great arguments both ways…but haven’t really seen anything definitive either direction. With Configuration Manager 2012, the recommendation from Microsoft is that SQL be local unless you hit certain size limitations. If you have fewer than 50,000 clients, on-box SQL Standard will work just fine for you. If you have more than 50,000 clients, a remote SQL Standard will take you to 100,000 clients. SQL Enterprise is only necessary on a Central Administration Site supporting more than 50,000 clients. (For more info.)
  3. Security – THIS IS THE BIG ONE! It generally takes about a three minute conversation with a DBA before they run away from this argument. Consider the following facts and implications in a remote SQL scenario:
    1. The Configuration Manager site server must be a member of the local administrators group on the remote SQL server. (See the Configuration Manager documentation.)
    2. Several people who are not SQL admins will be administrators on the Configuration Manager site server.
    3. It is trivial for an admin on the Configuration Manager site server to run any application (such as a CMD prompt or SQL Server Management Studio) as Local System. (See this post.)
    4. Since the Configuration Manager server (Local System) has admin rights on the remote SQL server…the non SQL Admin can VERY easily obtain admin rights on the SQL server.
    5. The DBA has now started sweating, twitching and begging you to keep your weird database away from his/her server. :-)
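
To make the sizing guidance in point 2 concrete, here is a trivial Python sketch of the decision (the thresholds are the ones quoted above; the function itself is purely illustrative, not an official sizing tool):

```python
def recommended_sql_placement(client_count: int) -> str:
    """Illustrative SQL placement guidance for a single primary site."""
    if client_count <= 50_000:
        return "on-box SQL Standard"
    if client_count <= 100_000:
        return "remote SQL Standard"
    return "rethink the hierarchy design"
```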

So, really the only reason to consider doing remote SQL at all is a performance issue…but you have to be a pretty big organization for that one to come into play. And even if you do need to do remote SQL…it should be a SQL server that is dedicated to Configuration Manager.

Note (12/4/2012): I was talking with a friend late in the day yesterday about this blog post. He reminded me that I had already posted about this issue last April. Thanks Phil…I’m a little scatterbrained sometimes! I’m leaving this post up anyway because it is better than the original in my opinion.

December 3, 2012 Posted by | ConfigMgr, ConfigMgr 2012, Security, SQL | Leave a comment

Pet Peeve–Configuration Manager “Software” Inventory

This one has annoyed me for years…need to get on my soapbox for a minute.

Let’s talk about the difference between Hardware and Software Inventory in Configuration Manager. Hardware inventory collects data from WMI and the registry. Software inventory looks at file properties. Hardware inventory runs relatively quickly and isn’t very resource intensive. Software inventory can be very resource intensive if not configured correctly. At a high level, here is what is covered by the two:

Hardware Inventory:

  • System information – processor, RAM, the actual hardware itself
  • Add/Remove Programs information

Software Inventory:

  • File information
  • Can be configured to actually collect a copy of a file. (be VERY careful!)

I have talked to numerous clients who are looking at Software Inventory to try to gather data about what software is installed…which it does not gather at all. My pet peeve is not with the way that the system is designed…I think it is a very good design. My issue is with the name. Software inventory is NOT an inventory of software. It is an inventory of FILES. A much better name would be to call it what it actually is…File Inventory.

I have seen very few companies with a real need for information on specific files. Most simply want to know what software is installed on which machines…which Hardware Inventory provides. Some valid uses that I have seen include:

  • Locating PST files in an effort to get rid of them.
  • Locating password dump files. (Company had experienced internal espionage issues.)

The key to the valid uses of Software inventory is that they had absolutely nothing to do with installed software. They were looking for files.

September 18, 2012 Posted by | ConfigMgr, ConfigMgr 2012 | Leave a comment

Silent App Install Help Page

Anyone who works in enterprise IT (and with products such as Configuration Manager) needs to know how to install applications silently…without requiring user intervention. Recently I came across a web page that gives really good info on the various installation types (MSI / InstallShield / Wise / etc) and how to make them silent. It goes beyond the basics and gives background on how each of them work. The page hasn’t been updated in a while, but there is still some very good information there. This could be a good one to bookmark.

May 24, 2012 Posted by | ConfigMgr, ConfigMgr 2012, Packaging | 1 Comment

Remote SQL Security Concern

I have had a few discussions over the years about whether a Configuration Manager installation should include SQL “on box” or “remote”. The answer is generally “it depends”. This blog post is not going to dig into all of the reasons why you would choose either local or remote SQL…it is designed to highlight one particular security concern with the remote SQL option. Let’s think through several of the underlying components that are necessary for remote SQL to take place along with a few very common scenarios when this is the case.

  1. Generally a company will choose remote SQL because they want to have a beefy SQL box that is managed by their DBA team. This SQL box will commonly house several SQL databases…not just the Configuration Manager DB. Which means that any disruption on that SQL server has an impact on much more than just Configuration Manager.
  2. A requirement for remote SQL with Configuration Manager is that the Configuration Manager server’s computer account must be in the local admin group on the SQL server.
  3. Commonly there will be a number of Configuration Manager administrators that have admin rights on the Configuration Manager server.
  4. Commonly there will be a number of those same Configuration Manager administrators that do NOT have admin rights on the remote SQL server.

THAT is where the problem rears its head. Let’s connect all of the dots…

  1. Joe Admin is an admin on the Configuration Manager server…but is not an admin on the SQL server.
  2. The Configuration Manager server’s computer account is an admin on the SQL server.
  3. Joe Admin has read my article on how to run a command prompt as local system. Uh oh.
  4. Joe Admin uses psexec to run a command prompt (or SQL Management Studio…or regedit…or services.msc…or disk management…or whatever else) as local system on the Configuration Manager server.
  5. Joe Admin then connects that app (running in the “user” context of the Configuration Manager server’s computer account) to the SQL server.
  6. Joe Admin is now able to do anything that the Configuration Manager server’s computer account has rights to do…which is full Administrator rights…ON THE SQL SERVER!!!
  7. That security you thought you had…well it didn’t work so well.

Is there anything to keep Joe Admin from (either accidentally or maliciously):

  • Stopping services?
  • Deleting files?
  • Rebooting the server?
  • Jacking with the registry?
  • Installing (either good or bad) software?
  • Copying data off of the SQL server?
  • etc (I think you get the picture.)

Now…for some people that doesn’t matter. In many smaller installations the same team is managing Configuration Manager and SQL. However…if you are that small…why take on the extra complexity of the remote SQL scenario?

For others it matters big time! I’ve had conversations with customers who cringe at the very idea that some random Configuration Manager admin could possibly gain full rights to the SQL server that other business critical databases are stored on.

April 27, 2012 Posted by | ConfigMgr, ConfigMgr 2012, Security, SQL | 4 Comments

Business Value of Application Replacement

Who cares? That is the thought that went through my mind last night a few hours after I posted the last of my five part series on Dynamic Operating System Deployment and application replacement. Even if you don’t now, you SHOULD care. Let’s see if I can convince you…

Let me give one example of the difference that the concepts in that series made at a company. I had a client recently who was performing a company-wide Windows 7 rollout…migrating from Windows XP. This coincided with a PC replacement cycle, so this rollout was predominantly a “Computer Replace” scenario…replacing the old XP box with a new Win7 box. After replacement, users obviously needed to be able to do their jobs on the new Win7 system…which meant that they needed some key applications that had previously been installed on their Windows XP system…but these apps had been installed on a case by case basis previously. The company had not implemented role-based application deployment at that time.

And THAT is where the problem arose. MANY of these applications were not in the core Windows 7 image for obvious reasons. (Visio, Project, Creative Suite, Oracle apps, numerous internal apps) For that matter, many of the apps that were installed in Windows XP were being replaced with a newer version in Windows 7 for application compatibility reasons. For this company it meant that when they performed the Windows 7 refresh at a location, they flew two employees to that location to perform the upgrade. The PRIMARY reason that they needed to do this was so that the two employees could re-install applications on the user’s Windows 7 computer post-install on a case by case basis. This resulted in significant business problems including:

  • User downtime because necessary applications weren’t installed on their new Windows 7 system.
  • IT staff were pulled away from their day-to-day job for a week at a time to drive the migrations…mainly because of the need to install additional applications.
  • The Windows 7 migration for the company was taking SIGNIFICANTLY longer than desired because of these limitations (both app installs and a limited number of employees to travel to numerous locations).
  • Significant additional costs were associated with all of this (travel, time, delays, loss of user productivity)

Quite simply it was an unacceptable situation. Way too much wasted time and effort. That’s when they called us to see if we could help them streamline this process. I implemented the steps I outlined in posts 1, 2, 3, 4, and 5. The client saw some very significant improvements from a business value perspective…including…

  • A VERY significant reduction in the number of special post-image application installations.
  • Automated re-installation of required applications without the need for IT staff intervention.
  • Significant reduction in user downtime as a result of the migration process.
  • Consistency from an end user perspective. (i.e. My computer used to have Program X and it still does.)
  • Smoother Windows 7 migrations.
  • The company expects that significantly less travel will be required to perform the Windows 7 migrations.
  • Cost savings…both travel related and time related.

So…should you care about making your operating system deployments dynamic and adding the application replacement functionality to the process? If cost and time savings mean anything to you, then yes you should. Don’t know about you, but I’ve got better things to do with my life than to babysit an OS deployment! :-)

April 12, 2012 Posted by | ConfigMgr, MDT 2010 | Leave a comment

OSD and the MDT Database (5 of 5)

This is the last of a five part series on utilizing the MDT integration into Configuration Manager to improve your Operating System Deployment functionality. These processes will make your OSD setup much more dynamic. The series will be:

  1. Assumptions and creating the MDT database
  2. Dynamic OSD using the MDT Database
  3. Application Replacement #1…this post is the reason I started the series. Modifying the RetrievePackages stored procedure.
  4. Application Replacement #2. Populating the PackageMapping table.
  5. OSD and the MDT Database…connecting all the dots from the previous four posts. Setting up a task sequence to use the MDT database.

So, up to this point in the series we have put most of the pieces in place that are necessary to allow both dynamic operating system deployments driven by the MDT database and Application Replacement during both Computer Refresh and Computer Replace scenarios. Now we need to tie all of these pieces together.

The two remaining pieces of the puzzle are:

  1. Create the CustomSettings.ini File
  2. Set up the Task Sequence to Process the Database Settings
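
To give a feel for step 1 before the full walkthrough: a database-aware CustomSettings.ini pairs a [Settings] priority list with a section that queries the MDT database. The sketch below is illustrative only; the server, database, and share names are placeholders, and a wizard-generated file will contain more sections than this.

```ini
[Settings]
Priority=CSettings, Default

[Default]
OSInstall=Y

[CSettings]
; Placeholder server/database/share names -- substitute your own.
SQLServer=SQL01
Database=MDTDB
Netlib=DBNMPNTW
SQLShare=DeploymentShare$
Table=ComputerSettings
Parameters=UUID, AssetTag, SerialNumber, MacAddress
ParameterCondition=OR
```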


April 11, 2012 Posted by | ConfigMgr, MDT 2010 | 1 Comment

Application Replacement (4 of 5)

This is the fourth of a five part series on utilizing the MDT integration into Configuration Manager to improve your Operating System Deployment functionality. These processes will make your OSD setup much more dynamic. The series will be:

  1. Assumptions and creating the MDT database
  2. Dynamic OSD using the MDT Database
  3. Application Replacement #1…this post is the reason I started the series. Modifying the RetrievePackages stored procedure.
  4. Application Replacement #2. Populating the PackageMapping table.
  5. OSD and the MDT Database…connecting all the dots from the previous four posts. Setting up a task sequence to use the MDT database.

In the previous post we modified the SQL stored procedure to make Package Mapping work for both a Refresh and Replace scenario. However, neither scenario will work until we populate the PackageMapping table in the MDT database.

Populating the PackageMapping Table

The PackageMapping table has two columns: ARPName and Packages. Each entry in the table creates a correlation between a piece of installed software (ARPName) and a Configuration Manager Package/Program (Packages).

The values in the ARPName column come from the values in the Uninstall registry key. (HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall OR HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall). MSI installs will be GUIDs. Non-MSI installs will be other names…not necessarily the DisplayName from Add/Remove Programs. The value in this field corresponds with the SMS_G_System_ADD_REMOVE_PROGRAMS.ProdID field in the Configuration Manager database which is gathered by Hardware Inventory.
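
Since MSI-based entries show up as product-code GUIDs and everything else shows up as free-form names, it can help to separate the two when reviewing your inventory data before populating the table. A small Python helper (mine, not part of MDT or ConfigMgr):

```python
import re

# An MSI product code: {8-4-4-4-12} hex digits wrapped in braces.
MSI_GUID_RE = re.compile(
    r"^\{[0-9A-Fa-f]{8}(-[0-9A-Fa-f]{4}){3}-[0-9A-Fa-f]{12}\}$"
)

def is_msi_arpname(arp_name: str) -> bool:
    """True when an ARPName value looks like an MSI product code GUID."""
    return MSI_GUID_RE.match(arp_name) is not None
```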

The Packages column contains the PackageID and Program Name for a Configuration Manager Package. The proper format for this column is “XYZ00000:Program Name”, where XYZ00000 is the Package ID and “Program Name” is the exact name of the Program within that package. Of particular importance, the values in the Packages column are case sensitive.
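
Because the format and the case sensitivity trip people up, here is a tiny Python sketch of a validator for the Packages value. The helper is mine; the 8-character length check assumes the standard ConfigMgr PackageID format:

```python
def parse_packages_value(value: str):
    """Split 'XYZ00000:Program Name' into (package_id, program_name).

    No case normalization is done, because the Program name must match
    the Configuration Manager Program exactly, including case.
    """
    package_id, sep, program = value.partition(":")
    # ConfigMgr PackageIDs are 8 characters (site code + 5 hex digits).
    if not sep or len(package_id) != 8 or not program:
        raise ValueError(f"Malformed Packages value: {value!r}")
    return package_id, program
```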

We need to do four things in order to populate this table:

  1. Obtain the ARPNames that we care about
  2. Obtain the Package:Program combinations that we will correlate to the ARPNames
  3. Correlate which ARPNames should install which Package:Program.
  4. Add the ARPName = Package:Program to the table


April 11, 2012 Posted by | ConfigMgr, MDT 2010 | 3 Comments

Application Replacement (3 of 5)

This is the third of a five part series on utilizing the MDT integration into Configuration Manager to improve your Operating System Deployment functionality. These processes will make your OSD setup much more dynamic. The series will be:

  1. Assumptions and creating the MDT database
  2. Dynamic OSD using the MDT Database
  3. Application Replacement #1…this post is the reason I started the series. Modifying the RetrievePackages stored procedure.
  4. Application Replacement #2. Populating the PackageMapping table.
  5. OSD and the MDT Database…connecting all the dots from the previous four posts. Setting up a task sequence to use the MDT database.

In the first post in this series, we set up the MDT database in our already functional Configuration Manager environment. The second post showed how to populate the MDT database in order to make our OSD process much more dynamic. This post will show the Application Replacement functionality.


For a while now MDT has included the ability to dynamically replace applications during a computer refresh scenario. (e.g. if the computer being reimaged has Visio installed, dynamically install Visio as part of the reimage) In an MDT-only scenario, this is done with the UDI wizard via the Application Discovery pre-flight check. This can also be done using the integration with Configuration Manager…basing the software reinstall on the Configuration Manager inventory. While this functionality has been there for a while, I have found two pieces of it lacking:

  1. How to do this is buried in documentation that makes it a bit challenging to implement. (The MDT document that talks about this is a 497 page Word doc…good luck.)
  2. The process only works for a computer refresh scenario. It does not work in a computer replace situation…which is fairly common with my clients. They are not upgrading older Windows XP systems to Windows 7…they are replacing the computer. But they still need the user to have access to their current applications after the upgrade.

That is the reason for this blog post. First I want to show how to do this without having to dig through a huge doc. Second, I want to show how to modify this feature to allow for doing application replacement in both the Computer Refresh and the Computer Replace OSD scenarios.


April 11, 2012 Posted by | ConfigMgr, MDT 2010 | 3 Comments

Dynamic OSD using the MDT Database (2 of 5)

This is the second of a five part series on utilizing the MDT integration into Configuration Manager to improve your Operating System Deployment functionality. These processes will make your OSD setup much more dynamic. The series will be:

  1. Assumptions and creating the MDT database
  2. Dynamic OSD using the MDT Database
  3. Application Replacement #1…this post is the reason I started the series. Deals with necessary modifications to the RetrievePackages stored procedure.
  4. Application Replacement #2. Populating the PackageMapping table.
  5. OSD and the MDT Database…connecting all the dots from the previous four posts. Setting up a task sequence to use the MDT database.

In the first post in this series, we set up the MDT database in our already functional Configuration Manager environment. (Check the assumptions section of the previous post.) Now let’s look at populating the database with information that will make our OSD process much more dynamic.

Populate the MDT Database:

The MDT database can be used to customize the deployment of systems. Customization can be based on Location, Make/Model, Roles, or tied to a specific Computer via Asset Tag or MAC address. The customizations available are numerous and include software installation as well as various AD and OS settings.

Configure Locations

Locations in the MDT database are set up based on Default Gateway.

  1. Select Location, then click “New”.

April 11, 2012 Posted by | ConfigMgr, MDT 2010 | 1 Comment

Configuration Manager and the MDT Database (1 of 5)

This is the first of a five part series on utilizing the MDT integration into Configuration Manager to improve your Operating System Deployment functionality. These processes will make your OSD setup much more dynamic. The series will be:

  1. Assumptions and creating the MDT database
  2. Dynamic OSD using the MDT Database
  3. Application Replacement…this post is the reason I started the series. This will discuss configuring the application replacement functionality (also referred to as package mapping)…one of the more powerful components of OSD once it is working correctly! In a nutshell, it is a process for dynamically replacing applications during a computer refresh or replace scenario. For example, if a computer has Visio installed and I re-image it…ensure that Visio is re-installed. Or…if Acrobat 6, 7, 8 or 9 is installed…replace it with Acrobat X. VERY nice! This post will detail the necessary modifications that must be made to the RetrievePackages stored procedure in order for this to work for both a Refresh and Replace scenario.
  4. Application Replacement #2. Populating the PackageMapping table.
  5. OSD and the MDT Database…connecting all the dots from the previous four posts. Setting up a task sequence to use the MDT database.
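The package-mapping idea in item 3 boils down to a lookup from installed application names to the package that should be (re)installed. A toy sketch, with hypothetical application names and package IDs — the real implementation lives in the PackageMapping table and the RetrievePackages stored procedure, not in script:

```python
# Hypothetical mapping: ARP display name -> ConfigMgr package/program to install.
PACKAGE_MAPPING = {
    "Microsoft Office Visio Professional 2007": "CM100001:Install Visio",
    "Adobe Acrobat 6.0": "CM100002:Install Acrobat X",
    "Adobe Acrobat 7.0": "CM100002:Install Acrobat X",
    "Adobe Acrobat 8":   "CM100002:Install Acrobat X",
    "Adobe Acrobat 9":   "CM100002:Install Acrobat X",
}

def packages_to_install(installed_apps):
    """Map installed apps to replacement packages, deduplicating while preserving order."""
    seen, result = set(), []
    for app in installed_apps:
        pkg = PACKAGE_MAPPING.get(app)
        if pkg and pkg not in seen:
            seen.add(pkg)
            result.append(pkg)
    return result
```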

While the end goal of this series is to show you how to use the MDT database to perform dynamic application replacement in your Configuration Manager task sequence, there is a lot that must be put into place before we get there. First let’s deal with a few assumptions that I am making.


  1. Configuration Manager 2007 is installed and functional
  2. OSD is functioning at a basic level…image is imported and can be deployed via a standard deployment task sequence
  3. MDT 2010 Update 1 is installed on the site server
  4. “Configure Configuration Manager Integration” has been run on the site server
  5. An MDT Task sequence has been created and the wizard has built packages for:
    1. USMT
    2. MDT Toolkit
    3. Custom Settings
  6. A boot image has been created using the “Create Microsoft Deployment Boot Image” wizard. During the wizard, ADO support must be added to the boot image. ADO support is required to be able to query a database from Windows PE. Any necessary Mass Storage and Wired NIC drivers should also be added to the boot image.

Creating the MDT Database:

  1. Log on to the server where MDT is installed with an account that has rights to create a database on the SQL server.
  2. Open the Deployment Workbench
  3. Right click “Deployment Shares” and choose to create a New Deployment Share
  4. Walk through the rest of the wizard. Take note of the share name (the default share name is “DeploymentShare$”) as we will refer to it later. There is no need to populate the deployment share like you would need to do if just using MDT. Uncheck all of the checkboxes in the wizard (image capture, admin password, and product key).
  5. Expand Deployment Shares | MDT Deployment Share | Advanced Configuration | Database
  6. Right click Database, select New Database. Follow the New DB Wizard.
  7. Enter the SQL server that will host the DB. Choose “Named Pipes” for the Network Library. I have seen others comment that TCP/IP can be problematic. Those were mostly old posts, so it may no longer be an issue. Whichever is used, be sure it is enabled on the SQL server.
  8. Choose to create a new database. Give it a name that makes sense (e.g. MDTdb).
  9. On the SQL Share screen, choose any share that exists on the SQL server; it is only used to verify that the credentials work. In a single server scenario where the MDT Deployment Share is on the same box, you can use the DeploymentShare$ share created in step #3 above. If you create a share just for this purpose, put a file in it that explains what it is for, so that no one deletes it thinking it is unused.
  10. Continue through the wizard to finish. The new DB connection will appear in the Deployment Workbench.
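As a side note on step #7: the Named Pipes vs. TCP/IP choice surfaces later in the database rules that MDT writes into CustomSettings.ini via the Netlib property (DBNMPNTW for Named Pipes, DBMSSOCN for TCP/IP). Here is a sketch of such a section, with a hypothetical server name and the database name from step #8:

```ini
[CSettings]
SQLServer=SQLSERVER01        ; hypothetical server name
Database=MDTdb
Netlib=DBNMPNTW              ; Named Pipes -- use DBMSSOCN for TCP/IP
SQLShare=DeploymentShare$
Table=ComputerSettings
Parameters=UUID, AssetTag, SerialNumber, MacAddress
ParameterCondition=OR
```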


This post walked through the background requirements and the initial creation of the MDT database…the necessary pre-reqs for the Application Replacement / Package Mapping functionality to work. The next post in the series will show you how to populate the MDT database to set the stage for customizing your OSD deployments.

April 11, 2012 Posted by | ConfigMgr, MDT 2010 | 2 Comments

Windows 7 Customizations

Over the last few years as a consultant I’ve had numerous engagements where clients wanted to customize the look/feel/settings in Windows 7. Different clients had different requirements around which customizations, whether it was permanent or a preference, etc. Below is a list of several customizations that I have helped clients perform. Many of these are found at various locations in forums, blog posts and Microsoft documentation. My goal is to gather these into one location so that it is easier for some of the more common (and for that matter some of the more obscure) customizations to be found. These are in no particular order. I will update this list from time to time. If you have any favorite customizations that you’d like to pass on, email me on my contact form and I’ll add them in. This post is REALLY long, so click the “Read More” link if you want to see the customizations.

Continue reading

April 1, 2012 Posted by | ConfigMgr, ConfigMgr 2012, MDT 2010, Windows 7 | 1 Comment

ZTITatoo.wsf – Error 9601: DNS Zone Does Not Exist

Recently I ran into an issue where a task sequence was failing on the Tattoo step of an MDT integrated task sequence. The error that shows up in the Status Message viewer is:

The task sequence execution engine failed executing the action (Tattoo) in the group (Execute Task Sequence) with error code 9601.

The operating system reported error 9601: DNS zone does not exist.

Now…I know that the real error has nothing to do with a DNS zone. That error code was generated by the ZTITatoo.wsf script. Taking a quick look through the MDT documentation (gotta love the search function in Word) for that error code shows what the script thinks is going on…

ERROR – ZTITatoo state restore task should be running in the full OS; aborting.

So…at least I’m on the right track…but I’m in the full OS already…why the bogus error? Looking through the script shows that it is checking the value of an environment variable:

If oEnvironment.Item("OSVersion") = "WinPE" then
    oLogging.ReportFailure "ERROR - ZTITatoo state restore task should be running in the full OS, aborting.", 9601
End if

The reason I got this error is that I had stripped out a bunch of tasks from the default MDT task sequence…including the Gather task that runs just before the Tattoo task. The Gather task sets a ton of variables based on the current state of the system…including the “OSVersion” variable. The only time the Gather task had run in my task sequence was at the very beginning…when it was still in Windows PE, so “OSVersion” still said “WinPE”. Adding that step back in fixed the issue.

September 19, 2011 Posted by | ConfigMgr | Leave a comment

Error Code 31 During Build & Capture Task Sequence

I’ve seen this a few times during Operating System Deployment engagements at clients. During a Build and Capture task sequence the TS will fail with an 80004005 exit code. Looking at the SMSTS log or in the status messages for the advertisement will show messages similar to:

Windows Setup completed with exit code 31

Exiting with code 0x80004005

Windows setup failed, code 31. The operating system reported error 2147500037: Unspecified error

Lovely. What the heck does that mean? Since we all know how helpful 80004005 is. :-)
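As an aside, the decimal 2147500037 in the status message is just 0x80004005 (the COM HRESULT E_FAIL, “Unspecified error”) rendered as an unsigned 32-bit value:

```python
# 0x80004005 is the COM HRESULT E_FAIL ("Unspecified error").
hresult = 0x80004005
print(hresult)            # 2147500037 -- the decimal form in the status message

# Some tools log the same value as a signed 32-bit integer instead:
signed = hresult - 2**32
print(signed)             # -2147467259
```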

Looking at the logs located in x:\windows\temp\smstslog\windowssetuplogs (I can’t remember which log file) I found a reference to setup not being able to import a critical driver. Now we are getting somewhere…which driver is it? A blasted Alps Touchpad driver. Not sure about you, but that does not rise to the level of “critical” to me during a completely hands-off process like a task sequence.

Basically, I had left the default “Auto Apply Drivers” step in the B&C Task Sequence. The client had previously imported a ton of drivers…including the “critical” Alps Touchpad driver. Simply disabling the Auto Apply Drivers step let the task sequence continue. Since the B&C only runs on a virtual machine I don’t need that step anyway.

September 15, 2011 Posted by | ConfigMgr | Leave a comment

UDI Application Weirdness

One of the really cool aspects of MDT 2010 Update 1 is the integration of what was formerly known as Modena…now named “User Driven Installation”. This provides a very slick looking wizard that can pop up at the beginning of a task sequence to allow for customization of the OSD process. There is a lot of functionality there around computer naming, domain/workgroup joining, OU selection, adding a user to the local admin group, language selection, and others. The feature that most of my clients are interested in however is the ability to customize application deployment for apps that are not in the image (Project, Visio, Acrobat, etc).

There has been one aspect of this that has caused issues on multiple occasions that I figured was worthy of a detailed blog post. The issue comes up if you use the default configuration file (UDIWizard_Config.xml) as your starting point when you run through the UDI Wizard Designer. After customizing the applications section, you still see the default applications screen when the wizard runs…either using the “Preview OSDWizard” command or running it in a Task Sequence. What you see is:


Digging into that default XML file shows us why this issue comes up. First, notice that one of the “Preflight” checks performed is the “Application Discovery” check.


This allows you to configure app replacement in a re-image scenario (i.e. if Office 2007 is installed, install Office 2010…if Acrobat 9 is installed, install Acrobat X…if EditPlus is installed, install Notepad++). This really is a cool feature. However, if you haven’t configured this aspect there is no need to run the check, so you can just delete it. Deleting it from the Preflight section does not, however, clean up the other references to it in the XML file. If you edit the XML directly you will notice the following lines down in the Application configuration section. Note that there is still a reference to the AppDiscovery preflight check.

<Page Name="ApplicationPage" Behavior="enabled">
      <Applications Link.Uri="preflight\AppDiscovery\AppDiscoveryresult.xml" TsAppBaseVariable="PACKAGES" RootDisplayName="Applications">

Removing the reference in the “Link.Uri” attribute resolves the issue. That section should look like:

<Page Name="ApplicationPage" Behavior="enabled">
        <Applications Link.Uri="" TsAppBaseVariable="PACKAGES" RootDisplayName="Applications">
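(If you’d rather script this edit than fix the XML by hand, here is a minimal sketch using Python’s standard library. It assumes the element and attribute names shown above — Page, Applications, Link.Uri; reading and writing the actual UDIWizard_Config.xml file is left out.)

```python
import xml.etree.ElementTree as ET

def clear_app_discovery(xml_text):
    """Blank the Link.Uri attribute on the ApplicationPage's Applications element."""
    root = ET.fromstring(xml_text)
    for page in root.iter("Page"):
        if page.get("Name") == "ApplicationPage":
            for apps in page.iter("Applications"):
                if apps.get("Link.Uri"):
                    # Drop the stale reference to the AppDiscovery preflight output
                    apps.set("Link.Uri", "")
    return ET.tostring(root, encoding="unicode")
```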

After doing so, you will see the customized application section that you configured using the wizard:


August 25, 2011 Posted by | ConfigMgr, MDT 2010 | Leave a comment

PXE Booting and IP Helper-Address Resources

I commonly work with clients who want to use PXE as the method for starting the Operating System Deployment process. In practically all instances the ConfigMgr server (and hence the PXE Service Point) and the clients are not on the same subnet. By default the router will not forward the PXE broadcast (UDP) packets…it will just drop them. As a result there are two methods for getting PXE to work across subnets…the IP helper-address and setting DHCP options.

Let me first address the DHCP Options scenario. This option is officially unsupported by Microsoft for getting PXE to work (per this blog post). It can work, but it may not be reliable or consistent. The DHCP options to set are a combination of 60, 66 and 67 depending on the scenario…in particular on whether the PXE server is on the same box as the DHCP server. The blog post linked above describes the options.
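The supported alternative is the IP helper-address: on the router interface facing the clients, add the PXE server as an additional forwarding target alongside the DHCP server. A sketch in Cisco IOS syntax (the interface name and addresses are made up):

```
interface Vlan100
 ! forward client broadcasts (DHCP discover / PXE) to both servers
 ip helper-address 10.0.0.10   ! DHCP server
 ip helper-address 10.0.0.20   ! ConfigMgr PXE Service Point (WDS)
```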

Below are a few resources that I have found helpful in understanding what the IP helper-address is, why you would set it, and defending the decision with the network team (which nearly ALWAYS happens). I will update this list as I find others.

Troubleshooting the PXE Service Point and WDS in Configuration Manager 2007

Cisco IP Addressing and Services Commands

Cisco Support Forum: DHCP and PXE Question (see the response in this thread from Robert Taylor)

Trinity Explains The IP Helper-Address Command

Checklist for getting PXE to work in ConfigMgr

PXE Service Point causes the Windows Deployment Services Server service to crash and hang

PXE clients computers do not start when you configure the Dynamic Host Configuration Protocol server to use options 60, 66, 67

Windows Deployment Service stops responding when you use a PXE service point on a computer that is running a System Center Configuration Manager 2007 SP1 or SP2 site server

Windows Deployment Services server that is running Windows Server 2003 may not start after you move the server to a different organizational unit

August 19, 2011 Posted by | ConfigMgr | Leave a comment

ConfigMgr Version Numbers

I posted on this once before but realized last week that an update was in order. First, there were a few version numbers missing from my original post. Second…I only dealt with part of the question. When looking at version numbers for ConfigMgr, there are typically two items that someone might be referring to…the version of ConfigMgr installed on the site server or the version of the ConfigMgr client installed on the clients. My previous post mostly dealt with the site server question. This one will be an update to the original along with dealing with the client version numbers as well. BTW…I’ve seen posts scattered around the web that deal with this to varying degrees…I’m trying to gather all of the info out there into one place…something I haven’t been able to find.

ConfigMgr Site Server version numbers:

ConfigMgr RTM


ConfigMgr SP1

4.00.6221.1000 “R2 installed: No” (See the screenshot below.)

ConfigMgr SP1 R2

4.00.6221.1000 “R2 installed: Yes”

ConfigMgr SP2 (RC)


ConfigMgr SP2 (RTM)


ConfigMgr SP2 R3

4.00.6487.2000 “R3 installed: Yes”

Note: If an International Client Pack is installed it will change the second digit of the last section. ICP1 makes that digit a “4” while ICP2 makes it a “7”, so the SP1 version would be 4.00.6221.1700 with ICP2 installed.
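The ICP rule above can be expressed as a toy helper (nothing ConfigMgr ships — just the note made mechanical):

```python
def with_icp(version, icp):
    """Apply the International Client Pack note: ICP1 sets the second digit
    of the last version section to 4, ICP2 sets it to 7."""
    parts = version.split(".")
    last = parts[-1]
    parts[-1] = last[0] + {1: "4", 2: "7"}[icp] + last[2:]
    return ".".join(parts)

print(with_icp("4.00.6221.1000", 2))   # 4.00.6221.1700, matching the example above
```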


ConfigMgr client version numbers:

ConfigMgr RTM


ConfigMgr SP1


ConfigMgr SP1 (with KB977203)


ConfigMgr SP2 (Beta)


ConfigMgr SP2 (RC)


ConfigMgr SP2 (RTM)


ConfigMgr SP2 (with KB977203)


ConfigMgr SP2 (with KB977384 beta)


ConfigMgr SP2 (with KB977384 beta)


ConfigMgr SP2 (with KB977384)


ConfigMgr SP2 (with KB2509007)


Note: R2 and R3 do not change the client version number.

August 18, 2011 Posted by | ConfigMgr | 3 Comments
