The Realm of the Verbal Processor

Jarvis's Ramblings

Speaking at MNSCUG

If you are in the Minneapolis area, come out to the MN System Center User Group tomorrow night (Wednesday June 19). I will be doing my “WHY of Configuration Manager” session from MMS. Hope to see you there.

BTW…CDW is sponsoring the group tomorrow night.

June 18, 2013 Posted by | ConfigMgr 2012, User Group, WHY Series

Image Build: Manual or Build & Capture?

WHY Series #2


Late last week I got the following email via my contact form. It seemed like the ideal topic for the next post in the series. (Thanks Matt for the message!)

I have a question for your WHY series. I was debating with a co-worker yesterday why you would use the "Build and Capture" task sequence for OSD instead of capturing a system that you already have or have built with another method. I have a few ideas on advantages and disadvantages, but I would like to hear your opinion.

I am going to make a couple of assumptions based on what I read in the question. I interpret “a system that you already have” to mean an existing physical machine that would be captured to create an image. This might not be what the reader intended, but it is worth addressing in this post regardless. Best practice is to create a hardware-independent image on a virtual machine. (I need to address the reasons why for that one in a future post.) I also see the phrase “built with another method”…which I interpret to mean an essentially manually built image (as opposed to one built with a B&C task sequence).

At the core, those are your options for image creation…automate it with a Build & Capture task sequence, or build it manually. A slight variation is to use the “Pause task sequence” step in an MDT task sequence to perform a step that can’t be automated…essentially automating everything except that one step.
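For reference, the MDT pause is implemented as a “Run Command Line” step that calls the LTISuspend.wsf script from the MDT toolkit package. A minimal sketch of that step’s command line (the %SCRIPTROOT% variable assumes a “Use Toolkit Package” step has already run):

    # Command line for the "Run Command Line" step that pauses the task sequence
    cscript.exe "%SCRIPTROOT%\LTISuspend.wsf"

The task sequence suspends and leaves a “Resume Task Sequence” shortcut on the desktop…do your manual step, double-click the shortcut, and the sequence picks up where it left off.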

Factors Impacting the Image Creation Process

When looking at the question of whether to build the image manually or use a Build and Capture task sequence, there are several key factors to consider:

  • Image updates. Don’t consider an image to be “golden”…think of it as “current”. This can be a key distinction. Gold implies that it will never change. Current deals with the reality that an image is going to need to be updated. (Let’s not even get into the Thick/Thin/Hybrid image scenario…that’s a discussion for another day…perhaps another “WHY” post.) With that said, unless you are the most hardcore of “thin image” proponents, your image will at least contain the OS and updates. That means that within a month of image creation (one Patch Tuesday later), the image will be missing necessary updates. How often do you update it? Remember, anything that isn’t in your image has to be installed after the image is laid down…which adds deployment time. I know of a very major company (if you live in the US, you have their products in your home) that had not updated their XP image in several years. The post-image update process took a couple of hours to deploy somewhere around 200 updates that were not included in the image. Application updates/upgrades are also part of this equation. The basic gist is that images MUST be updated…ideally on a regular basis. (For one way to refresh the updates in an existing image without rebuilding it, see the DISM sketch after this list.)
  • If applications are included in the image, are they packaged and able to be installed silently? If so, then that part of the process can be automated. If not, then it has to be a manual step. The same goes for image tweaks.
  • Ideally, you want to manage the apps and updates that go into your image with the same processes you use to manage the existing systems in your environment. You already have a “Patch Tuesday” process…use the same process when building the image. You already have a process for pushing out application upgrades/updates…use the same process in your image build.
  • In the end, you MUST have consistent, repeatable results. You need a process that produces a reliable image every single time.
  • Lastly, you are busy. I’ve never met an IT person who had too much time on their hands. You need this process to take as little time out of your day/week as possible.
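On the update problem specifically, one option worth knowing about is offline servicing…injecting update packages into a captured WIM with DISM rather than rebuilding from scratch. A minimal sketch, assuming a Windows 7-era image and downloaded .cab update packages (all paths and the KB number are hypothetical):

    # Mount index 1 of the captured image to a working folder
    dism /Mount-Wim /WimFile:C:\Images\Win7.wim /Index:1 /MountDir:C:\Mount

    # Inject a downloaded update package into the mounted image
    dism /Image:C:\Mount /Add-Package /PackagePath:C:\Updates\KB1234567.cab

    # Commit the changes and unmount
    dism /Unmount-Wim /MountDir:C:\Mount /Commit

SCCM 2012 can also do this on a schedule natively…the “Schedule Updates” action on an operating system image ties image servicing into your existing software updates process, which fits the “use the same processes” point above.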

With those factors in mind…let’s run them through the grid of our methods for image creation and see how things shake out.

Build and Capture Image Creation Process:

If the core applications that will go in the image can be installed silently…and if you are using either WSUS or SCCM for deploying updates…then this is the ideal situation. Your B&C task sequence could be as simple as “click Next” and come back later to see your shiny new WIM file. Once you’ve got it working (which, I won’t deny, can be challenging), it couldn’t be any easier. Once it is going, you will never look back. I know of at least one company that has a recurring task sequence deployment to a virtual machine…creating a new image the day after Patch Tuesday each month. Completely automated. Score!
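A quick way to verify that an application qualifies for the automated build is to run its installer with its quiet switches on a test machine. A minimal sketch (the file names are hypothetical; MSI packages generally honor msiexec’s standard switches, while EXE installers each have their own vendor-specific quiet switch):

    # Silent, no-reboot install of an MSI package
    msiexec /i C:\Source\App.msi /qn /norestart

    # EXE installers vary by vendor...check the vendor's documentation for the switch
    C:\Source\Setup.exe /quiet

If the install completes with no prompts, it can go into the B&C task sequence as a normal package/application install step.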

Because the task sequence is automated, there is very little of your time involved…just click Next and check on it later. Because all of the tasks are automated, there isn’t any room for admin error. Because it is automated, you are more likely to update your image on a regular basis. The process IS standardized and repeatable. Oh…and if a step does have to be performed manually, use an MDT task sequence with the “Pause” step (shown earlier) to automate as much as possible…and perform only the non-automatable tasks manually. (For one way to make the whole thing recur monthly, see the sketch below.)
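As for the fully automated monthly rebuild mentioned above, a recurring required deployment of the B&C task sequence to a collection containing the build VM gets you there. A minimal PowerShell sketch using the ConfigMgr 2012 SP1 cmdlets…the module path pattern, site code, package ID, and collection name are all assumptions, and the cmdlet names/parameters are as I recall them from the SP1 module, so verify against your console version:

    # Load the ConfigMgr module from the admin console install and switch to the site drive
    Import-Module "$env:SMS_ADMIN_UI_PATH\..\ConfigurationManager.psd1"
    Set-Location PS1:    # "PS1" is a hypothetical site code

    # Monthly schedule on the second Wednesday (roughly "the day after Patch Tuesday"...
    # not exact in every month, so adjust if you need to be precise)
    $schedule = New-CMSchedule -DayOfWeek Wednesday -WeekOrder Second -RecurCount 1

    # Required, scheduled deployment of the B&C task sequence to the build VM collection
    Start-CMTaskSequenceDeployment -TaskSequencePackageId "PS100012" `
        -CollectionName "Image Build VMs" -DeployPurpose Required -Schedule $schedule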

Manual Image Creation:

Manual is…well…manual. You install the OS from DVD/ISO. You install each app. You apply all the updates. You run Sysprep. You capture the image. All manually. Hopefully you are following a checklist. Hopefully you don’t forget a step. Good luck with that.
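For context, the tail end of that manual process typically looks something like the following…generalize, then boot WinPE and capture. (Paths and image names are hypothetical; /generalize strips hardware-specific data so the image is portable, and dism /Capture-Image requires a WinPE 4.0-era DISM…older WinPE builds used imagex /capture instead.)

    # On the reference machine: generalize the OS and shut down
    C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

    # Then, booted into WinPE: capture the Windows volume to a WIM file
    dism /Capture-Image /ImageFile:D:\Captures\Win7-Manual.wim /CaptureDir:C:\ /Name:"Win7 Manual Build"

Every one of those steps…and every app install and tweak before them…is a chance to deviate from the checklist, which is exactly the admin-error problem described in the list below.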

The manual image creation process is characterized by the following:

  • Slow. All those manual steps take time.
  • Infrequently updated. Because it is slow and time consuming, realistically, you will not update the image as often as you should.
  • Open to admin error (e.g. forgetting a step, or installing a component slightly differently upon image rebuild)
  • Not standardized/repeatable

Overall…friends don’t let friends use a manual image creation process. You might wish it on your enemies though! ;-) However…see my conclusion below for one instance where you might use an existing image.

Conclusion:

If you’ve followed my blog for long or have seen my presentations at MMS or TechEd, then you probably knew I was going to land on the side of the Build and Capture task sequence before you even started this article. In my opinion (which I think I’ve adequately backed up with solid logic above), using a B&C task sequence to create your image is the only way to go. It just makes sense from a time/automation/repeatability/manageability standpoint.

The ONLY exception that I see to this is if you are migrating from an old technology (e.g. Ghost) to SCCM, AND you are migrating from XP to Windows 7 / Windows 8. In that instance…would I recommend going through the process of recreating all of the Windows XP images that you are going to be getting rid of soon anyway? No. In that instance I would say go ahead and capture the existing image (or if it is already a WIM file…see if you can deploy it as-is). Don’t spend time recreating an image that you are about to dump (XP end of life is coming up very soon!).

Would love your comments and feedback. Keep the ideas for future posts coming!

Until next time…keep asking the right questions.


April 29, 2013 Posted by | ConfigMgr, ConfigMgr 2012, MDT 2012, WHY Series, Windows 7, Windows 8

To CAS or Not to CAS

WHY Series #1


I figured I’d start the WHY Series with a question that will have an impact on your Configuration Manager design…do you need a Central Administration Site or not? To CAS or not to CAS…that is the question.

First, let’s address a key difference between Configuration Manager 2012 and 2007. A Central Administration Site (SCCM 2012) is NOT the same as a Central Primary Site (SCCM 2007). A CAS cannot have clients assigned to it. It cannot host all SCCM site roles. It is for administration and reporting ONLY. A CAS can only have primary sites as child sites…no secondaries attached directly to a CAS. It isn’t just a new name…it is fundamentally different. With that said…why would you or would you not need a CAS?

When you get right down to it, the question of whether or not you need a CAS boils down to a different question…“will I need more than one primary site?” If the answer to that question is no, then you’ve also answered the CAS question…no, you don’t need a CAS. You only need a CAS if you have more than one primary site. So…with that being the REAL question to ask, let’s look at the reasons why you would need multiple primaries.

Scalability

The primary reason why you would need multiple primaries is scalability. Certain technical limitations force the need for a second primary site. Per the documentation, these include:

  • More than 100,000 clients. If you currently have, or expect to grow beyond, 100,000 clients…congratulations, you get a CAS, because the published client count limit for a single primary site is 100,000. (See the sketch after this list for a quick way to check where you stand.)
  • More than 10,000 Windows Embedded clients that use File Based Write Filters, assuming the documented exclusions are implemented (the limit drops to 3,000 if they are not).
  • More than 50,000 Mac clients.
  • More than 250 secondary sites.
  • More than 250 distribution points attached directly to the primary site. (Note that each secondary site can have up to 250 DPs as well…the aggregate total of DPs attached to the primary and to all of its secondary sites is capped at 5,000.)
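
If you want a quick sanity check against that 100,000-client ceiling, here is a minimal PowerShell sketch using the ConfigMgr 2012 SP1 cmdlets (the module path pattern and site code are assumptions, and Get-CMDevice counts the devices the site knows about, so treat the result as approximate):

    # Load the ConfigMgr module from the admin console install and switch to the site drive
    Import-Module "$env:SMS_ADMIN_UI_PATH\..\ConfigurationManager.psd1"
    Set-Location PS1:    # "PS1" is a hypothetical site code

    # Count devices known to the site and compare to the published single-primary limit
    $count = (Get-CMDevice | Measure-Object).Count
    "$count clients of the 100,000 supported on a single primary site"
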
Just in Case

Let’s go ahead and deal with an argument that came up with the RTM of SCCM 2012…the “just in case” scenario. This came about because at RTM, you had to install a CAS first in the hierarchy…you couldn’t attach a primary to a CAS after the fact. So, some companies chose to install a CAS “just in case” they would ever need one. This often came up when talking about a merger…the idea being that you would want a CAS in order to pull the other company into your hierarchy. But what if the other company had better hardware? What if your company was going to be the “child” company after the merger? Now you get rid of your CAS anyway…and you carried unnecessary complexity in your hierarchy for nothing. Really, the “just in case” argument was always a weak one.

With the release of SP1 for SCCM 2012, it is now possible to join an existing primary to a CAS…the CAS no longer has to be the first thing installed in the hierarchy. Since it now IS possible to join an existing primary site to a CAS…the “just in case” scenario is completely blown away.

Conclusion:

Unless you meet (or are approaching) one of the scalability limitations, assume that you do NOT need a CAS. Keep your design simple. Always, always, always start with a simple design…then add complexity to meet specific business or technical requirements. But ONLY add complexity to address one of those requirements. In general, assume that you do NOT need a CAS unless specific requirements (business or technical) make it necessary.

Until next time…keep asking the right questions.


April 22, 2013 Posted by | ConfigMgr 2012, WHY Series