I just finished reading an article by Rod Trent where he mentions some possibilities that may come in Satya Nadella’s announcements tomorrow. That brought to mind something I have been thinking and saying for a while now…but never put in print. Just a warning…everything I say in this post is speculation. I have zero inside information from Microsoft that any of this will in fact happen. It is just my attempt to read between the lines of what I see and take a guess about what could happen.
For the last couple of years I have been looking at the future of System Center Configuration Manager. Once SCCM 2012 was released it became pretty clear that this was a very mature product that, while it could be improved in some feature areas, wasn’t likely to change significantly from an architecture standpoint. As a consultant, that got me thinking about my IT future…what will I be doing in five years? Add to this the constant move towards the Cloud. And the growing functionality of Windows Intune. And it is not insignificant that Microsoft moved the Intune team into the same building as the SCCM team.
Right now we have SCCM as a VERY solid on-premise solution for managing systems. There is also an Intune connection where you can see systems that you are managing with Intune inside your SCCM console.
Now…what if that got flipped on its head?
What if Windows Intune was the “boss” of the management solution? What if you had the option to host an on-premise Intune server for content distribution (app and OS deployment)? In this scenario, Intune would be the equivalent of your current SCCM primary. The (currently fictional) on-premise Intune server would be like an SCCM Distribution Point. Managed from the cloud, but with a local presence for content.
Let that sink in a bit.
And then think about the “Cloud first” mantra that has been coming out of Redmond lately.
Here are some thoughts on how this could change the systems management arena…
- If this becomes a “Cloud first” solution, then Intune would be on the fast track development cycle (the cloud release cadence) and SCCM would be playing catch up (or maybe not).
- Upgrade of the management system? They would take place in the cloud…in Intune. Nothing for you to do.
- Migration to the next version? Again…that would take place in the cloud…as far as a consumer of the service is concerned, the migration wouldn’t exist any more.
- For the consumer of the service (Microsoft’s customers), there are a few appealing aspects of this. Less on-premise complexity to manage. Fewer servers that could go down. Less maintenance/upgrade of internal servers…and the manpower costs associated with that.
- From the Microsoft perspective, let’s be realistic. Microsoft is not a non-profit…they are looking to make money. (I’m not saying that is a bad thing…it is reality.) This would be a recurring revenue stream. If a customer goes down that path and sees the value in the service, then they are likely in it for the long haul.
If this does in fact happen, it would be a big time game changer. It wouldn’t happen overnight, but it would result in a lot of IT folks sweating and figuring out what they will be doing next to pay the bills.
But at this point…it’s all pure speculation on my part. Let’s see if Satya says anything down that path tomorrow.
Okay…so I’m a month behind on looking through the new features that are coming in SCCM 2012 R2…it’s been a busy month. But in looking through the features…there are some really cool ones that I wanted to highlight. The full breakdown of what’s new in R2 is here.
- You can now select where to put the DB files when you install. No need to tweak stuff on the back end any more!
- Certificate Registration Point (along with Certificate Profiles)…you can now use SCCM to deploy certificates. This is one of those items that are typically done via Group Policy…but it’s surprising how many of my clients have to fight/negotiate with another department/silo in order to get a GPO created or modified. This will simplify that process…which is a very welcome addition!
- Ability to merge one SCCM 2012 R2 hierarchy with another.
- Mac computers can use an enrollment wizard instead of having to install from command line!
- Resultant Client Settings…kind of like RSOP for SCCM client settings.
- Numerous Mobile Device improvements.
- Enrollment of iOS and Android without requiring Windows Intune.
- Wipe/Retire functions can be configured to only wipe company data.
- Enrolled devices can be configured as either “company owned” or “personally owned”…with different configurations for each.
- VPN and Wi-Fi Profiles…again something that has historically fallen to GPO for configuration.
- Software Updates Preview…kinda like the Search Filter function of Software Updates in SCCM 2007. Nice to know what an ADR will do before it actually creates the deployment!
- New Application deployment type…“web application”. It just deploys a shortcut to a web-based app.
- A few OSD improvements:
- Support for Server 2012 R2 and Windows 8.1
- Check Readiness – VERY nice to see this “sanity check” step that has been available via the MDT integration become a native step! So many accidental OS deployments could have been prevented by this simple step.
- Set Dynamic Variables – this brings some of the common steps that are possible via the customsettings.ini file to the masses…putting the cookies on the bottom shelf.
- New report – “Distribution point usage summary”. Shows how much a given DP is used…number of clients connected and data transfer info.
- Multiple Network Access Accounts
- Content distribution improvements
- SCCM “learns” which DPs are connected by faster connections…and uses that info to prioritize content deployment.
- Improved content validation…validates 50 files per WMI call instead of just one!
- Reports can now be controlled via role-based administration. I’ve had multiple clients ask about this.
Those aren’t all of the additions…but they are the ones that I can see my clients being the most excited about. Looking forward to R2!
When you install a Service Pack or Cumulative Update for SCCM, you also need to update the SCCM console wherever it is installed. And…when you install the console it must be updated to the same SP and CU level as the site server. Unfortunately, the install of the CU only offers the option of creating a Package/Program for updating the console…not an Application that can take care of all of it with one deployment. So…here is how to deploy the SCCM Console via the Application Model.
First, we will need to create an Application for installing the SCCM Console. Create the Application with the name/app catalog/ etc info you wish. I am assuming that we are starting with a SP1 system. When you get to the Deployment Type, here are the settings to use:
- Script installer (since it is an EXE)
- Content location: Best practice is to copy the <SCCMinstall>\tools\ConsoleSetup folder to another location that you use for the source for this package.
- Programs tab:
- Installation Program:
- ConsoleSetup.exe /q EnableSQM=0 TargetDir="%ProgramFiles%\<FolderName>" DefaultSiteServerName=<FQDN2SiteServer>
- Uninstall Program:
- ConsoleSetup.exe /uninstall /q
- Make sure to select – Run installation and uninstall program as 32-bit process on 64-bit clients.
- Detection Method…Registry
- Key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Products\CE6E19024E9D710409D3F46536E239F3\InstallProperties
- Value: DisplayVersion
- Leave the 32bit / 64bit box UNchecked
- Data Type: String
- Operator = Equals
- Value = 5.00.7804.1000
- User Experience
- Install for system
- Whether or not a user is logged on
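As a rough illustration (not how the SCCM client actually implements it), the detection rule above boils down to comparing the DisplayVersion registry string against the expected console version. A simple sketch in Python…note that an "Equals" comparison on the raw string is safe, but if you ever switch the operator to a greater-than style check, the dotted version needs to be compared numerically, not as text:

```python
# Hypothetical sketch of the detection logic configured above. The
# version values mirror the SP1 console build used in this post.

def parse_version(v):
    """Turn a dotted version like '5.00.7804.1000' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

expected = "5.00.7804.1000"
installed = "5.00.7804.1000"   # what DisplayVersion would report

# Exact match, as the Equals operator in the detection method does.
is_detected = (installed == expected)

# Numeric comparison - useful if you later loosen the rule to
# "Greater than or equal to" for newer CU levels. A plain string
# compare would get this wrong (e.g. "5.00.7804.900" > "5.00.7804.1000").
is_at_least = parse_version(installed) >= parse_version(expected)
```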
Second, we need an Application for the Cumulative Update (in this case, CU2). I used the Package Conversion Manager to migrate the existing Package/Program for the “SP1 Cumulative update 2 – console update” package into an application. Again…name/app catalog/etc are your choices…Deployment Type has the following settings:
- Script installer (this is a MSP patch)
- Content location: should already be set if you used PCM. If not, default is \\<SiteServerFQDN>\SMS_<SiteCode>\hotfix\KB2854009\AdminConsole\i386
- Programs Tab
- Installation Program:
- msiexec.exe /p configmgr2012adminui-sp1-kb2854009-i386.msp /L*v %TEMP%\configmgr2012adminui-sp1-kb2854009-i386.msp.LOG /q REINSTALL=ALL REINSTALLMODE=mous
- Make sure to select – Run installation and uninstall program as 32-bit process on 64-bit clients.
- Detection Method…Registry
- Key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Products\CE6E19024E9D710409D3F46536E239F3\Patches\AAD68D6F52CC8E349805BB5169C11B26
- Value: DisplayName
- Leave the 32bit / 64bit box UNchecked
- Data Type: String
- Operator = Equals
- Value = ConfigMgr2012AdminUI-SP1-KB2854009-I386
- User Experience
- Install for system
- Whether or not a user is logged on
- Add a new dependency on the SCCM Console application that you created above.
Now all you need to do is deploy the SCCM CU2 Application to an AD Security Group that contains the users who should have the SCCM console. The applications above will:
- Determine if the SCCM console is already installed
- Install it if necessary
- Confirm that the console installed successfully
- Then it will determine if CU2 is already installed
- Install the CU2 update if necessary
- Then confirm that the CU installed successfully.
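The flow above can be sketched in a few lines of Python. This is just an illustration of how the application model walks the dependency chain…the install/detect steps here are stand-ins for the real console and CU2 installers and their registry detection rules:

```python
# Hypothetical sketch of the application model's dependency handling.
# "Installing" is simulated by adding the app name to a set; the
# membership check stands in for the registry detection method.

def deploy(app, state):
    """Install an application and its dependencies, detection-first."""
    for dep in app.get("depends_on", []):
        deploy(dep, state)              # dependencies are handled first
    if app["name"] not in state:        # detection check
        state.add(app["name"])          # "run the installer"
    assert app["name"] in state         # confirm detection now passes

console = {"name": "SCCM Console"}
cu2 = {"name": "SCCM Console CU2", "depends_on": [console]}

installed = set()
deploy(cu2, installed)
# installed -> {"SCCM Console", "SCCM Console CU2"}
```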
Now…I did not put anything in this to confirm prerequisites like the .NET 4 Framework…but I’m assuming most of you already have that on your systems. If not, I’m sure you can figure it out on your own!
If you are in the Minneapolis area, come out to the MN System Center User Group tomorrow night (Wednesday June 19). I will be doing my “WHY of Configuration Manager” session from MMS. Hope to see you there.
BTW…CDW is sponsoring the group tomorrow night.
WHY Series #2
Late last week I got the following email via my contact form. It seemed like the ideal topic for the next post in the series. (Thanks Matt for the message!)
I have a question for your WHY series. I was debating with a co-worker yesterday why you would use the "Build and Capture" task sequence for OSD instead of capturing a system that you already have or have built with another method. I have a few ideas on advantages and disadvantages, but I would like to hear your opinion.
I am going to make a couple of assumptions based on what I read in the question. I interpret “a system that you already have” to mean an existing physical machine that would be captured to create an image. This might not be what the reader intended, but it should be addressed in this post regardless. Best practice is to create a hardware independent image on a virtual machine. (Need to address reasons why for that one in a future post.) I also see the phrase “built with another method”…which I interpret to be essentially a manually built image (as opposed to one using a B&C task sequence).
At the core, those are your options for image creation…automated with a Build & Capture task sequence or build it manually. A slight variation is to use the “Pause task sequence” step in an MDT task sequence to perform a step that can’t be automated…essentially automate all of it except for this one step.
Factors Impacting the Image Creation Process
When looking at the question of whether to manually build the image or use a Build and Capture task sequence, there are several key components that should be considered:
- Image updates. Don’t consider an image to be “golden”…think of it as “current”. This can be a key distinction. Gold implies that it will never change. Current deals with the reality that an image is going to need to be updated. (Let’s not even get into the Thick/Thin/Hybrid image scenario…that’s a discussion for another day…perhaps another “WHY” post.) With that said, unless you are the most hardcore of “thin image” proponents, your image will at least have the OS and updates. Which means that within a month of image creation (Patch Tuesday), the image will be missing necessary updates. How often do you update it? Remember, anything that isn’t in your image has to be installed after the image is laid down…which adds time. I know of a very major company (if you live in the US, you have their products in your home) that had not updated their XP image in several years. The post image update process took a couple of hours to deploy somewhere around 200 updates that were not included in the image. Application updates/upgrades are also part of this equation. Basic gist is that images MUST be updated…ideally on a regular basis.
- If applications are included in the image, are the applications packaged and able to be installed silently? If so, then that process can be automated. If not, then it has to be a manual step. Same goes for image tweaks.
- Ideally you would like to use the same processes for managing apps and updates that go in your image that you use for managing the existing systems in your environment. You already have a “Patch Tuesday” process. Use the same process when building the image. You already have a process for pushing out application upgrades/updates. Use the same process in your image build.
- In the end, you MUST have consistent repeatable results. You need a process that produces a reliable image every single time.
- Lastly, you are busy. I’ve never met an IT person who had too much time on their hands. You need this process to take as little time out of your day/week as possible.
With those factors in mind…let’s run them through the grid of our methods for image creation and see how things shake out.
Build and Capture Image Creation Process:
If your core applications that will go in the image can be installed silently…and if you are using either WSUS or SCCM for deploying updates, then this is the ideal situation. Your B&C task sequence could be as simple as “Click Next” and come back later to see your shiny new WIM file. Once you’ve got it working (which I won’t deny could be challenging) it couldn’t be any easier. Once it is going, you will never look back. I know of at least one company that has a recurring Task Sequence deployment to a virtual machine…to create a new image the day after Patch Tuesday each month. Completely automated. Score!
Because the task sequence is automated, there is very little time involved. Just click next and check on it later. Because all of the tasks are automated, there isn’t any room for admin error. Because it is automated, you are more likely to update your image on a regular basis. The process IS standardized and repeatable. Oh…and if a step does have to be performed manually, use an MDT task sequence with the “Pause” step to automate as much as possible…and only do the non-automatable tasks manually.
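As a side note, the “day after Patch Tuesday” schedule mentioned above is easy to compute…Patch Tuesday is simply the second Tuesday of the month. Here is a small sketch (the function names are mine, not anything built into SCCM) of how you might derive the rebuild date for a recurring B&C run:

```python
# Compute "the day after Patch Tuesday" for a recurring image rebuild.
import datetime

def patch_tuesday(year, month):
    """Return the second Tuesday of the given month."""
    first = datetime.date(year, month, 1)
    # weekday(): Monday=0 ... Tuesday=1; offset to the first Tuesday.
    first_tuesday = first + datetime.timedelta(days=(1 - first.weekday()) % 7)
    return first_tuesday + datetime.timedelta(days=7)

def image_rebuild_day(year, month):
    """The day after Patch Tuesday - when the B&C task sequence would run."""
    return patch_tuesday(year, month) + datetime.timedelta(days=1)
```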
Manual Image Creation:
Manual is…well…manual. You install the OS from DVD/ISO. You install each app. You apply all the updates. You run Sysprep. You capture the image. All manually. Hopefully you are following a checklist. Hopefully you don’t forget a step. Good luck with that.
The manual image creation process is characterized by the following:
- Slow. All those manual steps take time.
- Updated infrequently. Because the process is so slow and time consuming, realistically you will not update the image as often as you should.
- Open for admin error (i.e. forgetting a step or installing a component slightly differently upon image rebuild)
- Not standardized/repeatable
Overall…friends don’t let friends use a manual image creation process. You might wish it on your enemies though! ;-) However…see my conclusion below for one instance where you might use an existing image.
If you’ve followed my blog for long or have seen my presentations at MMS or TechEd, then you should have known I was going to land on the side of using the Build and Capture Task Sequence before you even started this article. In my opinion (that I think I’ve adequately backed up with solid logic), using a B&C task sequence to create your image is the only way to go. It just makes sense from a time/automation/repeatability/manageability standpoint.
The ONLY exception that I see to this is if you are migrating from an old technology (i.e. Ghost) to SCCM, AND you are migrating from XP to Windows 7 / Windows 8. In that instance…would I recommend going through the process of recreating all of your Windows XP images…that you are going to be getting rid of soon anyway? No. In that instance I would say go ahead and capture that existing image (or if it is already a WIM file…see if you can deploy it as-is). Don’t spend the time recreating the image that you are going to be dumping (since XP EOL is coming up very soon!).
Would love your comments and feedback. Keep the ideas for future posts coming!
Until next time…keep asking the right questions.
WHY Series #1
I figured I’d start the WHY Series with a question that will have an impact on your Configuration Manager design…do you need a Central Administration Site or not? To CAS or not to CAS…that is the question.
First let’s address a key difference between Configuration Manager 2012 and 2007. A Central Administration Site (SCCM 2012) is NOT the same as a Central Primary site (SCCM 2007). A CAS cannot have clients assigned to it. It cannot have all SCCM site roles. It is for administration and reporting ONLY. A CAS can only have primary sites as child sites…no secondaries attached to a CAS. It isn’t just a new name…it is fundamentally different. With that said…why would you or would you not need a CAS?
When you get right down to it, the question of whether or not you need a CAS boils down to a different question…”will I need more than one primary site?”. If the answer to that question is no…then you’ve also answered the CAS question…no you don’t need a CAS. You only need a CAS if you have more than one primary site. So…with that being the REAL question to ask…let’s look at reasons why you would need multiple primaries.
The primary reason why you would need multiple primaries is scalability. There are certain requirements from a technical limitation standpoint that force the need for a second primary. Per the documentation these include:
- More than 100,000 clients. If you are currently or expecting to grow beyond 100,000 clients, congratulations, you get a CAS because the published client count limitation for a single primary site is 100,000.
- More than 10,000 Windows Embedded clients with File Based Write Filters (with proper exclusions implemented). (3000 if the listed exclusions are not implemented)
- More than 50,000 MAC clients.
- More than 250 Secondary sites
- More than 250 Distribution Points (although note that each Secondary site can have 250 DPs as well. With that in mind the aggregate total of DPs…those directly attached to the primary and all of the DPs attached to all of the secondary sites is a maximum of 5000 DPs)
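The published sizing limits above can be expressed as a simple decision helper. This is only a sketch of the thresholds listed in this post…treat it as an illustration, not an authoritative sizing tool:

```python
# The single-primary-site limits from this post, as a "do I need a CAS?"
# helper. Keys and structure here are hypothetical, for illustration only.

PRIMARY_SITE_LIMITS = {
    "clients": 100_000,
    "embedded_fbwf_clients": 10_000,   # with proper exclusions implemented
    "mac_clients": 50_000,
    "secondary_sites": 250,
    "distribution_points": 250,
}

def needs_cas(environment):
    """Return True if any count exceeds what a single primary site supports."""
    return any(
        environment.get(key, 0) > limit
        for key, limit in PRIMARY_SITE_LIMITS.items()
    )

needs_cas({"clients": 30_000})    # False - one primary is plenty
needs_cas({"clients": 140_000})   # True  - over the 100,000 client limit
```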
Just in Case
Let’s go ahead and deal with an argument that came up with the RTM of SCCM 2012…the “just in case” scenario. This came about because at RTM, you had to install a CAS first in the hierarchy…you couldn’t attach a primary to a CAS after the fact. So, some companies chose to install a CAS “just in case” they would ever need one. This often came up when talking about a merger…that you would want a CAS in order to pull the other company into the hierarchy. Well…what if the other company had better hardware? What if your company was going to be the “child” company after the merger? Now you have to get rid of your CAS anyway…and you carried unnecessary complexity in your hierarchy for nothing. Really, the “just in case” argument was always a weak one.
With the release of SP1 for SCCM 2012, it is now possible to join an existing primary to a CAS…the CAS no longer has to be the first thing installed in the hierarchy. Since it now IS possible to join an existing primary site to a CAS…the “just in case” scenario is completely blown away.
Unless you meet (or are approaching) one of the scalability limitations, assume that you do NOT need a CAS. Keep your design simple. Always always always start with a simple design…then add complexity to meet either business or technical requirements. But ONLY add complexity to address one of those requirements. In general assume that you do NOT need a CAS unless specific requirements (business or technical) make it necessary.
Until next time…keep asking the right questions.
One of my sessions at MMS this year was titled “The WHY of Configuration Manager”. It focused on why would you choose to do things a particular way in SCCM. There are many tasks that can be performed multiple ways in SCCM…and plenty of resources to tell you how to do those things. But there aren’t many resources to answer the question of “Why”. Why would I choose to do a task (or configure a setting…or design a hierarchy…etc) one way instead of another. The session took on several of these questions and attempted to answer the question of “Why?”.
With that in mind, my plan is to start a series of blog posts that I’m calling “The WHY Series”. The plan is to think through the options of a task/setting/design/etc and lay out the reasons why you might choose to implement things one way or another. At this point I don’t foresee a specific outline for the topics to be covered. I also don’t know that it will be solely limited to SCCM questions…although that is where many of the initial posts in the series will come from.
Also…I would love some feedback. Is this something you are interested in? If so…what topics would you like to see covered? Either leave a comment on this post, send me a message via my contact form, or ping me on Twitter.
Check back soon…I hope to have the first post up this week.
A little over a week ago I found out that I get to speak at MMS again this year…and this year I get to speak twice! My sessions will be:
The WHY of Configuration Manager

There are plenty of resources to tell you HOW to perform various tasks with Configuration Manager. For that matter, there are multiple ways of doing many tasks. This session will use lessons learned from numerous Configuration Manager deployments to teach you WHY you would choose one method over another. This will be a broad, fast paced session that digs into the questions you should ask to ensure you implement Configuration Manager the right way for your company.
Microsoft System Center: I’m "All In" (Co-present with Phil Pritchett)
Ever wondered what impact deploying all of System Center could have on your business? Join us for a look at a real world example of a company who did just that. We will look at the impact of deploying SCCM, SCOM, SCSM, and Orchestrator all in one environment.
So, if you are going to be in Vegas for the Management Summit, come on by…would love to meet you out there!
A couple of years ago I created a post with the major SQL version numbers. While working with a client this morning, I realized that I had not updated it to reflect several updates that have been released since the original post. Here is an updated table of major version numbers. To see all major and minor version numbers (i.e. versions for cumulative update versions), see this post. I’m also using this post to clean up some inconsistency in how the version numbers were listed in my previous post.
| SQL Version | Version Number |
| --- | --- |
| SQL Server 2012 RTM | 11.0.2100.60 |
| SQL Server 2012 SP1 | 11.0.3000.0 |
| SQL Server 2008 R2 RTM | 10.50.1600.1 |
| SQL Server 2008 R2 SP1 | 10.50.2500.0 |
| SQL Server 2008 R2 SP2 | 10.50.4000.0 |
| SQL Server 2008 RTM | 10.0.1600.0 |
| SQL Server 2008 SP1 | 10.0.2531.0 |
| SQL Server 2008 SP2 | 10.0.4000.0 |
| SQL Server 2008 SP3 | 10.0.5500.0 |
| SQL Server 2005 RTM | 9.00.1399 |
| SQL Server 2005 SP1 | 9.00.2047 |
| SQL Server 2005 SP2 | 9.00.3042 |
| SQL Server 2005 SP3 | 9.00.4035 |
| SQL Server 2000 RTM | 8.00.194 |
| SQL Server 2000 SP1 | 8.00.384 |
| SQL Server 2000 SP2 | 8.00.534 |
| SQL Server 2000 SP3 | 8.00.760 |
| SQL Server 2000 SP3a | 8.00.760 |
| SQL Server 2000 SP4 | 8.00.2039 |
| SQL Server 7.0 RTM | 7.00.623 |
| SQL Server 7.0 SP1 | 7.00.699 |
| SQL Server 7.0 SP2 | 7.00.842 |
| SQL Server 7.0 SP3 | 7.00.961 |
| SQL Server 7.0 SP4 | 7.00.1063 |
| SQL Server 6.5 RTM | 6.50.201 |
| SQL Server 6.5 SP1 | 6.50.213 |
| SQL Server 6.5 SP2 | 6.50.240 |
| SQL Server 6.5 SP3 | 6.50.258 |
| SQL Server 6.5 SP4 | 6.50.281 |
| SQL Server 6.5 SP5 | 6.50.415 |
| SQL Server 6.5 SP5a | 6.50.416 |
| SQL Server 6.5 SP5a Update | 6.50.479 |
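If you want to go the other direction…from a raw build string (for example, the output of `SELECT SERVERPROPERTY('ProductVersion')`) back to a release name…a lookup like the following works. This is only a sketch with a few sample rows, not a complete reference:

```python
# Map a SQL Server build string back to a release name. The dictionaries
# below are illustrative samples drawn from the table above.

KNOWN_BUILDS = {
    "11.0.3000.0": "SQL Server 2012 SP1",
    "10.50.2500.0": "SQL Server 2008 R2 SP1",
    "10.0.5500.0": "SQL Server 2008 SP3",
}

MAJOR_VERSIONS = {
    "11": "SQL Server 2012",
    "10.50": "SQL Server 2008 R2",
    "10": "SQL Server 2008",
    "9": "SQL Server 2005",
    "8": "SQL Server 2000",
}

def identify(build):
    """Match an exact build if known; otherwise fall back to the major version."""
    if build in KNOWN_BUILDS:
        return KNOWN_BUILDS[build]
    parts = build.split(".")
    # Check the two-part prefix first so 10.50 (2008 R2) wins over 10 (2008).
    for prefix in (".".join(parts[:2]), parts[0]):
        if prefix in MAJOR_VERSIONS:
            return MAJOR_VERSIONS[prefix]
    return "unknown"
```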
Over time I have talked with numerous people about where the Configuration Manager database should be hosted. Where this conversation typically comes up is when a company has a DBA team that is demanding that all SQL databases be hosted on dedicated (and super powerful) database servers. These servers predominantly host numerous SQL databases for a variety of applications. The reasoning typically falls into the following arguments:
- Licensing – We don’t want to have to pay for another SQL license, so all DBs will be on our dedicated SQL servers.
- Performance – Our crazy powerful DB servers will give better performance than what you would install locally.
- Security – We need to maintain control over the content of the DB, and the DB integrity in general. Having them on a dedicated SQL server allows us to do that in the best way.
Sounds like some good arguments right? Well…not so much. Let’s take a look at each of the three.
- Licensing – Not an issue at all. Configuration Manager 2012 licensing includes the ability to install SQL Standard…at no additional charge.
- Performance – There have been arguments for years about whether Configuration Manager performs better with remote or on-box SQL. I’ve seen people give great arguments both ways…but haven’t really seen anything definitive either direction. With Configuration Manager 2012, the recommendation from Microsoft is that SQL be local unless you hit certain size limitations. If you are under 50,000 clients, on-box SQL Standard will work just fine for you. If you have more than 50,000 clients, then remote SQL Standard will take you to 100,000 clients. SQL Enterprise is only necessary on a Central Administration Site supporting more than 50,000 clients.
- Security – THIS IS THE BIG ONE! It generally takes about a three minute conversation with a DBA before they run away from this argument. Consider the following facts and implications in a remote SQL scenario:
- The Configuration Manager site server must be a member of the local administrators group on the remote SQL server. (See the Configuration Manager documentation.)
- Several people who are not SQL admins will be administrators on the Configuration Manager site server.
- It is trivial for an admin on the Configuration manager site server to run any application (such as a CMD prompt or SQL Server Management Studio) as Local System. (See this post.)
- Since the Configuration Manager server (Local System) has admin rights on the remote SQL server…the non SQL Admin can VERY easily obtain admin rights on the SQL server.
- The DBA has now started sweating, twitching and begging you to keep your weird database away from his/her server. :-)
So, really the only reason to consider doing remote SQL at all is a performance issue…but you have to be a pretty big organization for that one to come into play. And even if you do need to do remote SQL…it should be a SQL server that is dedicated to Configuration Manager.
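The sizing guidance above can be boiled down to a quick decision helper. The thresholds and recommendation strings here are just this post’s numbers expressed as code…a sketch, not official Microsoft sizing tooling:

```python
# Suggest SQL placement for a ConfigMgr site based on client count.
# Thresholds are the ones cited in this post; function name is mine.

def sql_recommendation(clients, is_cas=False):
    """Return a rough SQL placement recommendation for a site."""
    if is_cas and clients > 50_000:
        return "remote SQL Enterprise (dedicated to ConfigMgr)"
    if clients <= 50_000:
        return "local SQL Standard on the site server"
    if clients <= 100_000:
        return "remote SQL Standard (dedicated to ConfigMgr)"
    return "beyond a single primary site - revisit the hierarchy design"

sql_recommendation(20_000)   # local SQL Standard on the site server
```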
Note (12/4/2012): I was talking with a friend late in the day yesterday about this blog post. He reminded me that I had already posted about this issue last April. Thanks Phil…I’m a little scatterbrained sometimes! I’m leaving this post up anyway because it is better than the original in my opinion.
This one has annoyed me for years…need to get on my soapbox for a minute.
Let’s talk about the difference between Hardware and Software Inventory in Configuration Manager. Hardware inventory collects data from WMI and the registry. Software inventory looks at file properties. Hardware inventory runs relatively quickly and isn’t very resource intensive. Software inventory can be very resource intensive if not configured correctly. At a high level, here is what is covered by the two:
- Obviously info on system information – Proc, RAM, actual hardware
- Add/Remove Programs information
- File information
- Can be configured to actually collect a copy of a file. (be VERY careful!)
I have talked to numerous clients who are looking at Software Inventory to try to gather data about what software is installed…which it does not gather at all. My pet peeve is not with the way that the system is designed…I think it is a very good design. My issue is with the name. Software inventory is NOT an inventory of software. It is an inventory of FILES. A much better name would be to call it what it actually is…File Inventory.
I have seen very few companies with a real need for information on specific files. Most are simply wanting to know what software is installed on which machines…which Hardware Inventory provides. Some valid uses that I have seen include:
- Locating PST files in an effort to get rid of them.
- Locating password dump files. (Company had experienced internal espionage issues.)
The key to the valid uses of Software inventory is that they had absolutely nothing to do with installed software. They were looking for files.
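To make the point concrete…what Software (really, File) Inventory does is conceptually just a filtered walk of the file system. A minimal sketch of the PST-hunting use case, roughly what an inventory rule scoped to `*.pst` would gather:

```python
# Walk a directory tree and record files matching an extension - the
# essence of what "Software" (file) inventory collects. Illustrative only.
import os

def find_files(root, extension):
    """Return paths of all files under root with the given extension."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.lower().endswith(extension):
                matches.append(os.path.join(dirpath, name))
    return matches
```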
I came across this while helping a client yesterday. Good post by Steve Rachui with links to TONS of info to get you up to speed on Configuration Manager 2012.
Update: This bug has been fixed with the version of MDT 2012 Update 1 that is currently available for download. Update was released on 9/19/2012.
I came across a bug in MDT 2012 Update 1 today. This has been previously reported to the MDT team. They are able to reproduce it and are working on a solution. Here are the details:
- Configuration Manager 2012 installed.
- Configuration Manager 2012 Cumulative Update 1 applied.
- MDT 2012 Update 1 installed.
- MDT/Configuration Manager integration performed.
- Attempted to create an MDT Task Sequence which failed with the following error:
Microsoft.ConfigurationManagement.ManagementProvider.SmsConnectionException: Failed to validate property
The recommendation from the forums is to roll back to MDT 2012. I have confirmed that this does in fact allow you to create an MDT integrated task sequence.
TechNet Posts on this issue:
Very cool to see this today! Microsoft has released the initial offline help file for System Center 2012 Configuration Manager. But…better than that…it comes in three formats.
- a 13MB, 2000+ page PDF
- a 2MB, 2000+ page DOCX
- a CHM file…with an update utility!!!
If you install the ConfigMgr2012HelpUpdate.msi app, you will see the following on your Start Menu.
The first link is the help file…note that it is dated May 2012. The second is the update wizard…which presumably will keep the local copy up to date with the online version when a new offline copy is published. Very nice!
Anyone who works in enterprise IT (and with products such as Configuration Manager) needs to know how to install applications silently…without requiring user intervention. Recently I came across a web page that gives really good info on the various installation types (MSI / InstallShield / Wise / etc) and how to make them silent. It goes beyond the basics and gives background on how each of them work. The page hasn’t been updated in a while, but there is still some very good information there. This could be a good one to bookmark.
Over the weekend I got a final update on the Unknown Computer “bug” in Configuration Manager 2012 that I wrote about recently. This time the update came from John Vintzel who for those who don’t already know him is a Senior Program Manager on the Configuration Manager product team. Basic gist of the update is that they will evaluate a change in this behavior for a future release.
In my opinion (which based on conversations I’ve had I know is shared by many others), this is a necessary change. There should not be a requirement to delete an object when a system doesn’t even begin the task sequence…or when it fails early in the process. A few options for how this could be changed:
- Don’t create the Unknown object at all. (I’m guessing there is a reason behind why it exists though.)
- Create the object after the system becomes a manageable object. (probably the same as #1 though)
- Have logic built into the task sequence process that automatically removes the “Unknown” object from the database if the Task Sequence fails before the system becomes a manageable object.
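Just to illustrate what I mean by that third option, here is a rough pure-logic sketch. Bash is standing in for what would really be ConfigMgr server-side code — the function name and the file format are entirely made up for illustration:

```shell
# Hypothetical sketch (NOT real ConfigMgr tooling): if the task sequence
# fails before the client is installed, clean up the leftover "Unknown"
# record so the machine stays targetable as an unknown computer.
# Simulated database file format: one "name,mac" line per object.
cleanup_after_ts() {
  outcome="$1"   # e.g. "failed_preclient" or "succeeded"
  records="$2"   # path to the simulated object list
  if [ "$outcome" = "failed_preclient" ]; then
    grep -v '^Unknown,' "$records" > "$records.tmp"
    mv "$records.tmp" "$records"
  fi
}

# demo: a failed-before-client run leaves only the real, manageable object
printf 'Unknown,00:11:22:33:44:55\nPC001,AA:BB:CC:DD:EE:FF\n' > /tmp/objects.csv
cleanup_after_ts failed_preclient /tmp/objects.csv
cat /tmp/objects.csv
# prints: PC001,AA:BB:CC:DD:EE:FF
```

The point of the sketch: the cleanup only fires when the failure happens before the system ever became manageable, which is exactly the window where the “Unknown” object causes trouble.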
I have had a few discussions over the years about whether a Configuration Manager installation should include SQL “on box” or “remote”. The answer is generally “it depends”. This blog post is not going to dig into all of the reasons why you would choose either local or remote SQL…it is designed to highlight one particular security concern with the remote SQL option. Let’s think through several of the underlying components that are necessary for remote SQL to take place along with a few very common scenarios when this is the case.
- Generally a company will choose remote SQL because they want to have a beefy SQL box that is managed by their DBA team. This SQL box will commonly house several SQL databases…not just the Configuration Manager DB. Which means that any disruption on that SQL server has an impact on much more than just Configuration Manager.
- A requirement for remote SQL with Configuration Manager is that the Configuration Manager server’s computer account must be in the local admin group on the SQL server.
- Commonly there will be a number of Configuration Manager administrators that have admin rights on the Configuration Manager server.
- Commonly there will be a number of those same Configuration Manager administrators that do NOT have admin rights on the remote SQL server.
THAT is where the problem rears its head. Let’s connect all of the dots…
- Joe Admin is an admin on the Configuration Manager server…but is not an admin on the SQL server.
- The Configuration Manager server’s computer account is an admin on the SQL server.
- Joe Admin has read my article on how to run a command prompt as local system. Uh oh.
- Joe Admin uses psexec to run a command prompt (or SQL Management Studio…or regedit…or services.msc…or disk management…or whatever else) as local system on the Configuration Manager server.
- Joe Admin then connects that app (running in the “user” context of the Configuration Manager server’s computer account) to the SQL server.
- Joe Admin is now able to do anything that the Configuration Manager server’s computer account has rights to do…which is full Administrator rights…ON THE SQL SERVER!!!
- That security you thought you had…well, it didn’t work so well.
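To make the attack path concrete, here are the two commands in play. These are the standard Sysinternals psexec and SQL Server sqlcmd tools; SQLBOX01 is a made-up server name, and I’m showing the Windows commands as echoed strings so this sketch stands alone:

```shell
# The escalation path, spelled out. psexec -s runs the target as
# LOCAL SYSTEM (-i makes it interactive); sqlcmd -E uses Windows
# authentication, so the remote SQL box sees the connection as the
# ConfigMgr server's machine account (e.g. DOMAIN\CMSERVER$ - hypothetical).
step1='psexec -s -i cmd.exe'   # on the ConfigMgr server: command prompt as LOCAL SYSTEM
step2='sqlcmd -S SQLBOX01 -E'  # from that prompt: connect to SQL as the machine account
printf '%s\n' "$step1" "$step2"
```

Two commands. That is the entire distance between “admin on the ConfigMgr box” and “admin on the SQL box”.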
Is there anything to keep Joe Admin from (either accidentally or maliciously):
- Stopping services?
- Deleting files?
- Rebooting the server?
- Jacking with the registry?
- Installing (either good or bad) software?
- Copying data off of the SQL server?
- etc (I think you get the picture.)
Now…for some people that doesn’t matter. In many smaller installations the same team is managing Configuration Manager and SQL. However…if you are that small…why take on the extra complexity of the remote SQL scenario?
For others it matters big time! I’ve had conversations with customers who cringe at the very idea that some random Configuration Manager admin could possibly gain full rights to the SQL server that other business critical databases are stored on.
Just a quick update on the potential bug that I reported a couple of weeks ago. I’ve had a few back-and-forth exchanges via Connect about this issue, and it is being called “by design”. They asked me how I would like for this to work and at what point I would like for the machine to become “known”. Here is my response:
Thinking through the whole scenario…it would be best if the computer is seen as "known" AFTER it becomes a manageable system (i.e. after the Configuration Manager client is installed). Until that time, it is not a system that can be managed…it doesn’t even have an operating system until just before the client install step in the task sequence.
At minimum, I would not expect the computer to be "known" until after the task sequence successfully started. In the scenario I provided (task sequence erroring out at dependency check…which is VERY common), the task sequence has not begun…it is failing during the dependency check. The computer object that is created (named "Unknown") is not a manageable object. It is however an object that will block the computer from being able to run a task sequence that would allow it to become manageable unless action is taken to remove it from the console.
The final response back from Microsoft via Connect was that this would be submitted to the Product Group as a Design Change Request.
This will be a very welcome change if it is implemented. Until then, be aware of the issue and know how to fix it when you run into it in your environment.
I just posted an update to this issue. It has been submitted as a Design Change Request to the System Center Product Group.
Today I ran into what I believe to be a bug in the RTM of Configuration Manager 2012. (I have replicated the issue below multiple times in both the RC and RTM.) I’m submitting it on Connect and will update this post if I hear anything back from the product team. BTW…I have mixed feelings writing this post. On one hand it’s exciting to find a bug in a released product (Geek Nirvana). On the other hand, Configuration Manager 2012 is a very solid product that I’m very excited about…I don’t want to make it look bad. Anyway…
I was testing an OSD proof of concept at a client this morning. This is a Configuration Manager 2012 POC and we were deploying Windows 7 32bit over PXE to an HP desktop. I had the following in place:
- OSD has been working fine.
- PXE booting is working without problems.
- I have previously deployed the Win7 image to a different hardware model without issues.
- The Task Sequence is deployed to a Collection that has “All Unknown Computers” as members via an “Include Rule”
In this instance we needed to deploy to a new model. I imported the drivers into Configuration Manager and added a new “Apply Driver Package” step into the Task Sequence. I forgot to add the new driver package to a Distribution Point…so when I kicked off the new bare metal deployment to this unknown computer, it naturally failed at the “resolving selected task sequence dependencies” check. I quickly realized what I had overlooked and added the driver package to the DP (and ensured it was source version 2…I was surprised to see that this is STILL an issue). When I attempted to PXE boot the computer again (the unknown computer that had JUST run the task sequence as an unknown) it failed with the “abortpxe.com” error message that typically means that there is no Task Sequence deployment applicable to this computer.
After doing some troubleshooting, I found the following issue…
This computer object has the MAC address and BIOS ID of the previously unknown computer…except that it is now a Known computer…not an Unknown computer…although the System Resource “Unknown Computer” property is set to “1”.
So…my deployment to “All Unknown” computers now fails. This is easy to resolve…simply delete the computer object named “Unknown” and restart the PXE process. But…at best this is an unexpected and undesirable result.
I was able to easily replicate this issue. Here are the steps:
- Add a package to your Task Sequence that has not been distributed to a Distribution Point
- Deploy the Task sequence to a collection that includes “All Unknown Computers”
- PXE boot a computer that is unknown to Configuration Manager.
- Start the task sequence
- The Task Sequence fails at the “resolving selected task sequence dependencies” check because of the package in step #1
- Find the package that isn’t on a DP and distribute content to the DP (or simply remove it from the Task Sequence).
- Attempt to PXE boot the client again and you will get the “abortpxe.com” message. “TFTP Download: smsboot\x64\abortpxe.com. PXE Boot aborted. Booting to next device…”
- In “All Systems” is a computer object named “Unknown” that has the MAC address of the system that was previously unknown. Because it is in the database, it is now a “known” computer…so deployments to “Unknown Computers” won’t pick up this computer any more.
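The behavior in that last bullet boils down to a simple membership test. Here is a pure-logic sketch — bash, with a text file standing in for the site database, and the function name entirely my own invention:

```shell
# Illustrative only: once ANY record carries the machine's MAC, deployments
# targeted at unknown computers no longer apply to it - even if that record
# is just the failed "Unknown" placeholder object.
# Simulated database file format: one "name,mac" line per object.
is_unknown_to_cm() {
  mac="$1"; db="$2"
  ! grep -q ",${mac}\$" "$db"
}

# demo: the leftover "Unknown" record makes the machine "known"
printf 'Unknown,00:11:22:33:44:55\n' > /tmp/allsystems.csv
if is_unknown_to_cm '00:11:22:33:44:55' /tmp/allsystems.csv; then
  echo 'would PXE boot into the task sequence'
else
  echo 'abortpxe.com - no applicable deployment'
fi
# prints: abortpxe.com - no applicable deployment
```

That is why simply redistributing the missing package isn’t enough: the placeholder record itself has to go before the machine is treated as unknown again.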
Options to resolve:
- Delete the computer object(s) named “Unknown” from All Systems
- Add a query rule to the Collection that grabs new computers where the System Resource “Unknown Computer” property = “1”
Note for Option 2: if the TS continues to fail, it will create a second/third/etc object with different resource IDs.
Over the last few years as a consultant I’ve had numerous engagements where clients wanted to customize the look/feel/settings in Windows 7. Different clients had different requirements around which customizations, whether it was permanent or a preference, etc. Below is a list of several customizations that I have helped clients perform. Many of these are found at various locations in forums, blog posts and Microsoft documentation. My goal is to gather these into one location so that it is easier for some of the more common (and for that matter some of the more obscure) customizations to be found. These are in no particular order. I will update this list from time to time. If you have any favorite customizations that you’d like to pass on, email me on my contact form and I’ll add them in. This post is REALLY long, so click the “Read More” link if you want to see the customizations.