
Problem: We started noticing hardware inventory failing right after we applied the 1706 update. The management point was getting flooded with error messages like the one below (Message ID 5416), and the retry outbox was filling up with thousands (25,000+ for us) of retry files. There were no changes to inventory settings and no issues with the configuration.mof file as far as we could tell. We also checked for replication and DP corruption issues, but nothing came up as a problem.

MP needs to reload the hardware inventory class mapping table when processing Hardware inventory. The MP hardware inventory manager cannot find a WMI class definition in the mapping table when processing a report. This should only happen if new definitions beyond those known to the site are added to the collected classes.

 

Possible cause: Inconsistent way the new definitions have been introduced.

Solution: Check that the mapping tables contain the information consistent with the hardware definition tables and that the definitions have been propagated properly.

Possible cause: Corruption of the data base.

Solution: Check the consistency of the data base.

Solution: The fix ended up being pretty easy, if frustrating: all we had to do was modify the hardware inventory classes in client settings (we added a class) and let the clients recompile inventory. After doing that, the retry box gradually went down as systems sent in inventory with the new settings, and inventory is now updating in the database.
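If you want to watch the backlog drain while clients resend inventory, a few lines of PowerShell can count the files in the retry folder. This is just a minimal sketch; the retry path is a placeholder, since the exact folder varies with your MP/site server installation, so point it at wherever your retry files are piling up.

# Assumption: replace the placeholder with the retry folder that is filling up in your environment
$retryPath = '<path to your retry outbox>'
while ($true) {
    '{0:u}  {1} files' -f (Get-Date), (Get-ChildItem -Path $retryPath -File).Count
    Start-Sleep -Seconds 300
}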

Problem: When initiating a client push from the console, some clients are not being installed. A closer look at CCM.log and the CCR.box inbox on the site performing the client push shows that the CCR file is never created for some systems.

Why: I haven't had a chance to dig deeper into why this occurs, as I ran into the issue in a production environment and can't recreate it in my development lab. My guess is that it's caused by some database information not replicating to the child sites, since client push is handled through the database in ConfigMgr 2012.

Solution: Delete the affected system records from the console, then force a rediscovery of the systems. This clears out all the old data in the DB and generates new records. Then run the client push again.
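If more than a handful of systems are affected, the same steps can be scripted with the ConfigMgr cmdlets. This is only a rough sketch under a few assumptions: the ConfigurationManager module loads from the console path, the site code is ABC (replace with yours), the affected names are in a text file, and the Remove-CMDevice/Install-CMClient parameters are worth confirming with Get-Help for your console version.

# Assumptions: ConfigurationManager module available, site code 'ABC', device names in a text file
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location 'ABC:'

$names = Get-Content 'C:\Temp\FailedPushClients.txt'
foreach ($name in $names) {
    # Delete the stale record so discovery creates a fresh one
    Get-CMDevice -Name $name | Remove-CMDevice -Force
}
# After discovery has re-created the records, push the client again
foreach ($name in $names) {
    Install-CMClient -DeviceName $name -SiteCode 'ABC'
}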

 

Problem: Users are getting errors creating new boot images in ConfigMgr.

[Screenshot: BootImage]

Error while importing Microsoft Deployment Toolkit Task Sequence.

Failed to read image property from the source WIM file due to error 8007007e

Microsoft.ConfigurationManagement.ManagementProvider.WqlQueryEngine.WqlQueryException: The SMS Provider reported an error. ---> System.Management.ManagementException: Generic failure
at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
at System.Management.ManagementObject.Put(PutOptions options)
at Microsoft.ConfigurationManagement.ManagementProvider.WqlQueryEngine.WqlResultObject.Put(ReportProgress progressReport)
--- End of inner exception stack trace ---
at Microsoft.ConfigurationManagement.ManagementProvider.WqlQueryEngine.WqlResultObject.Put(ReportProgress progressReport)
at Microsoft.ConfigurationManagement.ManagementProvider.WqlQueryEngine.WqlResultObject.Put()

 

Solution: Make sure the admin console used for OSD work is connected to a primary site instead of the CAS when creating boot images.

Move Content Library

If you need to move the content library because of a failed disk or lack of space, it's a simple task with the ConfigMgr Toolkit. There is a utility called ContentLibraryTransfer.exe that needs to be run from an administrative command prompt.

Location (your install directory may vary):

C:\Program Files (x86)\ConfigMgr 2012 Toolkit R2\ServerTools\ContentLibraryTransfer.exe

Syntax:

ContentLibraryTransfer.exe -SourceDrive <drive letter of source> -TargetDrive <drive letter of target>

Example:

ContentLibraryTransfer.exe -SourceDrive D -TargetDrive E

The reason I'm writing this post is that the utility is very basic. One issue I ran into was that it got stuck on "not enough disk space" errors when there was clearly enough space for the library. That's not really a problem if you are moving the library because of disk space, since the new drive will be larger, but I was moving it because of disk corruption. The utility only compares total drive size, not the actual size of the library, and my new disk was slightly smaller than the current one.

The workaround:

Shrink the existing volume using Disk Management so that it is smaller than the new disk.
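If you would rather script the shrink than click through Disk Management, something like the following works on Server 2012 or later. Treat it as a sketch: the drive letter and target size are assumptions, and you should check the supported minimum size before resizing.

# Assumption: the content library lives on D: and the new disk is roughly 500 GB
$supported = Get-PartitionSupportedSize -DriveLetter D
$supported.SizeMin / 1GB    # smallest the volume can go; make sure this fits on the new disk

# Shrink D: to 450 GB so it is smaller than the target drive
Resize-Partition -DriveLetter D -Size 450GB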

 

Also, if you have WSUS installed, you may have to move its content store too. Use WSUSUTIL.exe to move that content.

Location (your install directory may vary):

C:\Program Files\Update Services\Tools\WSUSUTIL.exe

Syntax:

wsusutil.exe movecontent <content path> <log file name and location>

Example:

wsusutil.exe movecontent F:\WSUS F:\WSUS\WSUS.log

 

This blog post will not cover the step-by-step instructions for doing a site recovery; please review the TechNet documentation for that: https://technet.microsoft.com/en-us/library/gg712697.aspx#BKMK_RecoverSite

This post will cover how to recover from a failed service pack upgrade. Upgrades can fail for many reasons; mine failed on the CAS because my remote desktop session was terminated in the middle of the SP1 for R2 install. My site setup is a CAS and two primaries, so my recovery process will be a little different than most: I can use the primary sites as reference sites, so there is no need to restore a database manually.

So which media do you use to recover the site? New SP1 media or the old R2 media?

Using the R2 media gives an error stating that a downgrade is not supported. Using the SP1 media will succeed, but as you'll see later, you'll run into issues.

First, you'll notice that when you open the console it is in read-only mode because the site server is performing recovery tasks.

I forgot to take a screenshot, but it looks the same as the maintenance mode error below.

[Screenshot: ConsoleReadOnly]

Read-only mode basically means something is not right with database replication. Check RCMCTRL.log; mine was filled with recovery failed errors.

On entry: recovery status for link [xxx, yyy , Configuration Data] is StartRecovery. Site status RecoveryFailed.

[Screenshot: rcmctrl]

I figured it just takes time for the replication to complete, so I let it go for a while, but still nothing changed.

Next, log on to SQL Server Management Studio, select the site database, open a new query window, and execute the spDiagDRS stored procedure.

Site Status will show RECOVERY_FAILED

[Screenshot: spDiagDRS]
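If you prefer to check this from PowerShell instead of SSMS, Invoke-Sqlcmd can run the same stored procedure. A minimal sketch, assuming the SQL PowerShell module is installed and using placeholder server and database names (CAS-SQL01, CM_CAS) that you would replace with your own; note that spDiagDRS returns several result sets, so the output is messier than in SSMS.

# Assumptions: SQLPS/SqlServer module installed, CAS DB server 'CAS-SQL01', database 'CM_CAS'
Invoke-Sqlcmd -ServerInstance 'CAS-SQL01' -Database 'CM_CAS' -Query 'EXEC spDiagDRS' |
    Format-Table -AutoSize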

I knew this was happening because SP1 makes a lot of changes to the database, and the most likely issue was that the database entries from the primaries (not yet on SP1; remember, upgrades have to be done top down) no longer matched what the CAS was looking for.

Basically, I was stuck: I couldn't install the older version of ConfigMgr and recover because downgrades are not allowed, and the newer version didn't recognize the older database. I also couldn't uninstall the CAS and start over, because doing that requires uninstalling the working primary sites first.

 

Now what?

My choices: do a full system state restore of the CAS, cross my fingers that it works, and then recover the site. I didn't want to go that route, as it takes forever to get done at my organization and doesn't always work as planned.

The only choice left was to hack away at the registry. (I called Microsoft support before doing this to make sure they were OK with the approach, and it's my dev environment, so I wasn't too concerned if it failed.)

Find: HKLM\Software\Microsoft\SMS

Rename the key (delete it if you are braver than me): SMS to SMS_Old

[Screenshot: Reg_old]

Uninstall the console if a newer version of the console was installed.

Detach the site database from SQL Server.

[Screenshot: DB_Detach]

Select Drop Connections and click OK.

[Screenshot: DP_Drop]

Move, rename, or delete the database .mdf and log .ldf files. I chose to move the files just in case I needed to bring the existing DB back.
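For reference, the manual cleanup above can also be done from an elevated PowerShell prompt. This is only a sketch of what I did by hand: the SQL server name, database name (CM_CAS), and file paths are placeholders to substitute with your own, and as mentioned, clear this kind of surgery with Microsoft support first.

# 1. Rename the SMS registry key (delete it if you're braver than me)
Rename-Item -Path 'HKLM:\SOFTWARE\Microsoft\SMS' -NewName 'SMS_Old'

# 2. Drop connections and detach the site database (assumption: DB is named CM_CAS)
Invoke-Sqlcmd -ServerInstance 'CAS-SQL01' -Query @"
ALTER DATABASE [CM_CAS] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
EXEC master.dbo.sp_detach_db @dbname = N'CM_CAS';
"@

# 3. Move the .mdf/.ldf files out of the way instead of deleting them
Move-Item 'E:\SQL\Data\CM_CAS*.mdf','E:\SQL\Logs\CM_CAS*.ldf' -Destination 'E:\SQL\Old\'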

With the old installation information cleaned up, do a site recovery from the R2 installation media and follow all of the post-installation tasks. This time it will let you do the recovery without downgrade errors, and the database will replicate successfully.

[Screenshot: spDiagDRS_Good]

Problem: You receive "software update point is busy" errors in Software Center, HTTP ERROR 503 in the ConfigMgr client logs or in component status messages in the console, or when browsing to the SUP website manually:

http://SUP.CORP.COM:8530/SimpleAuthWebService/SimpleAuth.asmx

Message ID: 6703
WSUS Synchronization failed.
Message: The request failed with HTTP status 503: Service Unavailable.
Source: Microsoft.UpdateServices.Administration.AdminProxy.CreateUpdateServer.

Or

HTTP Error 503 The service is unavailable

Solution:

Check your IIS application pools and see whether WsusPool is in a stopped state. If it is stopped, the application pool is running out of memory and simply stopping instead of restarting. The Private Memory Limit (KB) for the application pool is probably set to the default value of 1843200 KB. Increase it to roughly 4 GB by clicking Advanced Settings, changing the value to 4000000 KB, and restarting the app pool. Keep an eye on this in larger environments, as you may need to go even higher if the pool keeps stopping.

[Screenshot: IISAppPool1]

[Screenshot: IISAppPool]
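If you would rather script the change (handy when you have several SUPs), the WebAdministration module can set the same value. A sketch, assuming the pool is named WsusPool and 4,000,000 KB is the limit you settled on:

# Raise the WsusPool private memory limit to ~4 GB and recycle the pool
Import-Module WebAdministration
Set-ItemProperty -Path 'IIS:\AppPools\WsusPool' -Name 'recycling.periodicRestart.privateMemory' -Value 4000000
Restart-WebAppPool -Name 'WsusPool'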

I never really ran into this issue in the smaller environments I've worked in; then again, I never had only 50 GB to work with on an OS partition either, which may explain why I hadn't noticed it before (that, and I'm not an IIS expert). I suggest you take a look on your servers just to be safe.

I had 25 GB (!) of IIS logs on my management point. My environment generates about 500 MB of logs daily, and after parsing the log files, there really isn't much info I need from them, at least not for long.

[Screenshot: IISLog]

I'm enabling folder compression for the log files and setting up a cleanup task that deletes files older than a few weeks, as described on the IIS.net website: http://www.iis.net/learn/manage/provisioning-and-managing-iis/managing-iis-log-file-storage
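The cleanup itself can be a small scheduled PowerShell script. A minimal sketch, assuming the default IIS log location and a three-week retention window; adjust both to taste:

# Delete IIS logs older than 21 days (assumes the default log path; change if yours differs)
$logRoot = 'C:\inetpub\logs\LogFiles'
Get-ChildItem -Path $logRoot -Recurse -Filter '*.log' |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-21) } |
    Remove-Item -Force

# Optional: turn on NTFS compression for the log folder
compact.exe /C /S:$logRoot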

Here is a PowerShell script I've put together for adding distribution points, distribution point groups, boundaries, and boundary groups from a CSV file. Also attached is a sample CSV file to help with formatting. The script does not account for required fields, invalid data, or bad formatting, so refer to TechNet for cmdlet help:

https://technet.microsoft.com/en-us/library/jj821831.aspx

The script was tested on SCCM 2012 R2 and PowerShell 4.0.

You’ll have to go to Microsoft to get the files since I cannot upload them here.

https://gallery.technet.microsoft.com/systemcenter/Bulk-Add-Distirbution-f0b4ed71
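To give an idea of what the script does, here is a stripped-down sketch of just the boundary and boundary group portion. The CSV column names (Name, Type, Value, BoundaryGroup) are my own convention from the sample file, it assumes the ConfigurationManager module is loaded and you are connected to the site drive, and the exact parameter names (DisplayName/BoundaryType) can differ slightly between ConfigMgr versions, so check Get-Help before relying on it.

# Minimal sketch: create boundaries from a CSV and add them to boundary groups
# Assumed CSV columns: Name, Type (IPSubnet/ADSite/IPRange), Value, BoundaryGroup
$rows = Import-Csv 'C:\Temp\Boundaries.csv'
foreach ($row in $rows) {
    New-CMBoundary -DisplayName $row.Name -BoundaryType $row.Type -Value $row.Value
    if (-not (Get-CMBoundaryGroup -Name $row.BoundaryGroup)) {
        New-CMBoundaryGroup -Name $row.BoundaryGroup
    }
    Add-CMBoundaryToGroup -BoundaryName $row.Name -BoundaryGroupName $row.BoundaryGroup
}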

 

Problem: You can't delete the ConfigMgr_OfflineImageServicing folder after a failed offline image servicing run, and errors in OfflineServicingMgr.log prevent all scheduled offline servicing.

[Screenshot: OfflineServicing]

Failed to remove previously existing staging folder E:\ConfigMgr_OfflineImageServicing\DCS00120, GLE = 5

Manually deleting the folder from Explorer gives errors stating you don't have administrative permissions, and taking ownership of the content doesn't help either.

Solution: Reboot the server, then run DISM /Cleanup-Wim. Once that completes, you'll be able to delete the folder manually.
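For reference, the cleanup after the reboot looks like this from an elevated PowerShell prompt (the folder path is the one from my log; yours will differ):

# Clean up orphaned WIM mount points left behind by the failed servicing run
Dism.exe /Cleanup-Wim

# The staging folder can then be removed normally
Remove-Item -Path 'E:\ConfigMgr_OfflineImageServicing\DCS00120' -Recurse -Force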

Problem:

This issue doesn't always occur, and I'm not sure why it happens, but occasionally the content location path for an application fails to update in the database when it is changed from the console. I think it happens when an administrative user creates an application and then modifies it or changes revisions too quickly, but I can't be sure; it has never happened to me, I'm just the lucky one who gets to figure out how to fix these things.

We created an application with a typo in the content location. Trying to distribute it to a DP fails with errors 2306, 2361, and 2302.

 

[Screenshot: DistError]

 

We then changed the content location in the console and updated content, but Distribution Manager still fails with the same errors.

 

[Screenshot: Path]

 

The errors still show the same typo in the content location.

[Screenshot: Error2]

 

Launch PowerShell and run Get-CMPackage -Id "dcs00104". It still shows the old content location via PowerShell, but the new location via the console.

[Screenshot: PSGet]

 

 

Solution:

Run Set-CMPackage -Id "dcs00104" -Path "\\mdt-srv-017\apps\Microsoft\App-V Client 5.0 SP2\" in PowerShell.

Run Get-CMPackage -Id "dcs00104" and verify the path is set correctly.
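Putting it together, the whole check-and-fix is just a few lines (the package ID and UNC path are from my example above; swap in your own):

# Show what the provider actually has stored for the package source path
Get-CMPackage -Id 'dcs00104' | Select-Object Name, PkgSourcePath

# Force the path back to the correct location, then re-check
Set-CMPackage -Id 'dcs00104' -Path '\\mdt-srv-017\apps\Microsoft\App-V Client 5.0 SP2\'
Get-CMPackage -Id 'dcs00104' | Select-Object Name, PkgSourcePath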

 

[Screenshot: PSSet]

 

Now both the console and PowerShell show the correct path, and the content distributes successfully.

 

[Screenshot: DistGood]