.NET Framework 4.x Updated MOF and Reports

This is an update to this older blog: http://mnscug.org/blogs/sherry-kissinger/422-dot-net-frameworks-mof-edit

Nash Pherson, Enterprise Client Management MVP, pointed out that for versions 4.5.x and higher, Microsoft recommends using the DWORD registry value called "Release" to better pinpoint what version of .NET is installed. That's because "BuildNumber" in the registry will say something like "4.5.51209"--but what it MEANS is that's version 4.5.2 (don't ask me why, I don't get it either).

Unfortunately, "Release" also isn't in nice, plain English. I couldn't find anything that makes using BuildNumber any more or less useful than using the "Release" number. But if you want to do exactly what Microsoft tells you to use, attached are updated mof edits for reporting on .NET versions. The only thing added is "Release", which is only applicable to .NET 4.5 and higher (well, up to 4.6.1 as far as I can tell; maybe it'll still be there in newer versions as those are released, but for now, that's all I can see).
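If you want to sanity-check a single machine by hand before bothering with the mof edits, here is a minimal PowerShell sketch that reads the Release value and translates it, using the mapping from the MSDN article linked in the geek notes below. The registry path and mapping values come from that article; anything newer than 4.6.1 didn't exist when this was written, so treat the table as a point-in-time snapshot.

# Read the .NET 4.5+ "Release" DWORD and translate it to a friendly version.
# Mapping values are per the MSDN article linked in the geek notes below.
$ndp = Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -ErrorAction SilentlyContinue
if ($ndp -and $ndp.Release) {
    $version = switch ($ndp.Release) {
        378389  { '4.5' }
        378675  { '4.5.1' }   # Windows 8.1 / Server 2012 R2
        378758  { '4.5.1' }   # other operating systems
        379893  { '4.5.2' }
        393295  { '4.6' }     # Windows 10
        393297  { '4.6' }     # other operating systems
        394254  { '4.6.1' }   # Windows 10 November Update
        394271  { '4.6.1' }   # other operating systems
        default { "Unknown (Release $($ndp.Release))" }
    }
    "Release $($ndp.Release) = .NET Framework $version"
}
else {
    'No .NET Framework 4.5 or later detected.'
}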

If you already have the DotNetFramework edits, you replace your existing snippet for DotNetFrameworks in your configuration.mof with the configuration.mof edit in --> This Zip File<--   If you've never edited your \inboxes\clifiles.src\hinv\configuration.mof for dotNetFrameworks yet, you will add that attached snippet to the bottom of that file. Monitor your 'dataldr.log' to confirm all is well.

Once configuration.mof is edited, you take the attached "to-be-imported-dot-net.mof" and in your CM console, Administration, Client Settings, right-click on "Default Client Settings", Properties, Hardware Inventory, "Set Classes..." then Import that to-be-imported-dot-net.mof file. If you already have one from previously, not to worry. It'll just accept the new one and modify your tables and views. Just monitor your dataldr.log to confirm all is well.
Then, of course, it's the typical waiting that one does in ConfigMgr. Just wait a while; how long depends upon how often you have hardware inventory configured to run, the number of clients you have, and other factors unique to your environment. But in a couple hours or by the next day, try running one of the reports in the attached .zip file.

Regardless of whether you have the "old" DotNetFrameworks mof edit (which doesn't have Release) or are using this new one, attached in the .zip file are also some sample reports. With versions of .NET 4.0-4.5.1 no longer under support, your organization may be under heightened pressure to find and upgrade anyone with those older versions to the supported versions. For example, below is what a report might look like, using 2 of the SQL queries attached. The top one is the results of 'SQLtoCountDotNetVersions', and the bottom one is 'SQLToShowVersionsInYourDatabase' -- what values you have in your database will vary from company to company.

 dotnetSampleReport

Geek notes: the "how to tell what .NET is installed" info came from two different Microsoft articles.
As I write this blog, this covers .NET 1.0 through .NET 4.6.1.
For v1-4: http://social.technet.microsoft.com/wiki/contents/articles/15601.how-to-determine-the-net-framework-installed-versions.aspx
For v4-4.6.1: https://msdn.microsoft.com/en-us/library/hh925568(v=vs.110).aspx

ADR RBA YES

When will Microsoft ever get Role Based Access (RBA) working for Automatic Deployment Rules (ADRs)? I need to know that a server admin can use an ADR to set up his patches and that a workstation admin can't go in and edit the server ADRs. And vice versa.

Well, RBA is there. Already. Right now. At least in CM12 R2 it is. Was it always there? I could swear that when RTM came out, this wasn't possible. But I verified this works yesterday. What isn't there is the option to right-click an ADR and assign the scope, but that's really not important.

The server admin can see the workstation admin's ADRs, but all the properties are grayed out and no changes can be made. The guts of this (as with all RBA) revolves around the collections each admin has access to. When a server admin creates an ADR which targets his collection that a workstation admin doesn't have access to, RBA kicks in and protects the admin.

So what's not to like about ADRs now?

Well, other than wishing they'd use saved searches instead of filters (which is another DCR submitted long ago), not much. I have just one thing driving me nuts before I let the admins know that they can start using ADRs now. Packages.

You can't make an ADR without filling out the package prompts in the wizard. I'd have to let these admins also make patch packages on their own. And I can even grant that specific feature in our SUM role. So why could this be bad, especially if our single-instance store in the Content Library is saving us space?

Well for one, it isn't saving us space on the source files (and for that I really need to move that share to a dedupe volume). But the other concern is that one admin could now download a patch everyone is using and later just go delete it and break a lot of deployments. Sure, I could go fix that by downloading the patch myself quickly, but that could leave clients sitting around for a day before they retry. Maybe I'm overthinking this?

App-V 5.0 and UeV 2.0 Presentation Follow Up

As promised, here is the slide deck for the App-V and UeV presentations at the last MNSCUG meeting: managing UeV with ConfigMgr 2012 and automating the creation of UeV Configuration Items and Baseline Compliance items using PowerShell. The script should be saved and run from the same location as the System Center 2012 Configuration Pack for Microsoft User Experience Virtualization 2.0.

The script will create and import a CI for the UeV Agent settings, and then it will create and import a CI for each template file and add each to a Baseline item. You can then use the CM12 console to add the new Agent Settings CI that was created to the baseline, or create a new baseline for just the Agent settings. Then deploy the baseline to whatever collection you want to manage UeV settings for. You can run this script each time a template or agent setting changes; there is no need to delete the existing CI/Baseline items. If there are changes, the items will simply get a new revision as opposed to creating a new item.

If you have any questions on how it works, etc. feel free to contact me @FredBainbridge or fred [at] mnscug.org

APP-V5-MNSCUGJune2014.pptx

UeV20-MNSCUGJune2014.pptx

UEVbaselineGenerator.zip

 

Apps/Packages stuck in “In progress” state when distributing to DPs

I spent almost a week troubleshooting this PIA app and a package that got stuck in “In progress” state trying to deploy to the DPs. The other 1000 apps and packages we have are fine, except for these two. Oh, and Google-fu and/or Tae-BING searches weren’t good or helpful enough to fix this issue!  Of course, recreating these would have been the easier way.   But why are they in that state and how could we get this fixed? I just want to find a way to reset these objects so I can deploy them to the DPs successfully and know what to do when/if it happens again.

So I looked at all the basic app/package properties to see if everything was set up properly; checked the source path, DT settings, distribution settings, content settings, etc.   I even tried changing the source path to one I knew for sure was valid and that the CAS could get to. The CAS is able to grab the content from the source location, pack it into a PCK, replicate the app/package settings to the child primary servers (via DRS), and also send it to them via sender. But when it gets to the primary servers, despooler was blowing chunkies…   It wouldn’t process the .sni file properly, even though I see the TRY file that came with it in the despooler. It kept coming up with Error=12! Tried removing the DPs from the app, resetting the pkgstatus/SourceVersion=0, and Status=2, then re-adding the DPs back, no dice.  This app just kept falling into Retry state! Ugh!

Despooler.log on the primary servers

 

Received package CAS008DB version 6. Compressed file -  D:\SMSPKG\CAS008DB.PCK.6 as D:\SMS\inboxes\despoolr.box\receive\PKGfooh3.TRY

Instruction D:\SMS\inboxes\despoolr.box\receive\ds_2ivq1.sni won't be processed till 6/20/2014 1:42:00 PM Central Daylight Time

Instruction D:\SMS\inboxes\despoolr.box\receive\ds_ijdl4.sni won't be processed till 6/20/2014 1:13:50 PM Central Daylight Time

Instruction D:\SMS\inboxes\despoolr.box\receive\ds_oko21.sni won't be processed till 6/20/2014 1:31:10 PM Central Daylight Time

Instruction D:\SMS\inboxes\despoolr.box\receive\ds_vy0p3.sni won't be processed till 6/20/2014 1:36:40 PM Central Daylight Time

Instruction D:\SMS\inboxes\despoolr.box\receive\ds_xzqrp.sni won't be processed till 6/20/2014 2:08:40 PM Central Daylight Time

Waiting for ready instruction file....

Old package storedUNC path is .

This package[CAS008DB]'s information hasn't arrived yet for this version [6]. Retry later...

Created retry instruction for job 00005921

Despooler failed to execute the instruction, error code = 12

So I started digging, compared a successful app vs this bad one, and was surprised to find this in the CAS’s pkgstatus SQL view. The app that deployed successfully to the DPs has only one row per primary, one for the CAS, and one per DP it’s deployed to, starting with “["Display=\\DP1.jeff.com\"]MSWNET:…”.   This bad application happens to have extra rows per primary server with the CAS’s FQDN in the PkgServer column!   And if you look closely below, their “Update times” were old, with different, older PKIDs (I assume PKID increments).

StuckApps

                       

Time to try to fix this!

  1. I made certain all the DPs were removed from this bad application “CAS008DB”.
  2. I then proceeded to delete these extra rows by executing the query below on the CAS and on the primary servers’ DBs.   NOTE: MS doesn’t support you modifying the DB, so be careful and make sure you have a valid backup before doing so!

DELETE FROM pkgstatus

where id = 'CAS008DB' and PkgServer = 'CASSERVER.jeff.com' and sitecode <> 'CAS'

  3. Then I reset the pkgstatus of this application. Executed this on the CAS only, targeting just one of the primary servers, PR41.jeff.com, just to see if we could get past the despooler process successfully so it could get copied to its DPs.

update pkgstatus set Status = 2 where id = 'CAS008DB' and pkgServer = 'PR41.jeff.com'

update pkgstatus set SourceVersion = 0 where id = 'CAS008DB' and pkgServer = 'PR41.jeff.com'

  4. Then deployed the app to the DPs that are on the PR41.jeff.com primary server.
  5. Voilà! The bad application got processed by the despooler and deployed to the targeted DPs successfully!

Verifying signature for instruction D:\SMS\inboxes\despoolr.box\receive\ds_c50g8.nil of type MICROSOFT|SMS|MINIJOBINSTRUCTION|PACKAGE

Signature checked out OK for instruction coming from site CAS, proceed with the instruction execution.

Executing instruction of type MICROSOFT|SMS|MINIJOBINSTRUCTION|PACKAGE

Package CAS008DB is currently being processed, sleep for 10 seconds

Waiting for the next instruction....

Waiting for ready instruction file....

Old package storedUNC path is .

Use drive D for storing the compressed package.

No branch cache registry entries found.

Uncompressing D:\SMSPKG\CAS008DB.PCK to D:\SMSPKG\CAS008DB.PCK.temp

Content Library: O:\SCCMContentLib

Extracting from D:\SMSPKG\CAS008DB.PCK.temp

Extracting package CAS008DB

Extracting content Content_122fcdf2-f4c6-43d0-a0fe-61caf6f67a23.1

Package CAS008DB (version 0) exists in the distribution source, save the newer version (version 7).

Stored Package CAS008DB. Stored Package Version = 7

STATMSG: ID=4400 SEV=I LEV=M SOURCE="SMS Server" COMP="SMS_DESPOOLER" SYS=PR41.jeff.com SITE=P41 PID=2924 TID=8196 GMTDATE=Wed Jun 25 21:24:37.393 2014 ISTR0="CAS008DB" ISTR1="\\PR41.jeff.com\D$\SMSPKG\CAS008DB.PCK" ISTR2="" ISTR3="" ISTR4="" ISTR5="" ISTR6="" ISTR7="" ISTR8="" ISTR9="" NUMATTRS=1 AID0=400 AVAL0="CAS008DB"

Despooler successfully executed one instruction.

How did this happen? I still have no clue at the moment; I’m still digging for the root cause.   I can only assume we had a SAN glitch or CM crash at the same time this app was being processed or created. The best part is, now I know why it wouldn’t deploy to the DPs, and I know what to look for and what to do when/if this happens again.

April 2014 MNSCUG Meeting

Our next MN System Center User Group meeting will be MONDAY, April 21st from 4:30pm - 7:15pm at Microsoft's Edina HQ building - 6th Floor, Lake of the Woods.

 

This month's meeting is sponsored by NowMicro, the premier IT provider of cutting-edge technology products, solutions, and services.  They also have the honor of having the newest ConfigMgr MVP among their ranks.  Congratulations Nash Pherson!  

 

For the main event we are honored and excited to welcome ConfigMgr "Super Man" Kent Agerlund. Kent is a Microsoft System Center 2012 ConfigMgr MVP who works as a senior System Center architect, trainer, event speaker, and author. For the past four years, he has been on the road with his Mastering System Center 2012 Configuration Manager class.  He will be presenting several real world examples of mobile device management across many different mobile platforms.  This is a can't-miss for anyone interested in how to start tackling mobile devices in the enterprise.  Kent is in town to teach his famous ConfigMgr 2012 course; for those who are unable to attend his training class, this is a great opportunity to learn from one of the best.

 

There will be a round table Q&A session after Kent's presentation to handle any general issues you may be experiencing in your environment.

 

nowmicro

 

 

Registration is free to the public, but please be sure to sign-up if you are attending so we can ensure everyone has food.

 


 

See you there!

August 2014 MNSCUG Meeting

August is already upon us!  Our next user group meeting will be Wednesday, August 20th at 4:30 - 7:00pm at Microsoft's Edina HQ building - 6th Floor, Lake of the Woods and is going to be Operations Manager oriented!  Operations Manager is always popular so please register in advance so we can get a proper count for food and drink.

 

Jonathan Almquist will be presenting on creating an application service model in Operations Manager using Visual Studio Authoring Extensions.  Also presenting will be Nathan Foreman from C.H. Robinson.  He will be covering how to integrate a CMDB into your Operations Manager environment and using CI Attributes to automatically control monitoring.  Should be really cool stuff.

 

Concurrency

Food and beverage will be provided again by Concurrency!  A special thanks to Concurrency for becoming our newest Bronze sponsor! Concurrency is also bronze for MMS which shows their commitment to the community.

 

Registration is free to the public, but please be sure to sign-up if you are attending so we can ensure everyone has enough food and drink.


 

See you there!

August 2014 MNSCUG Meeting - Notes

The August meeting was a great success!  Both presentations were fantastic.  A big thank you to Jonathan Almquist and Nathan Foreman!  Concurrency provided stellar food and drink as well.  Great times were had by all.  

Here are the details and examples from Nathan's presentation on integrating SCOM with a CMDB.

As a reminder, elections for MNSCUG board members are going to be held at the October meeting.  You must be present to run or vote.  Get involved, it's well worth it.  

Also, MMS is coming up!  Have you registered yet?  You should!

mms2014

 

Be CAS I told you not to! More CAS pain points with CM12

Well I told you not to install a CAS, didn’t I?  But of course I had no choice since my team supports the servers that manage 365K clients.  Well, we had one heck of a week last month.

It started with a phone call from Sherry Kissinger waking me up to say we had a replication issue.  She said she dropped a pub file to get the Hardware_Inventory_7 replication group to sync.  My 1st thought was that our Alabama primary site was holding onto files again with SEP (Symantec Endpoint Protection) not honoring exclusions.  We had files stuck in despoolr.box the prior week and entries in the statesys.log showing that it was having problems trying to open files.  I told her to uninstall SEP and I’d go into my office and logon.

So I logged on and started looking around.  Our CM12 R2 environment consists of a CAS & 3 primary sites.  The monitoring node in the console was showing all sorts of site data replication groups in degraded status.  That’s because the CAS was in maintenance mode and not active (easily seen with exec spDiagDRS).  Primary sites hold on to their site data until they see the CAS active.

By the way, if you own a CAS and run into DRS issues, you’ll become well acquainted with spDiagDRS.  It’s a harmless query you can run right now to look at how your DRS is doing.
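If you'd rather not open SQL Management Studio, a quick sketch of running it from PowerShell follows. The server and database names here are placeholders for your own CAS SQL instance and CM database:

# Run spDiagDRS against the CAS database and dump the result sets to the screen.
# CASSQL01 and CM_CAS are placeholders -- substitute your own SQL instance and database name.
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance 'CASSQL01' -Database 'CM_CAS' -Query 'EXEC spDiagDRS' |
    Format-Table -AutoSize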

What Sherry was seeing there was normal.  It would look that way until the primary site caught up with Hardware_Inventory_7.  But that was only the beginning of the story.

When you run RLA (Replication Link Analyzer) on a primary against site data (or drop a pub file as a last resort), what happens is the primary dumps all the tables in that replication group into bcp files and compresses them into a cab file in the rcm.box.  It copies that cab file up via the sender to the despoolr.box\receive folder on the CAS, which decompresses it back to a folder in the rcm.box on the CAS.  Then RCM takes over and parses each bcp file by grabbing 5000 rows at a time and merging them into its table in the CM database.

RCM = Replication Configuration Manager

RCM activity can be viewed in the rcmctrl.log, and viewing that on the Alabama primary showed that it failed with “an internal SQL error”.   Huh?  The SQL logs were clear.  A colleague noticed that the D drive on the primary site was full.  With a database as large as ours, the 100GB we gave for the CM12 folders and inboxes wasn’t enough for the 120GB of data for that one replication group.

We quickly got another 200GB of SAN added to those D drives and another 600GB added to the CAS’s D drive (so that it could fit up to 200GB of data per site should all 3 sites need to send up recovery files at once).

Then we restarted the RCM thread on the primary site and this time it had the room to finish.  But it took forever to compress that data into a cab file to send to the CAS, and it took a long time for the CAS to decompress it.  Then RCM does this excruciatingly slow slicing of 5000 rows at a time to merge into its database.  That took all night to run.  (I assume it does a few tables at a time because, if you were doing a restore for a CAS, all client data tables would be sent up at once.)

But the story gets worse.

After working on this all night, we hit an error that stopped the entire process.

Error: Failed to BCP in.

Error: Exception message: ['ALTER TABLE SWITCH' statement failed. The table 'CM_CAS.dbo.INSTALLED_SOFTWARE_DATA' is partitioned while index '_sde_ARPDisplayName' is not partitioned.]

Error: Failed to apply BCP for all articles in publication Hardware_Inventory_7.

Will try to apply BCP files again on next run

What the deuce?  Well that one was our fault for making our own index.  Quick fix: remove the index.  OK.  GO!  C’mon, do something!  But nothing happened.  We waited for an hour and it was clear that it was just not going to start again.  So we ran RLA and started all over again.

All of this takes time so what I’m describing is now the 3rd day with site data down (inventory, status messages, etc.).  We told all the admins that their data could be seen on the primary sites and that work would go out as normal, but all the nice SRS reports we got them used to using were rather useless because the CAS had stale data.

The next attempt of the group got the first table of the replica group done but blew up on the second for the same reason as before.  Yes, we forgot to go look at other indexing our team might have done.  Oops; so we disabled all the rest.  But now what?  There is no way we could let this go for another 8 hours to do this one replica group again.

We kept running ideas past our PFEs but they had nothing helpful.  I don’t blame them because few people know DRS well.  Last year, after a 3-week CSS ticket, we gave up on our lab and rebuilt it after Microsoft couldn’t fix it.

So how could we kick-start RCM to pick up where it left off instead of starting all over?  It had already consumed a 78GB table of the bcp files.  It was just unbearable to consider starting over.  So we rolled up our sleeves and came up with this:

  1. update RCM_DrsInitializationTracking set TryCount=0 where InitializationStatus=99
  2. update RCM_DrsInitializationTracking set InitializationStatus=5 where InitializationStatus=99
  3. In the root of rcm.box, create a file which has the exact name of the bcp folder inside the rcm.box folder and  a file extension of .init

The failed group had gone to a 99 in Initialization Status (simply run select * from RCM_DrsInitializationTracking to see all replication groups) and it had a TryCount of 3 on it.  After 3 tries, it just stops trying.

DrsInitTracking

Setting TryCount to 0 did nothing.  Setting Initialization Status to 5 still didn’t kick it.  But adding the init file (and yes, it has to be all 3 tasks) finally got RCM to look at the folder and pick up where it left off.
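For the record, here is roughly how those three tasks look scripted out. This is a hedged sketch, not what we ran verbatim: the SQL instance, the CM database name, and the bcp folder name inside rcm.box are all placeholders you'd swap for your own values.

# Placeholders: CASSQL01 / CM_CAS = your CAS SQL instance and database;
# Hardware_Inventory_7 = the name of the bcp folder sitting inside rcm.box for the failed group.
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance 'CASSQL01' -Database 'CM_CAS' -Query @"
UPDATE RCM_DrsInitializationTracking SET TryCount = 0 WHERE InitializationStatus = 99;
UPDATE RCM_DrsInitializationTracking SET InitializationStatus = 5 WHERE InitializationStatus = 99;
"@

# Task 3: drop an .init file named exactly like the bcp folder into the root of rcm.box.
$rcmBox    = 'D:\SMS\inboxes\rcm.box'
$bcpFolder = 'Hardware_Inventory_7'
New-Item -Path (Join-Path $rcmBox "$bcpFolder.init") -ItemType File -Force | Out-Null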

Then what?  Well once that was done, the CAS told all the primary sites that their data was stale.  Now I would like to think that they would already know what they sent before and just send up the deltas, but nooooooo!  What the CAS does next is to drop the tables of each replica group into bcp files and send them down to the primary sites.  Why?  I assume that this must be used for them to compare and then send up the deltas.

Looking at how long that was going to take got depressing fast.  The CAS is a 1.5TB database and the primary sites are 600GB.  We’re talking about a lot of data even when it’s delta.  Is there such a thing as “too big for DRS?” Because I think we fit the bill.  We simply were not going to catch up.

So I proposed we go with the nuclear option: Distributed Views.

What is that and why do it?

Distributed Views is where you tell your primary site to hold onto client data locally and not send it to the CAS.  Navigate to Monitoring\Database Replication\Link Properties and you’ll see you get 3 boxes of groups which you can decide to just leave on the primary site.  We chose the Hardware Inventory box because that was really the site data not replicating.  So we figured, leave it on the primary and hope the WAN links hold up when running SRS reports or anything in the console that has to go look at client data.

Distributed Views

We did this for the Alabama site to start with.  After the other primary sites showed that they still were not recovering, we enabled distributed views on them too, one at a time.  90 minutes after enabling distributed views, all of our links were finally showing successful status again.

How does this affect SRS reports if the data isn’t on the CAS?  Well under the covers, SRS has to go to each site to run the query if the view contains client related data.  That can put load on your primary sites, but less load than all that site replication I would think.  And we had actually had this enabled on the Minneapolis site that is next to the CAS at one time.  We disabled it only because we found some issues and were concerned that it wasn’t ready to use yet (see Sherry’s blog for one issue).

The downside of trying distributed views is that there simply isn't going to be an easy way to turn it back off.  Once you undo it, your primary is going to have to send all of that data to the CAS.  And for large sites, this is very painful if not impossible.  If we ever get the confidence to disable distributed views, I think we’d have to disable hinv, disable DV, enable hinv, and let all that data come back as full inventories per client schedules.  To put into perspective how much inventory would have to replicate up: our CAS database is now 1TB smaller.

I said “if we ever get the confidence” to go back to sending all data up, we might do it, but we don’t have confidence right now.  Why?

First off, that slicing of the bcp file seems to be hard-coded at 5000 rows at a time.  For tables that are millions of rows in size, the wait is agonizing and just not palatable.  Should we ever have to reinitialize a replication group, we simply cannot afford a day or days of waiting.  We’ve asked our PFEs to see if there is some hidden switch to overcome that 5000 row number, and if there isn’t one, we’ll submit a DCR.  This really needs to be addressed for very large sites.  We need to be able to throttle it (my servers could probably consume 50K rows at a time).  And RCM isn't multithreaded.  Crazy.

As for why DRS had ever failed in the first place, well it didn’t.  It was degraded and we should have tried to let that one catch up.  You have to know which tables you’re messing with for both RLA and pub files.  Here is a query to help you know what is where:

 

SELECT Rep.ReplicationGroup,
       Rep.ReplicationPattern,
       App.ArticleName,
       App.ReplicationID
FROM vArticleData AS App
INNER JOIN v_ReplicationData AS Rep ON App.ReplicationID = Rep.ID

And you can combine that with:

exec spDiagGetSpaceUsed

Those queries should help you make an educated decision on dropping pub files on huge sites.  Generally running RLA is safer, but you should still look at the data to see what's going on.

BEWARE: Couple of issues after Upgrading a CM12 Primary site to Windows Server 2012 R2

After upgrading your CM12 Primary site(s) to Windows Server 2012 R2, you may experience the following issues.

1.  You may not be able to access the console after the upgrade.  Check the SMS Admins group's permissions on the primary site's WMI namespaces root/SMS and root/SMS/Site_XXX.

The SMS Admins group should have the following permissions:

    • Root/SMS
      • Enable Account
      • Remote Enable
    • Root/SMS/Site_XXX
      • Executable Methods
      • Provider Write
      • Enable Account
      • Remote Enable

 

2.  Your MPs may experience issues moving files to their primary site’s inbox folders after upgrading your primary site to Server 2012 R2.   Over time, if this goes unnoticed, you will see your clients become inactive in the console.   You may see errors similar to the ones below in your MPFDM.log.

mpfdm

 

The only fix we have found so far that’s effective is to add the MPs' computer accounts to the local Administrators group of the primary site server that’s been upgraded.
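A hedged one-liner for that workaround, run on the upgraded primary site server. DOMAIN\MPSERVER$ is a placeholder for the MP's computer account; repeat for each MP that writes to that site's inboxes.

# Add the MP's computer account to the primary site server's local Administrators group.
# 'DOMAIN\MPSERVER$' is a placeholder -- substitute your own MP computer account.
net localgroup Administrators 'DOMAIN\MPSERVER$' /add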

 

CM12 MP and DP with no Server GUI

Here is something I've wanted to try forever - heck since they used to call it Server Core.

For my role servers like the MP or DP servers, would CM still work if I remove the GUI from the OS?  Because Server 2012 R2 lets you take the Windows shell off and put it back on, it's easy to test.  So I just did.

I mix my MP and DP servers on the same VM.  So my test here is to see if those roles will still work after I take the UI away (and manage the servers strictly with PowerShell).

RemoveUI

Using Server Manager, I ask to remove the feature User Interfaces and Infrastructure.  Well, that's a bit too extreme because we'd evidently lose the IIS BITS Server Extensions and Remote Differential Compression.  And I know I need those for CM.  So I back off and select only to remove the Server Graphical Shell (essentially Explorer and IE).  That works!
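The same removal can be done from PowerShell, which fits the spirit of the exercise. A minimal sketch (run on the MP/DP server itself; it will want a reboot):

# Show what's currently installed, then pull just the Server Graphical Shell
# (essentially Explorer and IE) while leaving the rest of User Interfaces and Infrastructure alone.
Get-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra
Uninstall-WindowsFeature Server-Gui-Shell -Restart

# Changed your mind? Put it back the same way.
# Install-WindowsFeature Server-Gui-Shell -Restart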

So why am I even playing with it?  Theoretically, the loss of the UI means a smaller attack surface so my server should be safer.  And it could mean fewer patches might be needed in the future which could lead to fewer reboots and more uptime.

In reality, I doubt I'm gaining much here.  The actual best benefit would be that my team is forced to manage more using PowerShell and quit playing with things one at a time in a UI.  When you RDP to this server, you just get a cmd box and no explorer.  This isn't supported by Microsoft yet as far as I know, but because my MP and DP logs (and CM client logs) look good, I'm sure it's simply a matter of Microsoft not testing this setup yet to support it.

I'll let this server in the lab sit for a couple months like this and decide then if I'd like to do the rest in the lab (role servers only; I highly doubt a primary site could work like this).  Also, I have other internal apps to consider beyond CM.  Like, is Symantec Endpoint Protection still fine?  Other server-based apps I'm required to run also need to be checked.

Many apps might fail if you start with no UI, but it seems they mostly work if you remove it after the install.  And if I change my mind about this or run into an issue, it's easy to put the Server Graphical Shell back on.  Oh, and Kaido has a tip regarding this as your source files for the GUI can become stale.

CM12 R2 SP1 not ready for Distributed Views

If you have a CAS (you shouldn't), and if you have enabled distributed views, you might want to hold off from upgrading to SP1 for CM12 R2.

It sounds like we'll need a hotfix for the upgrade so that certain tables and views are checked for during the upgrade. When you enable distributed views, views are created to show data left on primary sites that is not on the CAS. The upgrade isn't expecting those, so when it goes to recreate tables, the name is in use (as are some keys in indexes) and the upgrade fails. The ugly part for me is that it failed far enough along that running recovery using my SQL backup wouldn't work.

So will this fail for you too? I'm not sure. It might just be my certain layout that wasn't tested. Let me describe it.

CAS

PR1 PR2

PR1 has just the hardware inventory node enabled for distributed views.

PR1

PR2 has all three links enabled.

PR2

How much you extend your inventory affects the total number of distributed views created. In my case, I had 321 of them. But it's just the PR2 tables and views that the upgrade got upset over; the ones where one site had all links enabled for DV. What if PR1 had all three links enabled too? Would I have had the problem? What if PR2 had only the hardware inventory node enabled? Would I have had the problem? I don't know. Will you have the problem? I wouldn't take the chance.

To get past this issue, I nuked some tables and views:

DROP TABLE [dbo].[CollectedFiles_RCM]
DROP TABLE [dbo].[FileUsageSummary_RCM]
DROP TABLE [dbo].[FileUsageSummaryIntervals_RCM]
DROP TABLE [dbo].[MonthlyUsageSummary_RCM]
DROP TABLE [dbo].[SoftwareFile_RCM]
DROP TABLE [dbo].[SoftwareFilePath_RCM]
DROP TABLE [dbo].[SoftwareInventory_RCM]
DROP TABLE [dbo].[SoftwareInventoryStatus_RCM]
DROP TABLE [dbo].[SoftwareProduct_RCM]
DROP TABLE [dbo].[SoftwareProductMap_RCM]
DROP TABLE [dbo].[SummarizationInterval_RCM]
DROP VIEW [dbo].[CollectedFiles]
DROP VIEW [dbo].[FileUsageSummary]
DROP VIEW [dbo].[FileUsageSummaryIntervals]
DROP VIEW [dbo].[MonthlyUsageSummary]
DROP VIEW [dbo].[SoftwareFile]
DROP VIEW [dbo].[SoftwareFilePath]
DROP VIEW [dbo].[SoftwareInventory]
DROP VIEW [dbo].[SoftwareInventoryStatus]
DROP VIEW [dbo].[SoftwareProduct]
DROP VIEW [dbo].[SoftwareProductMap]
DROP VIEW [dbo].[SummarizationInterval]
DROP VIEW [_sde].[v_GeneralInfo]
DROP VIEW [_sde].[v_GeneralInfoEx]
DROP VIEW [_sde].[v_GS_AppInstalls]
DROP VIEW [_sde].[v_HR_NSV]
DROP VIEW [_sde].[v_MachineUsage]

So after restoring the CM database from backup, dropping the views and tables above, and then running the upgrade, it finally took. The CAS is now at SP1 and replication is looking good. The only reason I'm posting the views and tables above is in case someone else already got themselves in trouble. I wouldn't do this unless it's already too late. Those last views we created in our own schema, but the upgrade still doesn't like them, so if you have any of your own, you might want to make copies, blast them, and put them back after the upgrade.

Long story short, if you're using distributed views, I'd recommend you wait on SP1 until we hear from Microsoft.

Update: Notice that the views above are all related to the software inventory and software metering link. As I mentioned, in the lab one of my sites had all three links set for DV and one primary was marked for DV for just hardware inventory. Well, in production we have only the hardware inventory link enabled for DV, so we decided to move forward with SP1 there and it worked fine. So if there is an issue, it would only be with the software inventory and software metering link. Now, is there an issue? I sent our database off to Microsoft but never heard back.

CM12 R2 Toolkit

I just tweeted a while ago that I couldn't wait to get my hands on the new Toolkit for R2 because one of the new tools, the Collection Evaluation Viewer, is something that should really help us keep coleval flowing smoothly. It's rather common for us to find someone has written a bad query collection which slows things down for everyone and this tool is just what we need to pinpoint those bad ones.

After installing the toolkit, I fired up the viewer and connected to one of my primary sites (doesn't matter which since they all do the same thing, so I picked the closest). But instead of opening, I got this error:

The certificate chain was issued by an authority that is not trusted

So I opened the MMC's Certificate snap-in and connected (Computer account) to the primary site and to my workstation (actually, it's a server I use as a workstation). I exported the primary server's cert out of the Trusted People store and imported it to my workstation's Trusted People store (just use the defaults).
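If you'd rather script the certificate copy than click through the MMC, here's a rough sketch using the PKI cmdlets. PRIMARY01 and the file path are placeholders; run the export on the primary site server and the import on the workstation running the toolkit.

# On the primary site server: export its cert from the local Trusted People store.
# (Assumes the site server cert is the only one there -- filter on Subject if not, and make sure C:\Temp exists.)
$cert = Get-ChildItem Cert:\LocalMachine\TrustedPeople | Select-Object -First 1
Export-Certificate -Cert $cert -FilePath C:\Temp\PRIMARY01.cer

# On the workstation running the toolkit: import that .cer into its Trusted People store.
Import-Certificate -FilePath C:\Temp\PRIMARY01.cer -CertStoreLocation Cert:\LocalMachine\TrustedPeople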

I suppose you could install the toolkit to the primary directly and run it from there, but I like to leave my primary sites alone as much as possible.

Anyway, look at the gold this tool just gave me!

Compliance Setting to Enable WinRM

The Situation:  ConfigMgr 2012 clients can be managed remotely (and troubleshot remotely) quite handily... if only Powershell were installed and Remote Management via Powershell (WinRM) were enabled.

This article presumes you are deploying Powershell via other means, and this routine is just 1 of several ways to get WinRM enabled, if Powershell is installed.  Note you don't have to use this at all; by far the most popular method is to simply have a GPO, or if you must, interactively log in to a computer and run   winrm quickconfig -q   from a command prompt (if you have the rights).
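For completeness, the PowerShell equivalent of that one-off interactive fix might look like the sketch below. Same caveat as everything else in this post: only if the quickconfig defaults are acceptable in your environment.

# Roughly equivalent to "winrm quickconfig -q": sets WinRM to auto-start,
# creates a default HTTP listener, and opens the firewall exception.
Enable-PSRemoting -Force

# Quick sanity check that the listener answers locally.
Test-WSMan -ComputerName localhost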

This situation may or may not be an edge case for you... but in our environment there are a few workstations, which are ConfigMgr Clients, but which for whatever reason are not candidates for the GPO, and to have a human interactively connect to each of those machines and run the winrm config (with our settings) is cumbersome. 

I grabbed Roger Zander's Baseline from here: http://myitforum.com/cs2/blogs/rzander/archive/2013/06/21/configure-winrm-by-using-cm12-settings-management.aspx, and found that there were a few things inside that just weren't working in my environment--some old clients, or older versions of Powershell just were not being detected or remediated well.  So I tweaked it to work in my environment.  The tweaking I did may or may not work for your environment--only you can determine that.

The baseline attached is just a SAMPLE of a Configuration Item, using the settings as they would be created if you were to run winrm quickconfig; however, you or your security team may have determined not to use those defaults--you may need to modify the port used, or change the ipv4 or ipv6 listening ports.  So take the attached as-is ONLY if you know you are using the defaults, and they are acceptable in your environment.  If you've modified how WinRM is configured in your company, you will definitely need to either modify the ConfigItem detection and remediation, or not use this at all.

How to use:

  1. Import the --> Attached <--  baseline into your Compliance Settings, Baselines 
  2. PARANOIA: Deploy the Baseline to a collection withOUT checking the box about remediation.  
    1. Monitor, and for the machines which say "non-compliant", check that you really cannot remotely connect to them with Powershell Remote Management.  
    2. To a collection of those non-compliants, Deploy the baseline again, but DO check the remediation box.  
    3. Confirm that the remediation Baseline runs, and that you now can remotely connect to them with Powershell remote management. 
  3. Repeat the Paranoia steps as many times as you need to until you are comfortable that it's doing what you think it should be doing. 
  4. Once you've passed your own internal Paranoia Steps (above), you can remove the test deployments, and deploy it again to your 'main' collection, with the remediation checkbox checked.

Again, to repeat... this is just a sample.  And this sample will only be logical to use in your environment if you simply can't use a GPO to enable WinRM on all of your CM clients.  If you CAN use a GPO against your entire environment, then perhaps all you'd maybe, and I mean MAYBE, want this for is to Monitor only (no remediation) and just check that the GPO is in fact getting to all your clients.  I wouldn't bother, personally, if I had a GPO that could get to every client.

Puzzling Behavior: When I was testing, more often than I was comfortable with (when remediation was enabled), client computers would report "Failed"--but at re-run they would report Compliant, and forever after report Compliant.  What I suspect is happening (but couldn't verify, because a rerun was compliant) is that during Remediation, AS it was remediating the first non-compliant result... other tests would fail.  But by the time a human (me) followed up on it, WinRM was all enabled and configured, and a re-run of the Baseline would indicate absolutely nothing wrong.  So... if you get a lot of failures in Remediation... just wait for your next cycle or re-run the baseline manually.  I suspect it's fine; just a timing issue.

Compliance Setting to Enable WinRM-Updated

This is an update to a previous blog post.  One of the Compliance Settings has modified content.  The changes are:

  1. Inside the "WinRM Config for v2 or v3", there used to be 4 settings to check if the listener is defined.  I found in my environment that about 3% of machines were "failing" on those 4 settings because they had multiple listeners defined, and those settings didn't take that possibility into account.  This update replaces those 4 settings with 1 scripted test: if any listener is defined and has the 4 settings we want, then it's compliant.  (A scripted way to eyeball the listeners and TrustedHosts is sketched just after this list.)
  2. Inside the "WinRM Config for v2 and v3", for our environment we needed to define TrustedHosts.  It's possible you may not need this setting at all. For this blog posting I've used the common setting of "*"; however, you may need to be more restrictive in your settings.  Or you may want to remove that setting from the ConfigItem completely.  If you have a GPO which configures that, obviously set it to match.
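If you want to eyeball those two areas on a client before or after remediation, here is a rough sketch. It requires the WinRM service to be running, and the "*" TrustedHosts value just mirrors the sample CI; tighten it to match whatever you actually deploy.

# List every WinRM listener defined on the box (the CI's scripted test checks
# that at least one listener carries the settings we want).
Get-ChildItem WSMan:\localhost\Listener | ForEach-Object {
    Get-ChildItem $_.PSPath | Select-Object Name, Value
}

# Inspect and, if needed, set TrustedHosts -- "*" matches the sample CI, but
# your environment may require something more restrictive.
Get-Item WSMan:\localhost\Client\TrustedHosts
Set-Item WSMan:\localhost\Client\TrustedHosts -Value '*' -Force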

The rest of this is a repeat of the earlier blog posting, just so you don't have to go look up the older post.

-----------------------------

The Situation:  ConfigMgr 2012 clients can be managed remotely (and troubleshot remotely) quite handily... if only Powershell were installed and Remote Management via Powershell (WinRM) were enabled.

This article presumes you are deploying Powershell via other means, and this routine is just 1 of several ways to get WinRM enabled, if Powershell is installed.  Note you don't have to use this at all; by far the most popular method is to simply have a GPO, or if you must, interactively log in to a computer and run   winrm quickconfig -q   from a command prompt (if you have the rights).

This situation may or may not be an edge case for you... but in our environment there are a few workstations, which are ConfigMgr Clients, but which for whatever reason are not candidates for the GPO, and to have a human interactively connect to each of those machines and run the winrm config (with our settings) is cumbersome. 

The baseline attached is just a SAMPLE of a Configuration Item, using the settings as they would be created if you were to run winrm quickconfig; however, you or your security team may have determined not to use those defaults--you may need to modify the port used, change the ipv4 or ipv6 listening ports, or modify the TrustedHosts setting.  So take the attached as-is ONLY if you know you are using the defaults, and they are acceptable in your environment.  If you've modified how WinRM is configured in your company, you will definitely need to either modify the ConfigItem detection and remediation, or not use this at all.

How to use:

  1. Extract the .cab from --> Here <--  and import that baseline into your Compliance Settings, Baselines; note if you've previously imported the older baseline from me (December, 2013), it MAY overwrite your existing if you don't select the "make a copy" choice.  Be careful what you do when importing!
  2. PARANOIA: Deploy the Baseline to a collection withOUT checking the box about remediation.  
    1. Monitor, and for the machines which say "non-compliant", check that you really cannot remotely connect to them with Powershell Remote Management.  
    2. To a collection of those non-compliants, Deploy the baseline again, but DO check the remediation box.  
    3. Confirm that the remediation Baseline runs, and that you now can remotely connect to them with Powershell remote management.
  3. Repeat the Paranoia steps as many times as you need to until you are comfortable that it's doing what you think it should be doing.
  4. Once you've passed your own internal Paranoia Steps (above), you can remove the test deployments, and deploy it again to your 'main' collection, with the remediation checkbox checked.

Again, to repeat... this is just a sample.  And this sample will only be logical to use in your environment if you simply can't use a GPO to enable WinRM on all of your CM clients.  If you CAN use a GPO against your entire environment, then perhaps all you'd maybe, and I mean MAYBE, want this for is to Monitor only (no remediation) and just check that the GPO is in fact getting to all your clients.  I wouldn't bother, personally, if I had a GPO that could get to every client.

Puzzling Behavior: When I was testing, more often than I was comfortable with (when remediation was enabled), client computers would report "Failed"--but at re-run they would report Compliant, and forever after report Compliant.  What I suspect is happening (but couldn't verify, because a rerun was compliant) is that during Remediation, AS it was remediating the first non-compliant result... other tests would fail.  But by the time a human (me) followed up on it, WinRM was all enabled and configured, and a re-run of the Baseline would indicate absolutely nothing wrong.  So... if you get a lot of failures in Remediation... just wait for your next cycle or re-run the baseline manually.  I suspect it's fine; just a timing issue.

ConfigMgr 2012 OSD Notes

Attached is the slide deck and the refreshMP script that I referenced during the OSD presentation at the October MNSCUG meeting.  This script should be copied to the device early in the task sequence and then run after every reboot via a Run Command Line step and a static path to the vbs file.  This will help avoid problems where the device can't contact the MP/DP after a reboot.  Be aware, an application/package installation that returns a 3010 will reboot the task sequence unless you define it not to in the package/application itself.  Know where your reboots are happening so you can run this script after each reboot.  

Rumor has it that if the Configuration Manager client has the CU2 update installed, this reboot issue is a non-issue.  Give it a shot and let me know.

Takeaways from the presentation - 

  • Know your application exit codes
  • Be prepared to break down the app model if it has reboots
  • Application Model works fine with OSD.
  • Configure appropriate task sequence variables for your environment.
  • Make sure your problems are not external to ConfigMgr.  Networking issues perhaps?
  • Get statistical significance with your builds.  1 successful build is useless.  10 in a row is a good start.

Here are good references for building your OSD Task Sequence - 

http://blogs.msdn.com/b/steverac/archive/2008/07/15/capturing-logs-during-failed-task-sequence-execution.aspx

http://technet.microsoft.com/en-us/library/hh273375.aspx

I can be reached @FredBainbridge.  Thanks!

OSD Presentation Slidedeck

RefreshDefaultMP Script

ConfigMgr 2012 SQL Report with Collection information about Include or Exclude other collections

Either my web searching skills have left me, or no one else has had occasion to create this type of report for SRS, but I couldn't find a SQL query I could use in SRS to show me collection details about when a particular collection was "including" another collection, or "excluding" another collection.  Here's what I ended up with; perhaps there is an easier or better way, but this worked:

select distinct c.name as [Collection Name],
c.collectionid,
cdepend.SourceCollectionID as 'Collection Dependency',
cc.Name as 'Collection Dependency Name',
Case When
cdepend.relationshiptype = 1 then 'Limited To ' + cc.name + ' (' + cdepend.SourceCollectionID + ')'
when cdepend.relationshiptype = 2 then 'Include '  + cc.name + ' (' + cdepend.SourceCollectionID + ')'
when cdepend.relationshiptype = 3 then 'Exclude '  + cc.name + ' (' + cdepend.SourceCollectionID + ')'
end as 'Type of Relationship'
from v_Collection c
join vSMS_CollectionDependencies cdepend on cdepend.DependentCollectionID=c.CollectionID
join v_Collection cc on cc.CollectionID=cdepend.SourceCollectionID
where c.CollectionID = @CollectionID

and where, of course, you then (in Report Builder) have another query just for use by the parameter "CollectionID": select c.collectionid, c.name from v_collection c order by c.name

With that, use Report Builder 3.0 to publish it into SRS, and you can then pick a collection by name and see what types of relationships it has with other collections.  In this example, the collection called "Sample Collection for the Blog" happens to have 3 relationships to other collections.

CollectionRelationshipReport

Of course, you can also get more information about your collections; like...collection queries for that collection

select crq.name, crq.queryexpression from v_collectionrulequery crq where crq.CollectionID=@CollectionID

or... is that a collection which has direct members and no queries; like...

select count(*) as 'Number of Direct Member Rules' from v_collectionRuleDirect crd where crd.CollectionID=@CollectionID

or...are there any service windows applied to that collection:

select sw.Name, sw.Description, sw.Duration, sw.IsEnabled, sw.ServiceWindowID from v_ServiceWindow sw where sw.CollectionID=@CollectionID

There's also v_CollectionSettings and v_CollectionVariable, which might be interesting.  So you can make an "everything you wanted to know about this collectionid" report if you so desire.  You just need to be creative with Report Builder and use multiple tables in the report.

Configmgr 2012 Truncate History Tables

Have you ever noticed, being the extreme ConfigMgr geek that you are, that you have v_gs and v_hs views?  They point back to current and history tables in your database.

Have you ever, and I mean EVER, needed to reference anything in the v_hs views?  Ever?  If you have, then perhaps this isn't for you.  If you've never used the data in the history views... why are you keeping it?  Sure, there are Maintenance Tasks you can tweak to help keep that data down, but... there is a quick (not supported) way to clean that up.

Keeping in mind this is NOT SUPPORTED (but it works anyway), so do this at your own risk, etc. etc.  If you mess up, I don't support you.  Microsoft won't support you.  You have a backup of your database, right?

On your Primary Site (even if you have a CAS, you still do this at your primary sites), all of this is done in SQL, the console is not involved at all.

Take the query below, and in SQL Management Studio just take a look at how much history data you have.  Only you can determine if that's cause for concern and whether you want to automate cleaning it up using a SQL Truncate process.  At my company, in the 12+ years that people on this team have been supporting SMS, then ConfigMgr... no one ever needed data in the history tables.  So... for us this was a lot of space gained that didn't need to be backed up, and it made nightly processing of some of the maintenance tasks that look at history tables finish MUCH faster than they have in months.

John Nelson (aka, Number2) would run the Truncate manually occasionally; but after a while that gets tedious.  :)  So he showed me how to see what is going to be truncated (query #1) and then how to make a Scheduled Job that runs daily, to actually do the Truncate of History tables.

Query #1: This particular query is only to look at what you have.  It does nothing but show you results.  Run this against all of your ConfigMgr sites with a CM_ database; and see if there is history you want to truncate.  If so, you may want to then move on to running SQL #2 (below).

SELECT
    t.NAME AS TableName,
    s.Name AS SchemaName,
    p.rows AS RowCounts,
    SUM(a.total_pages) * 8 AS TotalSpaceKB,
    SUM(a.used_pages) * 8 AS UsedSpaceKB,
    (SUM(a.total_pages) - SUM(a.used_pages)) * 8 AS UnusedSpaceKB
FROM
    sys.tables t
INNER JOIN
          sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN
    sys.schemas s ON t.schema_id = s.schema_id
WHERE
    t.NAME NOT LIKE 'dt%'
    AND t.NAME LIKE '%[_]HIST'
    AND t.is_ms_shipped = 0
    AND i.OBJECT_ID > 255
GROUP BY
    t.Name, s.Name, p.Rows
ORDER BY
    rowcounts desc

SQL #2:  This will CREATE a job, with a daily schedule.  Before you run it, change CM_FUN to be your CM_<your Site Code>; and you may also want to change
                @active_start_date=20140908
prior to running it, to whatever date you want the daily schedule to really start.  Once created, presuming SQL Server Agent is running on that SQL server, for the database of CM_<whatever you put in>, it'll truncate your history tables on the schedule defined.

Optional:  After you've run the script below, in SQL Management Studio, under SQL Server Agent, Jobs, if you right-click on the new job "ConfigMgr Truncate History Tables", you can select "Start Job at Step..." to have the job run RIGHT now, to confirm it works.  Once it's done, you can re-run Query #1 above and see that it's clean(er).  Note that as machines report inventory, data will go into the history tables frequently.  You may already have new rows right after you've run the Truncate job, but it should be much less than it was.
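Or, if you'd rather kick it off from PowerShell than from the SSMS right-click menu, a small sketch (the server name is a placeholder for your site's SQL server):

# Kick off the truncate job immediately instead of waiting for its schedule.
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance 'YOURSQLSERVER' -Database 'msdb' `
    -Query "EXEC msdb.dbo.sp_start_job @job_name = N'ConfigMgr Truncate History Tables'"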

Optional:  The next day, or weekly, or monthly...whatever schedule you have internally for checking up on your ConfigMgr infrastructure, every once in a while, run Query #1 above; and/or every once in a while, in SQL go to SQL Server Agent, Jobs, right-click on the Configmgr Truncate History Tables job, and select "View History", to see that the job was successful.

USE [msdb]
GO
/****** Object:  Job [ConfigMgr Truncate History Tables]    Script Date: 9/8/2014 2:05:50 PM ******/
BEGIN TRANSACTION
DECLARE @ReturnCode INT
SELECT @ReturnCode = 0
/****** Object:  JobCategory [[Uncategorized (Local)]]    Script Date: 9/8/2014 2:05:51 PM ******/
IF NOT EXISTS (SELECT name FROM msdb.dbo.syscategories WHERE name=N'[Uncategorized (Local)]' AND category_class=1)
BEGIN
EXEC @ReturnCode = msdb.dbo.sp_add_category @class=N'JOB', @type=N'LOCAL', @name=N'[Uncategorized (Local)]'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
END
DECLARE @jobId BINARY(16)
EXEC @ReturnCode =  msdb.dbo.sp_add_job @job_name=N'ConfigMgr Truncate History Tables',
                @enabled=1,
                @notify_level_eventlog=0,
                @notify_level_email=0,
                @notify_level_netsend=0,
                @notify_level_page=0,
                @delete_level=0,
                @description=N'Truncate ConfigMgr database History tables',
                @category_name=N'[Uncategorized (Local)]',
                @owner_login_name=N'NT AUTHORITY\SYSTEM', @job_id = @jobId OUTPUT
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
/****** Object:  Step [Truncate]    Script Date: 9/8/2014 2:05:51 PM ******/
EXEC @ReturnCode = msdb.dbo.sp_add_jobstep @job_id=@jobId, @step_name=N'Truncate',
                 @step_id=1,
                @cmdexec_success_code=0,
                @on_success_action=1,
                @on_success_step_id=0,
                @on_fail_action=2,
                @on_fail_step_id=0,
                @retry_attempts=0,
                @retry_interval=0,
                @os_run_priority=0, @subsystem=N'TSQL',
                @command=N'USE [CM_FUN]
GO
DECLARE @SQL NVARCHAR(MAX) = N''
''
SELECT
  @SQL = @SQL+N''TRUNCATE TABLE dbo.''+TABLE_NAME+'';
''  
FROM
  INFORMATION_SCHEMA.TABLES x
WHERE
  x.TABLE_SCHEMA = ''dbo''
  AND x.TABLE_NAME LIKE ''%[_]HIST''
ORDER BY
  x.TABLE_NAME
exec sp_executesql @SQL
',
                @database_name=N'CM_FUN',
                @flags=0
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_update_job @job_id = @jobId, @start_step_id = 1
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobschedule @job_id=@jobId, @name=N'ConfigMgr Truncate History',
                @enabled=1,
                @freq_type=4,
                @freq_interval=1,
                @freq_subday_type=1,
                @freq_subday_interval=0,
                @freq_relative_interval=0,
                @freq_recurrence_factor=0,
                @active_start_date=20140908,
                @active_end_date=99991231,
                @active_start_time=231100,
                @active_end_time=235959,
                @schedule_uid=N'9936718a-af85-497b-ac0d-d47d91ce99d8'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
EXEC @ReturnCode = msdb.dbo.sp_add_jobserver @job_id = @jobId, @server_name = N'(local)'
IF (@@ERROR <> 0 OR @ReturnCode <> 0) GOTO QuitWithRollback
COMMIT TRANSACTION
GOTO EndSave
QuitWithRollback:
    IF (@@TRANCOUNT > 0) ROLLBACK TRANSACTION
EndSave:
GO

 



ConfigMgr CCMRecentlyUsedApps blank or mtrmgr.log with StartPrepDriver - OpenService Failed with Error Issue

Issue:  ConfigMgr Clients which should be reporting via hardware inventory "CCMRecentlyUsedApps" have nothing to report.  An analysis of the client indicates there is nothing in WMI root\cimv2\sms\ccm_recentlyusedapps TO report, and mtrmgr.log on the client contains lines like "StartPrepDriver - OpenService Failed with Error".  See --> KB3213242 <-- at Microsoft for more details.  UPDATE to script on 2018-01-31

Remediation: What worked for us was to re-register the 'prepdrv.inf', and then restart SMS Agent Host (aka ccmexec)

Before you do anything suggested below--confirm this will fix the issue you are seeing.  Log in to a box or two with the issue, and from an elevated prompt, run the "remediation" PowerShell script below.  Watch mtrmgr.log, and manually check that a WMI query against root\cimv2\sms (select * from ccm_recentlyusedapps) returns information.  Once you see info, do a hardware inventory, and confirm that the box now reports information up to your database, as you expect it to.  If manually remediating works, then you can look at completing the steps below to automate the fix across your environment.
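The gist of that manual check-and-fix is re-registering prepdrv.inf (per the KB above) and bouncing ccmexec, then re-querying WMI. A rough sketch follows; the path assumes a default client install under C:\Windows\CCM, and the property names in the final query are just the common ones, so adjust to taste.

# Re-register the software metering PREP driver, restart the SMS Agent Host,
# then confirm WMI starts returning recently-used-apps data again.
& rundll32.exe 'setupapi.dll,InstallHinfSection' DefaultInstall 128 'C:\Windows\CCM\prepdrv.inf'
Restart-Service CcmExec

# Give mtrmgr.log a few minutes, then check whether the class has instances again.
Get-WmiObject -Namespace 'root\cimv2\sms' -Class CCM_RecentlyUsedApps |
    Select-Object -First 5 FolderPath, ExplorerFileName, LastUsedTime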

What we did to remediate the 'StartPrepDriver - OpenService Failed with Error' using a Configuration Item and Baseline was this:

It is assumed that you are using a "custom client agent setting" for enabling CCM_RecentlyUsedApps, since you normally do NOT want to target "every client".  Usually you don't want to target heavily used Application Servers, or Citrix Servers--since hundreds of userIDs can 'launch' applications, the results of CCMRUA on those types of machines often won't process (the MIF file will be bigger than 50mb) and isn't that useful anyway.  So go check, in your Custom Client Agent Settings, which collection you are targeting to enable 'ccm recently used apps'.  Once you know that, then continue.

1) First, determine with a SQL query how big of an issue it might be; maybe it's only a couple of boxes, and you can address them manually. Depending upon the count returned, it'll be up to you whether you want to pursue this as a Configuration Item, or address it manually.

   Declare @CollID as nvarchar(8) = (Select collectionid from v_collection c where c.name = 'The Collection Name You figured out has CCMRUA as a HINV rule')
   select count(ws.resourceid) [Count]
   from v_gs_Workstations_Status ws
   where ws.LastHWScan is not null
   and ws.resourceid not in (select resourceid from v_gs_ccm_recently_used_apps)
   and ws.resourceid in (select resourceid from v_fullcollectionmembership_valid fcm where fcm.collectionid=@CollID)

2) Create the Collection Query, to target.

   - Create a new Collection, using the "limit to collection" as the collection you use for targeting when CCM Recently Used Apps information should be reported via Hardware Inventory. (the one you figured out above)

   - The collection query rule should have two conditions... where:
      SMS_G_System_WORKSTATION_STATUS.LastHardwareScan is not null
      and
      SMS_R_SYSTEM.ResourceId not in (Select ResourceId from SMS_G_System_CCM_RECENTLY_USED_APPS)

3) If the count of the SQL query, and the count of the Collection query are the same (or close enough); then you can continue to creating the ConfigItem, and deploying the Baseline to remediate the issue.

4) Create the Configuration Item, where the Detection and Remediation logic are at the end of this blog post.  Both are PowerShell scripts.  For "what means compliant", it will be a String, equals "Compliant".  Make sure you check the box to remediate if non-compliant.

5) After you create the Configuration Item, create a Baseline, add the CI to the baseline, and deploy the baseline, with remediation, to the collection you created above. (you may want to do a pilot, to a subset of 2 or 3 specific boxes, just to be sure it will all work as you expect it to work)

UPDATE 2018-02-02:  Discovered that machines were "re-breaking".  So I ended up targeting 'every box' with the Remediation script--it'll only remediate if the problem is found.  Note that in my production remediation script, I comment out the ccmexec restart.  CCMEval takes care of that overnight, so it's OK.

6) That's it... then it's just a waiting game.  You are waiting for the Baseline to run, remediate the issue by doing the rundll routine, and restart ccmexec.  Once that is done, in a minute or two mtrmgr.log will start recording information into the root\cimv2\sms\ccm_recentlyusedapps WMI location.  Once there is information there, then at the next hardware inventory, presuming this client is asked via hinv policy to report that information, it will have something to report.  So how long it takes completely depends upon YOUR schedules, that you defined: how frequently the Baseline evaluates, and how frequently your clients do the scheduled HINV action.

PS: you might be tempted to "let me just add a line to the remediation script, to trigger a hinv right at the end".  I wouldn't bother.  It takes a few minutes for the client, after the ccmexec restart, to populate WMI.  It'll all shake out quickly enough.

######### DETECTION Powershell Script for the CI #####################
<#
.SYNOPSIS
   This Script is for Detection of a known condition for the inability of CM Client to initialize the StartPrepDriver
.DESCRIPTION
   This script finds the CM mtrmgr.log file, and reads it for a known good, or known error, condition.
   "what means good" =  mtrmgr.log contains lines like this
        PREP driver successfully initialized 
        or
        Termination event received for process
   "what means bad" = mtrmgr.log contains lines like this
        StartPrepDriver - OpenService Failed with error
.NOTES
   Steps are to
    1) read the regkey for CM Client to find the correct log file location
    2) Look for good or Bad entries in the file
    3) Exit with 'Compliant' or 'Non-Compliant' depending upon the results
.VERSION
    1.0 Sherry Kissinger  2017-03-30
1.1 Sherry Kissinger 2018-01-31 (added -Tail 10 for better accuracy)
#>
$ErrorActionPreference = 'SilentlyContinue'
#Get the LogDirectory for CM Client
$CMLogDir = (Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\CCM\Logging\@Global" -Name LogDirectory).LogDirectory
#Define the LogFile to Read
$CMLogFile = $CMLogDir + '\mtrmgr.log'
#Read the Log file for the Error Phrase
$GotErrors = (Get-Content -Path $CMLogFile -Tail 10 | where-object {$_ -like '*StartPrepDriver - OpenService Failed with error*'})
#Read the Log File for a known good condition phrases
$GotGoodEntry = (Get-Content -Path $CMLogFile -Tail 10 | where-object {($_ -like '*PREP driver successfully initialized*') -or ($_ -like '*Termination event received for process*')})
if ($GotGoodEntry) {
  write-host 'Compliant'
  }
else {
  if ($GotErrors) {
     write-host 'Non-Compliant'
  }
}

############## REMEDIATION Powershell Script for the CI  ##########################
<#
.SYNOPSIS
   This Script is for Detection and Remediation of a known condition for the inability of CM Client to initialize the StartPrepDriver
.DESCRIPTION
   This script finds the CM mtrmgr.log file, and reads it for a known good, or known error, condition.
   "what means good" =  mtrmgr.log contains lines like this
        PREP driver successfully initialized 
        or
        Termination event received for process
   "what means bad" = mtrmgr.log contains lines like this
        StartPrepDriver - OpenService Failed with error
.NOTES
   Steps are to
    1) read the regkey for CM Client to find the correct log file location
    2) Look for good or Bad entries in the file
    3) Exit with 'Compliant' or if 'Non-Compliant', to run the fix
       3a) the fix is two steps
            RUNDLL32.EXE SETUPAPI.DLL,InstallHinfSection DefaultInstall 128 C:\WINDOWS\CCM\prepdrv.inf
            Restart-Service ccmexec
.VERSION
    1.0 Sherry Kissinger  2017-03-30
1.1 Sherry Kissinger 2018-01-31 (added -Tail 10 for better accuracy)
#>
$ErrorActionPreference = 'SilentlyContinue'
#Get the LogDirectory for CM Client
$CMLogDir = (Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\CCM\Logging\@Global" -Name LogDirectory).LogDirectory
#Define the LogFile to Read
$CMLogFile = $CMLogDir + '\mtrmgr.log'
#Read the Log file for the Error Phrase
$GotErrors = (Get-Content -Path $CMLogFile -Tail 10 | where-object {$_ -like '*StartPrepDriver - OpenService Failed with error*'})
#Read the Log File for a known good condition phrases
$GotGoodEntry = (Get-Content -Path $CMLogFile -Tail 10 | where-object {($_ -like '*PREP driver successfully initialized*') -or ($_ -like '*Termination event received for process*')})
if ($GotGoodEntry) {
  write-host 'Compliant'
  }
else {
if ($GotErrors) {
 Try { Set-ExecutionPolicy -ExecutionPolicy 'Bypass' -Scope 'Process' -Force -ErrorAction 'Stop'}
 Catch {}
 $CMClientDIR = (Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\SMS\Client\Configuration\Client Properties" -Name 'Local SMS Path').'Local SMS Path'
 $ExePath = $env:windir + '\system32\RUNDLL32.EXE'
 $CLine = ' SETUPAPI.DLL,InstallHinfSection DefaultInstall 128 ' + $CMClientDIR + 'prepdrv.inf'
 #Parse the Parameters
 $Prms = $Cline.Split(" ")
 #Execute the fix with parameters
 & "$Exepath" $Prms
 #Restart ccmexec service
 #CCMExec should be restarted; If you'd rather wait for a natural client reboot, comment out this line.
 #Note that this CI will continue to attempt to remediate until a ccmexec restart or reboot
 restart-service ccmexec
  }
}

 

 

ConfigMgr Experts - I'd like to hear from you!

I am a Program Manager at Microsoft investigating new scenarios for ConfigMgr. I'd love to hear about your experiences with application deployment and management - your problems and asks, to identify how we can improve admin experiences.

If you'd like to connect further, please email me.  Looking forward to hearing from you. -Ravi Ashok

ConfigMgr Inventory: Per User Network Printer Mapped Information (datashift replacement)

Back when Windows 98 and Windows XP were the norm, there was a script available called "datashift", which would grab information from users regarding their printers, and included any network mapped printers.  It worked because everyone was a local admin--that's no longer the case.

This is a replacement for that older utility.  If you still have a need for the per-user information regarding network printer connections, here is a way to obtain that information.

The basics of how it works is this.  There are two vbscripts that run.  One runs as SYSTEM, and its only purpose is to create a custom namespace in WMI (if it doesn't already exist), and grant permissions on that custom namespace to all of your domain users--so they can populate it with the results of script #2.  Script #2 runs only when a user is logged in, with user rights; that's because the script needs to read information about that specific logged-in user's mapped printers.  (A rough PowerShell equivalent of that per-user query appears after the field list below.)

The results of the 2nd script end up in that custom WMI namespace, and will have the following information:

DateScriptRan -- the exact date and time that the script ran to gather this user-specific information
UserDomain -- whoever is logged in, what their domain is
UserName -- whoever is logged in, what their username is
PrinterDeviceID -- the mapped printer
DriverName -- driver offered from the server
Location -- metadata from the print share (if populated)
Comment -- metadata from the print share (if populated)
ServerName -- server where the print share originated
ShareName -- share name of that shared printer
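If you're curious what script #2 is essentially gathering, a rough PowerShell equivalent of the per-user query would look like this (the attached script itself is VBScript; this is just a sketch of the same idea):

# Rough PowerShell equivalent of the per-user data script #2 gathers
Get-WmiObject -Class Win32_Printer -Filter "Network = TRUE" |
    Select-Object DeviceID, DriverName, Location, Comment, ServerName, ShareName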

End result:  After deploying these two scripts, you will be able to answer the question from your server team of "who is actually mapping to these network printers".  Of course, the main limitation is this is per-user information. 

Ok, enough of how it works.  You really want to know *exactly* what to do, right?  Let's start!  Your source folder for the package will contain 3 things:
WMINameSpaceAndSecurity.vbs
WMISecurity.exe
MappedPrinters.vbs

The .vbs files are at this link --> MappedPrinters <--.  Note that WMISecurity.exe is not attached here; just search using your favorite search engine to find and download wmisecurity.exe.  The one I used was version 1.0.1.31058 --maybe there are later versions of this .exe; but that's the one I found, and it worked.

You will need to make one change to "WMINameSpaceAndSecurity.vbs", on this line:
strDomain = "YOURDOMAINHERE"
Modify that to be your domain (the domain of the users who will be logging in and running script #2).

Create two programs.  The first runs cscript.exe WMINameSpaceAndSecurity.vbs, whether or not a user is logged in, with Administrator rights.  The second runs cscript.exe MappedPrinters.vbs, only when a user is logged in, with user rights.  For the 2nd one, set "run another program first", and have it run the first one.  The 1st program only needs to run once per computer; it doesn't need to re-run.

Advertise the 2nd program to a collection (I recommend a test/pilot first), and confirm that it works as you expect.  If you want to confirm the data is there, look in root\CustomCMClasses (not root\cimv2) for cm_MappedPrinters, and confirm there are instances there for that user's mapped printers.
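One quick way to check that on a test machine (a sketch; the property names match the import mof below--if the attached scripts name them differently, adjust accordingly):

# Confirm the custom namespace got populated on a test box
Get-WmiObject -Namespace 'root\CustomCMClasses' -Class 'cm_MappedPrinters' |
    Select-Object UserDomain, UserName, PrinterDeviceID, ServerName, ShareName, DateScriptRan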

If you are satisfied it's there locally, either add the below to sms_def.mof (if you are ConfigMgr07) or import it into Default Client Agent Settings, Hardware Inventory (if you are CM12)

// NOTE!  Requires pre-requisite scripts run on every client!
//============================================================
[SMS_Report(TRUE),
 SMS_Group_Name("MappedPrinters"),
 SMS_Class_ID("MappedPrinters"),
 SMS_Namespace(FALSE),
 Namespace("\\\\\\\\localhost\\\\root\\\\CustomCMClasses")]

class cm_MappedPrinters : SMS_Class_Template
{
   [SMS_Report(TRUE)] string Comment;
  [SMS_Report(TRUE)] string DateScriptRan;
  [SMS_Report(TRUE)] string DriverName;
  [SMS_Report(TRUE)] string Location;
  [SMS_Report(TRUE),key] string PrinterDeviceID;
  [SMS_Report(TRUE)] string ServerName;
  [SMS_Report(TRUE)] string ShareName;
  [SMS_Report(TRUE)] string UserDomain;
  [SMS_Report(TRUE)] string UserName;
};

Sit back, relax for a bit... then invoke a hardware inventory on your test boxes, and see if the data shows up in your database in v_gs_MappedPrinters0.  If so, deploy the advert to your real target collection of users or computers, and wait for the data to show up.  Depending upon your need for this information; you may or may not want to have the advert run on a recurring basis (weekly? monthly?) or just gather it for a week or so (just enough to answer the question) then delete the advert and change the Inventory from TRUE to FALSE (until the next time they ask).
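If you don't feel like waiting on your test boxes, the Hardware Inventory cycle can be kicked off manually with the standard trigger GUID:

# Trigger a Hardware Inventory cycle on the local test client
Invoke-WmiMethod -Namespace 'root\ccm' -Class 'SMS_Client' -Name 'TriggerSchedule' -ArgumentList '{00000000-0000-0000-0000-000000000001}'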

Potential SQL report:

select
s1.Netbios_Name0 as [Computer Name],
prn.userdomain0 as [User Domain],
prn.username0 as [UserName],
prn.DateScriptRan0 as [Date Information Obtained],
prn.PrinterDeviceID0 as [Printer DeviceID],
prn.servername0 as [Server hosting the printer share],
prn.Sharename0 as [Share Name of the printer],
prn.Location0 as [If metadata exists on the print share, Location information],
prn.Comment0 as [If metadata exists on the print share, Comments],
prn.Drivername0 as [Driver offered by the printshare]
from v_R_System s1
join v_gs_mappedprinters0 prn on prn.resourceid=s1.ResourceID
order by s1.netbios_name0

ConfigMgr RefreshServerComplianceState as a Configuration Item

State messages are great, because they are quickly processed.  However, it can (and does) occasionally happen that, for network reasons, corrupt data, or other influences, some State Messages from your ConfigMgr clients never make it from the client into your database.  Normally that isn't a big deal--but sometimes those state messages are for Software Updates.  If you have people who look at Software Update reports, a client might locally say it is Compliant for Software Update KB123456, while reports based on your database say KB123456 on that same client is non-Compliant.  Read this: https://blogs.msdn.microsoft.com/steverac/2011/01/07/sccm-state-messagingin-depth/ for a much better explanation of why and how; but the short conclusion is that you want to ask your clients to occasionally run what is referred to as a "RefreshServerComplianceState" locally.  Basically, you are asking your clients to resend the state messages about compliant/non-compliant for all existing Software Updates they are aware of locally, up to ConfigMgr and your database.  aka... exactly what it says on the tin.  Refresh Server Compliance State.

The short and sweet is that it's really just a line or two of vbscript or powershell code.  But if you are in a large environment, you often don't want to tell every single client to all send up state messages all on the same day.  It could POTENTIALLY be a lot of data, and backlog your servers' SQL processing.  It would eventually catch up... but why create a headache for yourself?
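For reference, that "line or two" is just the COM call below--the same call the full script further down wraps in a random throttle:

# The bare RefreshServerComplianceState call, with no throttling at all
$SCCMUpdatesStore = New-Object -ComObject 'Microsoft.CCM.UpdatesStore'
$SCCMUpdatesStore.RefreshServerComplianceState()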

Below is a PowerShell script that you COULD choose to run... as a set-it-and-forget-it type of thing.  As-is, if you took the below and deployed it as a PowerShell script in a Configuration Item, and the Baseline were to run daily, each of your clients would randomly decide to RefreshServerComplianceState about twice a year.  If you want it more frequent, change the 180 to, say, 90 (about every 3 months) or 60 (about every 2 months).

The below is just a suggestion, and you can take it and break it as you like.

<#
.SYNOPSIS
  This routine is to generate a random number between 1 and "MaximumRandom". In general, the MaximumRandom
  number will likely be 180; if the Configuration Item is run daily, approximately twice a year it is expected
  that a client will randomly pick a value of 1, and trigger a RefreshServerComplianceState
.DESCRIPTION
  - This script would likely be used by a Configuration Manager Administrator as a 'Configuration Item', as the
    "Detection" script in that Configuration Item. The Administrator would set it up as a detect-only script, where
    the "what means compliant" is that any value at all is returned.
  - The Configuration Manager Administrator would likely add this to a baseline, and deploy that baseline to run
    on a Daily basis to their windows-os based devices, which scan for or deploy patches using the Software Updates Feature.
  - Using the MaximumRandom number of 180, presuming the baseline runs daily, approximately twice a year based on
    random probabilities, a client will trigger to run the "RefreshServerComplianceState". See the blog mentioned
    below for why this is something a Configuration Manager Administrator might want to do.
  - If the Configuration Manager Administrator wants to make it randomly occur more frequently or less frequently,
    they would either adjust the $MaximumRandom number higher or lower, or modify the frequency of the Baseline evaluation
    schedule.
  - For interactive testing, modify $VerbosePreference to 'Continue' to see what action was taken. Remember to change
    it back to SilentlyContinue for live deployments.
  - If a client does trigger, an EventLog entry in the ApplicationLog with an Information EventId of 555 from SyncStateScript
    will be created. You can add or modify the -Message entry for the EventLog to be as verbose as you need it to be for
    your own potential future tracking purposes. Perhaps you might want to add in specifics like "Configuration Item
    Named <whatever> in the Baseline <whatever> triggered this action, this was originally deployed on <Date>"

   Credits: Garth Jones for the idea.

   https://blogs.msdn.microsoft.com/steverac/2011/01/07/sccm-state-messagingin-depth
   for the reasons why it's a good idea to do so occasionally.
.NOTES
  2018-05-06 Sherry Kissinger

  $VerbosePreference options are
   'Continue' (show the messages)
   'SilentlyContinue' (do not show the message, this is the default if not set at all)
   'Stop' Show the message and halt (use for debugging)
   'Inquire' Prompt the user if ok to continue
#>

Param (
  $VerbosePreference = 'SilentlyContinue',
  $ErrorActionPreference = 'SilentlyContinue',
  $MaximumRandom = 180,
  $ValueExpected = 1
  #ValueExpected Will likely always be 1, and never change; set as a parameter for ease of reporting.
)

$RandomValue = Get-Random -Maximum $MaximumRandom -Minimum 1
if ($RandomValue -eq $ValueExpected ) {
  Write-Verbose "Random generated value of $RandomValue equals $ValueExpected, therefore RefreshServerComplianceState for ConfigMgr Client State Messages for Updates will be triggered."
  $SCCMUpdatesStore = New-Object -ComObject Microsoft.CCM.UpdatesStore
  $SCCMUpdatesStore.RefreshServerComplianceState()
  New-EventLog -LogName Application -Source SyncStateScript -ErrorAction SilentlyContinue
  Write-EventLog -LogName Application -Source SyncStateScript -EventId 555 -EntryType Information -Message "Configuration Manager RefreshServerComplianceState Triggered to Run. If questions on what this is for, refer to   https://blogs.msdn.microsoft.com/steverac/2011/01/07/sccm-state-messagingin-depth/ "
}
else
{
  Write-Verbose "Random generated value was $RandomValue, which does not equal $ValueExpected, RefreshServerComplianceState for ConfigMgr Client State Messages for Updates was not triggered. "
}

Write-Host 'Compliant' 

ConfigMgr Reports Leveraging SrsResources.dll display #Error instead of localized error descriptions

Issue:  Leveraging SrsResources.dll in ConfigMgr Reporting Services reports used to work... but after an upgrade or reconfiguration, any report using the expression "=SrsResources.Localization.GetErrorMessage(Fields!ErrorCode.Value, User!Language)" just says #Error

Resolution:

I'll give the quick resolution here, and the long explanation later.

  1. Get the absolute latest version of SrsResources.dll you have, by looking at any CM Administration Console installation, in the \bin folder.
  2. On the server which has your ConfigMgr Reporting Services role, on the same drive where you have that role, make a directory (call it whatever you like, but for this example, I'm calling it CMSrsResources, and for me, the drive was S:).  Copy that latest SrsResources.dll to the S:\CMSrsResources folder.
  3. On the Server which has your ConfigMgr Reporting Services role, you will have had to install SQL Reporting Services.  Find that installed location, it might be on C:, D:, or elsewhere.  The folder will likely be "something like"....\MSRS13.MSSQLServer\Reporting Services\ReportServer.  In that location will be a rssrvpolicy.config file.  Edit rssrvpolicy.config in notepad, and look for the reference to SrsResources.dll.  It is most likely pointing to...\MSRS13.MSSQLServer\Reporting Services\ReportServer\bin\SrsResources.dll.  CHANGE that to point instead to what you did in Step 2.  In my case, it would be S:\CMSrsResources\SrsResources.dll.  Save the config file.  (if it won't LET you save the config file, go to Services, and stop the SQL Reporting Services 'service', then save it.)
  4. Restart the service for SQL Reporting Services, or Restart the Server.

Done.
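If you prefer to script steps 2 and 4, something along these lines works (a sketch only; the source path, destination folder, and the Reporting Services service name are examples--yours will vary by console install location and SQL instance):

# Copy the newest SrsResources.dll to the side-by-side folder and bounce SSRS
# (paths and service name below are examples; adjust for your environment)
Copy-Item -Path 'C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\SrsResources.dll' -Destination 'S:\CMSrsResources\SrsResources.dll' -Force
Restart-Service -Name 'ReportServer'   # named instances look like ReportServer$InstanceName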

The long Explanation...

We have multiple custom reports, especially for Application Deployments, where knowing what an errorcode number means in a localized language (in our case, usually English) is very handy, instead of looking at that information in the console.  SQL reports are preferred in many instances.  In order to get that localization, according to multiple sources, the way to do that using ConfigMgr is 3 steps:

  1. rssrvpolicy.config for your SQL Reporting Services needs to have the SCCM Assembly referenced, pointing to the location of the SrsResources.dll file.
  2. that SrsResources.dll file has to exist in that location identified in the .config file.
  3. for any individual Report for ConfigMgr, for the report properties, on the 'References' tab, add the SrsResources assembly. Inside that report, presuming an error code is one of the values of your dataset, you can use the expression =SrsResources.Localization.GetErrorMessage(Fields!ErrorCode.Value, User!Language) to get that information displayed in legible English (if you are English speaking; French if you're French, etc.)

I noticed after CM 1610, and then again after CM 1702, apparently what is done "for me" is either permissions are reset, or something else fun is happening--maybe it's only when you are using SQL 2016, I don't know (all my labs and production are SQL 2016 latest).  But the .config file, if it's pointing to the Reporting Services\ReportServer\Bin location... It just won't read it and use it.  What the report displays instead of a nice handy english-y message is just #Error .  If I instead copy the .dll elsewhere, and edit the .config file to say "go find it over here" -- it uses it just fine.

Here's hoping this might help others if they like leveraging the SrsResources.dll within ConfigMgr reports, and they are doing everything by the book... and yet it still doesn't work.  You might just need to copy the .dll elsewhere and edit the config file.

Configuration Manager Collection Cleanup Suggestions

Certainly in your CurrentBranch Console, under "Management Insights", there are some things there regarding collection cleanup; but here's a few other ways to look at your data.

Over the years, Collection plaque and tartar just grows and grows... and over time, people forget what collections were made for, or why they have them.  As a way to help the people who use our console narrow it down a bit to 'possible' stale, old collections which no longer have any purpose, below is a potential starting point.

What the below would list is collectionids and names, which are:
- NOT Currently used for any other collection as a "limited to", "Include", or "Exclude"
- NOT Currently used for any Deployment, whether it's a baseline, an application, an advertisement, or a task sequence
- NOT Currently used to define a Service Window (aka Maintenance Window)
- NOT Currently used for any custom client agent settings you might have configured.
- NOT currently used for any collection variables you might have for OSD
- NOT currently used for Automatic Client Upgrade, as an excluded collection
- NOT a default/out of the box collection (aka, ones that start with SMS)

This isn't of course a definitive list.  For example, perhaps a collection was created to deploy "Really Important Application" 2 weeks ago... but the actual deployment hasn't happened yet--it's destined to begin next week.  In that case of course the collection might show up on this list--but it shouldn't be deleted--it has a future use.  But hopefully if your environment has a lot of collections and determining which ones might be safe to remove, this is a potential starting point.

Select c.collectionid, c.name [CollectionName]
from v_collection c
where
    c.collectionid not in (Select SourceCollectionID from vSMS_CollectionDependencies) -- include, excludes, or limited to
and c.collectionid not in (Select collectionid from v_deploymentsummary) -- any deployment, apps, advert, baseline, ts
and c.collectionid not in (Select Collectionid from v_ServiceWindow)
and c.collectionid not in (select collectionid from vClientSettingsAssignments)
and c.collectionid not in (select siteid from vSMS_CollectionVariable) -- OSD Collection Variables
and c.collectionid not in (Select a.ExcludedCollectionID from autoClientUpgradeConfigs a) -- ACU exclusion collection
and c.collectionid not in (select collectionid from v_collection where collectionid like 'sms%') -- exclude default collections

Another potential SQL query for you to look for "collections not needed" could be this one.  What this one does is sort by "last time members changed in this collection".  The potential argument goes like this... even *if* that collection is being used for an active deployment, if the members of that machine-based (not user-based) collection haven't changed in years, how important is it to keep that particular deployment going / available?

;with cte as (select t2.CollectionName, t2.SiteID [CollectionID]
 ,(Cast(t1.EvaluationLength as float)/1000) as [EvalTime (seconds)]
 ,t1.LastRefreshTime, t1.MemberChanges, t1.LastMemberChangeTime,sc.SiteCode,
 case
  when c.refreshtype = 1 then 'Manual'
  when c.refreshtype = 2 then 'Scheduled'
  when c.refreshtype = 4 then 'Incremental'
  when c.refreshtype = 6 then 'Scheduled and Incremental'
 end as [TypeofRefresh]
,c.MemberCount,c.CollectionType
from dbo.collections_L t1 with (nolock)
join collections_g as t2 with (nolock) on t2.collectionid=t1.collectionid
join v_sc_SiteDefinition sc on sc.SiteNumber=t1.SiteNumber
join v_collection c on c.collectionid=t2.siteID
)
Select cte.collectionID, cte.CollectionName, CTE.[EvalTime (seconds)]
,Right(Convert(CHAR(8),DateADD(SECOND,CTE.[EvalTime (seconds)],0),108),5) [EvalTime (Minutes:Seconds)]
,cte.lastrefreshtime, cte.memberchanges, cte.lastmemberchangetime, cte.typeofrefresh, cte.membercount
from cte
where cte.collectiontype=2
and cte.collectionid not like 'SMS%'
order by lastmemberchangetime

Configuration Manager Current Branch FastChannel Information via SQL Query

A lot of people use the console--but I don't go in there that much.  I'm more of a query SQL kind of person.  Some of the updates lately for Current Branch have been leveraging the "FastChannel" for communications.  If you don't remember, originally the FastChannel was meant for quick-hit communications, primarily around Endpoint protection.  However, over the last several updates, the product team has been adding more communications over the fast channel.  Most of those communications are to make the console experience feel more "real time"--and I get that.  For people who live in the console.  but I don't... so where is that information and how can I use it... using SQL?

Here's a couple things to have in your SQL query backpocket.

If you are Current Branch 1710 or higher, the 1710 clients will communicate back whether they have 1 or more of 4 specific "reboot pending" reasons.  You can see that in console--but as a SQL report, here's a summary query to show you counts of devices and what reboot pending state (and why) they are in:

select cdr.ClientState [Pending Reboot],
Case when (1 & cdr.ClientState) = 1 then 1 else 0 end as [Reason: ConfigMgr],
Case when (2 & cdr.ClientState) = 2 then 1 else 0 end as [Reason: Pending File Rename],
Case when (4 & cdr.ClientState) = 4 then 1 else 0 end as [Reason: Windows Update],
Case when (8 & cdr.ClientState) = 8 then 1 else 0 end as [Reason: Windows Feature],
Count(*) [Count]
from vSMS_CombinedDeviceResources cdr
where CAST(right(left(cdr.ClientVersion,9),4) as INT) >= 8577 and cdr.clientversion > '1'
Group by cdr.ClientState
order by cdr.clientstate

It'll only tell you about clients which are version 8577 or higher (aka, 1710).  If you are absolutely certain all your clients are 1710 or higher, you can remove that section of the "where" clause.
Asking for clientversion > 1 is because you "might" have mobile clients reporting to your CM.  You really only want to know about Windows-based clients.  Essentially, those where clauses are so that you can be a little more accurate about pending reboots.  If you have a lot of clients less than version 1710, they can't communicate their ClientState via the FastChannel, so you might think "great, these devices don't have a pending reboot"--when what it really means is "these clients aren't able to tell me if they need a pending reboot, because their client version is not capable of telling me that, via this method".
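The ClientState column is just a bitmask, so the same decoding the SQL does above can be spot-checked in PowerShell for any single value (the value 5 below is made up, purely for illustration):

# Decode a sample ClientState bitmask value
$ClientState = 5
$Reasons = @()
if ($ClientState -band 1) { $Reasons += 'ConfigMgr' }
if ($ClientState -band 2) { $Reasons += 'Pending File Rename' }
if ($ClientState -band 4) { $Reasons += 'Windows Update' }
if ($ClientState -band 8) { $Reasons += 'Windows Feature' }
$Reasons   # 5 = ConfigMgr + Windows Update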

Another piece of information that can come in via the Fast Channel: if you are using Current Branch 1806 or higher, 1806 clients can tell you about the CURRENTLY logged-on user.  This differs from what we as SMS/ConfigMgr admins are used to from the past.  We have for years been able to tell "last logged on user" or "most likely primary user"--based on heartbeat, hardware inventory, or asset intelligence data.  But that could be "old news"--depending upon how frequently your heartbeat or inventory runs, it could be hours to days old.  Current logged on user should be at worst a few minutes old (depending of course upon your size, and complexity).

select s1.netbios_name0 [ComputerName], cdr.CurrentLogonUser [Current Logged on User According to FastChannel]
from vSMS_CombinedDeviceResources cdr
join v_r_system s1 on s1.resourceid=cdr.machineid
order by s1.netbios_name0

December 2014 MNSCUG Meeting

After a wildly successful #MMSMinnesota we will be wasting no time getting back into the swing of things.  We are going to hold a rare December MNSCUG meeting and for very good reason: MVP Mikael Nystrom offered to come speak while he's in town. If you couldn't make it to MMS, you missed out on some great OSD sessions. One of Nystrom's sessions was so popular that we had to run it twice.

Nystrom, aka the Deployment Bunny plans to show us some OSD tips. We are truly honored to be having him join us as he's a premiere speaker and industry expert in OSD. As a Senior Executive Consultant at TrueSec, Mikael is passionate about sharing his extensive experience and methods from the field, verified to work in a real environment.  This is not to be missed.

And to keep with the OSD theme, local Microsoft MVP Nash Pherson will be doing a presentation on Windows Server Builds with Configuration Manager OSD.  There will be an immense amount of expertise in the room for this meeting.  Bring your questions!

Coretech will sponsor food and beer (good beer!) and in exchange, Brian Mason will take a couple minutes to show off the Coretech Dashboard.

 

Please register to help us gauge food and beer ordering.
Eventbrite - December 2014 MNSCUG Meeting

Dot Net Framework Versions via Custom Hardware Inventory

Based on information contained in here:

http://social.technet.microsoft.com/wiki/contents/articles/15601.how-to-determine-the-net-framework-installed-versions.aspx

Below is a potential custom hardware inventory MOF edit to use to pull back installed versions of .net using Configuration Manager 2012

There's a section you would need to add to your <installed location on your server>\inboxes\clifiles.src\hinv\Configuration.mof, near the bottom.

Then there's the section you would save as a text file, called "dotnet.mof", and you would import that via your CM Console, into "Default Client Settings", Hardware Inventory, Import.

Once clients start reporting back, there's a potential report for you to use, along with a sample output (if you just so happened to have workstations named starting with "WIN7-").  Obviously you can modify the "where" statement to use a @ parameter in SQL, or rearrange the SQL report in whatever way is needed for your reporting requirements.

WARNING!!! Sometimes when one copies and pastes from a web browser, "straight" quotes are changed for you to "Smart Quotes". You will want to carefully look at what you've copied and pasted, and if necessary, use a notepad "replace" to replace any curly smart quotes to straight quotes.
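If you'd rather not hunt for them by eye, a quick scrub of whatever file you pasted the snippet into works too (a sketch; the file name is just an example):

# Replace curly "smart" quotes with straight quotes in a pasted mof snippet (example file name)
$File = 'C:\Temp\dotnet.mof'
(Get-Content -Path $File -Raw) -replace '[\u201C\u201D]', '"' -replace '[\u2018\u2019]', "'" | Set-Content -Path $File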

//=================================DOTNetFrameworks

#pragma namespace("\\\\.\\root\\cimv2")
#pragma deleteclass("DotNETFrameworks",NOFAIL)
[DYNPROPS]
class DotNETFrameworks

{ [key] string Version="";
boolean Installed;
string ServicePack;
string BuildNumber;
};

[DYNPROPS]
instance of DotNETFrameworks
{ Version="1.0";
[PropertyContext("local|HKEY_LOCAL_MACHINE\\Software\\Microsoft\\Active Setup\\Installed Components\\{78705f0d-e8db-4b2d-8193-982bdda15ecd}|Version"),Dynamic,Provider("RegPropProv")] BuildNumber;
};

[DYNPROPS]
instance of DotNETFrameworks
{ Version="1.0 MCE";
[PropertyContext("local|HKEY_LOCAL_MACHINE\\Software\\Microsoft\\Active Setup\\Installed Components\\{FDC11A6F-17D1-48f9-9EA3-9051954BAA24}|Version"),Dynamic,Provider("RegPropProv")] BuildNumber;
};

[DYNPROPS]
instance of DotNETFrameworks
{ Version="1.1";
BuildNumber="1.1.4322";
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v1.1.4322|Install"),Dynamic,Provider("RegPropProv")] Installed;
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v1.1.4322|SP"),Dynamic,Provider("RegPropProv")] ServicePack;
};

[DYNPROPS]
instance of DotNETFrameworks
{ Version="2.0";
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v2.0.50727|Install"),Dynamic,Provider("RegPropProv")] Installed;
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v2.0.50727|SP"),Dynamic,Provider("RegPropProv")] ServicePack;
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v2.0.50727|Version"),Dynamic,Provider("RegPropProv")] BuildNumber;
};

[DYNPROPS]
instance of DotNETFrameworks
{ Version="3.0";
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v3.0|Install"),Dynamic,Provider("RegPropProv")] Installed;
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v3.0|SP"),Dynamic,Provider("RegPropProv")] ServicePack;
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v3.0|Version"),Dynamic,Provider("RegPropProv")] BuildNumber;
};

[DYNPROPS]
instance of DotNETFrameworks
{ Version="3.5";
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v3.5|Install"),Dynamic,Provider("RegPropProv")] Installed;
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v3.5|SP"),Dynamic,Provider("RegPropProv")] ServicePack;
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v3.5|Version"),Dynamic,Provider("RegPropProv")] BuildNumber;
};

[DYNPROPS]
instance of DotNETFrameworks
{ Version="4.0";
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\\Client|Install"),Dynamic,Provider("RegPropProv")] Installed;
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\\Client|SP"),Dynamic,Provider("RegPropProv")] ServicePack;
[PropertyContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP\\v4\\Client|Version"),Dynamic,Provider("RegPropProv")] BuildNumber;
};

//===========End of section to be added to Configuration.mof

// Save the below as DotNet.mof, and import into Default Client Settings, Hardware Inventory

[ SMS_Report (TRUE),
SMS_Group_Name ("DotNetFrameworks"),
SMS_Class_ID ("DotNETFrameworks"),
Namespace ("\\\\\\\\.\\\\root\\\\cimv2") ]
class DotNETFrameworks : SMS_Class_Template
{
[ SMS_Report (TRUE), key ] String Version;
[ SMS_Report (TRUE) ] String BuildNumber;
[ SMS_Report (TRUE) ] String Installed;
[ SMS_Report (TRUE) ] String ServicePack;
};
// ========End of To-be-Imported.mof

Sample Report:

SELECT
sys1.netbios_name0 as [Computername],
MAX(CASE dn.version0 when '1.0' THEN
case dn.buildNumber0 when isnull(dn.buildnumber0,1) then dn.BuildNumber0 End END) AS [.Net 1.0],
MAX(CASE dn.version0 when '1.1' THEN
case dn.BuildNumber0 when isnull(dn.buildnumber0,1) then dn.buildnumber0 End END) AS [.Net 1.1],
MAX(CASE dn.version0 when '2.0' THEN
case dn.BuildNumber0 when isNull(dn.buildnumber0,1) then dn.BuildNumber0 end END) AS [.Net 2.0],
MAX(CASE dn.version0 when '3.0' THEN
case dn.BuildNumber0 when isNull(dn.buildnumber0,1) then dn.BuildNumber0 end END) AS [.Net 3.0],
MAX(CASE dn.version0 when '3.5' THEN
case dn.BuildNumber0 when isNull(dn.buildnumber0,1) then dn.BuildNumber0 end END) AS [.Net 3.5],
MAX(CASE dn.version0 when '3.5' THEN
case dn.ServicePack0 when isnull(DN.ServicePack0,1) then dn.ServicePack0 end END) AS [.Net 3.5 ServicePack],
MAX(CASE dn.version0 when '4.0' THEN
case dn.BuildNumber0 when isNull(dn.buildnumber0,1) then dn.BuildNumber0 end END) AS [.Net 4.0]
FROM
v_r_system_valid sys1
Left Join v_gs_dotnetframeworks0 dn
ON dn.resourceid=sys1.ResourceID
where sys1.netbios_name0 like 'Win7-%'
Group By
sys1.netbios_name0
ORDER BY
sys1.netbios_name0

The report would end up looking something sort of like this:

ComputerName    .Net 1.0  .Net 1.1  .Net 2.0        .Net 3.0        .Net 3.5        .Net 3.5 Service Pack  .Net 4.0
Win7-ABC12345   NULL      1.1.4322  2.0.50727.5420  3.0.30729.5420  3.5.30729.5420  1                      4.5.50938
WIN7-ABC23456   NULL      1.1.4322  2.0.50727.5420  3.0.30729.5420  3.5.30729.5420  1                      4.5.51209

Enough of the cold?

Seagate has an opening in Oklahoma (warmer there!):

https://careers.seagate.com/jobs/146935/Oklahoma-City-Oklahoma-Senior-Engineer-Global-Client-Operations

Gather some Adobe Serial Numbers and Version using ConfigMgr Compliance Settings and Hardware Inventory

Update to an older blog entry...

http://www1.myitforum.com/2012/06/13/gather-some-adobe-software-serial-numbers-using-configmgr-dcm-and-hardware-inventory/ :

Because this thread: http://social.technet.microsoft.com/Forums/en-US/configmgrinventory/thread/7243fac9-36c4-4d1f-9b2b-eb1b2f53ed87, got me thinking about it, I went to the adobe blog entry they referenced, here: http://blogs.adobe.com/oobe/2009/11/software_tagging_in_adobe_prod_1.html

Searched our lab for a couple of clients with full Adobe products, and lo and behold... found the .swtag files mentioned.  Interestingly, that blog was a little misleading--it didn't seem to cover some of the tags that are really in the .swtag files for serial number, version, etc., so I doubt the script (attached) will actually find everything.  But it's a start; so I thought I'd throw this out into the wild (blog it) and see what others can make of it.

Attached is a script, which you’d run similar to the "all members of all local groups" type of thing–run it on clients (either as a recurring advertisement or as a DCM ConfigItem, with no validation), and the sms_def.mof edit to pull the info back into your DB. Some of what it returns you’ll already have from ARP (name, version), but the golden nuggets of info are the SerialNumber, and whether it’s part of a Suite (according to that blog, anyway). There’s also something about "licensedState", but one of my test boxes had a serial number, but said it was unlicensed. Not sure what that is really about–that the human didn’t click on something after launching to register online? Not sure. But hey, that field is there if it means anything. You could always set that to FALSE in the mof if that LicenseState information is pointless.

What was nice about the above routine was that in the "partofasuite" returned results, it would say "Std" or "Pro" right in there, so that when the licensing folk would come knocking and ask for your pro vs std counts, it was relatively easy to run a report, and show them exactly what you had out there, based on Adobe's own information. With the "DC" version, they've apparently decided to make it even MORE difficult to tell the difference between Pro vs. Std.

Here's a new link to their swid tag information: http://www.adobe.com/devnet-docs/acrobatetk/tools/AdminGuide/identify.html

Fortunately, the Script + Mof edit will pull back all of the information necessary to tell the difference, it just makes reports more, uh... "fun"

http://www.adobe.com/devnet-docs/acrobatetk/tools/AdminGuide/identify.html#identifying-dc-installs

and basically you'll see that for Std, the serial numbers start with 9101, and for Pro, the serial numbers start with 9707
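If you want to eyeball what a machine's swtag files actually contain before trusting any inventory of them, a rough sketch like this works (the regid folder name under ProgramData is an assumption based on Adobe's tag registration, and the element names vary by product, so this just dumps whatever is in there):

# Dump whatever elements are in any Adobe swtag files found (folder name is an assumption)
Get-ChildItem -Path "$env:ProgramData\regid.1986-12.com.adobe" -Filter '*.swtag' -ErrorAction SilentlyContinue |
    ForEach-Object {
        [xml]$Tag = Get-Content -Path $_.FullName
        $Tag.SelectNodes('//*') | Select-Object LocalName, InnerText
    }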

Here's a sample report, once you've created the ConfigItem and Baseline, deployed it, and imported the mof snippet into inventory, and start getting back results:

This sample report is ONLY for Acrobat, there are other Adobe products returned with the AdobeInfo routine, so this is just a sample report, it's not meant to showcase everything returned.

;with cte as (
Select distinct a.resourceid, Case when a.SerialNumber0 like '9101%' then 'Std'
when a.SerialNumber0 like '9707%' then 'Pro' end as 'Type',
Case when a.PartOfASuite0 like 'v7%' then 'DC'
when a.PartOfASuite0 like 'v6%' then '11'
when a.PartOfASuite0 like 'Acrobat%' then '10' end as 'Version'
from v_gs_AdobeInfo0 a
where a.PartOfASuite0 like 'v%{}Acrobat%' or a.PartOfASuite0 like 'Acrobat%'
)
select cte.version as [Acrobat Version] , cte.type as [Acrobat Type] ,count(*) as 'Count'
from cte group by [version], [type]
order by [version], [type]

would result in something sorta like this (#'s have been changed from my production environment to fake #'s):

Acrobat Version   Acrobat Type   Count
10                Pro            20
10                Std            15
11                Pro            300
11                Std            210
DC                Pro            700
DC                Std            800

Of course, the best part of this routine is *if* Adobe comes knocking, you can show them that the information about pro vs. std originates from their SWID tag files, and you can point to their web site about how to tell the difference, so they should be satisfied and quickly leave you alone (unless, of course... you did deploy Pro to all of your environment, and you thought you were deploying Standard... well, then... pay up...)

--> Link <-- to get the mof file for importing for ConfigMgr Inventory, and the script to add to a Configuration Item (or you could deploy it as a recurring Advertisement, if you are averse to Configuration Baselines).  Basically, the client, on a recurring basis, needs to run the script to populate--or wipe and re-populate--the custom WMI location with the Adobe swid tag information.

Ignore Ignite?

At our last user group meeting we discussed the inevitability of the cloud in IT and what that would mean for the future of IT Pros. One thing we all agreed on was that knowing PowerShell was probably the best investment of time right now for hope of having a meaningful job down the road (and heck, really today). It was also rather clear that for most attendees, we still have a hard time just doing today's job and continue to look for help via our user group and conferences. Microsoft Ignite came up and few seemed interested in attending. Why not?

Ignite is seen more as a crowded marketing show where a search shows 91 sessions listed for System Center (but that is a cloudy list) with crowded hotels and daily busing in needed. But we have so many better options today:

Each conference is on track to repeat around the same time and location each year so attendees can plan on making at least one and budget for them in advance.

These smaller conferences give attendees a better chance to network with others. With SCU, you can attend a user group broadcasting it in your area so that you can talk about the sessions you just saw with the rest of the group and go over common issues and ideas. And SCU is free. So you have no excuse not to go. Even if you have no local group you can watch from home or work. The speakers there are all the top speakers out there. MNSCUG plans to simulcast SCU next week.

I went to my first Connections conference last year in Vegas and was surprised how well it went. Smaller rooms and a crowd not too spread out from System Center. In fact, many of the CM sessions bore me simply because the product hasn't changed much over the past few years, so I found myself drifting into SQL sessions (something all System Center products rely on). They were great. There should be a good 80-90 System Center sessions this year. And the Aria is just a gorgeous hotel!

And then there's my favorite: MMS. It's right here at the MoA. It's just 3 days, but very long days. Early risers can start with birds of a feather sessions and sessions can start as late as 6pm (some with beer served!). Small rooms and many great speakers where "attendees don't get lost in the crowd." Feedback for the 1st year was overwhelmingly positive. An evening party plus the mall and a great bar right at the hotel make after hours mingling with others easy and fun. No busing to a convention center, no long lines, no crappy catered food. We've also revised Henry Wilson's old comparison doc as it might help get you funding. And MMS sessions from 2014 are still up to give an idea of what 2015 sessions should look like. And we just got word that our dates should be Nov 9-10-11 this year.

Internet Explorer Version Information via Hardware Inventory

Although it is certainly possible to use iexplore.exe to obtain this information, I'm not a huge fan of software inventory, when I can use hardware inventory instead.  Below is a mof edit to pull out Internet Explorer version information, and latest IE hotfix applied, from the registry.  Also below is a sample SQL report, and screen shot of what that might look like.

In my environment, this reported on Internet Explorer versions 6 and higher (well, up to version 11; who knows if newer versions, once released, will still be in the same place).  There is one exception to the data available, IE versions 8 and lower do not populate the regkey "svckbnumber", so that information is not available for those versions of IE.  You should be able to 'surmise' some of the information based on the Build Number as to whether or not a system has a later hotfix applied, when addressing IE version 6, 7, and 8.  But because of that slight change from version 8 to 9 and higher, it made the report interesting to do so that it displayed exactly what I wanted.  I suspect most people have already done the RegKeytoMof for internet explorer versions, so this blog post is mostly to share the report syntax; in case you (like me) wanted the report to look as logical as possible to the management-types that might be looking at it.

This is a ConfigMgr Mof edit, based on regkeytomof from Mark Cochrane:

// RegKeyToMOF by Mark Cochrane (thanks to Skissinger, Steverac, Jonas Hettich & Kent Agerlund)
// this section tells the inventory agent what to collect
// Place at the bottom of your configuration.mof file in installedlocation/inboxes/clifiles.src/hinv

#pragma namespace ("\\\\.\\root\\cimv2")
#pragma deleteclass("IExplorerVer", NOFAIL)
[DYNPROPS]
Class IExplorerVer
{
[key] string KeyName;
String Version;
String svcVersion;
String svcKBNumber;
};

 

[DYNPROPS]
Instance of IExplorerVer
{
KeyName="RegKeyToMOF";
[PropertyContext("Local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Internet Explorer|Version"),Dynamic,Provider ("RegPropProv")] Version;
[PropertyContext("Local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Internet Explorer|svcVersion"),Dynamic,Provider ("RegPropProv")] svcVersion;
[PropertyContext("Local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Internet Explorer|svcKBNumber"),Dynamic,Provider ("RegPropProv")] svcKBNumber;
};

 

//=============================

 

// RegKeyToMOF by Mark Cochrane (thanks to Skissinger, Steverac, Jonas Hettich & Kent Agerlund)
// this section tells the inventory agent what to report to the server
// Save this snippet as 'tobeimported.mof', and in CM2012, import it into your Default Client Agent
// Settings, Hardware Inventory, Classes, Import

 

#pragma namespace ("\\\\.\\root\\cimv2\\SMS")
#pragma deleteclass("IExplorerVer", NOFAIL)
[SMS_Report(TRUE),SMS_Group_Name("IExplorerVer"),SMS_Class_ID("IExplorerVer")]
Class IExplorerVer: SMS_Class_Template
{
[SMS_Report(TRUE),key] string KeyName;
[SMS_Report(TRUE)] String Version;
[SMS_Report(TRUE)] String svcVersion;
[SMS_Report(TRUE)] String svcKBNumber;
};

 

Below are a couple sample queries to get you started.  The 'fun' stuff is with version 9, the regkey 'Version' started being recorded as 9.9.0, then version 10 was 9.10.0... version 11 was 9.11.0... which is slightly irritating (at least to me).  So that's why this sql is slightly obnoxious.  It's cast 'ing and figuring out whether the report should use version0 or svcversion0 as the version we humans want to see.

//======Internet Explorer, all computers=============
Select
s.netbios_name0,
ie.svcKBNumber0 [Latest Hotfix applied, (available in version 9 and higher)],
--Use this next for linking in an SRS report, but you don't need to have a column for it in the report display
RIGHT(ie.svcKBNumber0,LEN(ie.svckbnumber0)-2) as 'UseforLinking',
case when ie.svcversion0 is null then ie.version0
 else ie.svcversion0 end as 'Internet Explorer Version',
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,4) AS BIGINT)
 else cast(ParseName(ie.svcversion0,4) AS BIGINT) end as 'MajorVersion',
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,3) AS BIGINT)
 else cast(ParseName(ie.svcversion0,3) AS BIGINT) end as 'MinorVersion',
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,2) AS BIGINT)
 else cast(ParseName(ie.svcversion0,2) AS BIGINT) end as 'RevisionNumber',
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,1) AS BIGINT)
 else cast(ParseName(ie.svcversion0,1) AS BIGINT) end as 'BuildNumber'
from
v_r_system s
join v_GS_IExplorerVer0 ie on ie.ResourceID=s.ResourceID
order by 'MajorVersion','MinorVersion','RevisionNumber','BuildNumber'

//========Count Internet Explorer====================
Select
ie.svcKBNumber0 [Latest Hotfix applied, (available in version 9 and higher)],
--Use this next for linking in an SRS report, but you don't need to have a column for it in the report display
RIGHT(ie.svcKBNumber0,LEN(ie.svckbnumber0)-2) as 'UseforLinking',
case when ie.svcversion0 is null then ie.version0
 else ie.svcversion0 end as 'Internet Explorer Version',
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,4) AS BIGINT)
 else cast(ParseName(ie.svcversion0,4) AS BIGINT) end as 'MajorVersion',
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,3) AS BIGINT)
 else cast(ParseName(ie.svcversion0,3) AS BIGINT) end as 'MinorVersion',
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,2) AS BIGINT)
 else cast(ParseName(ie.svcversion0,2) AS BIGINT) end as 'RevisionNumber',
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,1) AS BIGINT)
 else cast(ParseName(ie.svcversion0,1) AS BIGINT) end as 'BuildNumber',
count(*) as 'Count'
from v_GS_IExplorerVer0 ie
group by
 ie.svckbnumber0,
 RIGHT(ie.svcKBNumber0,LEN(ie.svckbnumber0)-2),
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,4) AS BIGINT)
 else cast(ParseName(ie.svcversion0,4) AS BIGINT) end,
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,3) AS BIGINT)
 else cast(ParseName(ie.svcversion0,3) AS BIGINT) end,
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,2) AS BIGINT)
 else cast(ParseName(ie.svcversion0,2) AS BIGINT) end,
case when ie.svcversion0 is null then
 cast(ParseName(ie.version0,1) AS BIGINT)
 else cast(ParseName(ie.svcversion0,1) AS BIGINT) end,
case when ie.svcversion0 is null then ie.version0
 else ie.svcversion0 end
order by 'MajorVersion','MinorVersion','RevisionNumber','BuildNumber'

//===========================

If you are really ambitious, you can edit your Report Builder Report, and for the field "svckbnumber", you could make that link to another web page, like ="http://support.microsoft.com/kb/" + Fields!UseForLinking.Value

Below is what a sample report (for count) might look like.  This sample also would link by Build Number to a detailed per-computer report.

IeVersions

Inventory Google Chrome Extensions with ConfigMgr

In response to this forum post:
https://social.technet.microsoft.com/Forums/systemcenter/en-US/55b1d256-f3fb-4296-a9e6-2241cc8d4d0d/sccm-report-google-chrome-extensions

I cobbled together a VERY rough approximation of a powershell script + mof edit that MIGHT work to gather the bare minimum information.

Download the files in the -->  Attached Zip File <-- In the .zip file are two files

TestScript1.ps1  -- this is a powershell script you will need to have every ConfigMgr client you have run, presumably the ones with Google Chrome installed.  You can either deploy it as a recurring advertisement, or my favorite is to create a "Configuration Item", and deploy the script that way on a recurring basis.

ToBeImported.mof -- Once  you've had test workstations run that powershell script, AND you've confirmed that data appears on those test workstations' root\cimv2\cm_chromeExtensions, AND that data appears to be stuff you find interesting, THEN in your CM Console, Administration, Client Settings, Default Client Settings, Hardware Inventory, Import... this file.

Caveats: 
"let the buyer beware":  Read the .ps1 file--especially the top section.  the part where the author (ok, it was me) said that this was all cobbled together and is probably useless. 

1) 1 thing I noticed even with only 15 minutes worth of testing...I uninstalled Google Chrome from the test workstation.  That does NOT clear out the user profile appdata folders where "chrome extensions" are listed.  So everything was still reported.  So it is highly likely, in fact probably guaranteed, that this will in no way EVER be indicative of "Google Chrome is actually installed and working".  It's indicative of "Google Chrome was installed once and launched once for this user--sometime during the life of the computer".  It could have been installed and uninstalled within 30 minutes and never used again--but the user profile information about chrome extensions will be there.  Forever.  Welcome to user-centric nightmares (if you weren't already aware of them).  Also by the way, chrome apparently comes pre-packaged with multiple extensions so no matter what you'll have entries if any user ever launched Chrome on that workstation--even if it was immediately uninstalled.  It won't matter.

So my recommendation is: *if* you think, during your weeks of testing, that this might be useful in some way, then in reporting you will need to be extremely careful to tie the reports about chrome extensions to machines which clearly indicate that chrome is actually installed.  Or, of course--feel free to re-write this chrome extensions script to detect that before recording anything (a minimal sketch of such a check appears after these caveats).

2) There were some extensions in the user profile folder for chrome extensions for which I couldn't figure out any way to clearly identify what it was.  Those will be labeled unknown.  You are certainly welcome to edit the script if you know how to identify those.

3) No promises of usefulness or compatibility or even functionality are implied.  I'm just tossing this out there in the hopes that someone else can make it work better.  If in fact anyone even cares about Chrome extensions. Ever.
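If you do go the "detect first" route mentioned above, a minimal "is Chrome actually installed right now" check bolted onto the front of the script might look like this (a sketch; the paths are the usual per-machine install locations--per-user installs live under each profile's AppData, which is a whole separate headache):

# Sketch: bail out early if chrome.exe isn't present in the usual per-machine locations
$ChromeExe = @(
    "$env:ProgramFiles\Google\Chrome\Application\chrome.exe",
    "${env:ProgramFiles(x86)}\Google\Chrome\Application\chrome.exe"
) | Where-Object { Test-Path -Path $_ }
if (-not $ChromeExe) { return }   # no per-machine Chrome found; don't record anything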

fyi, in testing, I got this type of information on the test box:

 

Counter  Name          Version  ProfilePath           ScriptLastRan
0        Google Docs   0.9      c:\users\fakeuser     3/28/2016 4:55:13 PM
1        Google Drive  14.1     c:\users\fakeuser     3/28/2016 4:55:13 PM
2        YouTube       4.2.8    c:\users\fakeuser     3/28/2016 4:55:13 PM
...      <more data>                                  3/28/2016 4:55:13 PM
14       iHeartRadio   1.1.0    c:\users\anotheruser  3/28/2016 4:55:14 PM

In the above example, "fakeuser" used Chrome and never added any CUSTOM or additional extensions.  "anotheruser" using the same computer did add a custom extension for iHeartRadio.

As mentioned, only tested in a distracted way in a test environment on 1 test workstation in a lab.  This is probably horrible code, a horrible idea, and will need to be re-written from scratch.  Or.... it just might work fine.  <shrug>

IT/Dev 2014 Highlights

Here are my highlights from IT/Dev last week…

 

Kent and Johan's PreCon on ConfigMgr: Touched on a number of handy tips and tricks.  Links to all the goodies are located here:  http://blog.coretech.dk/kea/links-from-the-configmgr-2012-r2-precon-itdev-connections/

 

CM Database still one file?: Ah poo, I set up my SQL database and it has all the data in one file.  Now what?  John Nelson has a blog post on correcting this and distributing your data into 4 files.  http://myitforum.com/myitforumwp/2011/11/03/sql-for-smsconfigmgr-tip-distribute-data-evenly-from-1-sql-file-to-multiple-sql-files/

 

WSUS DB Cleanup: Kent covered the value of keeping the SUSDB clean for optimal query performance and minimal client data download when syncing the update catalog to the client.  http://blog.coretech.dk/kea/house-of-cardsthe-configmgr-software-update-point-and-wsus/

 

Optimize Those Databases:  Steve Thompson discussed common pitfalls of the database maintenance tasks for CM.  Bottom line, don't trust the built in process.  More on his blog post here,

http://stevethompsonmvp.wordpress.com/2013/05/07/optimizing-configmgr-databases/

 

Microsoft One:  Mary Jo Foley discussed Microsoft's current business strategies.  On premise, clouds, AD, phone, .Net, and many others were discussed along with questions from the audience.  Follow her perspectives here: http://www.zdnet.com/blog/microsoft/.

 

Mimikatz:  How secure are your windows systems?  Try Mimikatz to understand what the bad guys can see when they get in by using this tool to help you identify security gaps:  https://github.com/gentilkiwi/mimikatz

 

SQL Query Superstardom for Beginners:  A fantastic introduction to the do's and don'ts for the 'accidental' DBAs that many of us have become as System Center admins.  What is sargable anyway?  (There is also a great presentation on indexing strategies)  Find out here.  http://thesqlagentman.com/presentationfiles/

January 2015 Meeting

What a year!  2014 was a good one for MNSCUG and its members.  We held 12 meetings last year with speakers from all over the world*.  We ate some great food and swigged some even finer ales.  There were so many amazing topics and speakers that it is going to be a challenging year to top.  And there was just one more small thing we managed to accomplish last year: only the best conference of the year, MMS.  If you missed MMS you really missed something special.  Don't make the same mistake twice.

 

Not been to a meeting for a long time?  Considering the guests that have been coming to MNSCUG over the past year, you are missing out on MMS-quality content that is freely available.  Start the year off right and get back in the habit.  You can get much of the technical content online, but you can never replace the face-to-face conversations that happen at user groups.

 

We are going to start off 2015 with a round table discussion on the 2nd floor of the MTC.  There is no need to go to the 6th floor!  You can go straight to the 2nd floor.  This meeting will be an open forum for anyone to ask the group about anything that may be bugging you or of interest to you (with regard to System Center!).  I always find it very helpful to learn how others are dealing with the same challenges.  If there is time remaining we will talk about what we (MNSCUG) have planned for this year, and we want some feedback on anything you would like to see.

 

AGAIN - This meeting is on the 2nd floor this time

 

If you could please fill out this survey, it would be very helpful:  https://www.surveymonkey.com/r/HPNMMWM

 

Please register for the SCU telecast on February 4th. See below for more information.

 

We are currently without a sponsor for this meeting.  If you would be interested in starting off the 2015 MNSCUG sponsorship campaign please let us know! 

 

Please register to help us gauge food and beer ordering.

Eventbrite - December 2014 MNSCUG Meeting

* U.S. and Northern Europe

Java Software Metering with CM - Java 7 End of Life

It is almost that time: another Java runtime will go end of life in just over a month.  This means we have only a month left to finish the Java upgrade we have already started, right?  Well, I have lived through a few of these Java events over the years and they really don't seem to get any easier, or even ever really end.  In fact I am seriously considering removing Java from users' systems.  The only issue is, who is actually using it for work-related purposes?  That has always been the million dollar question to me.

Turns out if you pay for Java support there are some tools that can help you determine this sort of thing.  So where does that leave the rest of us who maybe are not as fortunate as the aforementioned minority?  Well, fortunately for us, Oracle did us all a small favor late last year and built some usage tracking mechanisms into the JREs we are already using.  Turns out "Usage Tracker is available in Oracle Java SE Advanced and Oracle Java SE Suite versions 1.4.2_35 and later, 5.0u33 and later, 6u25 and later, 7 and later, and 8 and later.".  (http://docs.oracle.com/javacomponents/usage-tracker/overview/index.html)  Think of this as software metering for the Java plug-ins and VMs which run on your systems; it logs each user's data into a log file in their user profile.

I just had to try this, so I followed the instructions, dropped the usagetracker.properties file in the correct directory, and then fired up a browser and ran a few Java plugins.  All of the data was right there, in a little txt file in my user profile.  So now what?  Turns out there are a few catches to all of these logs.

  • A properties file must be in the appropriate directory for each JRE if you want to log data.  For better or worse, maybe some machines have more than one JRE installed.
  • The default delimiter in their tracking file was a comma.  Typically this is great, until I noticed there are no text qualifiers in the data elements. Formatting nightmares.
  • The log files are stored in the user's profiles by default.  Typically a system should only have one user, but this is not always the case either.  So we need to aggregate the data together.

So based on my own initial assessment I came up with a few functional requirements on how I would want a data collection to work for this.

  • Enable logging for all JREs installed on the system.
  • Use a delimiter character that would be less likely to show up in the command line options very often; I chose a caret '^' for this.
  • Enumerate all of the user profiles and centrally store the data on the system.

Based on how Java usage tracking works, and how I wanted to see it work, I set up a PowerShell compliance script that performs the following actions (a rough sketch follows the list).

  • Logs all data associated with the script to the CM client logs directory (CM_JavaUsageLogging.log) when logging is enabled.  This is the default; it can be disabled by changing $LoggingEnable to $false at the top of the compliance script.
  • Queries the registry for installed JREs and creates the usagetracker.properties file in the lib/management folder to enable logging for all instances.
  • Merges all of the data from all of the tracking logs on the system and adds the user which executed the VM or plug-in to the dataset.
  • Creates a CM_JavaUsageTracking WMI class to store the data centrally on the system.  Then we can pull it off with hardware inventory!
  • Only adds the new entries on subsequent executions.  The data in WMI can be inventoried.  (MOF below)
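Here is the rough sketch of those moves. This is NOT the attached compliance script: the properties-file source path, the per-user log file name, and the single 'Entry' property are all assumptions made for illustration (the attached script builds the CM_JavaUsageTracking class out properly and handles de-duplication), and PowerShell 3.0+ is assumed.

# Rough sketch only -- not the attached compliance script. The properties-file source,
# the per-user log name, and the single 'Entry' property are assumptions.
$propsContent = Get-Content 'C:\Scripts\usagetracker.properties' -Raw   # your prepared copy

# 1. Find every installed JRE via the registry and drop the properties file into lib\management
$regPaths = 'HKLM:\SOFTWARE\JavaSoft\Java Runtime Environment\*',
            'HKLM:\SOFTWARE\Wow6432Node\JavaSoft\Java Runtime Environment\*'
foreach ($jre in (Get-ItemProperty -Path $regPaths -ErrorAction SilentlyContinue)) {
    if (-not $jre.JavaHome) { continue }
    $mgmt = Join-Path $jre.JavaHome 'lib\management'
    if (Test-Path $mgmt) {
        Set-Content -Path (Join-Path $mgmt 'usagetracker.properties') -Value $propsContent
    }
}

# 2. Merge every user's tracking log, tagging each row with the owning profile
$rows = foreach ($userDir in (Get-ChildItem 'C:\Users' -Directory)) {
    $log = Join-Path $userDir.FullName '.java_usagetracker'   # assumed log location
    if (Test-Path $log) { Get-Content $log | ForEach-Object { "$($userDir.Name)^$_" } }
}

# 3. Park the merged rows in the CM_JavaUsageTracking class so hardware inventory can grab them
#    (class creation omitted here for brevity)
foreach ($row in $rows) {
    Set-WmiInstance -Namespace 'root\cimv2' -Class 'CM_JavaUsageTracking' `
        -Arguments @{ Entry = $row } -ErrorAction SilentlyContinue | Out-Null
}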

jutci

The cab located here can be downloaded, tested, and used to get you started on your way.  Please note this is a work in progress, so I will update this file with the new changes once they are ready.  If you have any feedback on the compliance script (it is welcome), please let me know.  This has been tested with PowerShell 2.0 and above, but as always test it first to verify everything works in your environment.

The data in WMI can be inventoried!  So after running this script on a system, connect to it and add CM_JavaUsageTracking to hardware inventory in CM, and now you have Java software metering, in a sense.  This is still a work in progress; here are a few more items I still want to add and clean up.

  • The compliance rule is stating non-compliant even though the script appears to complete.
  • Add a day count rolling history feature so data older than 'x' number of days is removed from WMI and not edited.  This would allow a limit per system on the collected data.
  • Test and validate support for 64-bit JREs, I have 99% 32-bit so this was my priority.

For those of you concerned about how much space this will require in your CM database, I checked, and in my case 30 days of data from approximately 2500 systems was a table of around 50 MB.  This will vary greatly depending on how many Java plugins are in use in your environment.  Data is now being collected, and I can sit back, see which sites users are using Java on, and determine what I am going to do about it.

Happy data spelunking!

Johan Returns to Minneapolis - Best OSD Training Available

Mastering Windows 7 and 8.1 deployment using MDT2013 and Config Mgr 2012 R2 [4 days]

Build a Windows deployment solution using MDT 2013 and SCCM 2012 R2!

Build a real deployment solution using Microsoft Deployment Toolkit 2013 (MDT 2013) and System Center Configuration Manager 2012 R2 (SCCM 2012 R2). The first Windows 7 and Windows 8.1 deployment training where you pick the track to follow during four days!

The primary track is using MDT2013 and ConfigMgr2012 R2 to deploy operating systems, applications, and software updates.

The second, optional track, is how to build a deployment solution based on MDT 2013 Lite Touch. We simply wanted to give you the best possible windows deployment training, no matter if you plan to use ConfigMgr 2012 R2 or not.

Johan Arwidmark, Microsoft MVP in Deployment and the world's foremost expert in the OS deployment sector, developed this lab. Like all our labs, this one is unique in that you get access to the best deployment specialists for four days. Time is set aside for real life deployment issues, where the instructor discusses and solves problems that you bring to the training. As always we take you for lunch at a nearby restaurant, where the discussions continue. It's almost like free consultancy.

During these four days you will learn how to:

  • Plan and design for ConfigMgr 2012 R2 infrastructure changes
  • Upgrade from ConfigMgr 2007
  • Upgrade from MDT 2010 to MDT 2012/2013
  • Create and deploy applications
  • Configure Software Updates
  • Create and design reference images
  • Create VBScript wrappers for configuration items and applications
  • Configure security baselines for the Windows 7 & 8.1 image
  • Master the driver injection features for OS deployment
  • Integrate MDT 2013 with ConfigMgr 2012 R2
  • Extend OS deployment in ConfigMgr 2012 R2 with scripts, frontends, databases, and web services
  • Configure offline media
  • Troubleshoot MDT 2013 and ConfigMgr 2012 R2
  • Troubleshoot your Windows deployments
  • Bend the rules: understand and customize the rules (customsettings.ini)
  • Enable Dynamic Deployments and much more!

Now updated to cover TPM, BitLocker, Orchestrator RunBooks etc.

July 2014 MNSCUG Meeting

The next user group meeting is Thursday, July 17th at our normal time 4:30 - 7:00pm at Microsoft's Edina HQ building - 6th Floor, Lake of the Woods.  Ryan Ephgrave will be presenting on the finer and more impressive points of the Now Micro Right Click Tools along with examples of how to create your own ConfigMgr console extensions.  Wrapping up the evening will be Stephen Jesok discussing how to maintain your Orchestrator environment. As always there will be a Q&A session at the end. 

Concurrency

Food and beverage will be provided by Concurrency!  Thanks!

Registration is free to the public, but please be sure to sign-up if you are attending so we can ensure everyone has enough food and drink.

Eventbrite - February 2014 MNSCUG Meeting

See you there!

June 2014 MNSCUG Meeting

Our next MN System Center User Group meeting will be an ALL DAY event Thursday, June 5th from 9am to 3:30pm at Microsoft's Edina HQ building - 6th Floor, Lake of the Woods.  This will be a star-studded extravaganza as we are pleased to welcome back presenter Wally Mead and guest participant Kent Agerlund.  There is a hard limit of 100 attendees for this event.  So please register in advance!

 

The following sessions are planned for the day -

  • 9:00 - 10:00am Wally Mead (ConfigMgr)
  • 10:15 - 11:15am Fred Bainbridge (App-V 5.0)
  • 11:30 Lunch
  • 11:45 - 12:45pm Fred Bainbridge (UeV 2.0)
  • 1:00 - 2:00pm Robert Wakefield (AD Best Practices)
  • 2:15 -3:15pm John Nelson (Configuring SQL for ConfigMgr)

 

There should be enough time for questions after each session, but if anything runs long we will hold a Q&A at the end.

 

 

Registration is free to the public, but please be sure to sign-up if you are attending so we can ensure everyone has food.

 

Eventbrite - February 2014 MNSCUG Meeting

 

See you there!

June 2014 MNSCUG Meeting Reminder

REMINDER - Our next MN System Center User Group meeting will be an ALL DAY event Thursday, June 5th from 9am to 3:30pm at Microsoft's Edina HQ building - 6th Floor, Lake of the Woods.  This will be a star-studded extravaganza as we are pleased to welcome back presenter Wally Mead and guest participant Kent Agerlund.  There is a hard limit of 100 attendees for this event.  So please register in advance!

 

Lunch will be provided by cdw

 

The following sessions are planned for the day -

  • 9:00 - 10:00am Wally Mead (ConfigMgr)
  • 10:15 - 11:15am Fred Bainbridge (App-V 5.0)
  • 11:30 Lunch
  • 11:45 - 12:45pm Fred Bainbridge (UeV 2.0)
  • 1:00 - 2:00pm Robert Wakefield (AD Best Practices)
  • 2:15 -3:15pm John Nelson (Configuring SQL for ConfigMgr)

 

There should be enough time for questions after each session, but if anything runs long we will hold a Q&A at the end.

 

 

Registration is free to the public, but please be sure to sign-up if you are attending so we can ensure everyone has food.

 

Eventbrite - February 2014 MNSCUG Meeting

 

See you there!

Keeping inactive clients alive in CM12 for fast Patching and Distributions

In order to keep our CM12 clean, we normally enable "Delete Inactive Client Discovery Data" under Site Maintenance properties. By default this task is disabled; when enabled, it is set to 90 days unless you change that. It removes inactive clients from the CM12 console once they pass that 90-day mark (or whatever you set it to).

However, this presents a challenge for organizations with laptops or remote users that stay offline longer than that setting, or spare machines that sit in closets or storage for long periods of time but still need to get deployments or patch deployments quickly. Once those records are deleted, plugging a machine back into the network means waiting a while before it falls back into its collections and can pick up the deployments it deserves.

So to keep these machines alive in the console, remember in CM12, these machines do not get deleted from the database. They just get their Decommissioned0 set to 1, and disappear from the console. So the trick is, just keep their decommissioned0 set to 0!

In our environment, we leverage SCORCH to detect these machines by executing SQL query below:

select Name0, Decommissioned0
from System_DISC
where Distinguished_Name0 LIKE 'CN=%,OU=Laptops,OU=Computers,OU=LOB,OU=ORG,DC=jeff,DC=com'
AND Decommissioned0='1'

We then resolve it by:

UPDATE System_DISC
SET Decommissioned0 = '0'
where Distinguished_Name0 LIKE 'CN=%,OU=Laptops,OU=Computers,OU=LOB,OU=ORG,DC=jeff,DC=com'
AND Decommissioned0='1'
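If you would rather run the same fix from a scheduled PowerShell task instead of SCORCH, something like this minimal sketch should work. The server and database names are placeholders, and Invoke-Sqlcmd needs the SQL Server PowerShell module on the box running it:

# Placeholder server/database names; requires the SQL Server PowerShell module for Invoke-Sqlcmd
$SiteDbServer = 'CM12SQL01'
$SiteDb       = 'CM_P01'
$query = @"
UPDATE System_DISC
SET Decommissioned0 = '0'
WHERE Distinguished_Name0 LIKE 'CN=%,OU=Laptops,OU=Computers,OU=LOB,OU=ORG,DC=jeff,DC=com'
AND Decommissioned0 = '1'
"@
Invoke-Sqlcmd -ServerInstance $SiteDbServer -Database $SiteDb -Query $query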

 

Then these guys never leave their collections :)

Kim's Chrome Search

I had a post back in March showing how to use Bing to help you find the same documentation. This weekend, Kim Oppelfens (MS MVP) made a nice post to help us find Microsoft documentation using search engine providers. He said he didn't test it with Chrome and I just did.

Kim's search page

If you go to Kim's post, you'll see a button to add providers based on what you're searching for. I clicked on the one for CM12 and got an error. Replace the word bing.com with CM12 for the keyword.

So now if you type CM12 in the address\search bar of Chrome (I'm using version 21), you'll see a box show up in that bar to reflect that you're using the search engine provider targeting Microsoft's CM12 docs. So assume I'm looking for how to set up a replica for my MP; I just continue after typing CM12 with the word replica.

Search CM12 for the term replica

I assume you could do the same for each of these buttons using a keyword of your own choosing.

You can see how easy it is to get good results back, which blow away Google's search. Thanks Kim for the boost!

March 2015 Meeting - Notes

Thanks again to Parallels for sponsoring, presenting, and for the surprise iPad 2 giveaway!  For those who were not able to get a card from Carlos or want to know more information, here is his contact info.

Speakers

Carlos Capó
Phone: 1-425-306-3640

Speaker Chris Nackers, Microsoft MVP.
Topic - Macintosh Management with ConfigMgr

MMS

MMS MoA Nov 10-11-12 2014

For those of you who attended our last meeting, you might recall our survey about a conference this fall.   Well it's now official:  The Midwest Management Summit.  Watch the Twitter feeds for #MMSMinnesota and #MMS.  And learn about the conference at our site: http://mms.mnscug.org

And follow MMS on Facebook: https://www.facebook.com/MidwestManagementSummitMN

MNSCUG June Meeting Notes

Fred's notes from the Active Directory Best Practices session by Robert Wakefield with Now Micro:

Backup of group policies via GPMC or script. Most GP admins do not backup group policy objects.

It's not a bad idea to backup directly from the GPMC

Use PowerShell to back up GPOs; this can be scheduled. Note that a group policy backup via PS doesn't get links or security, etc.--only the object itself.
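For example, a scheduled snippet along these lines (the destination path is a placeholder) backs up every GPO in the domain:

# Back up every GPO in the domain to a dated folder (the path is a placeholder).
# Remember: this captures the GPO itself only -- not links or security filtering.
Import-Module GroupPolicy
$dest = "\\fileserver\GPOBackups\$(Get-Date -Format 'yyyy-MM-dd')"
New-Item -Path $dest -ItemType Directory -Force | Out-Null
Backup-GPO -All -Path $dest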

Use a central store to gather and distribute ADMX files.

October 2014 MNSCUG Meeting

The October MNSCUG meeting is Wednesday, October 15th at 4:30 - 7:00pm at Microsoft's Edina HQ building - 6th Floor, Lake of the Woods. Sorry for the late notice, this one snuck up on us.  This month we will be having a good ole Configuration Manager meeting!  Also, we will be holding elections for MNSCUG board members. Get involved. It's well worth it.  Must be present to vote and to run.  Please register in advance so we can get a proper count for food and drink. 

Brian Mason will be presenting WSUS cleanup methods--why, how, and when--specifically for when SUPs are dying and/or WSUS is slowing to a crawl.  He will highlight what you can do to mitigate issues and also share some valuable resources to use for help.

Fred Bainbridge will be going over tips and tricks for successful OSD with 2012 R2.  There are some significant changes in how 2012 R2 does it, and there are some fun scenarios that, if you are not aware of them, will cause you some very real pain.

Eventbrite - October 2014 MNSCUG Meeting

Last, but not least we have something special for attendees that you won't want to miss out on!

See you there!

POSH for adding Security Updates to a Software Update Group in CM12

Ever used ADRs (Automatic Deployment Rules) functionality in CM12?  Use it!  Granted, the search engine in there could use some work, but it's probably the best feature that was added in CM12!   I have to give my buddy Mason credit for this, for he has a lot to do with this addition :).

Anyway, we leverage ADRs in our environment simply for downloading the monthly patches. Then we turn around and add those downloaded patches to our monthly patching groups, which already have a deployment set on them. OK, so we've automated the monthly downloads; now I wanted to automate the group membership addition step as well, for managing and patching our own servers. How do we do that without having to recreate the patch deployment to our servers, or kicking off the patch installs on our servers right away and impacting them?   Granted, we have maintenance windows set for them, but we still wanted to tightly control this process and set the deployment deadline appropriately in the future. So the answer is: we point the update deployment at our Pilot collection first (to avoid impact), then add the newly downloaded patches from the ADRs to our group, and modify the existing deployment's deadline later, once the pilot servers are validated.

So the POSH script below is what I put together for this process.   After the ADRs are done downloading, the intent is to run this script, and here's what it does (a rough sketch of step 3 follows the list):

  1. Loads the CM12 PS module and connects to the CAS
  2. Sets or points the existing deployment to our pilot collection
  3. Grabs the downloaded updates from the ADR groups and adds them to our Software Update Group
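Here is that rough sketch of step 3. This is NOT the attached script: the group and collection names are placeholders, and cmdlet availability (Add-CMSoftwareUpdateToGroup in particular) depends on your ConfigMgr console version.

# Rough sketch only -- not the attached Set-CMUpdateGroupDep.ps1. Group names are
# placeholders, and Add-CMSoftwareUpdateToGroup availability depends on your console version.
Import-Module "$env:SMS_ADMIN_UI_PATH\..\ConfigurationManager.psd1"
Set-Location 'CAS:'    # the PS drive named after your CAS site code

$adrGroup    = 'ADR - Monthly Server Patches'    # the group the ADR downloads into
$targetGroup = 'Monthly Server Patching'         # the group carrying the real deployment

# Grab what the ADR downloaded, skip anything Itanium, add the rest to the target group
Get-CMSoftwareUpdate -UpdateGroupName $adrGroup |
    Where-Object { $_.LocalizedDisplayName -notmatch 'Itanium' } |
    ForEach-Object {
        Add-CMSoftwareUpdateToGroup -SoftwareUpdateGroupName $targetGroup -SoftwareUpdateId $_.CI_ID
    }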

NOTE: You can change that Pilot collection to an empty collection for an added safety measure.  This also ensures that Itanium updates are not added; every now and then, those clowns get added to our group somehow, so this should avoid that. Oh, this also goes hand in hand with my other script/posting, "POSH to remove expired and/or superseded updates from a CM12 Software Update Group".

Use it at your own risk! :)

Set-CMUpdateGroupDep.zip

POSH for Switching from using non-shared to shared SUPDB on CM12 SUPs

It is generally recommended to use a shared SUSDB in your CM12 environment when you have multiple SUPs (Software Update Points) in a single primary site. So, have you ever had the need to switch from SUPs with their own SUPDBs to a shared SUPDB?   We did this simply to avoid clients sitting in a failed state for long periods and to avoid that network cost.   Below is the script that I put together to switch our SUPs from non-shared SUPDBs to a shared SUPDB.

This script pretty much follows the general guideline for setting up your SUPs with a shared SUPDB.   However, since our SUPs were already in place using their own SUPDBs, this will uninstall WSUS off your existing SUP (remove the role/windows feature) so that it can reset which DB it's pointing to.   Then it follows up with the post configuration to put things back, with your SUPs pointing to that single, common shared WSUS database.

General guideline of installing multiple SUPs with Shared SUPDB.

  1. Prepare the Database server, create the share (WSUSContent) and create the WSUS group that has access to the share
  2. Install the first SUP with WSUS pointing to the common SUPDB and move its content to a Central\shared location (copy content)
  3. Install the subsequent SUPs with WSUS pointing to the common SUPDB and move its content to a Central\shared location (-skip copy)

 

Here’s what the menu prompt looks like:

SUPDB

 

Quick breakdown of what each above does:

DB option

  • Creates the WSUSContent directory and shares it out
  • Then it creates the local WSUS Content group

SUP1

  • Removes the role and adds it back
  • Runs the post configuration using WSUSUTIL and points to the remote SUPDB
  • Runs the post configuration using WSUSUTIL and moves the content
  • Adds the SUP1 computername to WSUS Administrators group that has access to the content
  • Sets the Virtual Content access to use a service account (change the user and password in the script)

SUPX

  • Removes the role and adds it back
  • Runs the post configuration using WSUSUTIL and points to the remote SUPDB
  • Runs the post configuration using WSUSUTIL and moves the content with -skipcontent
  • Adds the SUPX computername to the WSUS Administrators group that has access to the content
  • Sets the Virtual Content access to use a service account (change the user and password in the script)

Lastly, this creates a log, $scriptname.log, in the same folder you run the script from.
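For reference, the WSUS-side plumbing those steps describe looks roughly like this hedged sketch; the server names, paths, and content share are placeholders, and the attached script does much more around it:

# Placeholder names/paths -- adjust for your environment. Run on the SUP being (re)built.
Install-WindowsFeature -Name UpdateServices-Services, UpdateServices-DB -IncludeManagementTools

$wsusutil = "$env:ProgramFiles\Update Services\Tools\wsusutil.exe"

# Post-install: point this WSUS instance at the shared SUSDB on the remote SQL server
& $wsusutil postinstall "SQL_INSTANCE_NAME=SUPDBSERVER" "CONTENT_DIR=D:\WSUS"

# First SUP: move the content to the central share (copies the files)
& $wsusutil movecontent "\\SUPDBSERVER\WSUSContent" "C:\Windows\Temp\wsus-movecontent.log"

# Subsequent SUPs: same command with -skipcopy, since the content is already there
# & $wsusutil movecontent "\\SUPDBSERVER\WSUSContent" "C:\Windows\Temp\wsus-movecontent.log" -skipcopy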

Again, use this at your own risk! But I hope it helps! :)

 

NOTES:

  • Script only supports Windows Server 2012 and/or Windows Server 2012 R2
  • The WSUSDB server also holds the WSUSContent share here as well. You can change that in the script if you'd like :). And it obviously requires IIS on the SUPDB server :)
  • Run this script locally on the box you are working on. You will need to run this on the remote database server first to prep the DB, then run it on SUP1, SUP2, SUP3 and so on, following the guideline above.
  • Pay attention to the variables that are set at the top of the script.   It is also domain aware, so change the variables in there according to the domain environments you have. This is really useful for folks that have Lab and Production environments: you make one script change and it applies to both for consistency.

Install-SUPSharedDB.zip

 

 

 

 

POSH to import new machine objects for imaging along with OSD Variables

For the longest time, I couldn't find a good way to quickly and easily import a machine into CM12 for imaging along with the OSD variables necessary to properly image our servers/workstations.  Now that we have CM12 R2, a new cmdlet, "New-CMDeviceVariable", is put to use!  Here's a POSH script I put together for importing new machine(s) into CM12 along with OSD variables.   It reads the .CSV file you provide (you will be prompted for its path and name) and imports the machines that are in that file, line by line.  This script also detects which domain you're in, so you can set certain variables depending on whether you're working in your Lab or in your Production environment.   Just crack this posh open and change the necessary variables in there to match your settings.   Below is the script, along with a sample .csv in the required format.
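The core of the approach looks roughly like the sketch below. This is NOT the attached script: the CSV column names and the site drive are assumptions for illustration, and the imported device record may take a moment to appear before the variable can be added.

# Rough sketch only -- not the attached script. CSV columns (Name, MACAddress, Collection,
# OSDVariable, OSDValue) and the 'P01:' site drive are assumptions; match them to your .csv.
Import-Module "$env:SMS_ADMIN_UI_PATH\..\ConfigurationManager.psd1"
Set-Location 'P01:'

$csvPath = Read-Host 'Path and name of the .csv to import'
foreach ($row in (Import-Csv $csvPath)) {
    # Create the device record so it can be targeted for OSD
    Import-CMComputerInformation -ComputerName $row.Name -MacAddress $row.MACAddress `
        -CollectionName $row.Collection

    # Attach the per-device OSD variable (New-CMDeviceVariable is new in CM12 R2);
    # the device record may need a moment to show up after the import
    New-CMDeviceVariable -DeviceName $row.Name -VariableName $row.OSDVariable `
        -VariableValue $row.OSDValue -IsMask $false
}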

 

Import-Server-OSDVariables.ZIP

 

 

POSH to remove Expired and/or Superseded Updates from a CM12 Software Update Group

To this date, there's still no CM12 cmdlet that would help remove updates from Software Update Groups.   That makes it cumbersome, on a monthly basis, to remove the expired and superseded updates from these groups... Just a lot of clicking! :) Here's PowerShell code that I threw together to try to reduce my mouse clicks every patch cycle :).   This code will prompt you for which updates you'd like to process or remove from the given group, E for expired or S for superseded.   I suppose I could add that as another parameter, but then it'd be too much typing :).   Alright, I'm fairly new to POSH, so don't judge!

Usage :  Remove-ExpAndSuperseded.ps1 <CAS Server Name> <sitecode> '<Target SUP Group>'
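For reference, since there is no cmdlet for this, the general WMI approach behind a script like this looks roughly like the sketch below. It is NOT the attached script; it strips both expired and superseded updates in one pass instead of prompting for E or S, and the names are placeholders.

# Rough sketch of the WMI route only -- not the attached script.
param($SiteServer = 'CAS01', $SiteCode = 'CAS', $GroupName = 'Monthly Server Patching')
$ns = "root\sms\site_$SiteCode"

# Get the update group, then re-fetch it by path so the lazy 'Updates' (CI_ID array) property is populated
$group = Get-WmiObject -ComputerName $SiteServer -Namespace $ns -Class SMS_AuthorizationList `
         -Filter "LocalizedDisplayName='$GroupName'"
$group = [wmi]"$($group.__PATH)"

# CI_IDs of everything currently expired or superseded
$stale = Get-WmiObject -ComputerName $SiteServer -Namespace $ns -Query `
         "SELECT CI_ID FROM SMS_SoftwareUpdate WHERE IsExpired = 1 OR IsSuperseded = 1" |
         Select-Object -ExpandProperty CI_ID

# Keep only the still-valid updates and write the group back
$group.Updates = @($group.Updates | Where-Object { $stale -notcontains $_ })
$group.Put()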

 

Updated: 7/8/2014

Remove-ExpAndSuperseded.zip

Selectively Disable Software Distributions and Application Deployments on Clients

Credit to Niall Brady of http://windows-noob.com fame (Thanks Niall!)

You might want to set certain computers with the Configuration Manager client agent installed apart by disabling their ability to install or run available (optional) or required (mandatory) Application Deployments, or Packages/Advertisements. You could achieve this by moving the computer into a collection which is excluded from all deployments, but what if someone accidentally added should-be-excluded computers to a collection containing a required deployment?  It could lead to a reboot or something entirely worse. The ability to disable the Software Distribution Agent and the Application Deployment Agent would indeed be useful in this scenario.

Normally you can enable or disable System Center 2012 R2 Configuration Manager client agent functionality via client settings in the console; however, the Software Distribution Agent and the Application Deployment Agent are an exception. For those agents, a local policy can be used to disable them; to re-enable them when needed, that local policy is deleted, allowing the site-wide settings to be reapplied.  Although it is certainly possible to do this using "mof" files and importing them, the method outlined here will use Compliance Settings to disable or enable those two agents on one or more computers depending on the collection they are in.

How To Step 1:  Attached --> Here <-- are two files.  In your console, under Compliance Settings, import both of those .cab files by right-clicking on Configuration Baselines, Import, and pointing to each of those .cab files.  Once imported, deploy the "Disable Software Distribution and Application Deployments" baseline to a previously-created-by-you collection of computers on which you wish to disable software deployments.  Very carefully make sure you check the box about "Remediation".  In general, I recommend a schedule of daily; but really, once this is deployed those clients have this local policy, so weekly is likely frequent enough.

And... that's it.  You are really done at this point. 

Optional Step 2:  The baseline disables those two agents on clients that run it (with remediation enabled), but it was mentioned to me (Todd Hemsell and Eswar Koneti pointed it out) that it may still be possible for a person interactively logged on to the machine to see deployments in Software Center and choose to manually install them.  If you want to prevent that possibility as well, this should work (but test; I didn't test this myself): in your console, Administration, Client Settings, create a Custom Client Agent Device Setting; add Computer Agent, and in there set "Install Permissions" to "No Users", and deploy that custom client device setting to the same collection.

...time passes...

So, now it's months/years later.  And you want to either find out how many local policies are out there in your environment, or want to remove those local policies.  If you want to just inventory to find them, implement this: http://myitforum.com/cs2/blogs/skissinger/archive/2009/07/06/hardware-inventory-mof-edit-for-local-policies.aspx .  If you want to undo those local policies which were created by the Baseline, first remove the existing deployment; and then deploy the other Baseline of "Remove Previously Created Local Policies to Disable SWDist and App Deployment" (One of the baselines which you imported from the .zip attachment above, but never deployed).  Once the local policy is removed, whatever site-wide settings you have in default Client settings will be applicable to those machines again.

 

Copyright © 2018 - The Minnesota System Center User Group