ConfigMgr 2012 OSD Notes

Attached is the slide deck and the refreshMP script that I referenced during the OSD presentation at the October MNSCUG meeting.  The script should be copied to the device early in the task sequence and then run after every reboot via a Run Command Line step and a static path to the .vbs file.  This helps avoid problems where the device can't contact the MP/DP after a reboot.  Be aware that an application/package installation returning a 3010 will reboot the task sequence unless you configure the package/application itself not to.  Know where your reboots are happening so you can run this script after each reboot.
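
The attached .vbs is the authoritative version.  For illustration only, here is the same general idea as a minimal PowerShell sketch: it simply re-asserts the client's assigned site, which forces the client to re-evaluate its management point (the Microsoft.SMS.Client COM object and its methods are standard client SDK pieces, not necessarily what the attached script uses):

 # Minimal sketch only: re-assert the assigned site so the client re-evaluates its MP.
 $client = New-Object -ComObject 'Microsoft.SMS.Client'
 $siteCode = $client.GetAssignedSite()            # current assigned site code
 $client.SetAssignedSite($siteCode)               # re-assigning it kicks off MP re-evaluation
 Write-Host "Assigned site: $siteCode"
 Write-Host "Current MP: $($client.GetCurrentManagementPoint())"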

Rumor has it that if the Configuration Manager client has the CU2 update installed, this reboot issue is a non-issue.  Give it a shot and let me know.

Takeaways from the presentation - 

  • Know your application exit codes
  • Be prepared to break down the app model if it has reboots
  • Application Model works fine with OSD.
  • Configure appropriate task sequence variables for your environment (see the sketch after this list).
  • Make sure your problems are not external to ConfigMgr.  Networking issues perhaps?
  • Get statistical significance with your builds.  1 successful build is useless.  10 in a row is a good start.
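
On the task sequence variables point above, here is a minimal sketch of reading and setting variables from inside a running task sequence (the variable name MyRebootMarker is just a made-up example):

 # Only works while a task sequence is running; outside of one this COM object doesn't exist.
 $tsenv = New-Object -ComObject 'Microsoft.SMS.TSEnvironment'
 $assignedMP = $tsenv.Value('_SMSTSMP')           # read a built-in variable (the MP the TS is using)
 Write-Host "Task sequence is using MP: $assignedMP"
 # Set a custom variable (hypothetical name) that a later step could check,
 # for example to decide whether the refreshMP script needs to run again after a reboot.
 $tsenv.Value('MyRebootMarker') = 'PostAppInstall'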

Here are good references for building your OSD Task Sequence - 

http://blogs.msdn.com/b/steverac/archive/2008/07/15/capturing-logs-during-failed-task-sequence-execution.aspx

http://technet.microsoft.com/en-us/library/hh273375.aspx

I can be reached @FredBainbridge.  Thanks!

OSD Presentation Slidedeck

RefreshDefaultMP Script

Gather some Adobe Serial Numbers and Version using ConfigMgr Compliance Settings and Hardware Inventory

Update to an older blog entry...

http://www1.myitforum.com/2012/06/13/gather-some-adobe-software-serial-numbers-using-configmgr-dcm-and-hardware-inventory/ :

Because this thread: http://social.technet.microsoft.com/Forums/en-US/configmgrinventory/thread/7243fac9-36c4-4d1f-9b2b-eb1b2f53ed87, got me thinking about it, I went to the Adobe blog entry they referenced here: http://blogs.adobe.com/oobe/2009/11/software_tagging_in_adobe_prod_1.html

I searched our lab for a couple of clients with full Adobe products, and lo and behold… found the .swtag files mentioned. Interestingly, that blog was a little misleading–it didn't seem to cover some of the tags that are really in the .swtag files for serial number, version, etc… so I doubt the script (attached) will actually find everything. But it's a start; so I thought I'd throw this out into the wild (blog it) and see what others can make of it.

Attached is a script, which you'd run similar to the "all members of all local groups" type of thing–run it on clients (either as a recurring advertisement or as a DCM ConfigItem, with no validation), plus the sms_def.mof edit to pull the info back into your DB. Some of what it returns you'll already have from ARP (name, version), but the golden nuggets of info are the SerialNumber, and whether it's part of a Suite (according to that blog, anyway). There's also something about "licensedState": one of my test boxes had a serial number but said it was unlicensed. Not sure what that is really about–maybe the human didn't click on something after launching to register online? Not sure. But hey, that field is there if it means anything. You could always set that to FALSE in the mof if that LicenseState information is pointless.
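
If you want to poke at the raw .swtag data yourself before wiring up the script and mof edit, here's a minimal sketch.  The search roots below are assumptions (Adobe drops the tag files in different places depending on product and version), and it simply dumps every leaf element so you can see which tag names your products actually use:

 # Dump the leaf elements of any Adobe .swtag files found under a few likely roots.
 # Search roots are assumptions; adjust for where your Adobe products drop their tag files.
 $roots = @($env:ProgramData, $env:ProgramFiles, ${env:ProgramFiles(x86)}) | Where-Object { $_ }
 $tagFiles = foreach ($root in $roots) {
     Get-ChildItem -Path $root -Filter '*.swtag' -Recurse -ErrorAction SilentlyContinue
 }
 foreach ($file in $tagFiles) {
     [xml]$doc = Get-Content -Path $file.FullName
     # //*[not(*)] = every element with no child elements, i.e. the leaf values
     # (serial number, version, part-of-suite, license state, and so on live here)
     foreach ($node in $doc.SelectNodes('//*[not(*)]')) {
         '{0} : {1} = {2}' -f $file.Name, $node.LocalName, $node.InnerText.Trim()
     }
 }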

What was nice about the above routine was that in the "partofasuite" returned results, it would say "Std" or "Pro" right in there, so that when the licensing folk would come knocking and ask for your pro vs std counts, it was relatively easy to run a report, and show them exactly what you had out there, based on Adobe's own information. With the "DC" version, they've apparently decided to make it even MORE difficult to tell the difference between Pro vs. Std.

Here's a new link to their swid tag information: http://www.adobe.com/devnet-docs/acrobatetk/tools/AdminGuide/identify.html

Fortunately, the Script + Mof edit will pull back all of the information necessary to tell the difference; it just makes reports more, uh... "fun".

http://www.adobe.com/devnet-docs/acrobatetk/tools/AdminGuide/identify.html#identifying-dc-installs

and basically you'll see that for Std, the serial numbers start with 9101, and for Pro, the serial numbers start with 9707.

Here's a sample report, once you've created the ConfigItem and Baseline, deployed it, imported the mof snippet into inventory, and started getting back results:

This sample report is ONLY for Acrobat; there are other Adobe products returned by the AdobeInfo routine, so this is just a sample report and is not meant to showcase everything returned.

;with cte as (
Select distinct a.resourceid, Case when a.SerialNumber0 like '9101%' then 'Std'
when a.SerialNumber0 like '9707%' then 'Pro' end as 'Type',
Case when a.PartOfASuite0 like 'v7%' then 'DC'
when a.PartOfASuite0 like 'v6%' then '11'
when a.PartOfASuite0 like 'Acrobat%' then '10' end as 'Version'
from v_gs_AdobeInfo0 a
where a.PartOfASuite0 like 'v%Acrobat%' or a.PartOfASuite0 like 'Acrobat%'
)
select cte.version as [Acrobat Version] , cte.type as [Acrobat Type] ,count(*) as 'Count'
from cte group by [version], [type]
order by [version], [type]
would result in something sorta like this (#'s have been changed from my production environment to fake #'s)
Acrobat Version   Acrobat Type   Count
10                Pro            20
10                Std            15
11                Pro            300
11                Std            210
DC                Pro            700
DC                Std            800
Of course, the best part of this routine is *if* Adobe comes knocking, you can show them that the information about pro vs. std originates from their SWID tag files, and you can point to their web site about how to tell the difference, so they should be satisfied and quickly leave you alone (unless, of course... you did deploy Pro to all of your environment, and you thought you were deploying Standard... well, then... pay up...)

--> Link <-- to get the mof file for importing for ConfigMgr Inventory, and the script to add to a Configuration Item (or you could deploy it as a recurring Advertisement, if you are averse to Configuration Baselines).  Basically, the client, on a recurring basis, needs to run the script to populate--or wipe and re-populate--the custom WMI location with the Adobe swid tag information.

How to melt a SUP

We have 3 primary sites under a CAS (bad, but we have no choice with so many clients). Because we also have Nomad, we don't care where clients get assigned. We care only that each site has roughly the same client count as the others. But we drifted about 30K clients too many on one site and simply made use of CM12 R2's function to move clients. So we moved them to level set the count.

The downside, and we knew this, was that each client would have to do a full inventory and SUP scan. That's a lot of traffic but we've done this before without issue. But this time we melted the SUPs with many full scans. And the wonderful Rapid Fail detection built into IIS decided to protect us by stopping our WSUS App pool. Late at night.

Now in CM12 post SP1 (we're on R2), clients make use of the SUPList which is a list of all possible SUPs available. Clients find one SUP off that list and stick to it. They never change unless they can't reach their SUP after 4 attempts (30 minutes between each - the 5th attempt is to a new SUP). Well with the app pool off, all clients trying to scan would fail and start looking for new SUPs. A new SUP means a full scan. A full scan from 110K clients is far worse than from just 10K when we're moving things. Needless to say our SUPs were working very hard the next morning to serve clients. On a normal day the NIC on one of our SUPs shows about 1Mbps of traffic, but after starting the WSUS App pool we were at over 850Mbps going out per SUP.

Disabling Rapid Fail is one nice fix to help keep that app pool from stopping, but we also increased the app pool's Private Memory Limit from the default 5 GB to 20 GB (the SUPs have 24 GB of RAM, so we were clearly wasting most of that). I know of another company with 85K clients on 2 SUPs who boosted their RAM from 24 GB to 48 GB to help IIS serve clients. Another option is to add more SUPs, but RAM is probably cheaper than another VM. Since the default Private Memory Limit is 5 GB, for those of us weirdos with lots of RAM it makes sense to crank it up if you can. We actually did this long ago, but we're thinking the Server 2012 R2 upgrade over Server 2012 wiped our settings out.
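
For reference, both app pool changes can be scripted; here's a minimal sketch (WsusPool is the default WSUS application pool name, and the privateMemory value is expressed in KB):

 Import-Module WebAdministration
 # WsusPool is the default WSUS application pool name; adjust if yours differs.
 $pool = 'IIS:\AppPools\WsusPool'
 # Raise the Private Memory Limit from the 5 GB default to 20 GB (value is in KB).
 Set-ItemProperty -Path $pool -Name 'recycling.periodicRestart.privateMemory' -Value 20971520
 # Keep Rapid Fail Protection from shutting the pool down during a scan storm.
 Set-ItemProperty -Path $pool -Name 'failure.rapidFailProtection' -Value $false
 Restart-WebAppPool -Name 'WsusPool'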

By the way, the obvious 'treatment' during such a meltdown is to throttle IIS. We set our servers down to 50 Mbps and the network team was happy; your setting will vary based on client count and bandwidth. Our long term insurance here will be QoS. UPDATE: Jeff Carreon just posted a tidbit on how to throttle quickly in case of an emergency using PowerShell.
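
The quick throttle itself is just an IIS site limit; here's a minimal sketch of one way to set it from PowerShell (not necessarily Jeff's method; 'WSUS Administration' is the default WSUS site name, and maxBandwidth is in bytes per second, so 50 Mbps is roughly 6,250,000):

 Import-Module WebAdministration
 # 50 Mbps is roughly 6,250,000 bytes per second.
 Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
     -Filter "system.applicationHost/sites/site[@name='WSUS Administration']/limits" `
     -Name 'maxBandwidth' -Value 6250000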

So how do we keep our settings? We ask Sherry who knows DCM! Read more on her CIs to enforce our settings here.

Java Software Metering with CM - Java 7 End of Life

It is almost that time: another Java runtime will go end of life in just over a month.  This means we have only a month left to finish the Java upgrade we have already started, right?  Well, I have lived through a few of these Java events over the years and they really don't seem to get any easier, or even ever really end.  In fact I am seriously considering removing Java from users' systems.  The only issue is: who is actually using it for work-related purposes?  That has always been the million dollar question to me.

Turns out if you pay for Java support there are some tools that can help you determine this sort of thing.  So where does that leave the rest of us who are not as fortunate as the aforementioned minority?  Well, fortunately for us, Oracle did us all a small favor late last year and built some usage tracking mechanisms into the JREs we are already using.  Turns out "Usage Tracker is available in Oracle Java SE Advanced and Oracle Java SE Suite versions 1.4.2_35 and later, 5.0u33 and later, 6u25 and later, 7 and later, and 8 and later.".  (http://docs.oracle.com/javacomponents/usage-tracker/overview/index.html)  Think of this as software metering for the Java plug-ins and VMs which run on your systems; it logs each user's data into a log file in their user profile.

I just had to try this, so I followed the instructions, dropped the usagetracker.properties file in the correct directory and then fired up a browser and ran a few Java plug-ins.  All of the data was right there, in a little txt file in my user profile.  So now what?  Turns out there are a few catches to all of this logging.

  • A properties file must be in the appropriate directory for each JRE if you want to log data.  For better or worse, maybe some machines have more than one JRE installed.
  • The default delimiter in their tracking file was a comma.  Typically this is great, until I noticed there are no text qualifiers in the data elements. Formatting nightmares.
  • The log files are stored in the user's profiles by default.  Typically a system should only have one user, but this is not always the case either.  So we need to aggregate the data together.

So based on my own initial assessment I came up with a few functional requirements on how I would want a data collection to work for this.

  • Enabling logging for all JRE's installed on the system.
  • Use a delimiter character that would be less likely to show up in the command line options very often; I chose a caret '^' for this.
  • Enumerate all of the user profiles and centrally store the data on the system.

Based on how Java usage tracking works, and how I wanted to see it work, I set up a PowerShell compliance script that performs the following actions.

  • Logs all script activity to the CM client logs directory (CM_JavaUsageLogging.log) when logging is enabled.  This is the default; it can be disabled by changing $LoggingEnable to $false at the top of the compliance script.
  • Queries the registry for installed JREs and creates the usagetracker.properties file in each lib/management folder to enable logging for all instances (a simplified sketch of this piece follows the list).
  • Merges all of the data from all of the tracking logs on the system and adds the user who executed the VM or plug-in to the dataset.
  • Creates a CM_JavaUsageTracking WMI class to store the data centrally on the system.  Then we can pull it off with hardware inventory!
  • Only adds new entries on subsequent executions.  The data in WMI can be inventoried.  (MOF below)
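
Here is the simplified sketch of the JRE-enumeration/properties-deployment piece mentioned above.  The properties-file staging path is hypothetical, and its contents (the '^' separator, log location, and so on) should follow the Oracle usage tracker documentation linked above; the full compliance script in the cab below also handles the log merging and the CM_JavaUsageTracking WMI class:

 # Simplified sketch of the 'enable logging for every installed JRE' piece only.
 # Assumption: you've already authored a usagetracker.properties per the Oracle doc
 # and staged it at $propsSource (hypothetical path).
 $propsSource = 'C:\Temp\usagetracker.properties'
 $regRoots = @('HKLM:\SOFTWARE\JavaSoft\Java Runtime Environment',
               'HKLM:\SOFTWARE\Wow6432Node\JavaSoft\Java Runtime Environment')
 $javaHomes = foreach ($root in $regRoots) {
     if (Test-Path $root) {
         Get-ChildItem -Path $root | ForEach-Object {
             (Get-ItemProperty -Path $_.PSPath -ErrorAction SilentlyContinue).JavaHome
         }
     }
 }
 foreach ($javaHome in ($javaHomes | Where-Object { $_ } | Sort-Object -Unique)) {
     $mgmt = Join-Path -Path $javaHome -ChildPath 'lib\management'
     if (Test-Path $mgmt) {
         Copy-Item -Path $propsSource -Destination (Join-Path $mgmt 'usagetracker.properties') -Force
     }
 }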

jutci

The cab located here can be downloaded, tested, and used to get you started on your way.  Please note this is a work in progress, so I will update this file with the new changes once they are ready.  If you have any feedback on the compliance script (it is welcome), please let me know.  This has been tested with PowerShell 2.0 and above, but as always, test it first to verify everything works in your environment.

The data in WMI can be inventoried!  So after running this script on a system, connect to it and add CM_JavaUsageTracking to hardware inventory in CM; now you have Java software metering, in a sense.  This is still a work in progress; here are a few more items I still want to add and clean up.

  • The compliance rule is stating non-compliant even though the script appears to complete.
  • Add a day count rolling history feature so data older than 'x' number of days is removed from WMI and not edited.  This would allow a limit per system on the collected data.
  • Test and validate support for 64-bit JREs, I have 99% 32-bit so this was my priority.

For those of you concerned about how much space this will require in your CM database, I checked, and in my case 30 days of data from approximately 2500 systems was a table around 50 MB.  This will vary greatly depending on how many Java plug-ins are in use in your environment.  Data is now being collected and I can sit back, see which sites users are using, and determine what I am going to do about it.

Happy data spelunking!

Using Configuration Items instead of Software Inventory

Issue to be solved: Software Inventory (which is really FILE inventory), as you've probably noticed, takes FOREVER when you define a rule like "dropbox.exe on c:\users\"; the clients take several minutes to run that query, and the more rules you make, the longer it might take your clients to run software inventory.

Resolution:  whenever possible, forget creating software (file) inventory rules.  This blog post will show how to set up a rule looking for dropbox.exe on c:\users\, and you can get back File Version so you can run reports.

Take the attached --> here <-- and import it into your console, under Configuration Items.  Create a Baseline and deploy that Baseline to a target collection.  What should happen is that if the file dropbox.exe is in c:\users\ somewhere, those boxes will report non-compliant, and will also report the FileVersion and the path where dropbox.exe is located.
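
For reference, the discovery side of such a CI can be as small as the sketch below (the attached CI is the tested version, and the exact compliance-rule wiring lives in the attachment); it echoes the file version and path for any dropbox.exe found under c:\users\:

 # Discovery script sketch: report version + path of any dropbox.exe under C:\Users.
 # No output means nothing was found; any output means the box reports non-compliant.
 Get-ChildItem -Path 'C:\Users' -Filter 'dropbox.exe' -Recurse -ErrorAction SilentlyContinue |
     ForEach-Object {
         '{0} | {1}' -f $_.VersionInfo.FileVersion, $_.FullName
     }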

Why I used dropbox.exe... dropbox.exe may not be listed in Installed Software, nor in Add/Remove Programs.  It might be listed in ccm_recentlyusedapps.  Using this as a sample, after you've deployed this, run this .sql query to see which computers, the version, and path.

select
  s1.netbios_name0,
  ci.displayname,
  rooles.RuleName,
  perclientdetails.DiscoveredValue,
  perclientDetails.InstancePath as 'Found In'
from
v_localizedciproperties ci
join vDCMDeploymentNonCompliantRuleDetailsPerClientMachine perclientDetails
 on perclientdetails.ci_id=ci.ci_id
join v_CIRules rooles on rooles.rule_id=perclientdetails.rule_id
join v_r_system s1 on s1.ResourceID=perclientDetails.ItemKey
where
  ci.displayname = 'File Inventory Dropbox.exe'
 and
 ci.localeid = 1033
order by s1.Netbios_Name0

One interesting caveat: every time you change a ConfigItem (add something), the vDCMDeploymentNonCompliantRuleDetailsPerClientMachine view will "reset", so if you don't want to "lose" history, you'll likely want to simply make more CIs, not edit existing ones.

This sample was to show you can get version of any dropbox.exe file, for reporting purposes.  If, for example, what you really need to be able to do is create a collection of "machines where widgets.exe located in c:\program files\widgets is less than version 5.4.3.2", then make a ConfigItem for widgets.exe, in that folder, and "compliant" means version is greater than or equal to 5.4.3.2.  You can then easily right-click the CI and make a collection of Non-Compliant machines.

I encourage you to test it out for yourself and see how quickly a CI runs on a client vs. software (file) inventory for the same file.

Visual Studio 2017 Editions using ConfigMgr Configuration item

This is a companion to https://mnscug.org/blogs/sherry-kissinger/416-visual-studio-editions-via-configmgr-mof-edit It *might* be a replacement for the previous mof edit; but I haven't tested this enough to make that conclusion--test yourself to see.

Issue to be resolved:  there are licensing groups at my company who are tasked with ensuring licensing compliance.  There is a significant difference between Visual Studio costs for Standard, Professional, and Enterprise.  Prior to Visual Studio 2017, that information was able to be obtained via registry keys, and a configuration.mof + import (see link above) was sufficient to obtain that information.

According to https://blogs.msdn.microsoft.com/dmx/2017/06/13/how-to-get-visual-studio-2017-version-number-and-edition/ (looks like published date is June, 2017), that information is no longer in the registry.  There is a uservoice published --> https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/19026784-please-add-a-documentation-about-how-to-detect-in <--, requesting that the devs for visual studio put that back--but there's no acknowledgement that it would ever happen.

So that means that we lonely SCCM Administrators, tasked with "somehow" getting the edition information to the licensing teams at our companies, have to--yet again--find a way to "make it happen" using the tools provided.  So here's "one possible way". 

This has only been tested on ONE device in a lab... so it's probably not perfect.  Supposedly, using the -legacy switch it'll also detect "old versions" installed--but I have no idea if that works or not.  Might not.

Here's how I plan on deploying this...

1)  Configuration Item, Application Type.
    a) "Detection Method", use a PowerShell script... this may not be universal, but currently in my lab the location of 'vswhere.exe' is consistently the same.  Here's hoping it'll not change.  So the detection logic for the CI to bother to run at all would be "do you have vswhere.exe where I think it should be":

 $ErrorActionPreference = 'SilentlyContinue'
 $location = ${env:ProgramFiles(x86)} + '\Microsoft Visual Studio\Installer\vswhere.exe'
 if ([System.IO.File]::Exists($location)) {
   write-host $location
 }

    b) Setting, Discovery Script, see the --> attached <-- .ps1 file.  Compliance Rule would be just existential, any result at all.
2)  Deploy that CI in a Baseline, as 'optional'; whether I just send it to every box everywhere or create a collection of machines with Visual Studio 2017 in Installed Software--either way should work.
3)  Once Deployed and a box with Visual Studio 2017 has run it, confirm that a sample box DOES create a root\cimv2, cm_vswhere class, and there is data inside.
4)  Enable inventory
    a) In my SCCM Console, Administration, Client Settings, right-click Default Client Settings, properties
    b) Hardware Inventory, Set Classes...
    c) Add...
    d) Connect... to the computer you checked in step 3 above (where you confirmed there is data locally on that box in root\cimv2, cm_vswhere), using the namespace root\cimv2
    e) find the class "cm_vswhere"  check the box, OK. OK. OK.
5) monitor
    a) on your primary site, <installed location for SCCM>\Logs, dataldr.log 
    b) It'll chat about pending adds in the log.  Once that's done, you'll see a note about how it made some views for you.  "Creating view for..."
6) Wait a day, and then look if there is any information in a view probably called something like... v_gs_cm_vswhere.  But your view might have a different name--you'll just have to look.
    a) if you're impatient, back on that box from step 3 above, do some policy refreshes, then a hardware inventory.
7) End result: you should get information in the field "displayName0", like "Visual Studio Professional 2017", and you'll be able to make custom reports using that information.  Which should hopefully satisfy your licensing folks.

To reiterate... tested on ONE box in a lab.  Your mileage may vary.  Additional tweaks or customizations may be needed to the script.  That's why in the script I tried to add a bunch of 'write-verbose'.  If you need to figure out why something isn't working right, change the VerbosePreference to Continue, not SilentlyContinue, and run it interactively on a machine--to hopefully figure out and address any unanticipated flaws.
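
For reference, the attached discovery script boils down to something like the sketch below: run vswhere.exe, then park the results in a cm_vswhere class in root\cimv2 so hardware inventory can pick it up.  This is only a simplified sketch (the -all/-legacy/-format json switches and the JSON property names are my understanding of how vswhere behaves); the attached .ps1 with its write-verbose lines is what was actually tested:

 $vswhere = ${env:ProgramFiles(x86)} + '\Microsoft Visual Studio\Installer\vswhere.exe'
 if (-not [System.IO.File]::Exists($vswhere)) { return }

 # -all includes incomplete instances; -legacy attempts to report pre-2017 installs too.
 $json = & $vswhere -all -legacy -format json
 $instances = $json -join "`n" | ConvertFrom-Json

 # Wipe and re-create the custom class each run.
 try { ([wmiclass]'root\cimv2:cm_vswhere').Delete() } catch { }
 $class = New-Object System.Management.ManagementClass('root\cimv2', [string]::Empty, $null)
 $class['__CLASS'] = 'cm_vswhere'
 $class.Qualifiers.Add('Static', $true)
 $class.Properties.Add('instanceId', [System.Management.CimType]::String, $false)
 $class.Properties['instanceId'].Qualifiers.Add('Key', $true)
 $class.Properties.Add('displayName', [System.Management.CimType]::String, $false)
 $class.Properties.Add('installationVersion', [System.Management.CimType]::String, $false)
 $class.Properties.Add('installationPath', [System.Management.CimType]::String, $false)
 $null = $class.Put()

 foreach ($vs in $instances) {
     $inst = $class.CreateInstance()
     $inst['instanceId']          = [string]$vs.instanceId
     $inst['displayName']         = [string]$vs.displayName
     $inst['installationVersion'] = [string]$vs.installationVersion
     $inst['installationPath']    = [string]$vs.installationPath
     $null = $inst.Put()
 }

 # Any output at all satisfies the existential compliance rule.
 Write-Host "cm_vswhere instances written: $(@($instances).Count)"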

Visual Studio Editions via ConfigMgr MOF Edit

This ask came up recently, so I did a bit of research.  It is NOT, I repeat NOT perfect, and NOT complete.  I did NOT include all of the possible "Team" editions for Visual Studio 2005 or Visual Studio 2008. But for Visual Studio 2010, 2012, 2013, and (I think) 2015, the editions should be reported correctly. If you want to (need to) add in all of the potential "team" editions for Visual Studio 2005/2008, you can use this as a guide and add in all the additional columns for those.

If you're familiar with the "DotNetFrameworks" mof edits, it's similar to that type of MOF edit.  --> Attached <-- are what you would add to the bottom of your configuration.mof file in <installed location>\inboxes\clifiles.src\hinv, and the snippet you would import into your "Default Client Settings", Hardware Inventory, and then enable (or create a custom client agent setting to enable it only for a specific collection of machines).
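
If you'd like to eyeball the raw data on a box before committing to the mof edit, here's a minimal sketch that walks the servicing registry keys described in the posts linked at the bottom of this entry.  The DevDiv\vs\Servicing path and the edition subkey names are my reading of those posts (drop the Wow6432Node piece on 32-bit Windows), so treat this as a starting point and verify against the attached configuration.mof, which is the authoritative list:

 # Rough check of Visual Studio editions via the servicing registry keys.
 # Key paths and edition names are assumptions taken from the linked posts.
 $versions = @{ '10.0' = '2010'; '11.0' = '2012'; '12.0' = '2013'; '14.0' = '2015' }
 $editions = 'ultimate','enterprise','premium','professional','standard','community'
 foreach ($ver in $versions.Keys) {
     foreach ($ed in $editions) {
         $key = "HKLM:\SOFTWARE\Wow6432Node\Microsoft\DevDiv\vs\Servicing\$ver\$ed"
         $install = (Get-ItemProperty -Path $key -ErrorAction SilentlyContinue).Install
         if ($install -eq 1) { '{0} : {1}' -f $versions[$ver], $ed }
     }
 }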

Using a sql query like this, you can then pull out the "highest" Edition.

select
sys1.netbios_name0 as 'computername',
vised.version0 as 'Visual Studio Version',
case when vised.ultimate0 = 1 then 'Ultimate'
  when vised.Enterprise0 = 1 then 'Enterprise'
  when vised.Premium0 = 1 then 'Premium'
  when vised.Professional0 = 1 then 'Professional'
  when vised.Standard0 = 1 then 'Standard'
  when vised.Community0 = 1 then 'Community' end as 'Highest Edition Installed'
from v_gs_visualstudioeditions0 vised
join v_r_system sys1 on sys1.resourceid=vised.resourceid
where ( vised.professional0 is not null or vised.premium0 is not null or
        vised.ultimate0 is not null or vised.standard0 is not null or
        vised.community0 is not null or vised.enterprise0 is not null
)
and vised.version0 in ('2005','2008','2010','2012','2013','2015')
order by sys1.netbios_name0, vised.version0

which could help you make a report that could look like this. Computer names have been changed, but note that Computer3 and Computer5 have two versions of Visual Studio, and then their editions:
VisualStudioEditions

Sources for where I got these regkeys:

for Visual Studio 2005: http://blogs.msdn.com/b/heaths/archive/2006/12/17/detecting-visual-studio-2005-service-pack-1.aspx
  There may be more subkeys for Team System, but I didn't grab them.
for Visual Studio 2008: http://blogs.msdn.com/b/heaths/archive/2009/05/29/detecting-visual-studio-2008-service-pack-1.aspx
   There's a WHOLE bunch of VSDB, VSTA, VSTD, VSTS, VSTT for all the team System 2008 editions
for Visual Studio 2010: http://blogs.msdn.com/b/heaths/archive/2010/05/04/detection-keys-for-net-framework-4-0-and-visual-studio-2010.aspx
    Note, Ultimate replaces Team Suite
for Visual Studio 2012: http://blogs.msdn.com/b/heaths/archive/2012/08/03/detection-keys-for-visual-studio-2012.aspx
for Visual Studio 2013: Couldn't find a direct link, but found a note that there's Professional, Premium, and Ultimate, so guessing it's these. And data comes back from clients, so appears to work.
for Visual Studio 2015: http://blogs.msdn.com/b/heaths/archive/2015/04/13/detection-keys-for-visual-studio-2015.aspx
   Note, Enterprise replaces premium and ultimate

 

Copyright © 2018 - The Minnesota System Center User Group