Compliance Setting to Enable WinRM

The Situation:  ConfigMgr 2012 clients can be managed (and troubleshot) remotely quite handily... if only PowerShell were installed and remote management via PowerShell (WinRM) were enabled.

This article presumes you are deploying PowerShell via other means, and this routine is just one of several ways to get WinRM enabled once PowerShell is installed.  Note you don't have to use this at all; by far the most popular method is to simply use a GPO, or, if you must, interactively log in to a computer and run   winrm quickconfig -q   from a command prompt (if you have the rights).

This situation may or may not be an edge case for you... but in our environment there are a few workstations which are ConfigMgr clients but which, for whatever reason, are not candidates for the GPO, and having a human interactively connect to each of those machines and run the winrm config (with our settings) is cumbersome.

I grabbed Roger Zander's baseline from here: http://myitforum.com/cs2/blogs/rzander/archive/2013/06/21/configure-winrm-by-using-cm12-settings-management.aspx, and found that a few things inside just weren't working in my environment--some old clients, or older versions of PowerShell, were not being detected or remediated well.  So I tweaked it to work in my environment.  The tweaking I did may or may not work for your environment--only you can determine that.

The attached baseline is just a SAMPLE of a Configuration Item, using the settings you would get if you ran winrm quickconfig.  However, you or your security team may have decided not to use those defaults--you may need to modify the port used, or change the IPv4 or IPv6 listening addresses.  So take the attached as-is ONLY if you know you are using the defaults and they are acceptable in your environment.  If you've modified how WinRM is configured in your company, you will definitely need to either modify the ConfigItem detection and remediation, or not use this at all.
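For reference, here's a minimal sketch (NOT the attached CI's actual script) of the kind of detection a script-based Configuration Item could use, assuming the quickconfig defaults are what you want to verify:

# Detection sketch: Test-WSMan defaults to the local machine and throws
# a terminating error if the WinRM service/listener isn't answering.
try {
    $null = Test-WSMan -ErrorAction Stop
    Write-Output 'Compliant'
}
catch {
    Write-Output 'NonCompliant'
}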

How to use:

  1. Import the --> Attached <--  baseline into your Compliance Settings, Baselines 
  2. PARANOIA: Deploy the Baseline to a collection withOUT checking the box about remediation.  
    1. Monitor, and for the machines which report "non-compliant", check that you really cannot remotely connect to them with PowerShell Remote Management (a quick way to test is shown after this list).  
    2. To a collection of those non-compliants, deploy the baseline again, but DO check the remediation box.  
    3. Confirm that the remediation baseline runs, and that you can now remotely connect to them with PowerShell Remote Management.
  3. Repeat the Paranoia steps as many times as you need to until you are comfortable that it's doing what you think it should be doing. 
  4. Once you've passed your own internal Paranoia Steps (above), you can remove the test deployments, and deploy it again to your 'main' collection, with the remediation checkbox checked.
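Here's a quick way to spot-check a machine from your admin workstation (PC001 below is just a placeholder name; substitute one of the machines reporting non-compliant):

# Does the machine answer WinRM at all?
Test-WSMan -ComputerName PC001

# Can we actually run a remote command?
Invoke-Command -ComputerName PC001 -ScriptBlock { $env:COMPUTERNAME }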

Again, to repeat... this is just a sample, and this sample only makes sense to use in your environment if you simply can't use a GPO to enable WinRM on all of your CM clients.  If you CAN use a GPO against your entire environment, then perhaps all you'd maybe, and I mean MAYBE, want this for is to monitor only (no remediation) and just check whether the GPO is in fact getting to all your clients.  Personally, I wouldn't bother if I had a GPO that could reach every client.

Puzzling Behavior: When I was testing (with remediation enabled), more often than I was comfortable with, client computers would report "Failed"--but on re-run they would report Compliant, and forever after report Compliant.  What I suspect was happening (but couldn't verify, because a re-run was compliant) was that during remediation, AS it was remediating one of the first non-compliant results, other tests would fail.  But by the time a human (me) followed up on it, WinRM was fully enabled and configured, and a re-run of the baseline would indicate absolutely nothing wrong.  So... if you get a lot of failures in remediation, just wait for your next cycle or re-run the baseline manually.  I suspect it's fine; just a timing issue.


Use Compliance Settings to Disable Firefox AutoUpdates in ConfigMgr 2012

This is very much an "edge case" type of situation... but this came up internally where I work, so I thought I'd put this out there for public consumption, in case this isn't as much of an edge case as I think it is.

The -->attached<-- has only had a brief life in pilot... so if you do need this, PLEASE test thoroughly. 

The scenario / issue to be solved was this: Firefox releases updates frequently, and internally the goal was to use SCUP (System Center Updates Publisher) to deploy those updates just like any other security update--and here's the fun part--using the exact download from Mozilla (no modifications).  This tested great, but they also didn't want the end users to get reminders about updates the instant Mozilla releases one.  If the plan is to manage them with SCUP-offered updates, then they wanted the client-side update prompts to go away.

Unfortunately, it's not quite that easy with Firefox.  It's not registry keys, and it's not WMI; it's two files, with specific lines inside those files, that disable updates.

What the attached baseline will do, if you target it to your machines, is a) first check whether Firefox is installed (by looking for firefox.exe in Program Files), and b) if it's there, check for the two files, and--if you have "remediate" checked when you deploy the baseline--create the 2 files with the required data inside them.
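In spirit, the detection boils down to something like this minimal sketch (the attached CI's actual logic may differ; the path assumes a default install, so adjust for 32-bit Firefox on 64-bit Windows):

# Only applicable if Firefox is present at all.
$ffDir = Join-Path $env:ProgramFiles 'Mozilla Firefox'
if (Test-Path (Join-Path $ffDir 'firefox.exe')) {
    # Compliant only if both lockdown files already exist.
    $cfg = Join-Path $ffDir 'mozilla.cfg'
    $js  = Join-Path $ffDir 'defaults\pref\local-settings.js'
    if ((Test-Path $cfg) -and (Test-Path $js)) { 'Compliant' } else { 'NonCompliant' }
}
else { 'Compliant' }   # no Firefox, nothing to do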

How To Implement:

  1. Take the Attached, and import into your CM12 console (Assets and Compliance, Compliance Settings, Configuration Baselines) the Firefox Disable AutoUpdates-Baseline.cab.
  2. Once imported, deploy that baseline to a test collection; I recommend one with at least two boxes, one with Firefox and one without, so you can confirm for yourself that it doesn't do anything when Firefox is not there.

How to Check if it's working:

  1. Interactively from Firefox itself:    
    1. Before deployment, in Firefox, if you go to the pull-down for Firefox (on the left), then the -> arrow by Help, then About Firefox, in the middle-ish will be a message about whether or not you are up to date.    
    2. After deployment (and after you restart Firefox, if the compliance setting ran while Firefox was already open), when you go to About Firefox it will now say "Updates disabled by your system administrator".
  2. Remotely:   
    1. There are two files, and those two files need very specific things inside (a scripted version of creating them appears after this list):  
      1. File #1: In the same folder as firefox.exe, mozilla.cfg with these exact lines:  
        lockPref("app.update.auto",false);  
        lockPref("app.update.enabled",false);
      2. File #2: In the subfolder \Defaults\Pref, local-settings.js with these exact lines:  
        pref("general.config.filename", "mozilla.cfg");  
        pref("general.config.obscure_value", 0); // use this to disable the byte-shift

Naturally... the assumption is that you'll be forever after vigilant about deploying Firefox updates using SCUP, or otherwise managing Firefox deployments.  Because just like any other browser, occasionally "bad" people release trojans or viruses or something else that can cause harm to your computer or company via an unpatched or old browser.  So... just because you no longer see popups about "new version is available" doesn't mean you are safe!


Local Policy Override to Disable Inventory Throttling

Although Software Inventory is disabled by default in ConfigMgr 2012, you have perhaps enabled it for file inventory.  If you've done so... have you noticed that on some clients it can take hours and hours and HOURS before it finishes?  Or on some clients it never finishes; it just exits with a message that it will retry later?  "The system cannot continue. Cycle will be aborted and retried." will be in the inventoryagent.log.

There's a local policy override that you can set, on each of your clients, to change the default for inventory throttling from TRUE to FALSE.  Inventory throttling, in this case, is what happens when you have multiple software inventory rules--perhaps one to inventory *.exe from %programfiles%, and another for *.exe from c:\SomeLocalFolder--and in between rule 1 and rule 2 the client waits several hours before moving on, as seen in the inventoryagent.log.
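A quick way to check a suspect client for the abort message mentioned above (assuming the default client log location):

# Scan the inventory agent log for the throttling abort message.
Select-String -Path "$env:windir\CCM\Logs\InventoryAgent.log" `
    -Pattern 'Cycle will be aborted and retried'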

Here's a way to quickly implement (and quickly undo, if you need to) this local policy override.

-->Attached<-- are two baselines you can import into your console.  The only one you actually need is the one called "Local Policy Override to Disable Inventory Throttling".  In your CM12 console (Assets and Compliance, Compliance Settings, Configuration Baselines), import that .cab file.  Now that you have it, deploy it to a test collection.  You may want to target a group of computers which you know are exhibiting the behavior in their local inventoryagent.log as mentioned above.  Make sure when you deploy the baseline that you DO check the box about remediation.

Because software inventory is (in general) slow... you may want to wait a few days to see that this baseline does what you expect it to do.  Once you are satisfied with the results, it is up to you if you want to deploy this Local Policy Override to all of your Windows systems in CM12.

If, at some future time, you want to take away this local policy override, import the baseline "Delete The LPO for Inventory throttling Disabling".  Obviously, remove the deployment of the original first, and then deploy the Delete baseline.  (If both are deployed at the same time to the same machines, those machines will repeatedly get and remove, remove and then get, the local policy override... just messy.)

Thanks to Robert Hastings and Microsoft for the local policy override syntax!


Configuration Manager 2012: Inventory Customizations when you use Distributed Views

I suspect few ConfigMgr 2012 environments will encounter this potential issue.  You have to have 3 very specific circumstances: a) you are a big enough environment that you have a CAS and primaries to begin with; b) you are a big enough environment to leverage Distributed Views; c) you've previously customized inventory, and then you've decided to ADD to that customization instead of creating a new one (very few environments do that, even if they have a CAS and use distributed views).  So maybe I'm the only one that will ever encounter this issue.  But just in case I'm not, I'm putting this out there in a blog for others to find, in case they are just as strange as I am.

 

The Issue: you see something like this in your dataldr.log:

*** exec dbo.spRenewChangedInvViews

*** [42S22][207][Microsoft][SQL Server Native Client 11.0][SQL Server]Invalid column name 'Supported00'. : v_GS_MoreInfo0

 

You have a CAS and a primary, and you have enabled Distributed Views for hardware inventory.  (You can check by going to Administration, Hierarchy Configuration, Database Replication; for each parent-site-to-child-site replication link, right-click and go to "Link Properties".  If the checkbox next to "Enable the following types of site data for distributed views" is checked for Hardware Inventory, then this applies.)

 

How to resolve:

One way, I guess, would be to turn off distributed views for hardware inventory, wait a day or so, then turn it back on.  (But who wants to do that?)

 

These instructions aren't exactly um… supported. But it seemed to work for me.

 

The reason for the error is that the local table, which was changed as part of adding an attribute to a pre-existing custom inventory import, isn't what is actually being referenced by the view.  There's a view which uses UNION ALL to grab info from both the child site and the CAS site database, so that it looks like just one view (when you do reports).

 

So… the fix is to update that distributed view--for each of the 4 views that exist for that custom inventory.

 

For whatever reason, you can't see the distributed views from a remote SQL Management Studio connection.  So you have to RDP into the server which houses the database for your CAS.  Launch SQL Management Studio from there, then go to your CAS site database, Views, and find the dbo.<TableName0>; in my case it was dbo.MoreInfo_data.  Right-click, Design, on that.

 

If you're a deep-down SQL geek, you'll see that it's just 2 select statements with a UNION ALL in there--and guess what's missing?  Yep, that new attribute you just added.  So in both select statements, add in the missing attribute (in my example,  ,Supported00  , exactly what the error message is whining about), and hit save.
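Purely as an illustration of the shape (the view name, column list, and child-site reference here are hypothetical; your actual distributed view will have different columns and its own linked reference to the child site database):

-- Hypothetical sketch only, using the article's example names.
ALTER VIEW dbo.MoreInfo_DATA_Distributed
AS
SELECT MachineID, InstanceKey, Supported00   -- add the new attribute here
FROM dbo.MoreInfo_DATA
UNION ALL
SELECT MachineID, InstanceKey, Supported00   -- ...and in the second select too
FROM ChildSiteLink.CM_PRI.dbo.MoreInfo_DATA;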

 

Go back and watch dataldr.log.  I'll bet it continues past that error (for v_GS_MoreInfo0) and now is whining about v_HS_MoreInfo0.  I'm right, aren't I?  OK, now you have to right-click, Design, on the _HIST view; same thing, 2 select statements with a UNION ALL; add in the missing attribute and hit save.

 

Continue doing that until dataldr.log stops whining.


ConfigMgr Inventory: Who is using Outlook PST files, where are they, and how big are they?

I can't think of anyone who has been supporting Outlook for more than a few years who *hasn't* been asked that question.

Until now, the best answer we could come up with was "we can scan the local drives for *.pst".  But that doesn't necessarily tell us whether people are connected to them, nor does it take into account people storing that Outlook PST file on a network share.

With some clever scripting from different people--using code from 'robszar' (sorry, I don't know his real name) as a base: http://www.visualbasicscript.com/Find-PST-files-configured-in-outlook-m44947-p2.aspx, plus work from John Marcum and Sherry Kissinger--we've got a routine that will, for the most part, answer those three questions.  The basics of how it works are this: there are two vbscripts that run.  One runs as SYSTEM, and its only purpose is to create a custom namespace in WMI and grant permissions to all of your domain users to that custom namespace--so they can populate it with the results of script #2.  Script #2 runs only when a user is logged in, with user rights.  That's because the majority of what the script needs to do is read information about that specific logged-in user's Outlook configuration, and (potentially) any mapped drive information which may be referenced by the PST file location.
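Conceptually, the namespace-creation half of script #1 boils down to something like this PowerShell sketch (the real script is VBScript, and it also grants namespace security using WMISecurity.exe, which this sketch does not do):

# Create the root\CustomCMClasses namespace if it doesn't exist yet.
$ns = ([wmiclass]'root:__Namespace').CreateInstance()
$ns.Name = 'CustomCMClasses'
$null = $ns.Put()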

The results of the 2nd script end up in that custom WMI namespace, and will have the following information:

DateScriptRan = the exact date and time that the script ran to gather this user-specific information.
FileSizeinMB = if it could be detected, and the file size was 1 MB or larger, the size of the PST.  If it's less than 1 MB, or for whatever reason could not be detected, the value will be 0.
PSTFile = just the file.pst (the last value after the last \ in PSTLocation).
PSTLocation = the location as known to Outlook.  This could be c:\somewhere\file.pst, \\server\share\file.pst, or Q:\file.pst (where Q: is a mapped network drive).
Type = if the script could figure out that Q: was a mapped network drive, it'll say 'Remote'; otherwise it'll say 'Local'.
UserDomain = the domain of whoever is logged in.
UserName = the username of whoever is logged in.
NetworkLocation = this will almost always be NULL, but if the PSTLocation was something like Q:\file.pst, where Q: was a mapped network drive for the user, this field will contain what Q: was mapped to.

End result:  After deploying these two scripts, you will be able to answer those pesky questions from your Exchange team about who is referencing PST files, where they are, and how large they are.  Of course, the main limitation is that this is per-user information.  If you have a lot of shared machines, or the same user has multiple computers (and connects to the same PST files on those multiple computers), you'll have to do some creative reporting to ensure you don't double-count the same PST files.

Ok, enough of how it works.  You really want to know *exactly* what to do, right?  Let's start!
 
Your Source folder for the package will contain 3 things:
WMINameSpaceAndSecurity.VBS
WMISecurity.exe
PSTFinder.vbs

The .vbs files are at this -->link<--.  Note that WMISecurity.exe is not attached here; just search using your favorite search engine to find and download wmisecurity.exe.  The one I used was version 1.0.1.31058--maybe there are later versions of this .exe, but that's the one I found, and it worked.

You will need to make 1 change to "WMINameSpaceAndSecurity.vbs", this line:
strDomain = "YOURDOMAINHERE"
Modify that to be your domain (the domain your users are in that will be logging in and running script #2).

Create two programs.  The first runs cscript.exe WMINameSpaceAndSecurity.vbs, whether or not a user is logged in, with administrative rights.  The second runs cscript.exe PSTFinder.vbs, only when a user is logged in, with user rights.  On the 2nd one, you want to "run another program first", and have it run the first one.  The 1st program only needs to run once per computer; it doesn't need to re-run.

Advertise the 2nd program to a collection (I recommend a test/pilot first), and confirm that it works as you expect.  If you want to confirm the data is there, look in root\CustomCMClasses (not root\cimv2) for cm_PSTFileInfo, and check that there are instances there for any Outlook-attached PST files for that user (a quick query is shown below).
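For example, running this on the test box should list whatever script #2 recorded:

# List the PST entries populated by the logged-in user's run.
Get-WmiObject -Namespace 'root\CustomCMClasses' -Class 'cm_PSTFileInfo' |
    Select-Object UserName, PSTLocation, FileSizeinMB, DateScriptRan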

If you are satisfied it's there locally, either add the below to sms_def.mof (if you are on ConfigMgr 07) or import it into Default Client Agent Settings, Hardware Inventory (if you are on CM12):

[SMS_Report(TRUE),
 SMS_Group_Name("PSTFileInfo"),
 SMS_Class_ID("PSTFileInfo"),
 SMS_Namespace(FALSE),
 Namespace("\\\\\\\\localhost\\\\root\\\\CustomCMClasses")]

class cm_pstfileinfo : SMS_Class_Template
{
  [SMS_Report(TRUE)] string DateScriptRan;
  [SMS_Report(TRUE)] uint32 FileSizeinMB;
  [SMS_Report(TRUE)] string NetworkLocation;
  [SMS_Report(TRUE)] string PSTFile;
  [SMS_Report(TRUE),key] string PSTLocation;
  [SMS_Report(TRUE)] string Type;
  [SMS_Report(TRUE)] string UserDomain;
  [SMS_Report(TRUE)] string UserName;
};


Sit back, relax for a bit... then invoke a hardware inventory on your test boxes, and see if the data shows up in your database in v_GS_PSTFileInfo0.  If so, deploy the advert to your real target collection of users or computers, and wait for the data to show up.  Depending upon your need for this information, you may or may not want to have the advert run on a recurring basis (weekly? monthly?), or just gather it for a week or so (just enough to answer the question), then delete the advert and change the inventory from TRUE to FALSE (until the next time they ask).
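One way to kick off that hardware inventory on a test box remotely (PC001 is a placeholder; the GUID is the well-known Hardware Inventory action ID):

# Trigger the Hardware Inventory cycle on a remote client.
Invoke-WmiMethod -ComputerName PC001 -Namespace 'root\ccm' `
    -Class 'SMS_Client' -Name 'TriggerSchedule' `
    -ArgumentList '{00000000-0000-0000-0000-000000000001}'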

Here's a potential sql report to get you started:

select sys.Name0 as [Computer Name],
pst.UserName0 as [User],
pst.PSTFile0 as [File Name],
pst.PSTLocation0 as [File Location],
pst.Type0 as [Local/Remote],
case
  when pst.NetworkLocation0 is not null then pst.NetworkLocation0
  else 'Local'
end as [Network Location],
pst.FileSizeinMB0 as [Size in MB],
pst.DateScriptRan0 as [Date Collected]
from v_R_System sys
Inner Join v_GS_PSTFileInfo0 pst on sys.ResourceID = pst.ResourceID
order by sys.Name0
 
which will look something like this:

(Screenshot: pstsamplereport, sample report output)

