It has been a long while since I blogged last! Um, almost 4 years, yikes? You know, work, family, kids, martial arts, etc. I know, I know... Excuses :).
Anyway, what is up with SUPs??? (SUPs – Software Update Point servers… you know, the only antiquated server role in your CM hierarchy. :)) We've had so many SUP storms in our organization that I have seriously lost count. These are the episodes where the WSUSPools on our SUPs are just getting severely and constantly hammered: CPU and RAM are through the roof, clients are constantly generating errors or timing out, and network consumption at our low-bandwidth sites is at capacity. This recent one was more than a storm. It literally stopped our users from working at the branch sites, due to the high network consumption from all of the scanning and rescanning coming from the clients. Maybe storm is not the word; it was a hurricane! We had always thought that periodically running the default WSUS maintenance on our SUPs through PowerShell cmdlets was enough. But something was off, clearly… This has always left us scratching our heads trying to figure out exactly what's going on with our SUPs. We're constantly searching for answers on how to tame our SUPs, and constantly adjusting the pool and IIS settings. (I believe we've got those right this time; check out my peer Sherry Kissinger's blog regarding WSUSPool, web.config, and CI settings.) But this time around, it seemed the clients were just not completely downloading all of the metadata...
So we reached out to our dedicated MS support folks (who, btw, are awesome) and worked closely with them to figure out exactly what was going on with our WSUS environment. We wanted to know if there was a way to identify and measure the metadata that the clients were downloading, and they gave us the SQL below to run against the SUSDBs. It tells us which articles are deployable and the size of each article. The recommendation was to go straight into the WSUS console and decline the large-metadata updates that we weren't using. Hmm, we thought that could be a lot! We had never ever gone into the WSUS console for anything! Who does, right? That has always been the rule: never mess with the WSUS console. NOT this time.
Run this SQL (from MS support) against your SUSDB to view all of the deployable updates you have. (This was separated into two queries, but Sherry put it together).
;with cte as (
    SELECT dbo.tbXml.RevisionID,
           ISNULL(datalength(dbo.tbXml.RootElementXmlCompressed), 0) as LENGTH
    FROM dbo.tbXml
    INNER JOIN dbo.tbProperty ON dbo.tbXml.RevisionID = dbo.tbProperty.RevisionID
)
select lp.Title,
       pr.ExplicitlyDeployable as ED,
       cte.LENGTH
from tbUpdate u
inner join tbRevision r on u.LocalUpdateID = r.LocalUpdateID
inner join tbProperty pr on pr.RevisionID = r.RevisionID
inner join cte on cte.revisionid = r.revisionid
inner join tbLocalizedPropertyForRevision lpr on r.RevisionID = lpr.RevisionID
inner join tbLocalizedProperty lp on lpr.LocalizedPropertyID = lp.LocalizedPropertyID
where lpr.LanguageID = 1033
and r.RevisionID in (
    -- NOTE: the select/from of this subquery was lost in formatting and is
    -- reconstructed here; verify it against your copy of the original query.
    select t1.RevisionID
    from tbBundleAll t1
    inner join tbBundleAtLeastOne t2 on t1.BundledID = t2.BundledID)
and pr.ishidden = 0 and pr.ExplicitlyDeployable = 1
order by cte.length desc
Once we got the number of articles that we had marked as "deployable", we noticed there were tons of updates that we were not using or had never really used. Clients were clearly downloading and scanning against all of these unnecessary articles, which is why we were seeing so many timeouts. So the cleanup we needed was to decline all of these updates in WSUS, in an attempt to make the metadata lean.
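To give you a feel for what the decline work looks like under the hood, here's a minimal sketch (NOT the full script below) that declines superseded updates through the WSUS administration API. The server name and port are placeholders; adjust them for your environment:

```powershell
# Minimal sketch: decline superseded updates via the WSUS admin API.
# Assumes the WSUS console/API is installed locally; 'SUP01' and 8530 are placeholders.
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.UpdateServices.Administration')

$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer('SUP01', $false, 8530)

# Find updates that are superseded but not yet declined...
$superseded = $wsus.GetUpdates() | Where-Object { $_.IsSuperseded -and -not $_.IsDeclined }

# ...and decline them, shrinking the metadata clients have to scan against.
foreach ($update in $superseded) {
    Write-Host "Declining: $($update.Title)"
    $update.Decline()
}
```

The full script adds the Itanium/XP/etc. filters, logging, and multi-server handling on top of this same basic pattern.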
Meghan Stewart from MS has a really great guide for maintaining WSUS/Software Update Points, which I strongly recommend you follow. I grabbed the script from her post and enhanced it a little by adding functions for declining Itanium and Windows XP updates, since we needed to decline not only superseded updates but Itanium and Windows XP updates as well. We also had to find a way to automate this process so we could include it in our maintenance plan. Lastly, I added event logging, since we need SCOM to be able to pick up errors and alert us upon failures. I thought about adding an email function, but it wasn't necessary for us since we use SCOM for alerting. (See the comments section in the script for more details.) Prior to actually declining all of these unnecessary updates, we had over 14k articles marked as deployable. After running the script, we now have just under 5k. A HUGE chunk was taken off, and this obviously made the scanning times MUCH faster; the timeouts went away, and network bandwidth consumption dropped significantly in no time. Script download link below.
UPDATE: (4/13/2018) On top of being able to decline superseded, Itanium, and XP updates, you can now also decline the following updates:
- Preview and Beta updates
- Internet Explorer 7, 8, and/or 9 updates
- Embedded updates
Here’s what the script does:
- Decline superseded updates. (# of days can be specified by using the -ExclusionPeriod parameter)
- Decline Itanium updates. (can be omitted by using the -SkipItanium switch)
- Decline Windows XP updates. (can be omitted by using the -SkipXP switch)
- Decline Preview updates. (can be omitted by using the -SkipPrev switch) NEW!!
- Decline Beta updates. (can be omitted by using the -SkipBeta switch) NEW!!
- Decline IE 7 updates. (can be omitted by using the -SkipIE7 switch) NEW!!
- Decline IE 8 updates. (can be omitted by using the -SkipIE8 switch) NEW!!
- Decline IE 9 updates. (can be omitted by using the -SkipIE9 switch) NEW!!
- Decline Embedded updates. (can be omitted by using the -SkipEmbedded switch) NEW!!
- Can be run with -TrialRun, which only records what it would decline without actually declining anything.
- Creates event log entries for success/failure of the script and for failures during the decline process.
NOTE: I strongly recommend running this with the -TrialRun switch first, and evaluating what it would decline by reviewing the htm and csv files it creates under the "UpdatesList" folder. See the comment section in the script for more details.
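For example, a first pass against a single SUP might look something like this (the server name is a placeholder, and the parameters are the ones listed above):

```powershell
# Dry run first: only records what would be declined, without changing anything.
.\Run-DeclineUpdate-Cleanup.ps1 -Servers 'SUP01' -ExclusionPeriod 30 -TrialRun

# Once the htm/csv output looks right, run it for real.
# Example: keep IE 9 updates by skipping that decline pass.
.\Run-DeclineUpdate-Cleanup.ps1 -Servers 'SUP01' -ExclusionPeriod 30 -SkipIE9
```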
Requirement: The WSUS console must be installed on the server where the script is executed. If a CAS is in place, the downstream servers MUST run the script first, then the upstream or top-level SUP server, when declining updates.
This script can be run against an individual WSUS/SUP server or a list of WSUS servers. To run it against an individual server, just use -Servers <WSUSServer>. If you have a CAS, this script must be run on the lower-tier SUPs first, then on the top. This can also be automated!
For automating it with a CAS and child sites (using Task Scheduler):
1. Modify the script and adjust the $Servers parameter so the lower-tier SUPs run first, then the top-level SUP server. NOTE: If the SUSDB is shared, the script only needs to run on one SUP.
$Servers = @("<lowerSUPServer1>","<lowerSUPServer2>","<CASLevelSUPServer>")
2. Pick a server with the WSUS console installed to run this on (we run it on our top-level SUP, since the WSUS console is already there).
3. Add this server to the local Administrators group on all WSUS/SUP servers.
4. Give the server the appropriate access to the SUSDB.
5. On this server, create a scheduled task, and define the schedule to fit your needs (the recommendation is to run it monthly to keep the metadata lean).
6. Add an action that runs the script, using the following settings:
Program/script: powershell.exe
Add arguments: C:\APPSFOLDER\Run-DeclineUpdate-Cleanup.ps1
Start in: C:\APPSFOLDER
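If you'd rather script the task creation than click through Task Scheduler, something like this sketch should do it. The path and task name match the example above; New-ScheduledTaskTrigger has no monthly option, so this uses a 4-week interval as an approximation of the monthly recommendation:

```powershell
# Create the scheduled task from an elevated PowerShell session.
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-ExecutionPolicy Bypass -File C:\APPSFOLDER\Run-DeclineUpdate-Cleanup.ps1' `
    -WorkingDirectory 'C:\APPSFOLDER'

# Roughly monthly: every 4 weeks, Saturday at 2 AM. Adjust for your maintenance window.
$trigger = New-ScheduledTaskTrigger -Weekly -WeeksInterval 4 -DaysOfWeek Saturday -At 2am

Register-ScheduledTask -TaskName 'WSUS Decline Cleanup' -Action $action -Trigger $trigger `
    -User 'SYSTEM' -RunLevel Highest
```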
Voila! Automated. All you need to do is review the results periodically, if necessary.
Again, follow the basic WSUS maintenance from Meghan's post, look at your WSUSPool/web.config settings and consider the settings in Sherry's blog (they're working great for us), and decline superseded updates and updates that are no longer being used on your WSUS servers.
That is what's SUP!!!