All of the scripts we’ve written for the monitoring system so far have been powered by lists of servers. Where do the lists come from? You generate them with a script, of course! You could easily configure each monitoring script to run independently by filtering AD for a specific set of computers and storing the results in a variable instead of using a file, but the list method has several advantages.
In our case we want to run multiple scripts against the same group of servers repeatedly. Searching AD over and over for the same data isn’t very efficient, especially when it’s a list of machine names that rarely changes. That said, servers do get added and removed, so typing up a static list and calling it a day is a recipe for crap cake surprise. The solution is to script the lists so that they stay dynamic.
How you build the lists depends on how your Active Directory is structured. I have a pretty strict naming convention in place, so I run filters against computer names. You might need to search specific OUs or filter on some other property instead.
For example, all of my production domain controllers are named PRODAD****, so to build a list of them:
Get-ADComputer -Filter {Name -like "prodad*"} -Properties DNSHostName | Select-Object -ExpandProperty DNSHostName | Out-File C:\Sources\Prod\ad_servers.txt
Create a line like the one above for each group of systems that you want to monitor. The output path must exist before you run the script or you’ll get errors. My example uses C:\Sources\Prod\, so either create those folders on your C: drive or change the path. I suggest creating a set of folders on the IIS server you plan to run your monitoring from and storing the files there.
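If you’d rather have the script take care of the folder itself, a one-liner like this (using my example path; adjust it to wherever you keep your lists) will create the folder if it’s missing and do nothing if it already exists:
New-Item -ItemType Directory -Path C:\Sources\Prod -Force | Out-Null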
If you need to match multiple names in one filter, just use -or and add as many as you need.
Get-ADComputer -Filter {Name -like "prodexch*" -or Name -like "prodmbx*"} -Properties DNSHostName | Select-Object -ExpandProperty DNSHostName | Out-File C:\Sources\Prod\exchange_servers.txt
After you have a line for each of your groups, you should consider adding a catch-all at the bottom. This will dump any servers that don’t follow your naming standards into their own file. We all have them LOL. In my environment I search the servers OU that a GPO drops our servers into and add every system that doesn’t match one of my previous groups to a Misc_servers.txt file. Note that the example below uses "-notlike" and "-and" instead of the "-like" and "-or" we used above.
Get-ADComputer -SearchBase "OU=Production Servers,OU=Servers,OU=MY_Server_OU,DC=MY_DOMAIN,DC=com" -Filter {Name -notlike "prodad*" -and Name -notlike "prodexch*" -and Name -notlike "prodmbx*"} | Select-Object -ExpandProperty DNSHostName | Out-File C:\Sources\Prod\misc_servers.txt
Once you’ve put it all together, run the script and you should end up with a text file for each group of servers in your Active Directory. Open the files and you should see the FQDN of every server that matched the filter, one per line. You can use these lists as the input parameters for the monitoring scripts we’ve already written in PowerShell Monitoring Part 1, PowerShell Monitoring Part 2, and PowerShell Monitoring Part 3.
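For example, if one of your monitoring scripts accepts a list of computer names, you can feed it a file like this. Check-Uptime.ps1 and its -ComputerName parameter are hypothetical stand-ins here; substitute whichever script and parameter you’re actually using:
# Read the dynamically generated list back in, skipping any blank lines
$servers = Get-Content C:\Sources\Prod\ad_servers.txt | Where-Object { $_ }
# Hypothetical example - swap in your own monitoring script and parameter name
.\Check-Uptime.ps1 -ComputerName $servers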
As you’ll see in a future article on remote systems management, these dynamically built, per-type server lists can be used for other useful projects as well. My production script ended up with 19 of the “type” filters plus the “catch-all,” so expect to spend some time getting this right on your network. Next in this series we’ll configure an IIS server to run all of these scripts as scheduled tasks and display the HTML files.