08 March 2019

Bitlocker Active Directory Recovery Password Backup Compliance

Recently, we had an issue with some machines not backing up the BitLocker recovery password to Active Directory, even with the GPO in place. They had been offline while the BitLocker process took place. In addition, some of the systems in AD had multiple entries, which can be cumbersome. To mitigate this, I implemented an SCCM configuration baseline that makes sure the BitLocker recovery password is backed up to AD and that it is the only recovery password present.

NOTE: This script is used in an environment that only encrypts the %systemdrive%. If your environment encrypts other items, such as flash drives or removable HDDs, you will need to modify these scripts to meet your needs; otherwise, the remediation will delete those recovery passwords from Active Directory as well.

To do this, I first implemented a baseline that enables the RSAT Active Directory feature in Windows 10. This is needed so the scripts can query and write to AD. Once this was deployed, I created the BitLocker Recovery Password Backup configuration item.

Platforms must be set to Windows 10, as some of the cmdlets used in the scripts only exist in that OS and newer.

The script returns a true or false value that dictates if remediation is needed.

The first script queries the local system and AD for the recovery passwords to compare. If they match and only one exists in AD, True is returned, indicating the system is in compliance. False is returned if there is no password stored in AD, there is more than one password in AD, or the wrong password is stored in AD.

Here is the discovery script:

 $RecoveryKey = (Get-BitLockerVolume -MountPoint $env:SystemDrive).KeyProtector | Where-Object {$_.KeyProtectorType -eq 'RecoveryPassword'}
 $ADBitLockerRecoveryKey = (Get-ADObject -Filter {objectclass -eq 'msFVE-RecoveryInformation'} -SearchBase (Get-ADComputer -Identity $env:COMPUTERNAME).DistinguishedName -Properties 'msFVE-RecoveryPassword')
 If ($ADBitLockerRecoveryKey -eq $null) {
      Echo $false
 } elseif ($ADBitLockerRecoveryKey -isnot [system.Array]) {
      If (([string]$RecoveryKey.RecoveryPassword).Trim() -eq ([string]$ADBitLockerRecoveryKey.'msFVE-RecoveryPassword').Trim()) {
           Echo $true
      } else {
           Echo $false
      }
 } elseif ($ADBitLockerRecoveryKey -is [system.Array]) {
      Echo $false
 }

Next comes the remediation script. This is what will be executed if the discovery script returns a False value:

 $RecoveryKey = (Get-BitLockerVolume -MountPoint $env:SystemDrive).KeyProtector | Where-Object {$_.KeyProtectorType -eq 'RecoveryPassword'}
 Write-Host 'Local Recovery Password:'$RecoveryKey.RecoveryPassword
 $ADBitLockerRecoveryKey = (Get-ADObject -Filter {objectclass -eq 'msFVE-RecoveryInformation'} -SearchBase (Get-ADComputer -Identity $env:COMPUTERNAME).DistinguishedName -Properties 'msFVE-RecoveryPassword')
 Write-Host '  AD Recovery Password:'$ADBitLockerRecoveryKey.'msFVE-RecoveryPassword'
 If (($ADBitLockerRecoveryKey -isnot [system.Array]) -and ($ADBitLockerRecoveryKey -ne $null)) {
      Remove-ADObject -Identity $ADBitLockerRecoveryKey.DistinguishedName -Confirm:$false
 } elseif ($ADBitLockerRecoveryKey -is [system.Array]) {
      Foreach ($Key in $ADBitLockerRecoveryKey) {
           Write-Host 'Removing'$Key.DistinguishedName
           Remove-ADObject -Identity $Key.DistinguishedName -Confirm:$false
      }
 }
 Backup-BitLockerKeyProtector -MountPoint $env:SystemDrive -KeyProtectorId $RecoveryKey.KeyProtectorId

The final thing to set in the configuration item is the compliance rule as shown below:

Now that the configuration item is created, the configuration baseline must be created and deployed. Here are the screenshots of my configuration baseline that I later deployed out to all laptop systems, which are the systems here that are BitLockered.

07 March 2019

Active Directory PowerShell Module Configuration Baseline

With the recent 1809 release, RSAT is now integrated into Windows, which is a major plus for the admin side. In my environment, I have the Active Directory PowerShell module enabled on all machines for two reasons. The first is that I use it to move the machine in AD during the build process. The second is that I have an SCCM baseline that makes sure the BitLocker key matches the one stored in AD. For these, I need the module installed, and thankfully it is now just a simple Add-WindowsCapability cmdlet.

I implemented the following baseline that first checks to make sure the Rsat.ActiveDirectory.DS-LDS.Tools~~~~ feature is enabled. It returns a boolean value of $true if it is Installed and $false if it is Not Present. If $false is returned, then the remediation script will turn on the feature. 

I am going to assume you already know how to set up a configuration item, so I am not going to go through the screen-by-screen process. This is the main screen of the item.

Here is the PowerShell query for checking if it is enabled and returning the $true or $false. 

 If ((Get-WindowsCapability -Online -Name 'Rsat.ActiveDirectory.DS-LDS.Tools~~~~').State -eq 'Installed') {Echo $true} elseif ((Get-WindowsCapability -Online -Name 'Rsat.ActiveDirectory.DS-LDS.Tools~~~~').State -eq 'NotPresent') {echo $false}  

Here is the remediation script for enabling RSAT AD if it is not enabled.

 Add-WindowsCapability -Online -Name 'Rsat.ActiveDirectory.DS-LDS.Tools~~~~'  

Finally, this is the compliance rule that enables the remediation if it is not enabled.

Now to deploy the Configuration Item, the Baseline needs to be created and deployed. This is a very simple procedure. Here are screenshots of my setup of the Baseline.

04 March 2019

Local Administrator Baseline Compliance

One of the issues we have had is users ending up in the administrators group. There are circumstances in which we sometimes have to put a user in that group to install an application that is both user- and computer-based, and it can be easy to forget to take the user back out of that group. We don't allow end users here to have local administrator privileges for security reasons.

I have finally gotten around to using PowerShell along with the compliance settings in SCCM to manage this issue. To implement a compliance setting that monitors systems where users have local admin privileges, I first set up the configuration item. As shown below, I set up the configuration item to look for an integer value, 0 or 1, returned from the PowerShell script: 0 if nothing shows up in the query and 1 if there are users in the query.

The first step is to create the Configuration Item as shown in the following instructions:

In SCCM under Assets and Compliance-->Compliance Settings-->Configuration Items, click Create Configuration Item from the toolbar above.

In my environment, we are now only Windows 10, so I selected that as the platform.

The next screen will be to create the conditions associated with the configuration item. Under this, you will click New.

The next screen is creating the setting. I used the name Local Administrators; the setting is defined by a PowerShell script that returns an integer value of 0 or 1.

The next screen is entering the PowerShell script to query for users that may exist in the group. If there are users in your environment that need to be there by default, you will need to add them to the Where-Object filter to exclude them from the query. You could also put them in a text file on a UNC share that the script could read and compare against.

Here is the script for easy copy and paste.

 If ((Get-LocalGroupMember -Group Administrators | Where-Object {($_.ObjectClass -eq 'User') -and ($_.Name -notlike '*Administrator*')}).Count -gt 0) {Echo 1} else {Echo 0}  
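If you go the text-file route mentioned above, a minimal sketch might look like this. The UNC path and file layout are hypothetical (one account name per line); Get-LocalGroupMember returns names in DOMAIN\User form, which is why the name is split before comparing against the plain names in the file:

 $Exclusions = Get-Content -Path '\\server\share\AdminExclusions.txt'  # hypothetical path, one account name per line
 $Members = Get-LocalGroupMember -Group Administrators | Where-Object {($_.ObjectClass -eq 'User') -and ($_.Name -notlike '*Administrator*') -and (($_.Name -split '\\')[-1] -notin $Exclusions)}
 If ($Members.Count -gt 0) {Echo 1} else {Echo 0}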

Next comes the Compliance Rules. This is where you specify what the value returned from the query is considered as complying.

This is the specification defined for the rule.

Now that the Configuration Item is created, we must create a Baseline that will use the Configuration Item when deployed out to collections.

In SCCM under Assets and Compliance-->Compliance Settings-->Configuration Baselines, click Create Configuration Baseline from the toolbar above.

I used the Name Local Admin. Next, click on Add-->Configuration Items. The following screen will appear:

Select the Local Administrator and click Add. Mine in the pic is slightly different in naming because I already had this created before writing this blog.

Now click OK and the Configuration Baseline will be created. The Baseline is now ready to be deployed out. Select the Local Admin Baseline from the Configuration Baselines and click Deploy. The following screen will appear:

These are the specifications I decided to use. I made the alert to generate if 100% compliance is not met so I know by the next day if someone has local admin. As you can see in the results below, the system I deployed it to is compliant.

I also went into that system and added a user to the administrators group; it returned a result of non-compliant when I reran the compliance scan. Another thing that can be done here is to create collections based on the compliance and non-compliance of the baseline. This can be done by clicking on the configuration baseline and then right-clicking on the deployment at the bottom. Click on Create New Collection and the options to create the collections by the results will come up as shown below.

If you want to expedite the evaluation time while testing this out, you can go to a system you have deployed this to and open Configuration Manager from the Control Panel. Under that, click the Configurations tab. If the new baseline is not appearing there yet, click Refresh and it should appear. Once it is displayed, you can click Evaluate at the bottom to run the baseline.

15 February 2019

Loss of Bluetooth Connectivity Resolved via PowerShell

Recently, we ran into the issue of users replacing their keyboard and mouse with Bluetooth devices. What happened was they would lose connectivity and the error below would appear in the event viewer logs.

While researching the issue, we found that the user could open up the docked laptop and get connectivity back by hitting any key. The culprit was the Power Management setting of the Bluetooth device. The "Allow the computer to turn off this device to save power" setting disconnected the Bluetooth devices, and because both the keyboard and mouse were Bluetooth, there was no way for them to wake Bluetooth back up. The fix was to uncheck the setting as shown below.

At first, I thought that because this was a setting similar to the one under NIC Power Management, I could manipulate it via the registry, but I could not find any key that configures this setting. Through more research, I found a solution in a posting on Reddit.

I took the script and made it into a one-liner with the Enable variable set to $false in the beginning since it is enabled ($true) by default.  This will allow for the script to be implemented inside a task sequence in MDT or SCCM. 

Here is the script in both a one-liner and regular code. 

PowerShell One-Liner if you want to use this in an MDT or SCCM command line task sequence:
 powershell.exe -executionpolicy bypass -command "&{$Enable=$false;$BTDevice=Get-PnpDevice -Class Bluetooth -InstanceId USB*;$BTDevice | ForEach-Object -Process {$WQL='SELECT * FROM MSPower_DeviceEnable WHERE InstanceName LIKE ' + [char]34 + [char]37 + $([Regex]::Escape($_.PNPDeviceID)) + [char]37 + [char]34;Set-CimInstance -Namespace root\wmi -Query $WQL -Property @{Enable = $Enable} -PassThru};Get-PnpDevice -Class Bluetooth -InstanceId USB* | ForEach-Object -Process {$Test='InstanceName LIKE ' + [char]34 + [char]37 + $([Regex]::Escape($_.PNPDeviceID)) + [char]37 + [char]34;If ((Get-CimInstance -ClassName MSPower_DeviceEnable -Namespace root\wmi -Filter $Test).Enable -eq $Enable) {Write-Host 'Success';Exit 0} else {Write-Host 'Failed';Exit 1}}}"  

PowerShell .PS1 File

 <#
      .SYNOPSIS
           Bluetooth Power Management
      .DESCRIPTION
           This script will enable or disable the Power Management Setting that allows the computer to turn off the Bluetooth device to save power
      .PARAMETER Enable
           $true will check the Allow the computer to turn off this device to save power. $false will do the opposite. The default has been set to $false since it is originally checked in the OS
      .NOTES
           Created with:    SAPIEN Technologies, Inc., PowerShell Studio 2017 v5.4.142
           Created on:      2/12/2019 3:20 PM
           Created by:      Mick Pletcher
           Filename:        BluetoothPowerState.ps1
 #>
 param ([bool]$Enable = $false)
 $BTDevice = Get-PnpDevice -Class Bluetooth -InstanceId USB*
 $BTDevice | ForEach-Object -Process {
      $WQL = 'SELECT * FROM MSPower_DeviceEnable WHERE InstanceName LIKE ' + [char]34 + '%' + $([Regex]::Escape($_.PNPDeviceID)) + '%' + [char]34
      Set-CimInstance -Namespace root\wmi -Query $WQL -Property @{
           Enable = $Enable
      } -PassThru
 }
 Get-PnpDevice -Class Bluetooth -InstanceId USB* | ForEach-Object -Process {
      $Test = 'InstanceName LIKE ' + [char]34 + '%' + $([Regex]::Escape($_.PNPDeviceID)) + '%' + [char]34
      If ((Get-CimInstance -ClassName MSPower_DeviceEnable -Namespace root\wmi -Filter $Test).Enable -eq $Enable) {
           Write-Host $BTDevice.FriendlyName'Power Management Successfully Configured'
           Exit 0
      } else {
           Write-Host $BTDevice.FriendlyName'Power Management Failed to Configure'
           Exit 1
      }
 }

04 February 2019

Default Printer Report

When our build team builds new machines for users, we provide a convenience to the user of letting them know what their default printer is. I wrote this script to parse through all user profiles in HKU to find the default printer of each profile. It writes the results to the screen if the script is manually executed, while also writing them to a DefaultPrinter.CSV file located in the root of each user profile. This allows the script to be deployed through SCCM, or to be executed remotely or locally with PowerShell. The reason I have it write to a CSV file instead of reporting to SCCM is that not everyone has access to SCCM, and, for universal compatibility, not all companies have SCCM.

I deployed this script through a package in SCCM that is scheduled to execute once a week. Every time this script executes, it will replace the current file with a new one.

You can download the script from here.

 <#
      .SYNOPSIS
           Default Printer Report
      .DESCRIPTION
           This script will retrieve a list of all user profiles and report to a text file inside each user profile what the default printer is.
      .NOTES
           Created with:     SAPIEN Technologies, Inc., PowerShell Studio 2017 v5.4.142
           Created on:       2/4/2019 8:56 AM
           Created by:       Mick Pletcher
           Filename:         DefaultPrinterReport.ps1
 #>
 param ()
 $Profiles = (Get-ChildItem -Path REGISTRY::HKEY_USERS -Exclude *Classes | Where-Object {$_.Name -like '*S-1-5-21*'}).Name
 $ProfileArray = @()
 foreach ($Item in $Profiles) {
      $object = New-Object -TypeName System.Management.Automation.PSObject
      $object | Add-Member -MemberType NoteProperty -Name Profile -Value ((Get-ItemProperty -Path ('REGISTRY::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\' + ($Item.split('\')[1].Trim())) -Name ProfileImagePath).ProfileImagePath).Split('\')[2]
      $object | Add-Member -MemberType NoteProperty -Name DefaultPrinter -Value ((Get-ItemProperty -Path ('REGISTRY::' + $Item + '\Software\Microsoft\Windows NT\CurrentVersion\Windows') -Name Device).Device).Split(',')[0]
      $ProfileArray += $object
 }
 foreach ($Item in $ProfileArray) {
      Export-Csv -InputObject $Item -Path ($env:SystemDrive + '\users\' + $Item.Profile + '\DefaultPrinter.csv') -NoTypeInformation -Force
 }

01 February 2019

Mozilla Firefox One-liner Installer

Here is a PowerShell one-line installer for Mozilla Firefox. This allows you to download the latest version of Firefox during the build process without having to maintain the package each time. The URI used here is for the 64-bit version of Firefox. If you need a different version, locate its download URI and paste it into the one-liner, replacing the value of $URI. I have been using this one-liner to install Firefox in the build for almost a year.

This one-liner includes error checking so in the event the URI is no longer valid, you will be alerted to it and be able to update the script with the new URI.

 powershell.exe -executionpolicy bypass -command "&{$URI='https://download.mozilla.org/?product=firefox-latest-ssl&os=win64&lang=en-US';$AppInstaller=$env:TEMP+'\'+'Firefox.exe';Invoke-WebRequest -Uri $URI -OutFile $AppInstaller -ErrorAction SilentlyContinue;$ErrCode=(Start-Process -FilePath $AppInstaller -ArgumentList '/silent /install' -Wait -Passthru).ExitCode;Remove-Item $AppInstaller -ErrorAction SilentlyContinue -Force;Exit $ErrCode}"  

Deploying Ping Automated Timekeeping for Lawyers

This application is straightforward to deploy. The PowerShell script below will kill all processes associated with Microsoft Office. Ping requires closing Outlook, but I have seen other Office apps interfere in the past by keeping a component of Outlook open, so to be on the safe side, I included closing the entire suite, along with closing Ping if it is already installed.

I designed the script to first kill the necessary processes. Next, it will search for previously installed versions of Ping and uninstall them. I include this for two reasons. First, if the application is installed but broken, rerunning the installer will uninstall and reinstall it as a fix; a lot of times, this is much faster than going through an entire troubleshooting session that often comes back to this anyway. Second, if there is an upgrade, this will uninstall the old version. The uninstaller is designed to query the add/remove programs entries in the registry for an application that matches the name used. Next, the two Ping components are installed. I have also included the standalone uninstaller, which is the same script without the two application installs.
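As a rough sketch of that add/remove programs lookup (the 'Ping*' display name filter and the msiexec uninstall are assumptions for illustration; adjust them to match how the application actually registers itself):

 $UninstallKeys = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*', 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
 $Apps = Get-ItemProperty -Path $UninstallKeys -ErrorAction SilentlyContinue | Where-Object {$_.DisplayName -like 'Ping*'}
 foreach ($App in $Apps) {
      # For MSI-based entries, the key name is the product GUID that msiexec can uninstall silently
      Start-Process -FilePath 'msiexec.exe' -ArgumentList ('/x ' + $App.PSChildName + ' /qn') -Wait
 }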

You can download both from my GitHub site by clicking on the links below:

23 January 2019

PowerShell One-Liner to Configure the NIC Power Management Settings

While working on a series of one-liners for configuring the NIC on machines, I created this one to make changes to the power management settings of the NIC. This is something that will be implemented in the build, so I wanted the script as a one-liner so the code itself can reside within the task sequence.

This one-liner can check/uncheck the boxes within the Power Management tab of the network adapter. There are two ways this can be done. The first is by WMI and the second is by the registry. The WMI method failed across the different versions of Windows 10, whereas the registry method stays the same. 

The script first finds the correct NIC by querying for the one that is enabled and is also a physical NIC. Next, it locates the correct registry key for that NIC by comparing the driver names. It then checks the PnPCapabilities value. The value associated with this entry determines which boxes are checked/unchecked. It will then check if the registry has the same value as is specified in the script. If not, it will change the registry value, disable, and enable the NIC, and retrieve the registry value. It will then compare that value with the specified $PnPValue to verify the change took place. If it did not, the script will exit with an error code 1, thereby reporting to SCCM/MDT that it failed. 

There are four values that can be used to change the boxes. The value is associated with the $PnPValue variable. The only thing you should have to change with this one-liner is the $PnPValue, which is why I placed that in the front of the one-liner for easy editing. The $PnP values are as follows:

  • 0 - Checks both Allow the computer to turn off this device to save power and Allow this device to wake the computer, while leaving Only allow a magic packet to wake the computer unchecked
  • 24 - Unchecks all three boxes
  • 256 - Checks all three boxes
  • 272 - Checks Allow the computer to turn off this device to save power, while unchecking Allow this device to wake the computer and Only allow a magic packet to wake the computer

NOTE: I did learn one interesting thing while writing and testing this. The WMI method will disable and enable the NIC when making the change, whereas, with the registry method, you must disable and enable the NIC separately as is done in the one-liner below.

Here is the one-liner:

 powershell.exe -executionpolicy bypass -command "&{$PnPValue=256;$Adapter=Get-NetAdapter | Where-Object {($_.Status -eq 'Up') -and ($_.PhysicalMediaType -eq '802.3')};$KeyPath='HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002bE10318}\';foreach ($Entry in (Get-ChildItem $KeyPath -ErrorAction SilentlyContinue).Name) {If ((Get-ItemProperty REGISTRY::$Entry).DriverDesc -eq $Adapter.InterfaceDescription) {$Value=(Get-ItemProperty REGISTRY::$Entry).PnPCapabilities;If ($Value -ne $PnPValue) {Set-ItemProperty -Path REGISTRY::$Entry -Name PnPCapabilities -Value $PnPValue -Force;Disable-PnpDevice -InstanceId $Adapter.PnPDeviceID -Confirm:$false;Enable-PnpDevice -InstanceId $Adapter.PnPDeviceID -Confirm:$false;$Value=(Get-ItemProperty REGISTRY::$Entry).PnPCapabilities};If ($Value -eq $PnPValue) {Write-Host 'Allow the computer to turn off this device is configured'} else {Write-Host 'Failed';Exit 1}}}}"  
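If the one-liner is hard to follow, here is the same logic expanded into regular script form; the behavior is identical, only the layout and comments differ:

 $PnPValue = 256
 $Adapter = Get-NetAdapter | Where-Object {($_.Status -eq 'Up') -and ($_.PhysicalMediaType -eq '802.3')}
 $KeyPath = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002bE10318}\'
 foreach ($Entry in (Get-ChildItem $KeyPath -ErrorAction SilentlyContinue).Name) {
      # Match the registry subkey to the active physical NIC by driver description
      If ((Get-ItemProperty REGISTRY::$Entry).DriverDesc -eq $Adapter.InterfaceDescription) {
           $Value = (Get-ItemProperty REGISTRY::$Entry).PnPCapabilities
           If ($Value -ne $PnPValue) {
                Set-ItemProperty -Path REGISTRY::$Entry -Name PnPCapabilities -Value $PnPValue -Force
                # The registry change does not apply until the NIC is cycled
                Disable-PnpDevice -InstanceId $Adapter.PnPDeviceID -Confirm:$false
                Enable-PnpDevice -InstanceId $Adapter.PnPDeviceID -Confirm:$false
                $Value = (Get-ItemProperty REGISTRY::$Entry).PnPCapabilities
           }
           If ($Value -eq $PnPValue) {
                Write-Host 'Allow the computer to turn off this device is configured'
           } else {
                Write-Host 'Failed'
                Exit 1
           }
      }
 }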

09 January 2019

PowerShell One-Liner to Enable Features in Microsoft Windows 1809

In Windows 10 1809, I needed to enable some RSAT features that are now included in the OS. I figured this would be a good time to go from using a script to using one-liners for the build process. Mike Robbins's blog was a good start to developing this one-liner. This allows for you to manage the code within the task sequence, thereby negating the issue of storing a script and the possibility of the script accidentally being deleted.

NOTE: This script will only execute in Microsoft Windows 10 1809. It will not run in 1803 or older.

The one-liner below enables a feature defined in the $Name variable. The $Name variable is used to make it easy to define what feature to enable by having it right at the front of the one-liner. Once that is defined, and the script is executed, it will check to make sure the feature is enabled and returns an error code 0, along with an "Enabled" message in the event the script is manually executed, or it returns an error code 1 if it failed to enable along with the message "Failed to Enable." I did verify that it will still report enabled, even if a system reboot is required.

When defining the $Name variable, you do not need to use the full feature name. For instance, the Active Directory feature's name is Rsat.ActiveDirectory.DS-LDS.Tools~~~~. If there is no other feature with ActiveDirectory in its name, you can use a wildcard like this:

  • $Name = 'Rsat.ActiveDirectory*'
NOTE: You will see that I use apostrophes inside the script. That is because the entire command is wrapped in the quotation marks that begin before the & and end at the end of the script.

Here is the one-liner script that I use to enable AD. You just need to change the $Name variable to whatever feature you want to be activated. 

 powershell.exe -executionpolicy bypass -command "&{$Name='Rsat.ActiveDirectory*';Get-WindowsCapability -name $Name -Online | Add-WindowsCapability -Online;If ((Get-WindowsCapability -name $Name -Online).State -eq 'Installed') {Write-Host 'Enabled';Exit 0} else {Write-Host 'Failed to Enable';Exit 1}}"  

Copy and paste the script into a command line task sequence as shown below.

01 November 2018

Upgrading Microsoft Orchestrator from 2012 and 2016

It was time for us to upgrade Microsoft Orchestrator to the newest 1801 version. We were three versions behind, as we had been using 2012. Luckily, starting with 1801, upgrades are performed via Windows Update.

Microsoft provides a well-documented page on setting up Orchestrator located here. The problem with upgrading from Orchestrator 2012 or 2016 is that you must uninstall the old version and reinstall the new one. The SQL server Orchestrator was connected to had never been documented. We could find nothing in the console on what it was connected to, the registry was useless, and the event viewer logs did not help. We started going through the Orchestrator logs located at %ProgramData%\Microsoft System Center 2012\Orchestrator\. Each subdirectory has a Logs folder. We finally located the log that contained the SQL server and instance Orchestrator was connecting to; it resides at %ProgramData%\Microsoft System Center 2012\Orchestrator\ManagementService.exe\Logs. You will likely have to go through each log in that directory to find the SQL server. The line will look like this:

  • <Param>App=<Instance>;Provider=SQLOLEDB;Data Source=<SQLServer>\<Database>;Initial Catalog=Orchestrator;Integrated Security=SSPI;Persist SecurityInfo=False;</Param>
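In hindsight, rather than opening each log by hand, a Select-String query along these lines should surface the connection string, assuming the default log location above:

 Select-String -Path "$env:ProgramData\Microsoft System Center 2012\Orchestrator\ManagementService.exe\Logs\*" -Pattern 'Data Source' | Select-Object -ExpandProperty Line -Unique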
Once we found the SQL server information, we were able to successfully upgrade the server to 1801 using the classic uninstall/reinstall, and then on to 1809 via Windows Update.

24 October 2018

User Logon Reporting

If you have to track the login times for a specific user, this tool will generate a report by scanning the event viewer logs for ID 4624. The tool parses each event and retrieves the user name, security ID, type of logon, computer name, and time stamp. It formats the output and writes it to a centralized CSV file in the event the tool is deployed to multiple machines at once, and it has the ability to 'wait its turn' to write to the file when deployed to multiple systems.

I have the script translate each of the logon types. If you do not want a specific logon type reported, you can comment out that type within the switch statement and it will not appear in the report.
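For reference, the logon type translation is essentially a switch over the standard event ID 4624 logon type codes (the variable name here is illustrative, as the full script is on GitHub); commenting out any of these lines drops that logon type from the report:

 switch ($LogonType) {
      2 { 'Interactive' }
      3 { 'Network' }
      4 { 'Batch' }
      5 { 'Service' }
      7 { 'Unlock' }
      8 { 'NetworkCleartext' }
      9 { 'NewCredentials' }
      10 { 'RemoteInteractive' }
      11 { 'CachedInteractive' }
 }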

NOTE: I originally wrote this script to have Get-WinEvent execute remotely against a machine using the -ComputerName parameter, and the time required was huge, especially on older systems with three-plus months of event viewer data. It took almost 30 minutes. It ended up being much quicker to deploy the script via an SCCM package.

You can download the script from my GitHub site located here.

 <#
      .SYNOPSIS
           Logon Reporting
      .DESCRIPTION
           This script will report the computername, username, IP address, and date/time to a central log file.
      .PARAMETER LogFile
           A description of the LogFile parameter.
      .NOTES
           Created with:     SAPIEN Technologies, Inc., PowerShell Studio 2017 v5.4.142
           Created on:       10/22/2018 10:13 AM
           Created by:       Mick Pletcher
           Filename:         LogonReport.ps1
 #>
 param ([string]$LogFile = 'LogonReport.csv')
 $Entries = @()
 $IPv4 = foreach ($ip in (ipconfig) -like '*IPv4*') {($ip -split ' : ')[-1]}
 foreach ($IP in $IPv4) {
      $object = New-Object -TypeName System.Management.Automation.PSObject
      $object | Add-Member -MemberType NoteProperty -Name ComputerName -Value $env:COMPUTERNAME
      $object | Add-Member -MemberType NoteProperty -Name UserName -Value $env:USERNAME
      $object | Add-Member -MemberType NoteProperty -Name IPAddress -Value $IP
      $object | Add-Member -MemberType NoteProperty -Name DateTime -Value (Get-Date)
      $Entries += $object
 }
 foreach ($Entry in $Entries) {
      Do {
           Try {
                Export-Csv -InputObject $Entry -Path $LogFile -Encoding UTF8 -NoTypeInformation -NoClobber -Append
                $Success = $true
           } Catch {
                $Success = $false
                Start-Sleep -Seconds 1
           }
      } while ($Success -eq $false)
 }

17 October 2018

PowerShell One-Liners to ensure Dell system is configured for UEFI when imaging

While planning and configuring the Windows 10 upgrades, we had to also include the transition to UEFI from BIOS. I wanted to make sure that when the build team builds new models that they are configured for UEFI when applicable, otherwise the build fails within seconds after it starts.

We use Dell systems, so interacting with the BIOS is simple. Dell Command | Configure allows the BIOS to be queried, which is what we need here to verify specific models are set correctly. We do have a few models that are not compatible with UEFI, so those have to be exempted. Looking at Dell Latitude models, anything newer than the E6320 is compatible with UEFI. Granted, there may be other models we never had that could be compatible.

There are four key settings in the BIOS that determine if a system is compatible with UEFI. Those settings are the Boot List Option, Legacy Option ROMs, UEFI Network Stack, and Secure Boot. I have found the most reliable one of the four to verify compatibility is the UEFI Network Stack. If a system does not have this option, then UEFI is not compatible.

I set this up as four task sequences within a folder called Verify UEFI. The folder performs two WMI queries to make sure it is a Dell machine and that it is not one of the five models we still have in production that are not UEFI compatible. The conditions are set up as shown in the screenshot below.

The first WMI query makes sure the system is a Dell.

  • select * from Win32_ComputerSystem WHERE Manufacturer like "%Dell%"

The second WMI query makes sure the system is not one of the specified models that are not compatible with UEFI.

  • select * from Win32_ComputerSystem WHERE (model != "Latitude E6320") and (model != "Latitude E6410") and (model != "Optiplex 980") and (model != "Optiplex 990") and (model != "Optiplex 9010")
This is the setup I have configured in MDT:

Now that the folder is set up, you will need to create each of the four Run Command Line task sequences. Before doing this, you will need to have Dell Command | Configure installed and loaded into the WinPE environment. You can refer to my blog posting that details how to load this into WinPE. 

Each one of the four tests is a Run Command Line. They will look like the pic below. All you will need to do is to copy the PowerShell one-liner code below and paste it into the command line of each task sequence.

Here is the PowerShell one-liner code for each task sequence:
  • Boot List Option
    • powershell.exe -executionpolicy bypass -command "&{If ((x:\cctk\cctk.exe bootorder --activebootlist) -like '*uefi') {exit 0} else {exit 1}}"
  • Legacy Option ROMs
    • powershell.exe -executionpolicy bypass -command "&{If ((x:\cctk\cctk.exe --legacyorom) -like '*disable') {exit 0} else {exit 1}}"
  • UEFI Network Stack
    • powershell.exe -executionpolicy bypass -command "&{If ((x:\cctk\cctk.exe --uefinwstack) -like '*enable') {exit 0} else {exit 1}}"
  • Secure Boot
    • powershell.exe -executionpolicy bypass -command "&{If ((x:\cctk\cctk.exe --secureboot) -like '*enable') {exit 0} else {exit 1}}"
As you can see, if any of these fail, they will return an error code 1 and then fail the build. 

05 October 2018

Application List Report

We have started the Windows 10 upgrades, and part of this process is installing applications for users that are not included in the standard build. One option is to use the SCCM Resource Explorer for a list of installed apps. The problem with that is it is a blanket report: it shows everything, and all we wanted was a report of the additional apps installed after a build.

I wrote this PowerShell script that can be executed as a package in SCCM against machines to generate an application report. The tool is specifically designed to work with MDT. You define both the reference and production task sequences; the script reads the XML files and knows to exclude those applications from the listing. Specifically, the script reads the tasks that are Install Application types. There will be applications installed that you do not care about, such as video driver packages that were installed automatically. They can be filtered out by populating the add/remove programs exclusions file ($ARPExclusionsFile). There is also the task sequence exclusions file, in which you can specify items to exclude from the task sequence. The final parameter to define is $OutputDIR, which is the UNC path to the location where you want the text file written, containing the list of additional apps needing to be installed.
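The task sequence side of this boils down to reading the Install Application steps out of each task sequence's ts.xml. A minimal sketch, where the deployment share path and task sequence folder are placeholders for your environment:

 [xml]$TS = Get-Content -Path '\\MDTServer\DeploymentShare$\Control\WIN10\ts.xml'
 # MDT marks application installs with the BDD_InstallApplication step type
 $TS.SelectNodes('//step') | Where-Object {$_.type -eq 'BDD_InstallApplication'} | Select-Object -ExpandProperty name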

You can download the script from my GitHub site located here.

Here is an example of my ARPExclusions.txt file:

64 Bit HP CIO Components Installer
Active Directory Authentication Library for SQL Server
Active Directory Authentication Library for SQL Server (x86)
Administrative Templates (.admx) for Windows 10 April 2018 Update
Adobe Refresh Manager
AMD Catalyst Control Center
AMD Fuel
Apple Application Support (32-bit)
Apple Application Support (64-bit)
Apple Mobile Device Support
Apple Software Update
Catalyst Control Center - Branding
Catalyst Control Center InstallProxy
Catalyst Control Center Localization All

Here is an example of my TSExclusions.txt file:

.Net Framework 3.5
Activate Office and Windows
Avenir Fonts
Bitlocker System
Configure Dell Power Management Settings
Configure NIC Advanced Properties
Delete Dell Command Configure Shortcut

07 September 2018

Robocopy User Profile Contents to UNC Path

The Windows 10 upgrades required us to move profile contents off of the machines to a file share and then move them back. This was because USMT could not be used due to the architecture changing from 32-bit to 64-bit.

This script I wrote will copy all of the pertinent data from a user profile to a specified UNC path. I made two text files to hold all exclusions for directories and files; the exclusion files need to reside in the same directory as the script. I have added examples of the file and directory exclusion lists we used. The other variable you need to define is DestinationUNC, which is the path to the folder where the profile will be backed up. The script also creates a 0RobocopyLogs directory at the specified UNC path containing a log of each transfer. One more thing I added is a check that the computer name is correct and the machine is online, and that the username is correct. At the end of the process, it returns the robocopy exit code.
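As a rough illustration of the copy itself, a single profile transfer looks something like the following. The paths, profile name, and switch selection here are examples only; the real script builds these from its parameters and the two exclusion files:

```powershell
# Illustrative only -- hypothetical source profile, destination share, and log path
$Source      = "$env:SystemDrive\Users\jdoe"
$Destination = '\\FileServer\ProfileBackups$\jdoe'
$LogFile     = '\\FileServer\ProfileBackups$\0RobocopyLogs\jdoe.log'
# /E copies subdirectories including empty ones, /XJ skips junction points
robocopy.exe $Source $Destination /E /XJ /R:1 /W:1 /LOG:$LogFile
# Robocopy exit codes 0-7 indicate success (with varying detail); 8+ indicate failure
Write-Output $LASTEXITCODE
```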

You can download the script from my GitHub Repository.

Below is the contents of the DirectoryExclusions.txt file.


Below is the contents of the FileExclusions.txt file.


31 August 2018

MDT Build Application Report One-Liner

While building a new reference image, I always want to make sure every application was installed before the WIM is generated. In the past, I did this by placing a pause in the build immediately after the post-application Windows Update step completed. It definitely takes time to go through the list of apps and verify they are all there.

While researching the process, I found the ZTIApplications.log file that is generated during a build. It contains the list of all applications and the return code from each install. I wrote this script to query that file and generate a report of all installs, so I can quickly scan the list and make sure everything is there before continuing with the WIM file generation. One thing I did have to do was move some Run Command Line tasks to Install Application tasks so they would appear in that log file.

This was all achievable with a PowerShell one-liner that can be executed in the build via a Run Command Line task as shown below. 

When the one-liner executes in the task sequence, it will generate a report as shown below. The report also pauses the build until you click OK giving you time to review the report and possibly install something that may have failed. 

Below is the one-liner you can copy and place in the task sequence.

 powershell.exe -executionpolicy bypass -command "&{$Apps=@();$ZTIAppsLog=Get-Content -Path ($env:SystemDrive+'\MININT\SMSOSD\OSDLOGS\ZTIApplications.log');$AppNames=($ZTIAppsLog|Where-Object {$_ -like '*Name:*'})|ForEach-Object {(($_.split('[')[2]).split(':')[1]).split(']')[0].Trim()};Foreach ($App in $AppNames) {$obj=New-Object -TypeName PSObject;If (($ZTIAppsLog|Where-Object {$_ -like ('*Application'+[char]32+$App+'*')}|ForEach-Object {($_.split('[')[2]).split(']')[0]}|ForEach-Object {$_ -replace 'Application ',''}) -like '*installed successfully*') {$Status='Installed'} else {$Status='Failed'};$obj|Add-Member -MemberType NoteProperty -Name Application -Value $App;$obj|Add-Member -MemberType NoteProperty -Name Status -Value $Status;$Apps+=$obj};$Apps|Out-GridView -PassThru}"  

22 August 2018

Deleting Previous MDT Build Logs with this PowerShell One-Liner

If you have the SLShare variable defined in MDT to write logs to a specified UNC path, and SLShareDynamicLogging defined to write to the same path including %ComputerName%, you have probably run into the issue of the logs becoming enormous. This is because each time a system is reimaged with the same computer name, the new logs are appended to the old logs, and they do get big.

This PowerShell one-liner will delete the folder containing all of the old logs if it exists. To get this to work, you will need to get the full UNC path to the ZTIUtility folder, which I found is located at %DeployRoot%\Tools\Modules\ZTIUtility. To map to this, you need to explicitly define the UNC path as the PowerShell one-liner cannot read the task sequence variable %DeployRoot% until this module is loaded.

To use this in MDT, I created a Run Command Line task as shown below.

The task was placed into the Initialization phase, so the log directory is deleted near the start of the task sequence.

Here is the one-liner. You will need to set the execution policy, as it will initially be set to restricted, and update <UNC Path to the MDTDeploymentShare> to the path of your MDT deployment share.

 powershell.exe -executionpolicy bypass -command "&{import-module '<UNC Path to the MDTDeploymentShare>\Tools\Modules\ZTIUtility';$LogDir=$TSEnv:DEPLOYROOT+'\Logs\'+$TSEnv:OSDCOMPUTERNAME;If ((Test-Path $LogDir) -eq $true) {Remove-Item -Path $LogDir -ErrorAction SilentlyContinue -Recurse -Force}}"  

14 August 2018

Profile Size Reporting

While in the middle of the planning phase for the Windows 10 rollout, we wanted a report on the size of the My Documents and Desktop folders of all users, as these are the folders we decided to back up. USMT is not possible in our environment because of the cross-architecture upgrade. Plus, we want users to have new profiles for the new OS.

The first thing I thought about was writing a script that would report this data back to SCCM through a custom inventory, but then I realized this is a one-time task and we will probably never look at it again. Instead, I decided to write a script that gathers the sizes of the two folders for each profile on a system and reports them to a single CSV file that can be opened in Excel.

I wrote the script so that it can be used in the new SCCM scripts section, or it can be deployed as a package. You can even manually execute it. There are two lines you will need to modify. Those are lines 2 and 4. Line 2 will need the full path to the CSV file. Line 4 is the list of profiles to exclude. I have included the three that are included in all systems. There are two additional ones I added for the environment I work in.

You are probably wondering why I put in lines 19 through 23, as that may seem somewhat odd. Because many systems compete for the same CSV file simultaneously, only one can write to it at a time. To handle this, I put all the content to be written into a single variable, using [char]13 (CR) for line breaks. The script then enters a loop that repeats until $Success equals $true: each time the write to the CSV file fails because the file is locked, $Success is set to $false and the script tries again.

To use this with the newer SCCM, you can enter it into the scripts section as shown below.

You can download the script from my GitHub Site

I would like to thank Mike Roberts from The Ginger Ninja for the resource on how to calculate folder sizes. That helped a lot in writing this script.

 #Full path and filename of the file to write the output to  
 $File = "<Path to CSV file>\ProfileSizeReport.csv"  
 #Exclude these accounts from reporting  
 $Exclusions = ("Administrator", "Default", "Public")  
 #Get the list of profiles  
 $Profiles = Get-ChildItem -Path $env:SystemDrive"\Users" | Where-Object { $_.Name -notin $Exclusions }  
 #Create the object array  
 $AllProfiles = @()  
 #Create the custom object  
 foreach ($Profile in $Profiles) {  
      $object = New-Object -TypeName System.Management.Automation.PSObject  
      #Get the size of the Documents and Desktop combined and round with no decimal places  
      $FolderSizes = [System.Math]::Round((Get-ChildItem ($Profile.FullName + '\Documents'), ($Profile.FullName + '\Desktop') -Recurse -ErrorAction SilentlyContinue | Measure-Object -Property Length -Sum).Sum)  
      $object | Add-Member -MemberType NoteProperty -Name ComputerName -Value $env:COMPUTERNAME.ToUpper()  
      $object | Add-Member -MemberType NoteProperty -Name Profile -Value $Profile.Name  
      $object | Add-Member -MemberType NoteProperty -Name Size -Value $FolderSizes  
      $AllProfiles += $object  
 }  
 #Create the formatted entry to write to the file  
 [string]$Output = $null  
 foreach ($Entry in $AllProfiles) {  
      $Output += $Entry.ComputerName + ',' + $Entry.Profile + ',' + $Entry.Size + [char]13  
 }  
 #Remove the last line break  
 $Output = $Output.Substring(0, $Output.Length - 1)  
 #Write the output to the specified CSV file. If the file is locked by another machine, keep retrying until the write succeeds  
 Do {  
      Try {  
           $Output | Out-File -FilePath $File -Encoding UTF8 -Append -Force -ErrorAction Stop  
           $Success = $true  
      } Catch {  
           $Success = $false  
      }  
 } While ($Success -eq $false)  

08 August 2018

Install Dell Command Configure in WinPE

Dell Command | Configure can be of great use in the WinPE environment. It allows you to configure and/or query the BIOS before an operating system is laid down. This is easy to do.

The first thing is to determine the architecture of the WinPE environment, which dictates which Dell Command | Configure build to use. On a 64-bit machine, you will have two folders under %PROGRAMFILES(X86)%\Dell\Command Configure\: x86 and x86_64. Depending on the WinPE architecture, copy the contents of the appropriate directory to a UNC path. The files in that directory are what will be used to execute CCTK.exe.
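Picking the source folder and copying it out could be sketched as follows. This assumes the PROCESSOR_ARCHITECTURE environment variable and that it is run on a machine where Dell Command | Configure is installed; the destination share is a placeholder:

```powershell
# Sketch: choose the CCTK source folder by architecture and copy it to a share
# (hypothetical UNC path; adjust to your environment)
If ($env:PROCESSOR_ARCHITECTURE -eq 'AMD64') {
    $SourceDir = "${env:ProgramFiles(x86)}\Dell\Command Configure\x86_64"
} else {
    $SourceDir = "${env:ProgramFiles(x86)}\Dell\Command Configure\x86"
}
Copy-Item -Path "$SourceDir\*" -Destination '\\Server\Share\CCTK' -Recurse -Force
```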

I used to have a complete PowerShell script written to do all of these steps, but I have moved away from that toward individual task sequence steps so I don't have to maintain a .PS1 file. Below is a screenshot of the configuration I use for installing this.

I first map a T: drive to the UNC path containing the files. I arbitrarily chose T:. All of the commands listed below are entered and executed through the Run Command Line task sequence.

Here is a screenshot of the directory. I used to have to explicitly enter credentials with the net use command, but with recent Windows updates, that no longer works. Now all I enter is net use t: \\<UNC Path>.

The next thing is copying the above files and directory to the x: drive. The x: drive contains the WinPE operating system. I create a folder in the root named CCTK. To do the copy, I use the following command.

xcopy.exe "t:\*.*" "x:\CCTK\" /E /C /I /H /R /Y /V

Next comes the HAPI driver, which is necessary for CCTK to interface with the Dell system hardware. Here is the command for installing HAPI; in an x86 environment, it would be hapint.exe instead of hapint64.exe.

x:\CCTK\HAPI\hapint64.exe -i -k C-C-T-K -p X:\CCTK\HAPI\

Finally, I unmap the mapped t: drive by using the following command line.

net use t: /delete

This is an example of using the CCTK.exe to clear the BIOS password after the utility is installed.
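For reference, clearing the BIOS setup password with CCTK looks roughly like this. The --setuppwd/--valsetuppwd switch names are from Dell's CCTK command-line documentation; verify them against your installed version, and note the current password placeholder stays as-is:

```powershell
# Clear the BIOS setup password: set it to empty while validating the current one
x:\CCTK\cctk.exe --setuppwd= --valsetuppwd=<current BIOS password>
```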

NOTE: You can update the CCTK for the WinPE environment. To do so, execute the Dell Command | Update which will also update the Dell Command | Configure utility. Once that is updated, recopy the contents as described above to the designated UNC path. 

27 July 2018

Cleaning Up and Automating the Backup of Bitlocker Passwords to Active Directory

Recently, I was reviewing the BitLocker recovery password backups. We still use Active Directory to store them, and yes, we are planning on moving to MBAM; that is a ways off, as we're in the middle of the Windows 10, Exchange 2016, and Office 2016 migrations. While looking over the AD backups, I noticed some machines had multiple recovery passwords stored due to systems being reimaged, and some had duplicates.

To solve this, I was initially going to write a PowerShell script to delete the AD entry during a task sequence build process for a clean slate. During testing, I ran into other issues that required more logic in the script, so a one-liner was out of the question. In the end, this cleaned up our Active Directory BitLocker password entries and verified that all stored passwords were valid.

This script I have written does the following:

  1. Queries the BitLocker password and ID from the system
  2. Queries Active Directory for the backed-up BitLocker ID(s) and password(s)
  3. Cycles through the Active Directory entries, deleting those that do not match the locally stored ones and removing duplicates
  4. Queries Active Directory once again for the stored ID and password to see if it matches the locally stored ones
  5. If there are no entries, the info is backed up to Active Directory and the backup is verified
  6. If error -2147024809 occurs during the backup, the script checks whether BitLocker is enabled and reports that the system is not BitLockered; otherwise, an unspecified error message is displayed
  7. If the key does not exist in Active Directory, the info is backed up. If it exists but does not match, the key in AD is deleted and the new key is uploaded. If there are duplicates in AD that match the locally stored key, all but one are deleted. If BitLocker is not enabled on a machine, error 3 is returned; if an unspecified error occurs, error 2 is returned. These return codes allow the script to alert on issues within a build or when used in the scripts section of SCCM.
  8. If the BitLocker info matches on both the local system and AD, the info is displayed on the screen and exit code 0 is returned.
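The core compare-and-clean logic in the steps above can be sketched as follows. This is a minimal sketch, assuming the RSAT ActiveDirectory module is installed and only %systemdrive% is encrypted; the full script adds the backup, verification, and return-code handling described above:

```powershell
# Sketch: remove AD recovery entries that do not match the local recovery password
Import-Module ActiveDirectory

# Local recovery password protector on the system drive
$LocalKey = (Get-BitLockerVolume -MountPoint $env:SystemDrive).KeyProtector |
    Where-Object { $_.KeyProtectorType -eq 'RecoveryPassword' }

# All recovery entries stored under this computer object in AD
$ADKeys = Get-ADObject -Filter { objectclass -eq 'msFVE-RecoveryInformation' } `
    -SearchBase (Get-ADComputer -Identity $env:COMPUTERNAME).DistinguishedName `
    -Properties 'msFVE-RecoveryPassword'

# Delete stale entries; a fuller version would also keep only one matching entry
foreach ($Key in $ADKeys) {
    If ($Key.'msFVE-RecoveryPassword' -ne $LocalKey.RecoveryPassword) {
        Remove-ADObject -Identity $Key.DistinguishedName -Confirm:$false
    }
}
```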
This script requires domain admin access to run, as it needs write access to Active Directory. The SCRIPTS section of SCCM cannot run it because scripts there execute under the system account; the same goes for an SCCM package. The only way to execute it through SCCM is in a task sequence, using a Run Command Line task with a domain admin account specified under Run this step as the following account, as shown below. The same must be done in MDT and/or SCCM to clean up Active Directory when building a new system.

You can download the script from my GitHub site

19 July 2018

Accessing MDT and SCCM Task Sequence Variables

While rewriting a PowerShell automation script to move machines in Active Directory, I had been trying to pass the MachineObjectOU task sequence variable to the PowerShell script within a command line task sequence step. It constantly failed. I finally put in a cmd.exe task sequence step to pause the build and allow me to interact with the task sequence directly. What I found out is that MDT and SCCM task sequence variables from a build process can only be accessed under the administrator account. In the screenshot below, the command line on the top was launched by the task sequence with no specified account credentials, while the command line on the bottom was opened using domain admin credentials. As you can see, the command line on the top was able to access the task sequence variable, whereas the one on the bottom was not. If a task sequence step runs under any other account, task sequence variables are null.
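For reference, this is how a task sequence variable is read through the Microsoft.SMS.TSEnvironment COM object; it only returns a value in the account context the task sequence runs under:

```powershell
# Returns the variable's value inside the build context; null/empty elsewhere
$TSEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$TSEnv.Value('MachineObjectOU')
```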

17 July 2018

Moving Computers to Designated OU during Build Process

It has been four years since I published the last version of the PowerShell script that moves systems from one OU to another during a build process. That version required creating a task sequence for each OU, which becomes a very daunting process when there are a lot of OUs.

In this new version, I have done two things to improve it. The first is making it into a one-liner, so you don't have to maintain the script on some share. Second, it can now move systems to any OU using the one-liner thereby cutting down on the number of required task sequences.

To use this method, RSAT must be installed on the system. I have RSAT as part of our reference image, so it is already present when the reference image is laid down. The next step is to create the task sequence steps; three are required. The first gets the OU that was selected in the initial build screen by querying the MachineObjectOU task sequence variable. Task sequence variables are only accessible to the administrator account; if you try to access them from any other user account, the output is null. This is the reason three steps are required for this process. So, to pass the MachineObjectOU to the next step, which moves the system, I have the first step write the OU to the text file OU.txt in the root of the C: drive.

This is the one-liner for creating the file containing the OU to move the system to:

 powershell.exe -executionpolicy bypass -command "&{$TSEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment;$TSEnv.Value('MachineObjectOU') | out-file c:\OU.txt}"  

The next step is moving the system. This one-liner will read the OU from the text file, check whether the system is already in the desired OU, and move it if it is not. Lastly, the one-liner checks that the system is in the correct OU after the move. It exits with a 0 if successful and a 1 if unsuccessful.

This is the one-liner for moving the system:

 powershell.exe -executionpolicy bypass -command "&{Import-Module ActiveDirectory;[string]$CurrentOU=((Get-ADComputer $env:ComputerName).DistinguishedName.substring((Get-ADComputer $env:ComputerName).DistinguishedName.IndexOf(',')+1));[string]$NewOU=Get-Content c:\OU.txt;If ((Get-WmiObject Win32_Battery) -ne $null) {$NewOU=$NewOU.Insert(0,'OU=Laptops,')};If ($CurrentOU -ne $NewOU) {Move-ADObject -identity (Get-ADComputer $env:ComputerName).DistinguishedName -TargetPath $NewOU};$CurrentOU=((Get-ADComputer $env:ComputerName).DistinguishedName.substring((Get-ADComputer $env:ComputerName).DistinguishedName.IndexOf(',')+1));If ($CurrentOU -eq $NewOU) {Exit 0} else {Exit 1};}"  

Lastly, the third step deletes the OU.txt file. Unlike the old version, the updated approach requires no WMI queries in the task sequence.

 powershell.exe -executionpolicy bypass -command "&{Remove-Item -Path c:\OU.txt -Force}"  

This is how I have it in the task sequence hierarchy: