14 August 2018

Profile Size Reporting

While planning the Windows 10 rollout, we wanted a report on the size of the Documents and Desktop folders for all users, as these are the folders we have decided to back up. USMT is not an option in our environment because we migrate across architectures, and we also want users to start with fresh profiles on the new OS.

My first thought was to write a script that would report this data back to SCCM as custom inventory, but this is a one-time task we will probably never look at again. Instead, I wrote a script that gathers the sizes of the two folders for each profile on a system and reports them to a single CSV file that can be opened in Excel.

I wrote the script so that it can be used in the new SCCM Scripts section, deployed as a package, or executed manually. There are two lines you will need to modify: lines 2 and 4. Line 2 needs the full path to the CSV file. Line 4 is the list of profiles to exclude; I have included the three that exist on every system, plus two more specific to the environment I work in.

You are probably wondering why lines 19 through 23 are there, as they may seem odd. Because many systems compete for the same CSV file simultaneously, only one can write to it at a time. To keep the write down to a single operation, I build all of the output in one variable, using [char]13 (CR) for line breaks. The script then enters a loop that retries until $Success equals $true: each time the write fails because the file is locked, $Success is set to $false and the loop tries again.

To use this with the newer SCCM, you can enter it into the scripts section as shown below.


You can download the script from my GitHub site.

I would like to thank Mike Roberts from The Ginger Ninja for the resource on how to calculate folder sizes. That helped a lot in writing this script.

 #Full path and filename of the file to write the output to  
 $File = "<Path to CSV file>\ProfileSizeReport.csv"  
 #Exclude these accounts from reporting  
 $Exclusions = ("Administrator", "Default", "Public")  
 #Get the list of profiles  
 $Profiles = Get-ChildItem -Path $env:SystemDrive"\Users" | Where-Object { $_ -notin $Exclusions }  
 #Create the object array  
 $AllProfiles = @()  
 #Create the custom object  
 foreach ($Profile in $Profiles) {  
      $object = New-Object -TypeName System.Management.Automation.PSObject  
 #Get the combined size of the Documents and Desktop folders, rounded to a whole number of bytes  
      $FolderSizes = [System.Math]::Round((Get-ChildItem ($Profile.FullName + '\Documents'), ($Profile.FullName + '\Desktop') -Recurse -ErrorAction SilentlyContinue | Measure-Object -Property Length -Sum).Sum)  
      $object | Add-Member -MemberType NoteProperty -Name ComputerName -Value $env:COMPUTERNAME.ToUpper()  
      $object | Add-Member -MemberType NoteProperty -Name Profile -Value $Profile  
      $Object | Add-Member -MemberType NoteProperty -Name Size -Value $FolderSizes  
      $AllProfiles += $object  
 }  
 #Create the formatted entry to write to the file  
 [string]$Output = $null  
 foreach ($Entry in $AllProfiles) {  
      [string]$Output += $Entry.ComputerName + ',' + $Entry.Profile + ',' + $Entry.Size + [char]13  
 }  
 #Remove the last line break  
 $Output = $Output.Substring(0,$Output.Length-1)  
 #Write the output to the specified CSV file. If the file is opened by another machine, continue trying to open until successful  
 Do {  
      Try {  
           #-ErrorAction Stop turns a locked-file write error into a terminating error so Catch fires  
           $Output | Out-File -FilePath $File -Encoding UTF8 -Append -Force -ErrorAction Stop  
           $Success = $true  
      } Catch {  
           $Success = $false  
      }  
 } While ($Success -eq $false)  
   

08 August 2018

Install Dell Command Configure in WinPE

Dell Command | Configure can be of great use in the WinPE environment. It allows you to configure and/or query the BIOS before an operating system is laid down. This is easy to do.

The first step is to determine the architecture of the WinPE environment, as that dictates which Dell Command | Configure build to use. On a 64-bit machine, you will have two folders under %PROGRAMFILES(X86)%\Dell\Command Configure\: x86 and x86_64. Copy the contents of the directory matching your WinPE architecture to a UNC path; those files are what will be used to execute CCTK.exe.

I used to have a complete PowerShell script that performed all of these steps, but I have moved to individual task sequence steps so I don't have to maintain a .PS1 file. Below is a screenshot of the configuration I use for installing this.


I first map a T: drive to the UNC path containing the files (T: was an arbitrary choice). All of the commands listed below are entered and executed through a Run Command Line task sequence step.

Here is a screenshot of the directory. I used to have to explicitly pass credentials to the net use command, but with recent Windows updates that no longer works. Now all I enter is net use t: \\<UNC Path>.

The next step is copying the above files and directories to the x: drive, which contains the WinPE operating system. I create a folder in the root named CCTK and use the following command to do the copy.

xcopy.exe "t:\*.*" "x:\CCTK\" /E /C /I /H /R /Y /V



Next comes the HAPI driver, which is necessary for CCTK to interface with the Dell system hardware. Here is the command for installing HAPI. In an x86 environment, use hapint.exe instead of hapint64.exe.

x:\CCTK\HAPI\hapint64.exe -i -k C-C-T-K -p X:\CCTK\HAPI\



Finally, I unmap the mapped t: drive by using the following command line.

net use t: /delete


This is an example of using the CCTK.exe to clear the BIOS password after the utility is installed.


NOTE: You can update the CCTK used in the WinPE environment. To do so, run Dell Command | Update, which will also update the Dell Command | Configure utility. Once it is updated, recopy the contents to the designated UNC path as described above.

27 July 2018

Cleaning Up and Automating the Backup of Bitlocker Passwords to Active Directory

Recently, I was reviewing the BitLocker recovery password backups. We still use Active Directory to store them, and yes, we are planning on moving to MBAM, but that is a ways off as we're in the middle of the Windows 10, Exchange 2016, and Office 2016 migrations. While looking over the AD backups, I noticed some machines stored multiple recovery passwords, due to systems being reimaged, and some had duplicates.

To solve this, I initially planned to write a PowerShell script that would delete the AD entry during a task sequence build for a clean slate. During testing, I ran into other issues that required more handling in the script, so a one-liner was out of the question. In the end, this script cleaned up our Active Directory BitLocker password entries and verified all stored passwords were valid.

This script I have written does the following:

  1. Queries the BitLocker password and ID from the system
  2. Queries Active Directory for the backed-up BitLocker ID(s) and password(s)
  3. Cycles through the Active Directory entries, deleting those that do not match the locally stored ones and removing duplicates
  4. Queries Active Directory once again to see if the stored ID and password match the locally stored ones
  5. If there are no entries, the info is backed up to Active Directory and the backup is verified
  6. If error -2147024809 occurs during the backup, the script checks whether BitLocker is enabled; if it is not, it reports that the system is not BitLockered, otherwise an unspecified error message is displayed
  7. If the key does not exist in Active Directory, the info is backed up. If it exists but does not match, the key in AD is deleted and the new key is uploaded. If there are duplicates in AD that match the locally stored key, all but one are deleted. If BitLocker is not enabled on a machine, error 3 is returned; if an unspecified error occurs, error 2 is returned. These return codes allow the script to flag issues within a build or when used in the Scripts section of SCCM
  8. If the BitLocker info matches on both the local system and in AD, the info is displayed on the screen and exit code 0 is returned
This script requires domain admin rights, as it needs access to Active Directory. The Scripts section of SCCM cannot run it because that uses the system account, and the same goes for an SCCM package. The only way to execute it through SCCM is in a task sequence, using a Run Command Line step with a domain admin account specified under Run this step as the following account, as shown below. The same has to be done in MDT and/or SCCM for the script to clean up Active Directory when building a new system.
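As a rough illustration of step 1 above, the locally stored recovery password and its ID can be read with the built-in BitLocker PowerShell module (Windows 8/Server 2012 and later). This is a sketch of the general approach, not the exact code from my script, and the variable names are mine:

```powershell
# Sketch: read the local BitLocker recovery password and its protector ID for C:
$RecoveryProtector = (Get-BitLockerVolume -MountPoint 'C:').KeyProtector |
     Where-Object { $_.KeyProtectorType -eq 'RecoveryPassword' }
$BitlockerID       = $RecoveryProtector.KeyProtectorId   # the ID AD stores with the backup
$BitlockerPassword = $RecoveryProtector.RecoveryPassword
```

The AD side can then be compared against these values by querying the msFVE-RecoveryInformation objects stored under the computer account.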



You can download the script from my GitHub site


19 July 2018

Accessing MDT and SCCM Task Sequence Variables

While rewriting a PowerShell automation script that moves machines in Active Directory, I had been trying to pass the MachineObjectOU task sequence variable to the PowerShell script within a command line task sequence step. It constantly failed. I finally put in a cmd.exe step to pause the build and let me interact directly with the task sequence. What I found is that MDT and SCCM task sequence variables from a build can only be accessed under the administrator account the task sequence runs as. In the screenshot below, the command prompt on top was launched by the task sequence with no credentials specified; the one on the bottom was opened using domain admin credentials. As you can see, the top one could access the task sequence variable, whereas the bottom one could not. If a task sequence step uses any other account, task sequence variables come back null.
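For reference, task sequence variables are read through the Microsoft.SMS.TSEnvironment COM object, which only exists while a task sequence is running; this is the standard pattern, shown here as a minimal sketch:

```powershell
# Only works inside a running task sequence, and only under the account
# the task sequence itself runs as (otherwise the value comes back null)
$TSEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$TSEnv.Value('MachineObjectOU')
```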


17 July 2018

Moving Computers to Designated OU during Build Process

It has been four years since I published the last version of the PowerShell script that moves systems from one OU to another during a build. That version required a separate task sequence step for each OU which, if you have a lot of OUs, makes for a very daunting process.

In this new version, I have made two improvements. The first is condensing it into a one-liner, so you don't have to maintain a script on a share. Second, it can now move systems to any OU with the same one-liner, cutting down the number of required task sequence steps.

To use this method, RSAT must be installed on the system. RSAT is part of our reference image, so it is already present when the image is laid down. The next step is to create the task sequence steps; three are required. The first gets the OU that was selected in the initial build screen by querying the MachineObjectOU task sequence variable. Task sequence variables are only accessible to the administrator account; if you try to access them from any other account, the output is null. This is the reason three steps are required for the process. To pass MachineObjectOU to the next step, which moves the system, I have the first step write the OU to a text file, OU.txt, in the c:\ directory.

This is the one-liner for creating the file containing the OU to move the system to:

 powershell.exe -executionpolicy bypass -command "&{$TSEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment;$TSEnv.Value('MachineObjectOU') | out-file c:\OU.txt}"  



The next step moves the system. This one-liner reads the OU from the text file, checks whether the system is already in the desired OU, and moves it if it is not. Lastly, the one-liner verifies the system is in the correct OU after the move, exiting with a 0 if successful and a 1 if unsuccessful.

This is the one-liner for moving the system:

 powershell.exe -executionpolicy bypass -command "&{Import-Module ActiveDirectory;[string]$CurrentOU=((Get-ADComputer $env:ComputerName).DistinguishedName.substring((Get-ADComputer $env:ComputerName).DistinguishedName.IndexOf(',')+1));[string]$NewOU=Get-Content c:\OU.txt;If ((Get-WmiObject Win32_Battery) -ne $null) {$NewOU=$NewOU.Insert(0,'OU=Laptops,')};If ($CurrentOU -ne $NewOU) {Move-ADObject -identity (Get-ADComputer $env:ComputerName).DistinguishedName -TargetPath $NewOU};$CurrentOU=((Get-ADComputer $env:ComputerName).DistinguishedName.substring((Get-ADComputer $env:ComputerName).DistinguishedName.IndexOf(',')+1));If ($CurrentOU -eq $NewOU) {Exit 0} else {Exit 1};}"  



Lastly, the third task sequence step deletes the OU.txt file. The per-OU WMI queries the old script required are no longer necessary in this updated version.

 powershell.exe -executionpolicy bypass -command "&{Remove-Item -Path c:\OU.txt -Force}"  



This is how I have it in the task sequence hierarchy:



11 July 2018

Google Chrome Installation One-Liner

Rather than keep up with the latest Google Chrome installers, I wanted a one-liner that downloads Chrome from the Google URI and silently installs it during the reference image build. This one-liner does just that: it downloads the latest Chrome version and installs it. You no longer have to spend time tracking installers, and the code is compacted down to a one-liner that lives entirely in the task sequence.

I placed the $URI variable at the beginning, as shown below, so if it ever needs to change it is easily accessible in the one-liner within either SCCM or MDT. If the download site changes and the installer fails, the error code is returned so you are aware of the issue.

As you can see below, I used a Run Command Line task sequence to incorporate this into the build. 


NOTE: The URI in the one-liner downloads the Chrome 64-Bit public version. You will need to update the URI if you use a different version.

Here is the one-liner:


 powershell.exe -executionpolicy bypass -command "&{$URI='http://dl.google.com/chrome/install/375.126/chrome_installer.exe';$ChromeInstaller=$env:TEMP+'\ChromeInstaller.exe';Invoke-WebRequest -Uri $URI -OutFile $ChromeInstaller -ErrorAction SilentlyContinue;$ErrCode=(Start-Process -FilePath $ChromeInstaller -ArgumentList '/silent /install' -Wait -Passthru).ExitCode;Remove-Item $ChromeInstaller -ErrorAction SilentlyContinue -Force;Exit $ErrCode}"  

28 June 2018

Shortcut GPO for Root Network Share

If you have tried adding a GPO that creates a shortcut to a root network share, \\contoso for instance, you have probably seen that it does not work. The solution is to use Shell Object rather than File System Object as the target type. Once you select Shell Object, click Browse for Shell Object and select Network. It will take some time to scan the network. In my experience, I have to do this twice: the first time, only a couple of items show under Network; when I click cancel, browse again, and click Network, the full list appears, at which point I can select the network share I need the shortcut for. In the example below, once I browsed for and selected the shell object on the network, the shortcut was created successfully.



14 June 2018

Office 2016 Outlook Cannot Log On Upon the First Launch

While building the new Windows 10 image with Office 2016, with Exchange 2016 on the backend, I got the following error message every time I tried to open Outlook for the first time.


What was so frustrating about this issue was that if I went into Mail in the Control Panel and deleted the existing profile, Outlook would open and configure with no problems.

The first thing I did was check with the Exchange admins to make sure autodiscover was enabled; it was. The second was to make sure the newly imaged machine could ping the Exchange servers; it could. While troubleshooting, I learned that even though the Office Customization Tool still offers PRF file creation, PRF files are not valid for Office 365/2016. At that point, I went back and removed the PRF content from the customization file (.MSP), but the issue persisted. Finally, I found a PRF file in the following directory: c:\Program Files (x86)\Microsoft Office\. Changing the contents of that file did not help either.

The final fix was to delete that PRF file (OutlookProfile.PRF), shown below. The file may or may not exist in other environments. How it was being created, I have no idea; I did not find it anywhere in the directory tree of the Office 2016 installer, and the MSP file had been recreated from scratch to make sure no remnants were left from the original MSP that did contain PRF-generation parameters. The only other fix, which I did not want to do, was to create an entirely new installation tree from a fresh download of Office 2016. Instead, I added a line to my PowerShell installer that deletes the PRF file. It may also be possible to include that deletion in the Office Customization Tool.
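The deletion line in my installer amounts to something like the following; the path is the one found above, though your exact installer code may of course differ:

```powershell
# Delete the stray PRF file if it exists; silently ignore the case where it does not
Remove-Item -Path "${env:ProgramFiles(x86)}\Microsoft Office\OutlookProfile.PRF" -Force -ErrorAction SilentlyContinue
```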


How to Remove Bulk Facebook Profile Content

Here is a video guide on how to remove bulk content from your Facebook profile. Some will say to just delete the profile and create a new one, but that can be troublesome, especially if you have a lot of Facebook contacts and/or are well known in certain areas of expertise. In the video, I walk through the easiest way I found to delete Facebook posts and likes, including several gotchas in the process and how the Chrome add-in works. I also include valid reasons why you might want to clean up your profile.

Reasons for removing bulk Facebook content:

  • International travel where certain posts may be illegal in other countries
  • A potential employer may want to look at your Facebook profile
  • You broke up with someone and want all associated content deleted
  • You are applying to colleges and are concerned your Facebook content may have an impact on admissions
Here are some links that pertain to some of the content of the video:


01 June 2018

Filtering out Windows Activations When Imaging from Test MDT Share or Task Sequence

The environment I work in uses MAK activation instead of KMS, which means we have a set number of MAK activations given by Microsoft. One issue is that you can burn through quite a few while building a new image and working through problems.

In my environment, we do have SCCM, but we use MDT for imaging, with two MDT deployment shares: one for testing and one for production. I use a PowerShell script to activate the Office and Windows licenses. My test task sequences are kept exactly like my production task sequences once they are synchronized after testing; I don't like any differences between them. To stop my test share from activating every time I image, I came up with the following solution.

In the task sequence step(s) used to activate Windows and/or Office, I added a Task Sequence Variable condition under the Options tab. In the Variable field, enter DEPLOYROOT. The Condition field should be equals. Finally, the Value field should contain the UNC path to your production MDT share. This stops the step from executing when building from your test share.


Although I have not tried this, if you have only one deployment share, I think you can add a Task Sequence Variable condition with TaskSequenceID as the Variable, equals as the Condition, and the ID of your production task sequence as the Value. This should stop the step from executing when run from the test task sequence.

21 May 2018

Fix for FAILURE (9705): Unable to find USMT file, cannot capture/restore user state

Recently, I upgraded to the new MDT 6.3.8450.1000. It was a fresh install, in which I created an entirely new MDT share for our Windows 10 build. I had previously done an in-place upgrade on the other MDT share containing the Windows 7 builds, and it broke all of them, at which point I had to do a restore.

After creating the new task sequences, applications, drivers, OS, and packages, I went to capture my first Windows 10 image and got the error below.


The third line down was the key to the error. I opened ZTIUserState.wsf and saw this line:
sScanStateFolder = oEnvironment.Item("DeployRoot") & "\Tools\" & sUSMTArchitecture & "\" & sUSMTVersion

I went to the MDT deployment share and looked under tools\x64\USMT5. The directory was empty, as was tools\x86\USMT5. This was evidently the cause. To fix it, I needed to download the latest Windows ADK, which first required uninstalling the current version. During the install, I checked the User State Migration Tool (USMT) option. Once it completed, I copied the x86 and amd64 USMT folders from

  • C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\User State Migration Tool\x86
  • C:\Program Files (x86)\Windows Kits\10\Assessment and Deployment Kit\User State Migration Tool\amd64
to the following directories

  • %MDTDeploymentShare%\Tools\x86\USMT5
  • %MDTDeploymentShare%\Tools\x64\USMT5
Once I did this, the task sequence could capture an image with no problems.

17 May 2018

Explaining the %WINDIR%\Installer Folder

While recently writing an MSI uninstaller script, I needed to associate the GUID with the application's name. In the process, I finally figured out how the MSI files inside the Installer folder map to their applications.

You may wonder why you would want to know this. Mostly it is general knowledge, but there are a couple of practical uses I can think of. I used it as described above, and you may also want it when checking whether the cached MSI is available for an uninstall or a repair.

The MSI and MSP files inside the %WINDIR%\Installer folder are associated with the installed application or update in the following registry location:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Products\<Unique Identifier>\InstallProperties

The screenshot below shows an example of the unique MSI filename in the %Windir%\Installer directory associated with the installed application. The association is made under the LocalPackage.
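A quick way to dump that mapping is to enumerate the registry location above; this is a sketch built from the layout just described (DisplayName and LocalPackage are values Windows Installer writes under InstallProperties):

```powershell
# Map installed products to their cached MSI files in %WINDIR%\Installer
$Products = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Products'
Get-ChildItem -Path $Products | ForEach-Object {
     $Props = Get-ItemProperty -Path (Join-Path $_.PSPath 'InstallProperties') -ErrorAction SilentlyContinue
     if ($Props -and $Props.LocalPackage) {
          [PSCustomObject]@{
               DisplayName  = $Props.DisplayName
               LocalPackage = $Props.LocalPackage   # the unique MSI filename under %WINDIR%\Installer
          }
     }
}
```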


07 May 2018

Blank Screen after Enabling Secure Boot in the BIOS

I am working on the Windows 10 image, and part of this project is converting to UEFI. We still have several older systems in production, because temporary or loaner machines aren't a priority to keep up to date. I use the oldest model when creating a reference image so I can be sure the image will work across all models.

I first started with a Dell OptiPlex 990 and quickly realized it was not compatible with Windows 10 UEFI because of missing BIOS features. I then moved up to a Dell OptiPlex 9010, which includes secure boot. Once I set the BIOS to UEFI and rebooted, the monitor went blank. The first thing I tried was turning off the machine, unplugging the power cord, holding the power button for 15 seconds, removing the battery, holding the power button for 30 seconds, and then reinstalling the battery before powering back on. This did not reset the BIOS, and the screen was still blank.

Next, I removed the DVI cable and replaced it with a VGA cable plugged into the motherboard's built-in video port. When I powered on the system, the following screen displayed.


The instructions provided do not work. The video card I experienced this with was the AMD Radeon HD 6350.

What I did to resolve this was to completely remove the video card, connect the VGA cable, and then power the system up. After that, I got the following screen.


Once I got this screen, I was able to go into the BIOS. It came down to two BIOS settings. The first is Enable Legacy Option ROMs, which must be turned off for secure boot to be enabled. It was this option that caused the blank screen when using the video card.


The second option is the Secure Boot Enable. To use this, Enable Legacy Option ROMs has to be turned off.



Once I set these back to the defaults, the system booted up and displayed on the monitor through the DVI video card again. On an OptiPlex 9010 or higher, the problem is not the computer; it is the video card, which cannot support secure boot in Windows 8 or higher due to the lack of UEFI Option ROM drivers, as described by Dell and ZDNet. The solution would be to replace such cards with video cards that do support UEFI Option ROM drivers.

04 May 2018

MDT Not assigning the correct Drive Letter to the Windows Primary Partition with UEFI

When I installed the new MDT 6.3.8450.1000 to build the deployment share for Windows 10 1709, I ran into issues with operating system deployment. In the process of building out the new task sequence, I also decided to convert to UEFI. The OS was laying down, but it landed on the wrong drive letter, D: instead of C:. After trying many things, I finally decided to abandon the Format and Partition Disk task sequence steps provided by Microsoft and create my own using PowerShell.

The first step is to create the diskpart text configuration file. I could have created the file by hand and pointed the build at it, but I would rather have PowerShell generate it. From researching builds that failed for other reasons, I found the build creates the following drives:


  • Boot (EFI) with the drive letter W:
  • (MSR) with no designated drive letter
  • Windows (Primary) with the drive letter C:
  • Recovery (Recovery) with the drive letter E:
To achieve this, I use two task sequence steps in MDT, as shown below.

The first step is the PowerShell one-liner in a Run Command Line step, shown below. The one-liner creates the DiskpartHDD.txt file on the WinPE boot drive (X:). The partition sizes I used came from MDT.


This is the PowerShell one-liner within the task sequence:

powershell.exe -command "&{new-item -Name X:\WINDOWS\system32\DiskpartHDD.txt -ItemType File -Force; add-content -value 'select disk 0' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'clean' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'convert gpt' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'create partition efi size=499' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'format quick fs=fat32 label=System' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'assign letter=W' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'create partition msr size=128' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'create partition primary' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'shrink minimum=50000' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'format quick fs=ntfs label=Windows' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'assign letter=C' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'create partition primary' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'format quick fs=ntfs label=WinRE' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'set id="de94bba4-06d1-4d40-a16a-bfd50179d6ac"' -path X:\WINDOWS\system32\DiskpartHDD.txt; add-content -value 'assign letter=E' -path X:\WINDOWS\system32\DiskpartHDD.txt}"
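For readability, this is the DiskpartHDD.txt file the one-liner above generates:

```
select disk 0
clean
convert gpt
create partition efi size=499
format quick fs=fat32 label=System
assign letter=W
create partition msr size=128
create partition primary
shrink minimum=50000
format quick fs=ntfs label=Windows
assign letter=C
create partition primary
format quick fs=ntfs label=WinRE
set id="de94bba4-06d1-4d40-a16a-bfd50179d6ac"
assign letter=E
```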

As for the Diskpart task sequence, I used the following Run Command Line task sequence:


This is all that is required to create the partitions. I disabled the steps provided by Microsoft, and the build now assigns drive C: to the Windows operating system.

23 April 2018

Oracle Java Runtime Installer

Java must be updated so often that I wanted an auto installer to make updates a breeze. The installer first determines whether the system is x86 or x64. It then uninstalls the old version and installs the x86 version on a 32-bit system, or both the x86 and x64 versions on a 64-bit system. The parameters are the same for both architectures, so you only define them once.

When a new version is released, all you need to do is swap out the installer executables and update the package in SCCM. The script finds the executable matching each architecture, since Oracle includes the architecture in the filename.
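The architecture check can be sketched roughly like this; it is a reconstruction of the approach, not the exact code from GitHub, and the jre-*.exe filename patterns are assumptions based on Oracle's naming (e.g. jre-8u171-windows-i586.exe vs jre-8u171-windows-x64.exe):

```powershell
# Sketch: pick the Oracle JRE installers that match the OS architecture
$Installers = Get-ChildItem -Path $PSScriptRoot -Filter 'jre-*.exe'
if ([Environment]::Is64BitOperatingSystem) {
     # 64-bit system: install both the i586 (x86) and x64 builds
     $ToInstall = $Installers
} else {
     # 32-bit system: only the i586 build applies
     $ToInstall = $Installers | Where-Object { $_.Name -match 'i586' }
}
foreach ($Exe in $ToInstall) {
     # /s is Oracle's silent-install switch
     Start-Process -FilePath $Exe.FullName -ArgumentList '/s' -Wait
}
```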

You can download and view the installer code from my GitHub site.


10 April 2018

Inno Setup PowerShell Uninstaller

I recently encountered an application that uses the Inno Setup installer. Part of my process when I deploy an application is to also create an uninstaller. While creating the uninstaller, I decided to make a function for uninstalling Inno Setup installed applications.

The way I have written this function, you need to pass the exact name displayed in Add/Remove Programs as the AppName parameter. The function then queries the Add/Remove Programs registry entries for the quiet uninstall string and executes it.
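The core of that lookup can be sketched as follows. This is my rough reconstruction of the approach rather than the exact function from GitHub; Inno Setup writes a QuietUninstallString value (typically "...\unins000.exe" /SILENT) under its uninstall registry key:

```powershell
# Sketch: find an Inno Setup app's QuietUninstallString and run it
function Uninstall-InnoSetupApplication {
     param([Parameter(Mandatory)][string]$AppName)
     $Keys = @('HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
               'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*')
     $App = Get-ItemProperty -Path $Keys -ErrorAction SilentlyContinue |
          Where-Object { $_.DisplayName -eq $AppName -and $_.QuietUninstallString }
     if ($App) {
          # QuietUninstallString looks like: "C:\Program Files\App\unins000.exe" /SILENT
          Start-Process -FilePath 'cmd.exe' -ArgumentList '/c', $App.QuietUninstallString -Wait
     }
}
```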

You can download the function from my GitHub site.


03 April 2018

One-Liner that Updates the Dell Application Component Updates in the Reference Image

While building out the Windows 10 reference image task sequence, it dawned on me that I should make sure the latest Dell application component updates are installed. Since this is a reference image, up-to-date system drivers are not essential, as they will be stripped during Sysprep. This does require that the Dell applications already be installed before executing the one-liner.

I devised this one-liner, implemented as a Run Command Line task sequence step, to check for the latest application component updates only. To limit the scan to application component updates, you need to open the Dell Command | Update GUI and create an XML policy file for the command line to reference. In the GUI, click the Settings icon, then Update Filter. Under Recommendation Level, I checked everything; under Update Type, I checked only Application Software, leaving everything else unchecked. Configure the other settings tabs the way you want, then click Import/Export and choose Export. Export the XML to a UNC path the one-liner below can access. You can also download the XML file I use from my GitHub site.

In the one-liner below, update <UNC Path> to the location of the Applications.XML file. It does not need to be called Applications.XML; that was my choice.


 powershell.exe -command "&{If ((Test-Path 'C:\Program Files\Dell\CommandUpdate\dcu-cli.exe') -eq $true) {$ExitCode = (Start-Process -FilePath 'C:\Program Files\Dell\CommandUpdate\dcu-cli.exe' -ArgumentList '/policy \\<UNC Path>\Applications.xml' -Wait -PassThru).ExitCode} elseif ((Test-path 'C:\Program Files (x86)\Dell\CommandUpdate\dcu-cli.exe') -eq $true) {$ExitCode = (Start-Process -FilePath 'C:\Program Files (x86)\Dell\CommandUpdate\dcu-cli.exe' -ArgumentList '/policy \\<UNC Path>\Applications.xml' -Wait -PassThru).ExitCode};Exit ($ExitCode)}"  

Putting this into MDT or SCCM is easy: once you have customized and tested the one-liner, copy and paste it into a Run Command Line task sequence step as shown below. That is all it takes to implement this.


23 March 2018

KB4088878 Patch for Spectre and Meltdown on Windows 7 x86 and x64 systems

Recently, Dell released the BIOS updates covering systems starting with the Intel Family 6 Model 42 and later processors. This is the first part of the patching process. The second part is to apply all Windows updates; I also included all optional updates, which was my personal preference. The third step is to apply the appropriate KB4088878 patch.

The first two systems I applied these patches to, Dell OptiPlex 990s with Windows 7 64-bit, were successful. GRC's InSpectre tool was executed and returned the following.


The next two failed. These systems were Windows 7 32-bit installed on Dell OptiPlex 990s with 64-bit processors. The BIOS was patched with the latest A23 version Dell had published. The Windows updates were all installed. When the windows6.1-kb4088878-x86_7512ab54d6a6df9d7e3d511d84a387aaeaeef111.msu was applied, the following crash screen appeared when the OS booted back up.


One tactic I tried was configuring the registry to clear out the page file when the system shuts down by changing the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\ClearPageFileAtShutdown to 1. The next thing I tried was booting the system into safe mode to execute the patch. I got the following message.
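The registry change described above can be applied with a short PowerShell sketch like this. The key path and value name are the standard Memory Management ones mentioned in the text; note the change only takes effect after a reboot:

```powershell
# Tell Windows to clear the page file at shutdown (takes effect after a reboot)
$Key = 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management'
Set-ItemProperty -Path $Key -Name 'ClearPageFileAtShutdown' -Value 1 -Type DWord
```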


In conclusion, the only solution is to have the hardware architecture match the OS architecture. If they match, applying the appropriate patch will be successful.

Here is a note on patching: applying the latest BIOS alone does not pass the GRC InSpectre test. The Microsoft OS patch must also be applied for the system to pass.

16 March 2018

Microsoft Compatibility Reporting Tool Spectre and Meltdown Patch

Ever since the Spectre and Meltdown issues arose, we have been waiting on patching, at least reliable patching. Microsoft has taken it upon itself to patch systems for the vulnerability. ExtremeTech wrote an excellent article on Microsoft's solution, which gave me the idea to write a script that tells which systems are compatible. To determine the minimum family and model compatible with the patch, I used the data from this Intel page that associates family and model with the microarchitecture code name. I converted the family and model from hexadecimal to decimal. That is how I came up with the bare minimum being Family 6 Model 42.

NOTE: The ExtremeTech article includes the Haswell processor as also being compatible. We do not have any Haswell processors in my environment, so I am not able to know what the minimum family and model are for Haswell. If you do have Haswell processors in your environment, I would appreciate you running the following PowerShell cmdlet and reply here with the output so that I can include it in the script. Thanks.

(Get-WmiObject win32_processor).Caption

The script can be executed using the new Scripts tool in SCCM, which is how this was done in my environment.
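The core of the compatibility check can be sketched like this. It parses the family and model out of the processor caption returned by the cmdlet above (assuming the common format "Intel64 Family 6 Model 42 Stepping 7") and compares against the Family 6 Model 42 minimum described earlier; the regex and variable names are my own illustration, not the published script:

```powershell
# Read the processor caption, e.g. "Intel64 Family 6 Model 42 Stepping 7"
$Caption = (Get-WmiObject -Class Win32_Processor).Caption
If ($Caption -match 'Family\s+(\d+)\s+Model\s+(\d+)') {
    $Family = [int]$Matches[1]
    $Model  = [int]$Matches[2]
    # Minimum supported microarchitecture: Family 6 Model 42 (Sandy Bridge)
    If (($Family -gt 6) -or (($Family -eq 6) -and ($Model -ge 42))) {
        Write-Output 'Compatible with the Spectre/Meltdown patch'
    } else {
        Write-Output 'Not compatible'
    }
}
```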

You can find the script from my GitHub site.

Here is what the output looks like:



12 March 2018

Retrieve MSU Information

While working on the Windows 10 upgrade project, I ran into a situation in which I needed information from an MSU file for the task sequence. Specifically, I needed the KB number. The first thing I tried was the same method used for retrieving info from MSI and MSP files, querying the database. That does not work with an MSU file. An MSU is nothing more than a compressed archive containing several files. In each MSU file, there is a *Properties.txt file which contains all of the info.

This script contains the function Get-MSUFileInfo which will retrieve all available info on the MSU. I designed it so that it creates an extracted folder in the relative path of the script. The MSU is then extracted to that extracted folder. Next, the script will read all of the contents of the *Properties.txt file into an object. Finally, the extracted folder is deleted. 
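The extract-parse-delete flow described above can be sketched roughly as follows. This is not the published script: it assumes the built-in expand.exe utility for extraction and a Key="Value" line format in the properties file, and the function body is my own illustration under those assumptions:

```powershell
# Extract an MSU to a temporary folder and read its *Properties.txt metadata
Function Get-MSUFileInfo {
    param([string]$MSUFile)
    $Extracted = Join-Path -Path $PSScriptRoot -ChildPath 'Extracted'
    New-Item -Path $Extracted -ItemType Directory -Force | Out-Null
    # Expand the MSU archive (an MSU is just a compressed container of files)
    expand.exe $MSUFile -F:* $Extracted | Out-Null
    $Properties = Get-ChildItem -Path $Extracted -Filter '*Properties.txt' |
        Select-Object -First 1
    $Info = New-Object -TypeName PSObject
    Get-Content -Path $Properties.FullName | ForEach-Object {
        # Each line is assumed to look like: KB Article Number="4088878"
        If ($_ -match '^(.+?)="?(.*?)"?$') {
            $Info | Add-Member -MemberType NoteProperty `
                -Name $Matches[1].Trim() -Value $Matches[2]
        }
    }
    # Clean up the extracted folder before returning the object
    Remove-Item -Path $Extracted -Recurse -Force
    Return $Info
}
```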

Here is an example of the script retrieving the info into an object:


You can download the script which contains the function from my GitHub site. I put the function into a full script for easy testing in your environment. 

01 March 2018

Adding ShareThis to Blogger

Below is a video on how to add ShareThis to Blogger. It is a very easy process. Apparently, the process has changed since other instruction pages were created. I spent a few hours trying to figure out why injecting the JavaScript into the HTML code was not working. I cover how to implement both the sticky and inline buttons. As you will see, the bar works on my blog with no problems. Also, at the end of the video, the sticky share buttons turn off. I turned them back on, and they are working perfectly.

Here is a picture of this blog page after it was implemented:






26 February 2018

Uninstall MSI by GUID

This script function will uninstall an MSI-installed application by specifying the GUID and the switches. I have included the ability for the script to query the registry for the name of the application to display in the user output. The function will also exit the script if there is a failure.

NOTE: The script uses Write-Host for user output so that, if it is manually executed, the admin can easily see the result: success in yellow, not installed in green, and failure in red. Write-Host is the only option that can display multiple colors and suppress the newline while it shows "Uninstalling Java 8 u 161....." and waits for the exit code of the uninstall before displaying one of the three outputs above in the designated colors. If you do not want to use Write-Host for this, you are welcome to rewrite the code, which is openly shared.
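A minimal sketch of the approach described above follows. The registry lookup for the display name and the color-coded Write-Host output match the description; the function name, parameter names, and default switches are my own illustration, not the published script:

```powershell
Function Uninstall-MSIByGUID {
    param([string]$GUID, [string]$Switches = '/qn /norestart')
    # Look up the display name in the 32-bit and 64-bit uninstall keys
    $Name = Get-ItemProperty -Path @(
        "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\$GUID",
        "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\$GUID"
    ) -ErrorAction SilentlyContinue |
        Select-Object -ExpandProperty DisplayName -First 1
    If ($Name -eq $null) {
        Write-Host 'Not installed' -ForegroundColor Green
        Return
    }
    Write-Host "Uninstalling $Name....." -NoNewline
    $ExitCode = (Start-Process -FilePath 'msiexec.exe' `
        -ArgumentList "/x $GUID $Switches" -Wait -PassThru).ExitCode
    # 0 = success, 3010 = success with a reboot required
    If (($ExitCode -eq 0) -or ($ExitCode -eq 3010)) {
        Write-Host 'Success' -ForegroundColor Yellow
    } else {
        Write-Host 'Failure' -ForegroundColor Red
        Exit $ExitCode
    }
}
```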

Here is an example of the function running in the script provided below. This is not in color because it was executed within PowerShell Studio. This is in a script format so you can easily test this out before using the function in another script.


You can download the script from my GitHub site

21 February 2018

Uninstall MSI by Application Name

Here is a function that will uninstall an MSI installed application by the name of the app. You do not need to input the entire name either. For instance, say you are uninstalling all previous versions of Adobe Reader. Adobe Reader is always labeled Adobe Reader X, Adobe Reader XI, and so forth. This script allows you to do this without having to find out every version that is installed throughout a network and then enter an uninstaller line for each version. You just need to enter Adobe Reader as the application name and the desired switches. It will then search the name fields in the 32 and 64 bit uninstall registry keys to find the associated GUID. Finally, it will execute an msiexec.exe /x {GUID} to uninstall that version.

Update: 

This is the third revision of the function that will uninstall an MSI by its application name. The last revision was an efficiency improvement. This revision adds the ability to uninstall all instances of an application. For instance, if several versions of Java 8 are installed, this function can uninstall all of them by just defining Java 8. The function covers both x86 and x64 based apps. The previous versions of this function could only uninstall one app at a time. This will uninstall all of them.
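The search-and-uninstall loop described above can be sketched like this, assuming the standard 32-bit and 64-bit uninstall registry keys. The function and variable names are illustrative, not the published code; the GUID for an MSI install is the registry key name itself:

```powershell
Function Uninstall-MSIByName {
    param([string]$ApplicationName, [string]$Switches = '/qn /norestart')
    # Search both the 64-bit and 32-bit uninstall keys for matching names
    $Keys = @(
        'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
        'HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
    )
    $Apps = Get-ItemProperty -Path $Keys -ErrorAction SilentlyContinue |
        Where-Object { $_.DisplayName -like "*$ApplicationName*" }
    # Uninstall every instance found, e.g. all installed versions of Java 8
    ForEach ($App in $Apps) {
        $GUID = $App.PSChildName
        Start-Process -FilePath 'msiexec.exe' `
            -ArgumentList "/x $GUID $Switches" -Wait
    }
}
```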

Here is a visual on the script uninstalling multiple versions of Java 8.


You can download the code from my GitHub site located here.

20 February 2018

Mozilla Firefox Installer and Uninstaller

As we all know, Mozilla Firefox is not the easiest application to deal with when it comes to deploying it in an enterprise environment. I have finally taken the time to write a PowerShell script that will install it using the executable provided by Mozilla.

This installer will kill all instances of Firefox, execute the uninstaller helper file, and then delete the ProgramData folder. Next, it will run the Firefox installer and create the autoconfig file and the Mozilla config file. The autoconfig.js file points Firefox to the mozilla.js file. I have written the script so that it creates and injects the configuration information within the CFG files. If you do not want this, you can comment out the New-AutoConfigFile and New-MozillaConfig lines. I also created a Configuration.ini file to configure the desktop shortcut during the installation.
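As a sketch, the two configuration files wire together like this: autoconfig.js lives in defaults\pref under the install directory and points Firefox at the config file in the install root, following Mozilla's standard AutoConfig mechanism. The exact preferences written to mozilla.js will vary by environment; the lockPref line below is just a placeholder example:

```powershell
# Standard Mozilla AutoConfig setup: autoconfig.js points at mozilla.js
$InstallDir = "$env:ProgramFiles\Mozilla Firefox"
# autoconfig.js tells Firefox which config file to load at startup
@'
pref("general.config.filename", "mozilla.js");
pref("general.config.obscure_value", 0);
'@ | Set-Content -Path "$InstallDir\defaults\pref\autoconfig.js"
# mozilla.js holds the locked preferences; the first line must be a comment
@'
// Firefox enterprise configuration
lockPref("app.update.enabled", false);
'@ | Set-Content -Path "$InstallDir\mozilla.js"
```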

Also, we still have some 32-bit machines, so I set up the script and file structure as shown below with the individual executable in the appropriate architecture folder.


Here are the links to the GitHub site:

13 February 2018

Check if RSAT is installed with this one-liner

Say you are installing RSAT in a build and want to verify it is installed, for instance when it is delivered through Windows updates. Recently, there has been an issue in Windows 10 where RSAT cannot be found in Windows Features. It is also not found in Win32_OptionalFeature. To work around this, I have this one-liner check for the feature first, and if that turns up nothing, it then checks for the Active Directory module, which exists if RSAT has been installed. It returns an exit code of 0 for success and 1 for failure, which can be used either to pop up a warning or to kill a build if RSAT is not present. This has been tested on both Windows 7 and Windows 10.

powershell.exe -command "&{If ((Get-WmiObject -class win32_optionalfeature | Where-Object { $_.Name -eq 'RemoteServerAdministrationTools'}) -ne $null) {Exit 0} else {If ((Get-Module -Name ActiveDirectory -ListAvailable) -ne $null) {Exit 0} else {Exit 1}}}"
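Expanded into a readable multi-line form, the one-liner above performs the same two checks in order:

```powershell
# First check for the RSAT optional feature in WMI
$Feature = Get-WmiObject -Class Win32_OptionalFeature |
    Where-Object { $_.Name -eq 'RemoteServerAdministrationTools' }
If ($Feature -ne $null) {
    Exit 0
}
# Fall back to the Active Directory module, present when RSAT is installed
If ((Get-Module -Name ActiveDirectory -ListAvailable) -ne $null) {
    Exit 0
}
Exit 1
```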