Last Updated on February 22, 2023 by rudyooms
This blog isn't a deep dive into some weird topic, but more of a warning about what happens when you do something stupid and then forget about it….
I will divide this blog into multiple parts
1. The Issue
Before showing what exactly broke, let's start by looking at the issue itself. While writing my latest blog, which mentions the fake Autopilot@ account and fooUser used with Autopilot for Pre-provisioned deployments, I stumbled upon a weird "Identifying" delay and decided to write a dedicated blog about it.
After the device preparation ESP phase, the device started the device setup part and needed to start the “Apps Identifying” part.
Please Note: This is a pure AADJ environment, so no evil or weird HAADJ issues here 🙂
Please Note: This "Identifying" issue could also occur when you have configured the Enrollment Status Page with the "Block device use until all apps and profiles are installed" option enabled
This time it wasn't failing because of a misconfigured ESP
As shown below, the device was taking its time to identify the 7 apps that needed to be installed. After waiting exactly 30 minutes, the "setup" key was created with all of the Win32 apps in it that needed to be tracked.
The moment that "setup" key was created, the device started installing the ESP-required apps. Installing those 7 small apps (Office 365 Apps weren't included) "only" took about 36 minutes before I was shown a nice green Autopilot sealing screen!
36 minutes… for a couple of apps… that ain’t right! Let’s dive into it a bit more
2. Troubleshooting part 1
I decided to start by running an ETL trace, and while the trace was running I also had Fiddler open. When taking a closer look at the Fiddler trace, I noticed that almost every minute there was an outbound connection to r.manage.microsoft.com.
Every couple of minutes, for about 30 minutes, the device sent the same information over and over again and nothing more…
So I needed to take a look at the ETL trace, but before I could do so I needed to add some additional providers to my WPRP file.
As shown above, I made sure I added the ConfigManager2 event provider with GUID "0ba3fb88-9af5-4d80-b3b3-a94ac136b6c5" to my WPRP file. Feel free to take a look at it.
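For reference, a WPRP profile with that provider added could look something like this. This is a minimal sketch: the collector and profile names are my own, and only the ConfigManager2 provider GUID comes from the trace above.

```xml
<WindowsPerformanceRecorder Version="1.0">
  <Profiles>
    <EventCollector Id="EventCollector_MDM" Name="MDM Trace Collector">
      <BufferSize Value="256"/>
      <Buffers Value="64"/>
    </EventCollector>
    <!-- Microsoft.Windows.DeviceManagement.ConfigManager2 -->
    <EventProvider Id="ConfigManager2" Name="0ba3fb88-9af5-4d80-b3b3-a94ac136b6c5"/>
    <Profile Id="MDMTrace.Verbose.File" Name="MDMTrace"
             Description="MDM ConfigManager2 trace"
             LoggingMode="File" DetailLevel="Verbose">
      <Collectors>
        <EventCollectorId Value="EventCollector_MDM">
          <EventProviders>
            <EventProviderId Value="ConfigManager2"/>
          </EventProviders>
        </EventCollectorId>
      </Collectors>
    </Profile>
  </Profiles>
</WindowsPerformanceRecorder>
```

You would then start and stop the trace with `wpr -start <file>.wprp` and `wpr -stop trace.etl`.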
After stopping the trace, I noticed that the ETL file was a little larger than I expected, but hey, I am not complaining. The bigger it is, the more information it contains, right?
When opening this ETL file in the WPA tool, I immediately noticed a lot of "counts" in the Microsoft.Windows.DeviceManagement.ConfigManager2 provider section
At first, I had the “stupid” idea it was also trying to list all of the Microsoft Store apps that were available before starting with the App identifying part but I guess that wasn’t the case! Let me show you why.
Luckily, I keep an archive of all the traces I have performed in the past… because you can't have enough of them! Just a couple of days ago I had done almost the same thing on the same device, but that time I did NOT use the Autopilot pre-provisioning option but the normal user-driven Autopilot enrollment
When opening this user-driven Autopilot trace, I noticed a count of only 16,000 rows instead of 95,000!
So when comparing those two traces, I was pretty sure the device was doing "something" that only occurs when running Autopilot for pre-provisioned deployments
3. The ESP Flow
I decided to do something funny. I started reading my own blog about the Enrollment Status Page (ESP), hoping to spot something, because it is "something" that occurs during the ESP, right?
I guess I did… I even placed a note about it… Oh my….did I drink too much #membeer?
PowerShell scripts aren't tracked during the ESP!!! And something that isn't tracked could definitely give us some issues. But how? I couldn't come up with any reason why the PowerShell scripts would "only" give us issues when performing a pre-provisioning… or could I?
4. Troubleshooting part 2
I decided to run another pre-provisioning, but this time I was going to add some PowerShell logging! To do so, I enabled PowerShell transcription logging and configured the output directory
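Enabling transcription can be done through policy or, as a quick sketch, by setting the documented backing registry values directly. The output path here is just my own choice for this test:

```powershell
# Enable PowerShell transcription logging via its policy registry keys
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\Transcription"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name EnableTranscripting    -Value 1 -Type DWord
Set-ItemProperty -Path $key -Name EnableInvocationHeader -Value 1 -Type DWord
# Hypothetical output folder; pick any local path you can read back later
Set-ItemProperty -Path $key -Name OutputDirectory -Value "C:\Transcripts" -Type String
```

With this in place, every PowerShell session (including the ones the Intune Management Extension launches in the system context) writes a transcript file to that folder.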
With the PowerShell transcription logging in place, I started the pre-provisioning. Before showing you the output of the transcription log file, I also opened Task Manager and the Intune Management Extension AgentExecutor log file.
At the moment the ESP was busy "working on it…", I noticed that a PowerShell session in the system context was launched… but NOT closed! It just sat there and did nothing!
Also, when looking at Resource Monitor, there was no disk activity at all! So it looks like we got ourselves a lingering PowerShell session in the system context. After killing that PowerShell session, the device continued and started to install the required apps
After closing that PowerShell session, I decided to open the AgentExecutor log
As shown below… oh my… that's the Windows10_Bitlocker script I used when trying to come up with a bonkers solution
That's odd, because that script should only configure BitLocker and try to start encrypting the drive with the specified encryption method during the device phase instead of waiting until a user logs in. Let's continue to the PowerShell transcription log
As shown above, the device can't be encrypted because the Active Directory Domain Services forest does not contain the required attributes and classes to host BitLocker Drive Encryption or TPM information.
But… it tries again, and again, and again, for all eternity… or until the PowerShell script times out after 30 minutes, because 30 minutes is exactly the timeout for a PowerShell script deployed to a device. Guess what happens when the PowerShell script times out? Yes… the device continues with the ESP, starts the Win32 app installations, and starts tracking those apps
First, some more explanation before I show you the "ooooooopsssieee". When you enroll a device with Autopilot Pre-Provisioning, the device will be joined to Azure AD with a fake autopilot@ account, as I showed you in my latest blog.
After the device is enrolled into Intune, the Azure AD device certificate will be whacked and NO user will be logged in. Guess what doesn't work when your device isn't Azure AD joined anymore and you aren't logged in with a Microsoft account?
With a device that is NO longer Azure AD joined, and without being logged in with our Microsoft account, it's pretty obvious we can't back up our BitLocker Drive Encryption recovery information. You will notice a nice 0x801c0450 error in your BitLocker-API event log. Of course, when trying to encrypt the device manually with BitLocker and trying to upload the key, you will also be prompted with a message that you can't sign in with your Microsoft account
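If you want to check for that error yourself, you can query the BitLocker-API event log from an elevated PowerShell session. A small sketch:

```powershell
# Look for BitLocker key escrow failures (0x801c0450) in the BitLocker-API event log
Get-WinEvent -LogName "Microsoft-Windows-BitLocker/BitLocker Management" -MaxEvents 50 |
    Where-Object { $_.Message -match "0x801c0450" } |
    Select-Object TimeCreated, Id, Message
```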
But who cares? You would expect that when launching that PowerShell script to encrypt the disk, it would just try to encrypt it, and when it fails… uh, the script would fail, right?
So I decided to fetch back the PowerShell script that was uploaded to Intune, by using another PowerShell script
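The idea is simple: the script content is stored base64-encoded in the Graph `deviceManagementScripts` resource. A sketch, assuming the Microsoft.Graph.Authentication module and the DeviceManagementConfiguration.Read.All permission; endpoint and property names are from the beta Graph API:

```powershell
# Fetch all Intune PowerShell scripts and decode their content to local files
Connect-MgGraph -Scopes "DeviceManagementConfiguration.Read.All"
$uri = "https://graph.microsoft.com/beta/deviceManagement/deviceManagementScripts"
$scripts = (Invoke-MgGraphRequest -Method GET -Uri $uri).value
foreach ($script in $scripts) {
    # scriptContent is only returned when requesting the individual script
    $detail  = Invoke-MgGraphRequest -Method GET -Uri "$uri/$($script.id)"
    $content = [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($detail.scriptContent))
    Set-Content -Path ".\$($script.fileName)" -Value $content
}
```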
Guess what happened when opening that PowerShell script
When taking a look at the PowerShell script, I saw I had been playing around with an idea to make sure the BitLocker recovery key is escrowed to Azure AD
If you combine that waiting/while part with the requirement to store the recovery information in Azure Active Directory before enabling BitLocker, on a device that is no longer Azure AD joined, guess what happens?
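To make the problem concrete, a simplified sketch of that pattern (NOT the exact script): keep retrying the Azure AD key escrow until it succeeds… which it never will on a device that is no longer Azure AD joined.

```powershell
# Grab the recovery password protector of the OS drive
$protector = (Get-BitLockerVolume -MountPoint "C:").KeyProtector |
    Where-Object { $_.KeyProtectorType -eq "RecoveryPassword" }

do {
    $backedUp = $false
    try {
        # Fails with 0x801c0450 when the device is not Azure AD joined…
        BackupToAAD-BitLockerKeyProtector -MountPoint "C:" `
            -KeyProtectorId $protector.KeyProtectorId -ErrorAction Stop
        $backedUp = $true
    } catch {
        Start-Sleep -Seconds 60   # …so this loops until the 30-minute script timeout
    }
} until ($backedUp)
```

A loop like this holds the PowerShell session open, and since scripts aren't tracked by the ESP, the "Identifying apps" phase just sits there until the Intune Management Extension kills the script.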
Damn… damn… stupid me! Okay, I didn't break anything on purpose. Sometimes you try to fix something and end up breaking something else.
I decided to add a couple of lines to the BitLocker PowerShell script (of course, I could also have converted it to a Win32 app or just removed those lines…). I made sure the script first checks if the device is Azure AD joined before trying to encrypt the drive
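A sketch of the kind of check I mean, parsing the `dsregcmd /status` output; your own script's structure will obviously differ:

```powershell
# Bail out early if the device is not (yet/anymore) Azure AD joined
$aadJoined = $null -ne (dsregcmd /status | Select-String "AzureAdJoined\s*:\s*YES")
if (-not $aadJoined) {
    Write-Output "Device is not Azure AD joined; skipping BitLocker key escrow."
    exit 0
}
# …only now continue with Enable-BitLocker / BackupToAAD-BitLockerKeyProtector…
```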
After enrolling my device again with Autopilot Pre-Provisioning, it only took 7 minutes instead of the 36 minutes we noticed at the start of this dumb blog
Please be aware: when your device takes 30 minutes on identifying the apps in the Device ESP stage, something is timing out! That "something" could very well be one of the PowerShell scripts you are deploying to your devices!
Feel free to reach out to me if you are experiencing the same issue and it isn't caused by your PowerShell scripts