Azure Linux VM Agent leaks secrets | How to harden your config

| Dec 17, 2022


In this post, I’m going to talk about something I discovered whilst working on a project a little while ago: some default behaviour in the Microsoft Azure Linux VM Agent which can lead to credential/secret leakage on your Linux VM.

What is the Azure Linux VM Agent?

The Azure Linux VM Agent is a software agent provided by Microsoft that handles the provisioning and management of Linux and FreeBSD virtual machines in the Microsoft Azure cloud.

In addition, you can have the agent run commands or scripts for you on the VM - one way of doing this is to use the Az PowerShell cmdlet Invoke-AzVMRunCommand.

I was experimenting with this alongside a colleague (a cloud engineer) to configure the OS on a RHEL VM, and we also needed to pass in credentials.

This was achieved in the Azure DevOps pipeline via a PowerShell task that ran a PowerShell script, which took in some parameters from secret variables in the pipeline (retrieved from an Azure Key Vault).

Example syntax below:

   param (
       [Parameter (Mandatory = $true)]
       [string]$uname,
       [Parameter (Mandatory = $true)]
       [string]$secret
   )

$scriptParams = [ordered]@{"arg1" = "'$($uname)'"; "arg2" = "'$($secret)'"}
Invoke-AzVMRunCommand -ResourceGroupName "$ResourceGroup" -Name "$VMName" -CommandId 'RunShellScript' -ScriptPath "$(PsScriptRoot)\" -Parameter $scriptParams

Your PowerShell task would have the following config:

Script Path : $(System.DefaultWorkingDirectory)/scripts/test-Script.ps1

Script Arguments : -ResourceGroup '$(ResourceGroup)' -VMName '$(VMName)' -uname '$(uname)' -secret '$(secret)'

Your shell script may be something as simple as the following (in reality it’s probably far more complex); note the double quotes, since single quotes would stop the variables expanding:

curl -u "$arg1:$arg2"
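Since the agent exports the run-command parameters before the script body runs, you can sanity-check the quoting locally. A minimal sketch, with the exports simulated (the values are made up; the names arg1/arg2 mirror the hashtable keys above):

```shell
#!/usr/bin/env bash
# Simulate the agent exporting the run-command parameters (in a real
# run, waagent injects these; the values here are illustrative).
export arg1="deployuser"
export arg2="S3cr3t!"

# Double quotes are essential: single quotes would pass the literal
# string '$arg1:$arg2' to curl instead of the expanded credentials.
cred="${arg1}:${arg2}"
echo "$cred"   # prints deployuser:S3cr3t!
```

In a real run you would hand "$arg1:$arg2" straight to curl -u rather than echoing it.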

So where’s the problem?

When I had my manager review the overall solution, he - being far more experienced with Linux - pointed out that the command-line arguments of Linux processes can be exposed in a number of ways:

  1. ps command output can display arguments of running processes
  2. The bash shell history files may expose process arguments
  3. The /proc filesystem contains information about running processes in files such as /proc/<pid>/cmdline and /proc/<pid>/environ where pid is the process id of the process

I did some experimenting on both RHEL and Ubuntu test machines, using the following simple bash script:

#!/bin/bash
echo $$
PW="${1}"
echo ${PW} | wc -m
cat /proc/$$/cmdline
cat /proc/$$/environ
ps -ww -fp $$

What this does is:

  • Echo the process ID to the terminal
  • Assign the text in the first script argument to a variable called PW
  • Count the characters in the variable PW
  • cat the cmdline and environ files for the process
  • List full process info for this process

You can also try the history command.

My findings from this were as follows (standard image, no settings changed):

Ubuntu 22.04 Azure Marketplace image

Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-1029-azure x86_64)

gadmin@vmctsuklnxp01:~$ uname -a
Linux vmctsuklnxp01 5.15.0-1029-azure #36-Ubuntu SMP Mon Dec 5 19:31:08 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

gadmin@vmctsuklnxp01:~$ echo $HISTCONTROL

gadmin@vmctsuklnxp01:~$ wget
… ‘’ saved [454/454]

gadmin@vmctsuklnxp01:~$ chmod +x 

gadmin@vmctsuklnxp01:~$ ./ Secret

UID          PID    PPID  C STIME TTY          TIME CMD
gadmin      1983    1934  0 15:20 pts/0    00:00:00 /bin/bash ./ Secret

gadmin@vmctsuklnxp01:~$ history
    3  echo $HISTCONTROL
    9  ./ Secret
   10  history

gadmin@vmctsuklnxp01:~$  ./ Secret2

UID          PID    PPID  C STIME TTY          TIME CMD
gadmin      2012    1934  0 15:28 pts/0    00:00:00 /bin/bash ./ Secret2

gadmin@vmctsuklnxp01:~$ history
    9  ./ Secret
   10  history

gadmin@vmctsuklnxp01:~$  ./ Secret3 > /dev/null

Red Hat Enterprise Linux 7.9 Azure Marketplace image

Welcome to RHEL-7.9-x86_64-Minimal-30GiB-VHD-20200917_190845-v2.

[gadmin@vmctsuklnxp02 ~]$ uname -a
Linux vmctsuklnxp02 3.10.0-1160.el7.x86_64 #1 SMP Tue Aug 18 14:50:17 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
[gadmin@vmctsuklnxp02 ~]$ echo $HISTCONTROL

[gadmin@vmctsuklnxp02 ~]$ export HISTCONTROL="ignoreboth"
[gadmin@vmctsuklnxp02 ~]$ echo $HISTCONTROL

[gadmin@vmctsuklnxp02 ~]$ sudo yum install wget
  wget.x86_64 0:1.14-18.el7_6.1                            


[gadmin@vmctsuklnxp02 ~]$ wget
… ‘’ saved [454/454]

[gadmin@vmctsuklnxp02 ~]$ chmod +x 

[gadmin@vmctsuklnxp02 ~]$ ./ Secret

gadmin    9232  1867  0 15:35 pts/0    00:00:00 /bin/bash ./ Secret

[gadmin@vmctsuklnxp02 ~]$ history
   17  ./ Secret
   18  history
[gadmin@vmctsuklnxp02 ~]$  ./ Secret2

gadmin    9242  1867  0 15:36 pts/0    00:00:00 /bin/bash ./ Secret2

[gadmin@vmctsuklnxp02 ~]$ history

   17  ./ Secret
   18  history
[gadmin@vmctsuklnxp02 ~]$  ./ Secret3 > /dev/null
[gadmin@vmctsuklnxp02 ~]$ history
   17  ./ Secret
   18  history 

Summary Findings

Operating System | HISTCONTROL Setting              | Command Prefix | Secret Visible (History) | Secret Visible (ps output) | Secret Visible (/proc/pid/cmdline)
Ubuntu 22.04     | ignoreboth                       | none           | Yes                      | Yes                        | Yes
Ubuntu 22.04     | ignoreboth                       | space          | No                       | Yes                        | Yes
RHEL 7.9         | ignoreboth (default: ignoredups) | none           | Yes                      | Yes                        | Yes
RHEL 7.9         | ignoreboth (default: ignoredups) | space          | No                       | Yes                        | Yes

Is there any other scenario where it can leak credentials/secrets?

The agent and the cmdlet don’t expose the secret (used as a parameter to the script) directly. However, the agent creates a copy of the script under /var/lib/waagent/run-command/download/n/, where n is a number starting at 0 (if a download from a previous run is still present, the next number is used for this run).

When it creates this copy in the download directory, any command-line arguments are written directly into the copy using export and set commands - so the secret is now on disk in plaintext.
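To make the risk concrete, here is a small simulation. It is purely illustrative: the staging directory is a stand-in, and the file content below is a guess at the shape of the injected exports, not a capture of real agent output:

```shell
#!/usr/bin/env bash
# Mimic the agent staging a copy of the script with the run-command
# parameters baked in as export statements (format is illustrative).
stage_dir=$(mktemp -d)   # stand-in for /var/lib/waagent/run-command/download/0/

cat > "$stage_dir/script.sh" <<'EOF'
export arg1='deployuser'
export arg2='SuperSecretPassword'
curl -u "$arg1:$arg2" "$TARGET_URL"
EOF

# Anyone who can read the staged copy recovers the secret with a grep:
leaked=$(grep -o "SuperSecretPassword" "$stage_dir/script.sh")
echo "leaked: $leaked"

rm -rf "$stage_dir"      # the cleanup the agent does not do for you
```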

The directory is owned by root and only accessible with root permissions; however, this file should still be removed as soon as the script has run, so that the secrets are exposed for as short a time as possible.

How do you recommend that I mitigate against this?

  1. To mitigate against the script arguments sitting in plaintext in the shell script under the agent download directory: in the pipeline, run an additional script, again using the Invoke-AzVMRunCommand PowerShell cmdlet, to have the Microsoft Azure Linux Agent (waagent) remove all files and folders under /var/lib/waagent/run-command/download, e.g. rm -rf /var/lib/waagent/run-command/download/*

In this case, you must ensure that the run of Invoke-AzVMRunCommand uses the -AsJob switch; otherwise it may fail because it cannot delete files it is still using. Running the script as a job means the script source can be deleted at runtime.

  2. To minimise exposure via bash history, set HISTCONTROL to ignoreboth (e.g. export HISTCONTROL=ignoreboth in ~/.bashrc) and prefix sensitive commands with a space

  3. To minimise exposure via the ps command or the /proc filesystem:

  • You can hide process information by mounting /proc with the appropriate hidepid setting, e.g. mount -o remount /proc -o hidepid=2. However, this will not hide process information from anyone with root access.
  • On Linux kernels prior to 4.2, you can limit the exposure by making sure the password is not in the first 4096 bytes of the command line, so that other processes can’t obtain it by reading /proc/<pid>/cmdline (as ps does). Kernels 4.2 and above no longer truncate /proc/<pid>/cmdline.
  • Pipe the script contents into the bash shell if it’s a bash script, e.g. echo "$scriptcontents" | bash

What could Microsoft do differently to remedy this?

I know that if someone has root on a Linux system there is no way to stop them being able to access this information. Nonetheless it shouldn’t be as trivial as it is at present.

I would have hoped that Microsoft would be deleting the downloaded shell script as soon as execution completes. I would also have hoped that the agent might encrypt the arguments (perhaps even the shell script itself) with a key/certificate stored in a KeyVault, and decrypt at runtime.

I recognise there is no perfect solution, however any mitigations that reduce the window where secrets are exposed in plaintext should be prioritised.

So what are Microsoft saying about this?

I signed up as a Security Researcher at the Microsoft Security Response Center (MSRC) Researcher Portal and reported this to Microsoft; it is tracked as VULN-080939.

Timeline as below:

  • 17th November 2022 - case raised
  • 23rd November 2022 - MSRC now reviewing and trying to reproduce
  • 1st December 2022 - MSRC respond to advise that this is classified as “low severity with defense in depth” and that, because several large customers rely on the current logic regarding the persistence of downloaded scripts, they cannot change it. I replied and challenged this.
  • 6th December 2022 - MSRC responded to say that following discussion with the product team, they will look to add an option (false by default) that would remove all scripts automatically, so as to be a non-breaking change. It is in the backlog for the product team without an ETA, given the low severity of the vulnerability. They will also consider documentation changes for the agent early in 2023 to make it clear to customers that this risk exists and give advice on how to reduce the risk. MSRC also gave clearance for me to blog about this because of their classification as Low severity.


Whilst I don’t necessarily agree with Microsoft that the severity is as low as stated, I do recognise that you first have to be authenticated to the system, as root or with sudo access, to access secrets exposed in the manner described, which offers a reasonable amount of mitigation.

Consider, though, that the team building and configuring a VM may not be the application team, and would otherwise be expected (indeed required) not to know highly sensitive application secrets/credentials (segregation of duties being a key tenet of least privilege, privileged access management, etc.).

You’d be forgiven for thinking that storing those secrets in an application Key Vault, using secret variables in your ADO pipeline and so on would go some way to ensuring that is the case but, as I’ve shown here, you can’t assume it does.

It is always worth exploring how agents, pipelines, operating systems etc truly work to allow you the best chance of securing your systems, applications and data.

My thanks go to Karthik Balu and Gary Smith at M&G Plc for their support in investigating this and testing mitigations.

My thanks also to MSRC for their responses.

As ever, thanks for reading and feel free to leave comments down below!

If you like what I do and appreciate the time and effort and expense that goes into my content you can always Buy Me a Coffee at
