Getting Started with custom Docker containers on Azure

This post is something a little bit different to what I normally post (Hint: Linux :)), but I’m not a platform snob, you’ve got to use the best tool for the job!

In this case, the customer requirements were to run a PHP5 web application in Apache, with a MySQL backend, plus a few PHP modules and PhantomJS for charting, so Linux it is.

Before I got involved, the plan was for a two-tier architecture hosted on Azure IaaS VMs. From a customer point of view, this is a reasonably expensive solution, as there is a fair bit of intervention required for maintenance, backups, etc., which would inflate the cost. This solution also falls into a bit of a cost pit with the Azure availability recommendations, which would require at least four VMs, plus load balancing, to be deployed, making the solution even more expensive.

Essentially, it sounded like a good time to sprinkle some cloud magic on it and see where the infrastructure ended up!

All of the command lines I use in this example are right to the best of my ability, but please let me know if there are any issues!

Initial Idea

I’d heard a bit about Docker containers from a few people, and I knew it would be a good fit for the web frontend (assuming I could get it working). Although this would be my first proper foray into using Docker for a production service, I was reasonably confident this was the way forward. I could always fall back on the old plan, right?

That covers the web frontend, you say, but what about the database? For the DB, I noticed a little gem appear in Azure not so long ago: Azure Database for MySQL. So that’s my target for that sorted too! (Yes, I know it’s in preview, but the roadmap points to early 2018 for GA, and this app won’t be in production before then.)

Let’s play… with Docker containers!

The very first step is running up my test machine. Since this is going to be a production service (eventually), my Linux distribution of choice is Debian. You may have a different take on it, but when it’s got to be Linux and work in a production environment, I’ve yet to have a better experience.

Once I’ve got everything installed (Git, Docker-CE, etc.), it’s time to get an initial PHP web app up to test. Since my plan was to deploy a custom Docker image, rather than reinvent the wheel, I wanted to find a base Apache/PHP5 Docker image that I could test on Azure before adding PhantomJS. Luckily, I found the ideal image to start with in the Azure-App-Service/PHP repository (5.6.21-apache).

Note: If you’re trying to install the Azure CLI tools but getting odd failures, make sure to install python-pip first. The install script doesn’t always seem to catch the dependency being missing, but this fixed it for me.

I started testing this Docker image with a test PHP file I could include, rather than hostingstart.html; other than that, I left the Dockerfile as default (for now).

COPY hostingstart.html /home/site/wwwroot/hostingstart.html
+COPY success.php /home/site/wwwroot/success.php

RUN a2enmod rewrite expires include deflate
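My actual test file isn’t shown in this post, but the page itself can be trivial; a minimal success.php along these lines is enough to prove the container is serving PHP:

```php
<?php
// Minimal test page: if this renders the PHP configuration screen,
// Apache and PHP are both working inside the container.
phpinfo();
```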

To run the container, we then just use the standard build and run commands, making sure to map the Apache port (8080 in this image) through to the host machine.

> docker build -t container .

> docker run -d -p 8080:8080 container

Included test page


And just for completeness, my PHP test page

PHP test page

Getting PhantomJS in the container

Update: The Debian package is broken and seems to require bits of Qt to function correctly. I have edited the post to install PhantomJS from the binaries provided by ariya, the PhantomJS author.

Because the web app only needs the PhantomJS binary and nothing else fancy (like a listen port), we don’t need to set up a separate container for PhantomJS; we can just install it into the web application’s container and use it directly. The largest problem with this is that the Docker image is based on Debian version 8 (Jessie), rather than the current stable release (version 9, ‘Stretch’). The PhantomJS package is only available in jessie-backports, whereas in stable it’s in the main package list.

Initially, this led me to edit the package install RUN line to add the backports repository and install PhantomJS from there. Unfortunately, this did not work, as the package has some issues, so I had to install it from another source.
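For reference, that first attempt only needed an extra repository and a targeted install, roughly like this (a sketch of the abandoned approach, not the final Dockerfile):

```dockerfile
# Add jessie-backports and install PhantomJS from it.
# This installs, but the resulting binary turned out to be broken
# (it seems to need missing Qt bits).
RUN echo 'deb http://deb.debian.org/debian jessie-backports main' \
        > /etc/apt/sources.list.d/backports.list \
    && apt-get update \
    && apt-get install -y -t jessie-backports phantomjs
```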

This changes the Dockerfile, but only adds one extra RUN block, rather than editing too much.

+RUN { \
+        cd /root; \
+        wget <PhantomJS 2.1.1 linux-x86_64 download URL>; \
+        tar xvjf phantomjs-2.1.1-linux-x86_64.tar.bz2; \
+        cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs; \
+        ln -sf /usr/local/bin/phantomjs /usr/bin/phantomjs; \
+    }

Now I can build the resulting container and test locally by pushing things into /home/site/wwwroot, and as if by magic, it works with either a local or an Azure MySQL instance (once I’d added my public IP to the access list :P).
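For anyone curious what that connectivity check can look like from the PHP side, here’s a hedged sketch; the server, credentials and database names are placeholders, not the real ones:

```php
<?php
// Hypothetical connectivity check; hostname, user, password and
// database are all placeholders.
// Note: Azure Database for MySQL expects usernames as 'user@servername'.
$db = new mysqli('myserver.mysql.database.azure.com', 'appuser@myserver', 'secret', 'appdb');
if ($db->connect_error) {
    die('Connection failed: ' . $db->connect_error);
}
echo 'Connected to MySQL ' . $db->server_info;
```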

Build and publish to an Azure Container Registry

I needed a place to store the resulting container image, so I decided to use an Azure Container Registry. I could have used Docker Hub, but I thought it would be better to keep things in one place.

To deploy the container registry, I ran:

> az acr create -n containerregistry -g weblinuxcontainer --sku Basic --admin-enabled true

> az acr credential show -n containerregistry

I navigated to the Dockerfile folder for my custom container and ran this:

> docker build -t container .

> docker login containerregistry.azurecr.io --username containerregistry --password <password from acr credential>

> docker tag container:latest containerregistry.azurecr.io/container:v1

> docker push containerregistry.azurecr.io/container:v1

Testing it all out

To test and use this all in Azure, we’ve just got to build the Linux App Service plan and create the Web App inside it. Once this is done, we can hook up the Azure Container Registry to the deployed Web App and then hopefully end up with a web page or two!

> az appservice plan create -g weblinuxcontainer -n weblinuxplan --is-linux --sku B1

> az webapp create -g weblinuxcontainer -p weblinuxplan -n weblinuxapp

> az webapp config container set --name weblinuxapp --resource-group weblinuxcontainer --docker-custom-image-name containerregistry.azurecr.io/container:v1 --docker-registry-server-url https://containerregistry.azurecr.io --docker-registry-server-user containerregistry --docker-registry-server-password <container registry password>

The App Service platform should then figure out the website port automagically. If it doesn’t, you’ll need to tell Azure which port to use, which you can do with:

> az webapp config appsettings set --resource-group weblinuxcontainer --name weblinuxapp --settings WEBSITES_PORT=8080

You can then imagine how easy it is to start hooking in Azure MySQL, or another database, as the data layer for this app. If your application needs some data to be available to the container (for test purposes, or some legacy application storage in an ‘uploads’ folder), there is always the ability to use the app service storage account to keep state within the scale set.

You will need to upload the files over FTP to this storage account. If you do this in production, remember to set backups!

> az webapp config container set --name weblinuxapp --resource-group weblinuxcontainer --enable-app-service-storage true

Final thoughts

Because I am using the built-in app service storage account, I tried getting some continuous integration magic working by setting up an on-commit trigger in Visual Studio Team Services to FTP files to the Web App storage, to get a really nice development cycle up and running.

Visual Studio Team Services Error

This worked… but only with small numbers of files. Once there are over 300 files in the folder structure, I see connection failures start to plague the test deployment, which is annoying! I tried a few things, but haven’t managed to get past this yet. I’ll push on and see where I get; there are a couple more ideas I have in mind that might make this better.

Hopefully this was a nice intro into using custom Docker containers on Azure. It will definitely stay as a good set of notes for me for a while :)

Another change!

As you can see, the site has once again changed!

I’ve made some fairly big changes, but I’ve kept as much of the old content as I can. There are a few broken links and formatting oddities, but welcome to migrating blog systems!

I’m using Jekyll at the moment, but since I’m a Windows bod, the build chain is odd. I’m considering Hugo as an alternative, but I’ll make sure I’ve fixed the oddities and the links before I do that.

I’m working on the band side of things again, as well as some exciting projects coming up at work. This means there will hopefully be a few bits of shiny content and perhaps some music!

South Coast PowerShell Usergroup

I gave a presentation at the UK South Coast PowerShell usergroup back in June about psake and I thought I would post about it (finally!).

It was the first time I’d ever presented to a room of (mostly) strangers, but it was a really rewarding experience and something I am planning to repeat :).

You can find the presentation and the code on GitHub

If you are in the area, I strongly recommend you check it out! The meetup group for the South Coast group is here and the other user groups in the UK and details about them are here.

Discovering Open RDS User Profile Disks

Another week, another quick script!

Very occasionally, we have a problem with Remote Desktop User Profile Disks (UPDs) getting stuck mounted when there’s a problem with an RDS session host and it either reboots or crashes. Because user profile disks are quite fragile, there is a lot of odd behaviour when a host crashes: when users log in again, the profile disk may mount on the new host, but the user always seems to receive a temporary profile, and the disk gets stuck even after the user has logged out. Here’s a little script I’ve written to show the open VHDX files on a file server or SOFS cluster, to find out which RDS host each VHDX is mounted on.

After this, you can then take action to unmount it and then deal with the disk. Anyways, here’s the script!

Script to be posted, sorry!
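Until the full version is up, a minimal sketch of the core idea, using Get-SmbOpenFile on the file server (and assuming the standard UVHD-<SID>.vhdx naming for profile disks), looks like this:

```powershell
# Sketch only: run on the file server (or SOFS node) hosting the UPD share.
# ClientComputerName shows which RDS host is holding each disk open.
Get-SmbOpenFile |
    Where-Object { $_.Path -like '*UVHD-*.vhdx' } |
    Select-Object ClientComputerName, ClientUserName, Path
```

ClientComputerName is the session host you need to chase down before forcing the handle closed and dealing with the disk.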

PowerShell Argument Completers

Something I do a lot of the time to make people’s experience with PowerShell easier, especially when they are new or reluctant to learn, is to write argument completers. A lot of the time I write them for advanced functions and cmdlets I build, but I also find the odd occasion where I build them for existing commands.

I can’t believe I haven’t talked about this before, but I started doing this after watching this awesome video from the PowerShell Global Summit by the awesome Rohn Edwards.

I always make sure to ship these, even if 99% of the time the script is running completely automated, because they are easy to write and, when you find you need them, the convenience is just awesome. Since these are PS v5 commands, you can skip them when running on lower versions if you don’t want to require v5 as the minimum shell version; they are entirely optional.
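As a flavour of what one looks like, here’s a minimal sketch of a completer registered against an existing command. The command and parameter are purely illustrative (Get-Service already completes natively); real-world completers usually offer values PowerShell can’t discover on its own:

```powershell
# Minimal argument completer sketch (PS v5+).
# Completes the -Name parameter of Get-Service with matching service names.
Register-ArgumentCompleter -CommandName Get-Service -ParameterName Name -ScriptBlock {
    param($commandName, $parameterName, $wordToComplete, $commandAst, $fakeBoundParameters)

    Get-Service -Name "$wordToComplete*" -ErrorAction SilentlyContinue | ForEach-Object {
        [System.Management.Automation.CompletionResult]::new(
            $_.Name, $_.Name, 'ParameterValue', $_.DisplayName)
    }
}
```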

Probably the biggest time saver argument completers I’ve ever written are for the MSOnline Office 365 PowerShell module, as being able to tab through UPNs (and sometimes tenant IDs, once you start to recognise them :S) is absurdly useful.

These particular argument completers take a dependency on a little function to test whether you’re connected to Office 365, but there are other ways to do this as well. This function is also included below.
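In essence, the check just needs to try a cheap MSOnline cmdlet and catch the failure; a hypothetical version (not necessarily identical to the one shipped with the completers) might be:

```powershell
# Hypothetical helper: returns $true if there is a connected MSOnline session.
function Test-MsolConnected {
    try {
        $null = Get-MsolDomain -ErrorAction Stop
        $true
    }
    catch {
        $false
    }
}
```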

Anyways, because they’ve saved me a lot of time, here they are. Hopefully they will save you some time too!

Completer 1

Completer 2