Using VSTS for building and hosting PowerShell modules with Package Management

I’ve been working with VSTS a lot recently as a source management solution. Rather than build and distribute my modules through a file share or some other weird and wacky way, I thought I’d try to use VSTS Package Management to run my own PSGallery-alike, without running PSGallery! This is related to another coming-soon blog post about building and running a cloud platform business rule compliance testing solution using the VSTS hosted build runner.

All of this was working towards running things in a more continuous integration (CI) friendly way, hopefully making a start towards a release pipeline model for cloud service configurations.

Getting Started

So I started with this great guide on how to do it manually, creating packages and pushing them to VSTS Package Management. This works great, but I wanted to close the loop by automatically building my module, along with packing and pushing my NuGet packages to VSTS. Here’s how I did it…

Generating the module manifest and nuspec file

You don’t have to generate both automagically; you could just generate the nuspec from the module manifest, which would be quite straightforward (there’s a sketch of that below). My way is not the only way :) Anyway, with all the reorganising I do of my modules, it’s easier to generate both so I don’t have to worry about keeping the module manifest file list or the exported cmdlets/functions/aliases up to date.
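
If you did want to go manifest-first, a minimal sketch might look something like this (the module name and paths here are illustrative, not from my actual build):

# A minimal sketch: generate a nuspec from an existing manifest.
# 'MyModule' and the paths here are illustrative.
$Manifest = Import-PowerShellDataFile -Path .\src\MyModule.psd1

[xml]$Nuspec = @"
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyModule</id>
    <version>$($Manifest.ModuleVersion)</version>
    <authors>$($Manifest.Author)</authors>
    <description>$($Manifest.Description)</description>
  </metadata>
</package>
"@

$Nuspec.Save("$PWD\src\MyModule.nuspec")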

I use a project structure like the one used in the Plaster module (since that’s usually my starting point!). This means that a rough module structure looks like this:

\Module
|-\release (.gitignored)
|-\src
| |-Module.nuspec (.gitignored)
| |-Module.psd1   (.gitignored)
| |-Module.psm1
|
|-\test
| |-testfile.test.ps1
|
|-CHANGELOG.md
|-build.manifest.ps1
|-build.ps1
|-build.psake.ps1
|-build.settings.ps1
|-README.md
|-scriptanalyzer.settings.psd1

As you can see, most of those files look pretty much identical to those you get from the Plaster NewModule example, with one or two exceptions. The major one is build.manifest.ps1, which is the script I use to build the manifest and stitch together all the pieces defining the module. This script gets called as part of the build and creates the two .gitignored files shown in the structure above, Module.nuspec and Module.psd1.

Here’s the content of that file; I’ve also used it in an example module hosted on GitHub here.

$ModuleName = (Get-Item -Path $PSScriptRoot).Name
$ModuleRoot = "$PSScriptRoot\src\$ModuleName.psm1"

# Removes all versions of the module from the session before importing
Get-Module $ModuleName | Remove-Module
$Module         = Import-Module $ModuleRoot -PassThru -ErrorAction Stop
$ModuleCommands = Get-Command -Module $Module
Remove-Module $Module

# Split the exported commands by type, keeping only verb-noun named commands
if ($ModuleCommands) {
    $Function = $ModuleCommands | Where-Object { $_.CommandType -eq 'Function' -and $_.Name -like '*-*' }
    $Cmdlet   = $ModuleCommands | Where-Object { $_.CommandType -eq 'Cmdlet' -and $_.Name -like '*-*' }
    $Alias    = $ModuleCommands | Where-Object { $_.CommandType -eq 'Alias' -and $_.Name -like '*-*' }
}

# Build the manifest file list relative to \src, trimming the leading '.\' from each path
Push-Location -Path $PSScriptRoot\src
$FileList = (Get-ChildItem -Recurse | Resolve-Path -Relative).Substring(2) | Where-Object { $_ -like '*.*' }
Pop-Location

$ModuleDescription = @{
    Path                = "$(Split-Path -Path $ModuleRoot)\$((Get-Item -Path $ModuleRoot).BaseName).psd1"
    Description         = 'A PowerShell script module.'
    RootModule          = "$ModuleName.psm1"
    Author              = 'David Green'
    CompanyName         = 'tookitaway.co.uk'
    Copyright           = '(c) 2018. All rights reserved.'
    PowerShellVersion   = '5.1'
    ModuleVersion       = '1.0.0'
    # RequiredModules   = ''
    FileList            = $FileList
    FunctionsToExport   = $Function
    CmdletsToExport     = $Cmdlet
    AliasesToExport     = $Alias
    Tags                = $ModuleName
    # VariablesToExport = ''
    # LicenseUri        = ''
    # ProjectUri        = ''
    # IconUri           = ''
    ReleaseNotes        = Get-Content -Path "$PSScriptRoot\CHANGELOG.md" -Raw
}

[string]$Tags = $ModuleDescription.Tags | Foreach-Object { "'$_' " }

[xml]$ModuleNuspec = @"
<?xml version="1.0"?>
<package>
  <metadata>
    <id>$ModuleName</id>
    <version>$($ModuleDescription.ModuleVersion)</version>
    <authors>$($ModuleDescription.Author)</authors>
    <owners>$($ModuleDescription.Author)</owners>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>$($ModuleDescription.Description)</description>
    <releaseNotes>$(Get-Content -Path "$PSScriptRoot\CHANGELOG.md" -Raw)</releaseNotes>
    <copyright>$($ModuleDescription.Copyright)</copyright>
    <tags>$Tags</tags>
  </metadata>
</package>
"@

# Write the generated manifest and nuspec out next to the module source
New-ModuleManifest @ModuleDescription
$ModuleNuspec.Save("$(Split-Path -Path $ModuleRoot)\$((Get-Item -Path $ModuleRoot).BaseName).nuspec")

The build script

This next small example is from build.ps1, showing where the manifest build is called, along with pre-installing any prerequisites, since generating the manifest requires me to import the module. I don’t currently alter the build.psake.ps1 file, as I want to be able to easily consume any updates to the psake script (although it should really be in build.settings.ps1 in that case!).

[CmdletBinding()]

Param (
    [Parameter()]
    [string[]]
    $Task = 'build',

    [Parameter()]
    [System.Collections.Hashtable]
    $Parameters,

    [Parameter()]
    [switch]
    $InstallPrerequisites
)

$psake = @{
    buildFile  = "$PSScriptRoot\Build.psake.ps1"
    taskList   = $Task
    Verbose    = $VerbosePreference
}

if ($Parameters) {
    $psake.parameters = $Parameters
}

# Prerequisites
if ($InstallPrerequisites) {
    if (-not (Get-PackageProvider -Name NuGet -ErrorAction SilentlyContinue)) {
        Install-PackageProvider -Name NuGet -Force -Scope CurrentUser
    }

    'pester', 'psake' | ForEach-Object {
        Install-Module -Name $_ -Force -Verbose -Scope CurrentUser -SkipPublisherCheck
    }
}

. $PSScriptRoot\Build.manifest.ps1
Invoke-psake @psake
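
Running it locally looks something like this (‘build’ is the default task; the ‘test’ task name assumes your build.psake.ps1 defines one):

# First run on a fresh machine (or a hosted build agent)
.\build.ps1 -InstallPrerequisites

# Subsequent runs, picking specific psake tasks
.\build.ps1 -Task build, test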

OK! So hopefully you can use this file to build your manifest and nuspec, run your nuget pack and nuget push, then call it a day, right? Kind of… But wouldn’t it be easier for someone else to do the pack and push? Enter VSTS Package Management! You’ve been committing all this to source control, right?!
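
For reference, the manual pack and push would look something like this (the paths, version and feed URL are illustrative):

nuget pack .\src\Example2.nuspec -OutputDirectory .\release
nuget push .\release\Example2.1.0.0.nupkg -Source <Package source URL> -ApiKey VSTS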

Building the steps for ‘pack and push’

To build the steps for pack and push, you need to have a few prerequisites in place (I’m assuming you’ve already got git, or the first half of this post may have missed the mark).

Once you’ve got those installed and have read through the getting started with VSTS material to get a good grounding in what it’s all about, we can push the code to the remote origin, then we can build… the build!

We can add the remote git server with:

git remote add <remote name> <repository url>

Then initialise the empty remote server with all our content and history using:

git push -u <remote name> --all
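
With a hypothetical VSTS project, that might look like:

git remote add vsts https://<VSTSName>.visualstudio.com/_git/Example2
git push -u vsts --all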

Now we should have our module in VSTS, looking a little lonely, just waiting to get built! We can go to Build and Release > Packages to start the process, but we might need to add an extension license before we do that.

Package Management

Click your user, then click Manage extensions and apply the Package Management license. After that, we can go back to the Packages page to create the new feed; here’s a screenshot.

Create a new feed

Now we can navigate to Builds and click + New definition to create a build definition. We’ll use VSTS Git and start from an empty build definition.

Build steps and script

We’ll create three steps for a simple build of the module! We’re running build.ps1, which will install prerequisites like Pester and psake if needed (which we will need on the VSTS build runner). Then we can build the module into the release folder and give it a good test before we give it to NuGet to pick up in the next step.

Nuget pack

This step is almost entirely as-is, just pointing to the expected path of the Example2.nuspec file to grab and pack up.

Nuget push

This step is again almost default and just points to the target feed; nothing else has been changed. You could collect the test results as well, along with some other things to make sure you’ve captured all the good data about the build, but that’s outside the scope of this post.

So now we’ve got a module, it’s built, it’s tested, it’s packed and pushed and ready to go! So how do we use it?

Installing modules from the feed

Remember the package feed we created earlier? Now that the feed exists and we’ve pushed a package to it, it’s got a use! If you navigate back to Build and Release > Packages and click Connect to feed, you can copy the Package source URL, changing nuget/v3 to nuget/v2 (PowerShellGet talks the v2 protocol), which will look something like this:

https://<VSTSName>.pkgs.visualstudio.com/_packaging/<TeamName>/nuget/v2

This is the repository URL for the PSRepository we’ll want to register. The only other detail we need is a Personal Access Token (PAT); details of how to generate one can be found here (accessible through your profile security page).

Basically, you can use anything for your username; as long as the password is your PAT, it’ll connect and let you search, download and install packages from this feed.

$Cred = Get-Credential

$Splat = @{
    Name               = 'PowerShell'
    InstallationPolicy = 'Trusted'
    SourceLocation     = 'https://<VSTSName>.pkgs.visualstudio.com/_packaging/<TeamName>/nuget/v2'
    Verbose            = $true
    Credential         = $Cred
}

Register-PSRepository @Splat

Install-Module -Repository PowerShell -Name Example2 -Scope CurrentUser -Credential $Cred -Verbose
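
If you want to see what’s in the feed before installing, Find-Module works against the same repository (assuming your PowerShellGet version supports -Credential on Find-Module):

# Browse the feed using the same credential
Find-Module -Repository PowerShell -Credential $Cred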

Final thoughts

The only issue I currently have is that since the package feed is immutable, you have to make sure you’ve bumped the module version for the push to work, or you end up with a build that succeeds but errors at the end like:

Error: An unexpected error occurred while trying to push the package with VstsNuGetPush.exe. Exit code(2) and error(The feed already contains 'Example2 1.0.0'.)

I’m unsure how to get my CI process (build + test) running automatically on each commit, while also having the build, test and release run automatically when I bump the version number. I’ll continue to discover new things though, and post once I figure it out! Alternatively, if you already know how, please share! :)

And that’s it! Hopefully this gives you some good information on working with your own PowerShell module CI solutions within VSTS and Package Management.

Getting Started with custom Docker containers on Azure

This post is something a little bit different to what I normally post (Hint: Linux :)), but I’m not a platform snob, you’ve got to use the best tool for the job!

In this case, the customer requirements were to run a PHP5 web application in Apache, with a MySQL backend, plus a few PHP modules and PhantomJS for charting, so Linux it is.

Before I got involved, the plan was for a two-tier architecture hosted on Azure IaaS VMs. From a customer point of view, this is a reasonably expensive solution, as there is a fair bit of intervention required for maintenance, backups, etc., which would inflate the cost. The solution also falls into a bit of a cost pit with the Azure availability recommendations, which would require at least four VMs plus load balancing to be deployed, making it even more expensive.

Essentially, it sounded like a good time to sprinkle some cloud magic on it and see where the infrastructure ended up!

All of the command lines I use in this example are correct to the best of my knowledge, but please let me know if there are any issues!

Initial Idea

I’d heard a bit about Docker containers from a few people, and I knew it would be a good fit for the web frontend (assuming I could get it working). Although this would be my first proper foray into using Docker for a production service, I was reasonably confident this would be the way forward. I could always fall back on the old plan, right?

That covers the web frontend, you say, but what about the database? For the DB, I noticed this little gem appear not so long ago in Azure, so that’s the database sorted too! (Yes, I know it’s in preview, but the roadmap points to early 2018 for GA, and this app won’t be in production before then.)

Let’s play… with Docker containers!

The very first step is spinning up my test machine. Since this is (eventually) going to be a production service, my Linux distribution of choice is Debian. You may have a different take on it, but when it’s got to be Linux and work in a production environment, I’ve yet to have a better experience.

Once I’ve got everything installed (Git, Docker-CE, etc.), it’s time to get an initial PHP web app up to test. Since my plan was to deploy a custom Docker image, rather than reinvent the wheel, I wanted to find a base Apache/PHP5 docker image that I could test on Azure, before adding PhantomJS. Luckily, I found the ideal Docker image to start with in the Azure-App-Service/PHP repository (5.6.21-apache).

Note: If you’re trying to install the Azure CLI tools but getting odd failures, make sure to install python-pip first. The install script doesn’t always seem to catch the dependency being missing, but this fixed it for me.

I started testing this Docker image with a test PHP file I could include, rather than hostingstart.html; other than that, I left the Dockerfile at its defaults (for now).

COPY hostingstart.html /home/site/wwwroot/hostingstart.html
+COPY success.php /home/site/wwwroot/success.php

RUN a2enmod rewrite expires include deflate

To run the container, we then just use the standard build and run commands, making sure to map the Apache port through to the host machine.

> docker build -t container .

> docker run -d -p 8080:8080 container

Included test page

And just for completeness, here’s my PHP test page:

PHP test page

Getting PhantomJS in the container

Update: This package is broken and seems to require bits of Qt to function correctly. I have edited the post to install PhantomJS from the binaries provided by ariya.

Because the requirements of the web app only need the PhantomJS binary and nothing else fancy (like a listen port), we don’t need to set up a separate container for PhantomJS; we can just install it into the web container, to be used directly by the web application. The largest problem with this is that the Docker image is based on Debian 8 (Jessie), rather than the current stable release (version 9, ‘Stretch’). The PhantomJS package is only available in jessie-backports, whereas in stable it’s in the main package list.

To get this working initially, I edited the package install RUN line to add the backports repository and install PhantomJS from there (sketched below). Unfortunately, this did not work, as the package has some issues, so I had to install it from another source.
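
Roughly, that attempt looked like this (a reconstruction of the approach, not the exact lines):

+RUN echo 'deb http://deb.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/backports.list \
+    && apt-get update \
+    && apt-get install -y -t jessie-backports phantomjs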

The working version changes the Dockerfile, but only adds a few RUN lines, rather than editing too much.

+RUN { \
+        cd /root; \
+        wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2; \
+        tar xvjf phantomjs-2.1.1-linux-x86_64.tar.bz2; \
+        cp phantomjs-2.1.1-linux-x86_64/bin/phantomjs /usr/local/bin/phantomjs; \
+        ln -sf /usr/local/bin/phantomjs /usr/bin/phantomjs; \
+    }
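
A quick way to check the binary made it onto the PATH after a rebuild (the container name here is just an example):

> docker build -t container .

> docker run -d -p 8080:8080 --name phantomtest container

> docker exec phantomtest phantomjs --version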

Now I can build the resulting container and test locally by pushing things into /home/site/wwwroot, and as if by magic, it works with either a local or an Azure MySQL instance (once I’d added my public IP to the access list :P).

Build and publish to an Azure Container Registry

I needed a place to store the resulting container image, so I decided to use an Azure Container Registry. I could have used Docker Hub, but I thought it would be better to keep things in one place.

To deploy the container registry, I ran:

> az acr create -n containerregistry -g weblinuxcontainer --sku Basic --admin-enabled true

> az acr credential show -n containerregistry

I navigated to the Dockerfile folder for my custom container and ran this:

> docker build -t container .

> docker login --username containerregistry --password <password from acr credential> containerregistry.azurecr.io

> docker tag container:latest containerregistry.azurecr.io/container:v1

> docker push containerregistry.azurecr.io/container:v1
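
At this point, you can confirm the image landed in the registry:

> az acr repository list -n containerregistry -o table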

Testing it all out

To test and use this all in Azure, we’ve just got to build the Linux App Service plan and create the Web App inside it. Once this is done, we can hook the Azure container registry up to the deployed Web App and then hopefully end up with a web page, or two!

> az appservice plan create -g weblinuxcontainer -n weblinuxplan --is-linux --sku B1

> az webapp create -g weblinuxcontainer -p weblinuxplan -n weblinuxapp

> az webapp config container set --name weblinuxapp --resource-group weblinuxcontainer --docker-custom-image-name container:v1 --docker-registry-server-url https://containerregistry.azurecr.io --docker-registry-server-user containerregistryuser --docker-registry-server-password <container registry password>

The Azure container service should then figure out the website port automagically; if it doesn’t, you will need to tell Azure, which you can do with:

> az webapp config appsettings set --resource-group weblinuxcontainer --name weblinuxapp --settings WEBSITES_PORT=8080

You can then imagine how easy it is to start hooking in Azure MySQL, or another database, as the data layer for this app. If your application requires some data to be available to the container, for test purposes or some legacy application storage in an ‘uploads’ folder, there is always the ability to use the App Service storage account to keep state within the scale set.

You will need to upload the files over FTP to this storage account. If you do this in production, remember to set up backups!

> az webapp config container set --name weblinuxapp --resource-group weblinuxcontainer --enable-app-service-storage true

Final thoughts

Because I am using the built-in App Service storage account, I tried getting some continuous integration magic working by setting up an on-commit trigger in Visual Studio Team Services to FTP files to the Web App storage, to get a really nice development cycle up and running.

Visual Studio Team Services Error

This worked… but only with small numbers of files. Once there are over 300 files in the folder structure, I see connection failures start to plague the test deployment, which is annoying! I’ve tried a few things, but haven’t managed to get past this yet. I’ll push on with this and see where I get; there are a couple more ideas I have in mind that might make this better.

Hopefully this was a nice intro into using custom Docker containers on Azure. It will definitely stay as a good set of notes for me for a while :)

Another change!

As you can see, the site has once again changed!

I’ve made some fairly big changes, but I’ve kept as much of the old content as I can. There are a few broken links and formatting oddities, but welcome to migrating blog systems!

I’m using Jekyll at the moment, but since I’m a Windows bod, the build chain is odd. I’m considering Hugo as an alternative, but I’ll make sure I’ve fixed the oddities and the links before I do that.

I’m working on the band side of things again, as well as some exciting projects coming up at work. This means there will hopefully be a few bits of shiny content and perhaps some music!

South Coast PowerShell Usergroup

I gave a presentation at the UK South Coast PowerShell usergroup back in June about psake and I thought I would post about it (finally!).

It was the first time I’d ever presented to a room of (mostly) strangers, but it was a really rewarding experience and something I am planning to repeat :).

You can find the presentation and the code on GitHub.

If you are in the area, I strongly recommend you check it out! The meetup group for the South Coast group is here and the other user groups in the UK and details about them are here.

Discovering Open RDS User Profile Disks

Another week, another quick script!

Very occasionally, we have a problem with Remote Desktop User Profile Disks (UPDs) getting stuck mounted when there’s a problem with an RDS session host and it either reboots or crashes. Because user profile disks are quite fragile, there’s a lot of odd behaviour when a host crashes: when users log in again, the profile disk may mount on the new host, but the user always seems to receive a temporary profile, and the disk gets stuck even after the user logs out. So here’s a little script I’ve written to show the open VHDX files on a file server or SOFS cluster, to find out which RDS host each VHDX is mounted on.

After that, you can take action to unmount the disk and then deal with it. Anyways, here’s the script!

Script to be posted, sorry!
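
In the meantime, here’s a minimal sketch of the idea, using Get-SmbOpenFile (the server name is illustrative; for a SOFS cluster, query the node that owns the share):

# Open SMB handles on the profile disk file server (server name is illustrative)
$Session = New-CimSession -ComputerName FileServer01

# Keep only the mounted UPD VHDX files; ClientComputerName shows which RDS host holds the handle
Get-SmbOpenFile -CimSession $Session |
    Where-Object { $_.Path -like '*.vhdx' } |
    Select-Object -Property ClientComputerName, ClientUserName, Path

Remove-CimSession -CimSession $Session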