From TeamCity to VSTS – My DevOps Journey

My team recently migrated to Visual Studio Team Services and, as a result, we have a goal of consolidating our deployment processes using the powerful Release Management framework. While that is the goal, for the moment I was interested in getting our current build process in TeamCity to work alongside our shiny new ALM provider.

Visual Studio Team Services and TeamCity. You would think they would go together like chocolate and lobster but with some work I was able to achieve a little unity…

In my development team we use TeamCity to perform the deployment of build outputs from our on-premises Team Foundation Server. In essence we use TeamCity as a glorified PowerShell runner, but it does the job well and has served us for a minimal financial outlay. We have a few primary goals for our release management pipeline and TeamCity satisfies some of them:

  1. We require a record of our deployments including an audit log of the deployment itself and the current state of our environments.
  2. The deployment needs to be performed the same way every time.
  3. Although the goal of the deployment process is full automation, an approval step is required to allow stakeholders to control whether the deployment is performed.
  4. A summary of the deployment including change sets and associated work items should be generated as an artifact of the release.

Last month the company moved to Visual Studio Team Services, and this change gave me access to the Release Management framework. While TeamCity takes care of the first two goals, the promise of Release Management is that it sorts out the whole list and more. The ultimate aim is to put Release Management to work as the main DevOps solution, but in the meantime we have a heap of IP wrapped up in TeamCity and I wanted to find a way to leverage that while achieving the missing goals with VSTS. With that in mind, I took a look at our current deployment pipeline and considered what I could keep without changing the whole shebang.

Here are the steps TeamCity was performing, in a nutshell:

  1. The TeamCity build was executed with a parameter passed in for the TFS build name and TFS build number.
  2. The parameters would be used to determine a drop folder location so that the first build, running on a local agent, could access the output of the TFS build.
  3. These artifacts were then passed to a chained build running on an agent on the target environment. This target environment could be on-premise or in Azure.
  4. A PowerShell script would be executed as a TeamCity build step on the same remote agent to deploy the artifacts.
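The drop folder lookup in step 2 amounts to building a UNC path from the two parameters. Here is a minimal sketch of that step, assuming a hypothetical `\\tfsserver\drops` share and parameter names of my own choosing:

```powershell
# Sketch only: the share name and parameter names are assumptions,
# not our actual values.
function Get-DropFolder {
    param(
        [string] $buildName,
        [string] $buildNumber
    )
    # TFS XAML builds conventionally drop output under <share>\<build name>\<build number>
    $dropShare = "\\tfsserver\drops"
    Join-Path $dropShare (Join-Path $buildName $buildNumber)
}

Get-DropFolder -buildName "MyApp.Staging" -buildNumber "MyApp.Staging_20160321.1"
```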

As the components of my release pipeline were now spread between the cloud and on-premise I had to re-imagine the process a little in order to maximise the accessibility of the build artifacts. My goal was to create a new solution that looked more like this:

[Diagram: the proposed deployment pipeline]

  1. Create a release in VSTS Release Management via a manual trigger or preferably off a build on our Staging branch.
  2. Trigger the execution of one of our existing TeamCity builds via its REST API to deploy the build artifacts.
  3. Instead of accessing a UNC share, use the VSTS REST API to query for the drop location of the build being deployed.
  4. Download the drop.zip, decompress it and install the resulting build output using the existing process.

In this post I’ll tackle the last two steps, and in follow-up posts I’ll show how to join up the whole process to control everything from VSTS.

My first hurdle was accessing the build output from Release Management. As a UNC path wasn’t going to cut it anymore, I first switched the XAML build to drop the build output on the server. I did this by selecting the following option from the Edit Build Definition form:

[Screenshot: the drop location option in the Edit Build Definition form]

This meant that the resulting build artifacts would be stored in VSTS and could be accessed from the web portal or via the REST API. To retrieve the build output I needed to add a parameter to the TeamCity build so the artifacts could be found and fetched from PowerShell. This parameter was the Build.BuildId from VSTS; once the entire process is complete this value can be accessed easily and passed over to TeamCity. In the meantime I could open the build and grab it from the URL:

https://<vsts_tenant>.visualstudio.com/<collection>/_build?_a=summary&buildId=842

With this value in hand I could change the PowerShell script executed by TeamCity to retrieve the drop folder:

function Get-VSTSBuild() {
    param(
        [string] $buildId
    )

    $personalAccessToken = "GENERATED_PERSONAL_ACCESS_TOKEN"
    $server = "https://VSTS_TENANT.visualstudio.com/"
    $url = "$($server)COLLECTION/_apis/build/builds/$buildId/artifacts"

    # A PAT is supplied as the password portion of a Basic auth header
    $authHeader = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$($personalAccessToken)")) }

    # Query the build's artifacts and take the download URL of the first one
    $artifacts = Invoke-RestMethod -Method Get -Uri $url -Headers $authHeader
    $downloadUrl = $artifacts.value[0].resource.downloadUrl

    $currentLocation = Get-Location

    Invoke-WebRequest -Uri $downloadUrl -Headers $authHeader -OutFile "$currentLocation\drop.zip" -DisableKeepAlive

    # Extract the drop and flatten its contents into the working directory
    $zipfile = "$currentLocation\drop.zip"
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $currentLocation)
    Move-Item -Path $currentLocation\drop\* -Destination $currentLocation -Force
}

The personal access token can be generated from VSTS and then used in the PowerShell script above. I was also intermittently hitting an exception like the following when downloading the drop zip for larger artifacts:

Exception: System.Net.Sockets.SocketException Message: An existing connection was forcibly closed by the remote host.

I found several blog posts describing the same symptom and added -DisableKeepAlive to attempt to address the problem. So far it seems to be working, and if I do see the problem again I’ll try Brian’s other suggestions to help alleviate the issue.

Edit: -DisableKeepAlive doesn’t seem to be the silver bullet after all, so I have instead switched to using System.Net.WebClient. The code now looks like this and seems to be rock solid:

function Get-VSTSBuild() {
    param(
        [string] $buildId
    )

    $personalAccessToken = "GENERATED_PERSONAL_ACCESS_TOKEN"
    $server = "https://VSTS_TENANT.visualstudio.com/"
    $url = "$($server)COLLECTION/_apis/build/builds/$buildId/artifacts"

    # Build the Basic auth value once; Invoke-RestMethod wants it inside a
    # header dictionary, while WebClient takes the raw header value
    $authValue = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$($personalAccessToken)"))
    $authHeader = @{ Authorization = $authValue }

    $artifacts = Invoke-RestMethod -Method Get -Uri $url -Headers $authHeader
    $downloadUrl = $artifacts.value[0].resource.downloadUrl

    $currentLocation = Get-Location

    $request = New-Object System.Net.WebClient
    $request.Headers.Add("Authorization", $authValue)

    Write-Host "Downloading file 'drop.zip'"
    $request.DownloadFile($downloadUrl, "$currentLocation\drop.zip")

    Write-Host "Extracting file 'drop.zip'"
    $zipfile = "$currentLocation\drop.zip"
    Add-Type -AssemblyName System.IO.Compression.FileSystem
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $currentLocation)

    Move-Item -Path $currentLocation\drop\* -Destination $currentLocation -Force
}
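To tie this back to the pipeline, the TeamCity build step only needs to dot-source the script and call the function with the id passed in from VSTS. A sketch, assuming a hypothetical script filename and a `vsts.buildId` build parameter of my own naming:

```powershell
# Sketch only: the script path and %vsts.buildId% parameter name are
# assumptions; substitute your own TeamCity configuration values.
. .\Get-VSTSBuild.ps1

# TeamCity substitutes %vsts.buildId% before the step runs
Get-VSTSBuild -buildId "%vsts.buildId%"

# The build output is now unpacked in the working directory,
# ready for the existing deployment script to pick up.
```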

With that in place I was now able to trigger a build in TeamCity with the desired VSTS BuildId and perform my standard deployment with no further modification. In the next post in this series I’ll show how I created a custom build step, with both PowerShell and NodeJS implementations, to initiate the whole process.

In the final post I’ll have a look at the approval flow and the Send Email Summary feature to produce sweet release notes when we deploy to our staging and production environments. There is currently an issue with the Send Email Summary function and XAML-based builds, but I’m assured by the VSTS team that a fix is in the pipeline.

Happy DevOps!!!
