In the last post, we got all of our tools and infrastructure set up. Now for the meat — automating it all with CodePipeline.

Creating A Pipeline

As usual, AWS will walk us through the process of creating the pipeline. It will ask to create a service role capable of running the pipeline. Because of the wide range of services CodePipeline may need to touch for any given project, the default IAM role it creates has a long list of permissions. It’s fine for this example, but you may want to go back and remove permissions that it doesn’t really need.

CodePipeline will need some workspace storage to do its job. If you don’t already have an S3 bucket, create one. I used the same one as I used for my CodeBuild job. When building with CodePipeline, CodeBuild will get the source code from the pipeline’s S3 bucket, instead of checking out the code directly from CodeCommit. So you must grant CodeBuild’s IAM role permission to read and write that bucket. Here is sample JSON for it:

{
  "Effect": "Allow",
  "Resource": [
    "arn:aws:s3:::codepipeline-us-east-1-*"
  ],
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:GetObjectVersion"
  ]
}

Another side effect of CodePipeline handing the source to CodeBuild (instead of CodeBuild checking the code out directly from CodeCommit) is that the build job will not have a `.git` directory available. If your build depends on that (using a git plugin to grab tag info, for example), you will have to work around this limitation. One option is to add a step to your buildspec.yml that clones the repo. The same applies to SCM tools other than git.
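For example, here is a minimal sketch of that workaround in buildspec.yml, assuming a CodeCommit repo named hello-cloud in us-east-1 and a CodeBuild role that is allowed to pull it (the repo URL and commands are placeholders; adjust them to your setup):

version: 0.2
phases:
  install:
    commands:
      # git-remote-codecommit lets git clone CodeCommit repos using the build role's credentials
      - pip install git-remote-codecommit
  pre_build:
    commands:
      # Clone a full copy of the repo so git metadata (tags, history) is available during the build
      - git clone codecommit::us-east-1://hello-cloud full-checkout
      - cd full-checkout && git describe --tags --always && cd ..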

In the pipeline wizard, you will need to create a service role if you haven’t used CodePipeline before. Otherwise, you can reuse an existing one. For the artifact store (the place CodePipeline will use as a workspace) you can either create a new S3 bucket or reuse an existing one. It’s ok to use the same bucket as the CodeBuild job. Each tool gets its own “subfolder” in the bucket.

The next steps will walk you through adding a Source provider, a Build provider, and a Deploy provider. You can fill in the info for your CodeCommit repo, CodeBuild job, and ECS cluster respectively.

Pipeline Structure

The pipeline itself is divided into stages. The default pipeline will give you three stages that correspond to each of the providers configured in the wizard (source, build, deploy). Each stage is comprised of one or more actions. Each action executes a particular step. Any action that fails will halt the pipeline.

If you add multiple actions to a stage they will run in parallel by default. Using the AWS CLI you can update the `runOrder` property of each action to make them run in a sequence. There is currently no access to that property in the web UI, but you can create “Action Groups” which will execute serially (even if there is only one action per group).
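For example, you can export the pipeline with `aws codepipeline get-pipeline`, set `runOrder` on the actions, and push the result back with `aws codepipeline update-pipeline`. A trimmed, hypothetical fragment (the action names are examples, and the other required action fields are omitted for brevity) would look like:

"actions": [
  { "name": "WaitForContainer", "runOrder": 1 },
  { "name": "RunIntegrationTests", "runOrder": 2 }
]

Actions that share a `runOrder` value run in parallel; higher numbers run after lower ones.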

As soon as one action/stage is successful, the pipeline will immediately move on to the next one.

After completing the wizard you should have a working pipeline that will detect code commits, automatically trigger a build, and deploy the new app version to ECS. NOTE: if you set your service’s task count to zero, you will have to update the service to have at least one task. CodePipeline will deploy with zero tasks, but then, of course, you won’t see that it really worked since your app won’t be running.

Adding An Integration Test Stage

This is a good start, but to make this pipeline more useful we are going to add two more stages after the Deploy stage.

Additional pipeline stages.

These stages will run integration tests, clean up that environment, and, if the tests succeed, deploy the app to production (another cluster we will create later). There are many options for running integration tests.

View your pipeline and click the “Edit” button in the upper right. Add a new stage, then a new action. When creating the action, CodePipeline will give you choices for the type of action. One section in that list is called “Test actions”, and you will notice CodeBuild there. Since a CodeBuild job can run arbitrary scripts, you can basically use it as a task runner to execute test scripts, or run Selenium tests, for example. CodePipeline will wait up to eight hours for a CodeBuild job to complete, so if your tests can run within that time, it’s a viable option.

It’s also possible to create custom actions through the CLI. Custom actions require you to have an EC2 instance running a process that polls CodePipeline for action requests. When it detects an action request, it does its job, then responds to CodePipeline with the action status.
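To make that concrete, here is a rough sketch of such a job worker using the JavaScript SDK. The category, provider, and version values are placeholders for whatever you registered the custom action with:

const AWS = require("aws-sdk");
const codepipeline = new AWS.CodePipeline();

// Poll for pending jobs for the custom action, do the work, and report the result.
async function pollOnce() {
  const { jobs } = await codepipeline.pollForJobs({
    actionTypeId: {
      category: "Test",         // placeholder: the category the custom action was registered with
      owner: "Custom",
      provider: "MyTestRunner", // placeholder: your custom action's provider name
      version: "1"
    },
    maxBatchSize: 1
  }).promise();

  for (const job of jobs || []) {
    // Claim the job before working on it
    await codepipeline.acknowledgeJob({ jobId: job.id, nonce: job.nonce }).promise();
    try {
      // ... run your tests or other work here ...
      await codepipeline.putJobSuccessResult({ jobId: job.id }).promise();
    } catch (err) {
      await codepipeline.putJobFailureResult({
        jobId: job.id,
        failureDetails: { message: String(err), type: "JobFailed", externalExecutionId: job.id }
      }).promise();
    }
  }
}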

Creating An Integration Test With Lambda

In this case, we are going to go a simpler route and create an action that invokes a lambda. The lambda will run a test and report back to CodePipeline. Adding a lambda pipeline action is simple. In the New Action dropdown, look for the “invoke actions” section. The only choice at this time is AWS Lambda. Then all you have to do is pick which lambda function you want to invoke. Optionally, you can also configure CodePipeline to send additional parameters to the lambda based on the output of previous stages.
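For instance, if you put a JSON string in the action’s User parameters field and give the action an input artifact, the lambda can read both from the event it receives (a small sketch; the parameter names are just examples):

exports.handler = async (event) => {
  const job = event["CodePipeline.job"];
  // The raw string typed into the action's "User parameters" box
  const userParams = job.data.actionConfiguration.configuration.UserParameters;
  // Artifacts from earlier stages arrive as S3 locations
  const artifact = job.data.inputArtifacts[0];
  console.log("User parameters: " + userParams);
  console.log("Input artifact: s3://" + artifact.location.s3Location.bucketName +
    "/" + artifact.location.s3Location.objectKey);
  // ... run the test, then report back with putJobSuccessResult / putJobFailureResult ...
};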

The glue that enables this to work is that CodePipeline passes the job ID to the lambda when invoking it. The lambda is then able to use AWS SDK to fire a status response back to CodePipeline indicating the success or failure of the job.

Here is a full lambda function to do that, but the most relevant part is here:

// Assumes `codepipeline` is an AWS.CodePipeline client (from the aws-sdk)
// created earlier in the function's file.

// Retrieve the Job ID from the CodePipeline event
let jobId = event["CodePipeline.job"].id;

// fire a GET request to fetch a web page from your app
// (`result` below holds that HTTP response)

if (result.body.includes("Text from your app webpage")) {
  console.log("SUCCESS: loaded web page! Notifying Job: " + jobId);
  await codepipeline.putJobSuccessResult({ jobId: jobId }).promise();
  console.log("Updated the pipeline with success!");
} else {
  var failMessage = "Failed to load web page!";
  console.log(failMessage);

  var params = {
    jobId: jobId,
    failureDetails: {
      message: failMessage,
      type: 'JobFailed',
      externalExecutionId: context.invokeid
    }
  };
  await codepipeline.putJobFailureResult(params).promise();
  console.log("Updated the pipeline with failure!");
}

The code shows how to integrate a lambda with CodePipeline. This example is in JavaScript, but the same principles apply to any language. The crux of it is pulling the job ID out of the `event` object passed in by CodePipeline. Once the test is complete, you use the CodePipeline SDK to send either a `putJobSuccessResult` or a `putJobFailureResult`. The data you pass to either one MUST include the CodePipeline job ID. Optionally, you can also include a message that will show up in your action status.

One thing to watch out for with asynchronous languages like JS is that if the lambda finishes executing before CodePipeline processes the result message, it might never go through, and your pipeline action will be left hanging until it times out. That’s why this example awaits the response from CodePipeline.

One of CodePipeline’s most irritating traits is that when executing a “deploy to ECS” action, it doesn’t wait for the container to become healthy before moving on to the next action/stage. It just kicks the deployment off, marks it successful, and moves on. That is why this test loops while waiting for a connection to the container we are testing. If you want to make sure the instance is up before moving on to the test stage, you could add another action group after the ECS deploy that invokes a lambda that waits until the container is responsive. Just take that same connection-check loop, and as soon as it connects, send a `putJobSuccessResult` message to the pipeline.
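Here is a minimal sketch of that readiness check, assuming the app’s URL is passed in via a hypothetical APP_URL environment variable and the Node.js AWS SDK v2 runtime:

const AWS = require("aws-sdk");
const http = require("http");
const codepipeline = new AWS.CodePipeline();

// Assumption: the integration environment's URL is set as a Lambda environment variable.
const APP_URL = process.env.APP_URL;

// Resolve true if the app answers with a 200, false on any error.
function ping(url) {
  return new Promise((resolve) => {
    http.get(url, (res) => resolve(res.statusCode === 200))
      .on("error", () => resolve(false));
  });
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

exports.handler = async (event, context) => {
  let jobId = event["CodePipeline.job"].id;

  // Poll every 10 seconds until the container answers or we give up.
  for (let attempt = 0; attempt < 25; attempt++) {
    if (await ping(APP_URL)) {
      await codepipeline.putJobSuccessResult({ jobId: jobId }).promise();
      return;
    }
    await sleep(10000);
  }

  await codepipeline.putJobFailureResult({
    jobId: jobId,
    failureDetails: {
      message: "Container never became responsive",
      type: "JobFailed",
      externalExecutionId: context.invokeid
    }
  }).promise();
};

Remember to set the function’s timeout high enough to cover the polling window.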

Alternatives For Implementing Test Stages

If it’s not desirable to break your integration tests up into lambdas, another option is to run another CodeBuild job to do it. You will notice in the CodePipeline action providers dropdown, under the “Test actions” section, that one of the choices is CodeBuild.

Since a CodeBuild buildspec.yml can contain arbitrary scripts, you could use it to run a test suite against your integration environment. In the post_build section, use a command that checks some output from the test suite. If the tests failed, exit with a non-zero status (e.g. `exit 1`), which will fail the build job and, in turn, fail the pipeline step. At this time, CodePipeline will wait eight hours for a CodeBuild job to complete before timing out and failing the action.
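A minimal sketch of what that buildspec.yml might look like, assuming a Node-based test suite and an APP_URL environment variable pointing at the integration environment (both are placeholders for your own setup):

version: 0.2
phases:
  build:
    commands:
      - npm ci
      # Run the suite against the integration environment; a failing test exits non-zero and fails the build
      - npm run integration-tests -- --baseUrl "$APP_URL"
  post_build:
    commands:
      # Belt and suspenders: fail the job explicitly if the suite wrote a failure marker
      - if [ -f test-results/FAILED ]; then echo "Integration tests failed"; exit 1; fi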

It seems like a bit of abuse to use CodeBuild as an arbitrary task runner, but that’s exactly why AWS added it as a possible test action.

Cleaning Up Test Environments

Once the tests are successful, we want to shut down the integration environment so we don’t incur costs for it. There isn’t a pipeline action for stopping a cluster, but we can invoke another lambda that updates the integration service’s task count to zero. That will cause it to stop all running tasks.

The full code is available here, but here is the pertinent part of the lambda:

// Assumes `ecs` and `codepipeline` are AWS SDK clients created earlier in the
// function's file; `serviceName` and `jobId` come from earlier in the handler.

// service update params; desiredCount refers to the number of tasks.
var params = {
  desiredCount: 0,
  service: serviceName,
  cluster: "hello-cloud-int"
};

try {
  console.log("Updating service...");
  await ecs.updateService(params).promise();
  console.log("Done updating service, pass the pipeline action...");
  await codepipeline.putJobSuccessResult({ jobId: jobId }).promise();
  console.log("Pipeline updated with success");
} catch (err) {
  // If the ECS update fails, fail the pipeline action instead of leaving it hanging
  console.log("Failed to update service: " + err);
  await codepipeline.putJobFailureResult({
    jobId: jobId,
    failureDetails: { message: "Failed to update service: " + err, type: 'JobFailed' }
  }).promise();
}

Adding Additional Environments To The Pipeline

Now all that’s left to do is add your next deployment environment and a pipeline stage to deploy to it. Creating another environment in ECS requires a new cluster and a new service. The service should use the same task definition (so the containers that were tested are the ones that get deployed) but will probably have a different network configuration. If this is a production environment it might also specify more task instances, load balancing, auto-scaling, etc.
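If you prefer to script the environment creation instead of clicking through the console, a rough sketch with the JavaScript SDK might look like this (assuming Fargate; the cluster name, service name, desired count, subnet, and security group IDs are all placeholders):

const AWS = require("aws-sdk");
const ecs = new AWS.ECS();

async function createProdEnvironment() {
  // A new cluster for the production environment
  await ecs.createCluster({ clusterName: "hello-cloud-prod" }).promise();

  // A service running the same task definition that was just tested
  await ecs.createService({
    cluster: "hello-cloud-prod",
    serviceName: "hello-cloud-service",
    taskDefinition: "hello-cloud",
    desiredCount: 2,
    launchType: "FARGATE",
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ["subnet-xxxxxxxx"],
        securityGroups: ["sg-xxxxxxxx"],
        assignPublicIp: "ENABLED"
      }
    }
  }).promise();
}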

Once you have the new cluster and service created, add another deploy stage to the pipeline that targets them. To continue adding tests and new environments (e.g. QA, staging, production), the steps in this post can be repeated as needed.

Estimating Costs

As with everything in AWS, it is important to understand what the costs are. The table below summarizes those expenses. In general, you can count on paying for the typical things AWS charges for: storage, compute time, and data transfer.

AWS Fees as of the date of this post

Tool | Cost
CodePipeline | $1.00 per active pipeline per month (first 30 days are free)
ECR | $0.10 per GB-month of storage, plus transfer costs for more than 1 GB between regions (transfer within the same region is free)
ECS | Hourly CPU and memory usage, a little over $0.04 per CPU/GB hour. Data transfer and CloudWatch fees might apply too, depending on your app.
CodeBuild | Charged per build minute, e.g. $0.005/min for a small build server, plus related fees such as S3 storage for build artifacts
CodeCommit | $1 per month for each user above 5, with limits on storage (10 GB per user) and git requests (2,000 per user per month); additional fees apply if you go over
Lambda | Normal Lambda fees based on memory used and compute time

For more details on the costs, see these links: