Push → Build → Deploy: AWS Pipeline in Action

I decided to build something hands-on using my AWS Free Tier account.

Since I already had my AWS account, I started directly with the setup. (If you’re new, the first step would be creating AWS and GitHub accounts.)

Environment Setup

  • Configured Git
  • Configured AWS CLI
  • Used my local openSUSE Tumbleweed machine for development

:small_blue_diamond: Step 1: Create Source Code

I started with a simple static web application (an index.html page).

Then I created a minimal Dockerfile to containerize the application.
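For a static page served by nginx, the Dockerfile can be as small as this — a sketch, not my exact file (the base image and file name are assumptions):

```dockerfile
# Minimal sketch: serve a static index.html with nginx
# (base image and paths are assumptions, adjust to your app)
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80
```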



Then I built the Docker image locally to test everything before pushing it.

Next, I ran the image locally to verify the web server, using the command below:

```shell
docker run -d -p 80:80 new-aws-deploy:latest   # "latest" is the image tag
```

Local browser screenshot:

:small_blue_diamond: Step 2: Create a buildspec.yml file

A buildspec file is a YAML-formatted configuration file used in a CI/CD pipeline, primarily with services like AWS CodeBuild, to define the build, test, and packaging steps of a project.
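A typical buildspec for this kind of pipeline logs in to ECR, builds and pushes the image, and emits the `imagedefinitions.json` file that the ECS deploy stage needs. This is a sketch — the account ID, region, repository, and container names below are placeholders you must replace with your own:

```yaml
version: 0.2

phases:
  pre_build:
    commands:
      # Log in to ECR (account ID and region are placeholders)
      - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
  build:
    commands:
      - docker build -t new-aws-deploy:latest .
      - docker tag new-aws-deploy:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/new-aws-deploy:latest
  post_build:
    commands:
      - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/new-aws-deploy:latest
      # imagedefinitions.json tells the ECS deploy stage which container gets the new image
      - printf '[{"name":"new-aws-deploy","imageUri":"%s"}]' 123456789012.dkr.ecr.us-east-1.amazonaws.com/new-aws-deploy:latest > imagedefinitions.json

artifacts:
  files:
    - imagedefinitions.json
```

Note that the `name` field must match the container name in your ECS task definition, or the deploy stage will fail.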


:small_blue_diamond: Step 3: Version Control

  • Created a new public GitHub repository (https://github.com/)
  • Pushed the source code (HTML + Dockerfile) to GitHub
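The push itself is standard Git. Here is a self-contained sketch in a scratch directory; the remote URL is hypothetical, so the push lines are commented out:

```shell
# Work in a scratch directory so this sketch is self-contained
cd "$(mktemp -d)"
git init -q .
git config user.email "you@example.com"
git config user.name "Your Name"

# Stand-in for the real source files (index.html + Dockerfile)
echo '<h1>Hello from ECS</h1>' > index.html
git add index.html
git commit -qm "Add static page"

# Hypothetical remote -- replace with your own repository URL:
# git remote add origin https://github.com/<your-user>/new-aws-deploy.git
# git push -u origin main

git log --oneline
```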

On AWS, I created an ECR repository; refer to the screenshot below for more details.

Then I created a new CodeBuild project.

AWS official documentation for the CodeBuild service: https://docs.aws.amazon.com/codebuild/latest/userguide/welcome.html

  • Go to AWS CodeBuild and click Create project
  • Give it a proper name
  • Configure the source for the code
  • Multiple source options are available; select GitHub, since our code is on GitHub (create an access token on GitHub and map it to AWS)

Don’t forget to copy the role name if you are creating a new role for this, as we will need it later to set up permissions.
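For illustration, the CodeBuild service role needs permission to push to ECR. A minimal policy sketch is below — `ecr:GetAuthorizationToken` must be granted on `"Resource": "*"`, while in practice you should scope the push actions to your repository's ARN:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEcrPush",
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload",
        "ecr:PutImage"
      ],
      "Resource": "*"
    }
  ]
}
```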


Select the “Use a buildspec file” option, since we are using a buildspec.yml file for this.

Now click on “Create build project”. Once the project is created, open it and click “Start build” to test it.


Now click on the newly created build to check its phases and logs. You can also see the logs in CloudWatch.

Important Notes

  • I used the default VPC and Security Groups (SG) for this setup.
  • Depending on your environment configuration, you might encounter some errors.
  • Always verify that your IAM role permissions are correctly configured.
  • Ensure your build and deployment roles have proper access to required services.
  • Always check the build logs to troubleshoot any errors.

Misconfigured IAM roles are one of the most common reasons for pipeline failures.

:small_blue_diamond: Step 4: Create a new ECS cluster

1) Open ECS Service

  1. Log in to the AWS Management Console
  2. Search for Amazon ECS
  3. Click Clusters
  4. Click Create Cluster


Fill in:

  • Cluster name → my-fargate-cluster (or any name)
  • Infrastructure → AWS Fargate (serverless) only
  • Leave default VPC (if you’re using default setup)
  • Leave default settings unless you need customization

Click Create

2) Next: Create a Task Definition (Very Important)

A cluster alone does nothing; you must create a Task Definition.

  1. In ECS → Click Task Definitions
  2. Click Create new Task Definition
  3. Select Fargate
  4. Click Next

Fill:

  • Task definition name → my-fargate-cluster (or any name)
  • Task role → Select IAM role (ensure permissions)
  • CPU → 0.25 vCPU (Free tier friendly)
  • Memory → 0.5 GB or 1 GB
  • Click Add Container

Fill:

  • Container name → as per the buildspec.yml file
  • Image → Your ECR image (browse ECR image)
  • Port mappings → 80
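Under the hood, those console fields produce a task definition roughly like the JSON below. This is a sketch — the family, container name, account ID, and region are placeholders that must match your own setup (the container name in particular must match your buildspec's imagedefinitions.json):

```json
{
  "family": "my-fargate-cluster",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "new-aws-deploy",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/new-aws-deploy:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "essential": true
    }
  ]
}
```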


3) Final Step: Run Service

Now deploy it:

  1. Go to Clusters
  2. Click your cluster
  3. Click Create Service
  4. Launch type → Fargate
  5. Select Task Definition
  6. Number of tasks → 2 (scale according to expected traffic)
  7. Select default VPC
  8. Select public subnet
  9. Select Security Group → Allow inbound port 80


After service runs:

  • Go to Tasks
  • Click running task
  • Copy Public IP
  • Open in browser:


HURRAY, IT’S WORKING! :rocket::whale::cloud:

If it is not, check these common mistakes:

  • :x: Not allowing port 80 in Security Group
  • :x: Not assigning public IP
  • :x: Wrong ECR image URI
  • :x: IAM role missing permissions

:small_blue_diamond: Step 5: Create the Pipeline

Now that your ECS cluster is running, it’s time to automate it like a real DevOps engineer :sunglasses:

Before creating the pipeline, make sure:

:white_check_mark: ECS Cluster running
:white_check_mark: ECS Service created
:white_check_mark: Task Definition created
:white_check_mark: ECR repository created
:white_check_mark: Code pushed to GitHub (Dockerfile + index.html)
:white_check_mark: buildspec.yml file added

  1. Go to CodePipeline
  2. Click Create Pipeline
  3. Pipeline name → ecs-pipeline or my-fargate-cluster-pipeline (or any name)
  4. Service role → Create new role
  5. Artifact store → Default S3 bucket


Click Next after the pipeline settings, then configure the source stage (your GitHub repository and branch) and click Next again.

Add Build Stage

  1. Build provider → CodeBuild
  2. Select the project you created (ex: ecs-build-project)

Click Next

Add Deploy Stage (ECS)

  1. Deploy provider → Amazon ECS
  2. Select:
     • Cluster name
     • Service name

Click Create Pipeline

:warning: Make sure:

  • container-name matches ECS task definition container name
  • ECR repo name is correct
  • Region matches
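The deploy stage reads imagedefinitions.json from the build artifacts, so it is worth sanity-checking its shape locally. The container name and image URI below are placeholders; the name must match your ECS task definition:

```shell
cd "$(mktemp -d)"

# These must match your task definition's container name and your ECR image URI
CONTAINER_NAME="new-aws-deploy"
IMAGE_URI="123456789012.dkr.ecr.us-east-1.amazonaws.com/new-aws-deploy:latest"

# Same printf the buildspec's post_build phase would run
printf '[{"name":"%s","imageUri":"%s"}]' "$CONTAINER_NAME" "$IMAGE_URI" > imagedefinitions.json
cat imagedefinitions.json
```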

:tada: What Happens Now?

Every time you push code to GitHub:

GitHub → CodePipeline → CodeBuild → ECR → ECS → App Updated :rocket:

No manual deployment anymore.

:sunglasses: Common Errors

:x: Privileged mode not enabled (CodeBuild needs privileged mode to build Docker images)

:x: IAM role missing ECR permission

:x: Wrong container name in imagedefinitions.json

:x: Wrong region.

Once this works, you officially move from:

“I deploy manually” to “I automate everything” :fire::sunglasses:

:warning: Important Tip

Don’t forget to delete your AWS resources (ECS service and cluster, ECR images, CodeBuild project, pipeline, and the S3 artifact bucket) after completing the project, to avoid unexpected charges.