Terraform Level 3 - Task 2 - Kinesis Firehose setup

Hi All,

I am stuck on Terraform Level 3 - Task 2.
I have set everything up correctly, `terraform apply` works fine, and I can see the newline delimiter working when data is pushed to S3 using the Kinesis CLI.
But the task is still failing. I have attached a screenshot (you can see the error as well as the main.tf config file).

Any kind of support would be appreciated.
Thanks !

Hi @rahuldotbhatia

Please share the details of your main.tf, outputs.tf, and variables.tf files. The task requires several configurations, so sharing only a screenshot of the main.tf file isn’t enough.

main.tf:

```hcl
provider "aws" {
  region = "us-east-1"
}

# 1. Create S3 bucket
resource "aws_s3_bucket" "firehose_bucket" {
  bucket = var.KKE_S3_BUCKET_NAME
}

# 2. Create IAM role for Firehose
resource "aws_iam_role" "firehose_role" {
  name = var.KKE_FIREHOSE_ROLE_NAME

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = "sts:AssumeRole"
        Principal = {
          Service = "firehose.amazonaws.com"
        }
      }
    ]
  })
}

# 3. Attach IAM policy to allow Firehose to write to S3
resource "aws_iam_role_policy" "firehose_policy" {
  name = "firehose-s3-policy"
  role = aws_iam_role.firehose_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "s3:PutObject",
          "s3:PutObjectAcl",
          "s3:ListBucket"
        ]
        Resource = [
          aws_s3_bucket.firehose_bucket.arn,
          "${aws_s3_bucket.firehose_bucket.arn}/*"
        ]
      }
    ]
  })
}

# 4. Create Kinesis Firehose Delivery Stream
resource "aws_kinesis_firehose_delivery_stream" "firehose_stream" {
  name        = var.KKE_FIREHOSE_STREAM_NAME
  destination = "extended_s3"

  extended_s3_configuration {
    role_arn           = aws_iam_role.firehose_role.arn
    bucket_arn         = aws_s3_bucket.firehose_bucket.arn
    buffering_size     = 5
    buffering_interval = 300
    compression_format = "UNCOMPRESSED"

    processing_configuration {
      enabled = true

      processors {
        type = "AppendDelimiterToRecord"

        parameters {
          parameter_name  = "Delimiter"
          parameter_value = "\\n"
        }
      }
    }
  }

  depends_on = [
    aws_iam_role_policy.firehose_policy
  ]
}
```

variables.tf:

```hcl
variable "KKE_S3_BUCKET_NAME" {
  description = "Name of the S3 bucket for Firehose delivery"
  type        = string
  default     = "devops-stream-bucket-12310"
}

variable "KKE_FIREHOSE_STREAM_NAME" {
  description = "Name of the Firehose delivery stream"
  type        = string
  default     = "devops-firehose-stream"
}

variable "KKE_FIREHOSE_ROLE_NAME" {
  description = "Name of the IAM role for Firehose"
  type        = string
  default     = "firehose-sts-role"
}
```

@raymond.baoly There should only be a need to examine the main.tf file here, since the error shown on the page is "Firehose processor is missing the Delimiter". As I mentioned, the outputs file is working fine and I am getting the expected output, but the task is still failing with "Firehose processor is missing the Delimiter".

Hi @rahuldotbhatia

I was able to pass the task using the main.tf file below. Please compare it with your file and try again.

# --------------------------
# 1. Create S3 Bucket
# --------------------------
resource "aws_s3_bucket" "firehose_bucket" {
  bucket = var.KKE_S3_BUCKET_NAME
}

# --------------------------
# 2. IAM Role for Firehose (STS Assume Role)
# --------------------------
resource "aws_iam_role" "firehose_role" {
  name = var.KKE_FIREHOSE_ROLE_NAME

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "firehose.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}

# --------------------------
# 3. IAM Policy for Firehose to Access S3
# --------------------------
resource "aws_iam_policy" "firehose_s3_policy" {
  name        = "firehose-s3-access-policy"
  description = "Policy that allows Firehose to write to S3"

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = [
          "s3:AbortMultipartUpload",
          "s3:GetBucketLocation",
          "s3:GetObject",
          "s3:ListBucket",
          "s3:ListBucketMultipartUploads",
          "s3:PutObject"
        ],
        Resource = [
          aws_s3_bucket.firehose_bucket.arn,
          "${aws_s3_bucket.firehose_bucket.arn}/*"
        ]
      },
      {
        Effect = "Allow",
        Action = ["logs:PutLogEvents"],
        Resource = "*"
      }
    ]
  })
}

# --------------------------
# 4. Attach IAM Policy to Role
# --------------------------
resource "aws_iam_role_policy_attachment" "firehose_s3_policy_attach" {
  role       = aws_iam_role.firehose_role.name
  policy_arn = aws_iam_policy.firehose_s3_policy.arn
}

# --------------------------
# 5. Create Kinesis Firehose Delivery Stream
# --------------------------
resource "aws_kinesis_firehose_delivery_stream" "firehose_stream" {
  name        = var.KKE_FIREHOSE_STREAM_NAME
  destination = "extended_s3"

  extended_s3_configuration {
    role_arn           = aws_iam_role.firehose_role.arn
    bucket_arn         = aws_s3_bucket.firehose_bucket.arn

    buffering_size     = 5       # MB
    buffering_interval = 300     # seconds
    compression_format = "UNCOMPRESSED"

    # Append newline after each record
    processing_configuration {
      enabled = true
      processors {
        type = "AppendDelimiterToRecord"
        parameters {
          parameter_name  = "Delimiter"
          parameter_value = "\n"
        }
      }
    }

    cloudwatch_logging_options {
      enabled         = true
      log_group_name  = "/aws/kinesisfirehose/xfusion-stream"
      log_stream_name = "S3Delivery"
    }
  }

  depends_on = [aws_iam_role_policy_attachment.firehose_s3_policy_attach]
}

Thanks, and it worked!
I was using `parameter_value = "\\n"`, referring to a CloudFormation template here:

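For anyone who hits the same error, the difference comes down to escaping. In a CloudFormation template the delimiter sits inside a JSON document, so it is written as `"\\n"` and the JSON parser turns the escape into a real newline. HCL string literals already interpret `\n` directly, so in Terraform `"\\n"` produces a literal backslash followed by `n` instead of a newline, which is why the check failed. A minimal Python sketch of the difference:

```python
import json

# In a CloudFormation JSON template, the delimiter is written as "\\n";
# JSON parsing turns that escape into an actual newline character.
cfn_delimiter = json.loads('"\\n"')

# HCL string literals also interpret \n directly, so in Terraform the
# correct value is "\n" -- writing "\\n" yields a literal backslash + n.
hcl_correct = "\n"
hcl_wrong = "\\n"  # what the original main.tf had

print(repr(cfn_delimiter))  # '\n'
print(repr(hcl_correct))    # '\n'
print(repr(hcl_wrong))      # '\\n' (two characters, not a newline)
```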