Dockerfile for LAMP Stack (Linux, Apache, MySQL, PHP)
This Dockerfile sets up the Apache and PHP layers of a LAMP stack in a Docker container, ideal for running PHP web applications. It uses an official PHP image with Apache and installs the MySQL extensions (mysqli, pdo, pdo_mysql) so your application can connect to a MySQL database. Note that MySQL itself is not included in this image; run it separately, for example in its own container. Add your PHP code to the src directory, then build and run the container to deploy your application.
Dockerfile:
# Note: PHP 7.4 is end-of-life; for new projects consider a supported tag such as php:8.2-apache.
FROM php:7.4-apache
RUN docker-php-ext-install mysqli pdo pdo_mysql
COPY src/ /var/www/html/
EXPOSE 80
Instructions:
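A minimal build-and-run sketch; the image name lamp-app and host port 8080 are illustrative:

docker build -t lamp-app .
docker run -d -p 8080:80 --name lamp-app lamp-app

The site is then available at http://localhost:8080. Run a MySQL server alongside it (for example, the official mysql image in a second container) and point your PHP code at that host.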
Dockerfile for MEAN Stack (MongoDB, Express.js, Angular, Node.js)
This Dockerfile is a starting point for deploying the Node.js portion of a MEAN stack web application. It uses the official Node.js image, installs your dependencies, and sets everything up for your app to run. Build and run this Docker image, and your application will be ready on port 3000.
Dockerfile:
# Note: Node.js 14 is end-of-life; for new projects consider a supported tag such as node:20.
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
The additional files index.js and package.json are required to run the application inside the container. They are sample files that demonstrate a basic Node.js container setup; replace or extend them with your own code to build a fully functional web application tailored to your requirements.
index.js
const http = require('http');
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello World\n');
});
const PORT = process.env.PORT || 3000;
server.listen(PORT, () => {
console.log(`Server running on port ${PORT}`);
});
package.json
{
"name": "simple-node-app",
"version": "1.0.0",
"description": "A simple Node.js application",
"main": "index.js",
"scripts": {
"start": "node index.js"
},
"author": "KodeKloud",
"license": "ISC"
}
Instructions:
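A minimal sketch for building and running the image; the name mean-app is illustrative:

docker build -t mean-app .
docker run -d -p 3000:3000 --name mean-app mean-app

Visit http://localhost:3000 to confirm the server responds.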
Basic Deployment and Service YAML
The Deployment.yaml creates a Kubernetes deployment with three replicas of an application for high availability. The Service.yaml exposes these replicas externally on port 80 using a LoadBalancer, facilitating access to the application. This setup ensures the application is scalable, resilient, and accessible.
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: my-app:latest
ports:
- containerPort: 80
Service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer
  selector:
    app: my-app  # must match the Deployment's Pod labels
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80  # must match the containerPort above
Instructions:
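A minimal sketch of applying and verifying these manifests with kubectl (file names as given above):

kubectl apply -f Deployment.yaml -f Service.yaml
kubectl rollout status deployment/my-app
kubectl get service my-loadbalancer-service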
Notes:
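The Service selector must match the Pod labels set by the Deployment (app: my-app), and type LoadBalancer only provisions an external IP on cloud providers or with an in-cluster implementation such as MetalLB. Replace my-app:latest with a real image reference before applying.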
StatefulSet Template for Database Applications
This template includes specifications for a StatefulSet that manages a database application (such as PostgreSQL or MongoDB). It defines persistent volume claims for data storage; pair it with the headless Service sketched after the template for stable networking, and add environment variables plus liveness and readiness probes for production use.
StatefulSet.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: <database-name>
spec:
serviceName: "<database-service>"
replicas: 3
selector:
matchLabels:
app: <database-label>
template:
metadata:
labels:
app: <database-label>
spec:
containers:
- name: <database-name>
image: <database-image>
ports:
- containerPort: <database-port>
volumeMounts:
- name: <volume-name>
mountPath: <data-directory>
volumeClaimTemplates:
- metadata:
name: <volume-name>
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
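The serviceName field above must reference a headless Service, which the template itself does not include. A minimal sketch using the same placeholders:

apiVersion: v1
kind: Service
metadata:
  name: <database-service>
spec:
  clusterIP: None  # headless: gives each Pod a stable DNS record
  selector:
    app: <database-label>
  ports:
    - port: <database-port>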
Instructions:
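Replace every <placeholder> with a real value (for example, postgres, postgres:16, and 5432 for PostgreSQL), then apply the headless Service and the StatefulSet; the file names here are illustrative:

kubectl apply -f headless-service.yaml -f StatefulSet.yaml
kubectl get pods -l app=<database-label> -w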
Notes:
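Each replica receives a stable identity (<database-name>-0, -1, -2) and its own PersistentVolumeClaim generated from volumeClaimTemplates; these PVCs are not deleted automatically when the StatefulSet is removed, which protects the data.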
Horizontal Pod Autoscaler (HPA) Template:
A HorizontalPodAutoscaler automatically scales the number of pods in a Deployment, ReplicaSet, or StatefulSet based on observed CPU utilization or other selected metrics. This template helps you set up an HPA quickly, ensuring efficient resource usage and better handling of load variations.
HPA.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: <hpa-name>
namespace: <namespace>
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: <deployment-name>
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
Instructions:
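A minimal sketch of applying the autoscaler and watching its status (placeholders as above):

kubectl apply -f HPA.yaml
kubectl get hpa <hpa-name> -n <namespace> -w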
Notes:
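Resource-based autoscaling only works if the metrics-server is installed in the cluster; without it the HPA reports its targets as unknown and never scales. The target Deployment's containers should also declare CPU requests, since averageUtilization is computed against them.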
Network Policy Template:
Network policies are crucial for securing Kubernetes networks. They define how pods can communicate with each other and with other network endpoints. This template provides a basic structure for a network policy that controls ingress and egress traffic for a set of pods.
NetworkPolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: <policy-name>
namespace: <namespace>
spec:
podSelector:
matchLabels:
app: <app-label>
policyTypes:
- Ingress
- Egress
ingress:
- from:
- podSelector:
matchLabels:
app: <source-app-label>
ports:
- protocol: TCP
port: <port>
egress:
- to:
- podSelector:
matchLabels:
app: <destination-app-label>
ports:
- protocol: TCP
port: <port>
Instructions:
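Apply the policy and inspect the result (placeholders as above):

kubectl apply -f NetworkPolicy.yaml
kubectl describe networkpolicy <policy-name> -n <namespace>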
Notes:
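NetworkPolicies are enforced only by CNI plugins that support them (for example, Calico or Cilium); on an unsupported network plugin the policy is silently ignored. Once a pod is selected by any policy, traffic not explicitly allowed in that direction is denied.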
Ingress Template:
An Ingress manages external access to services in a Kubernetes cluster, typically HTTP routing. This template defines an Ingress resource with a basic routing rule; it assumes an ingress controller, such as NGINX or Traefik, is already installed in the cluster.
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: <ingress-name>
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: <app-domain>
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: <service-name>
port:
number: <service-port>
Instructions:
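Apply and verify (placeholders as above):

kubectl apply -f Ingress.yaml
kubectl get ingress <ingress-name>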
Notes:
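An Ingress resource has no effect unless an ingress controller is running in the cluster. The rewrite-target annotation shown is specific to the NGINX ingress controller; other controllers use their own annotations.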
Basic Helm Chart Template
This Helm chart template provides a standardized and customizable way to deploy applications to Kubernetes. It includes a Chart.yaml file for chart metadata, a values.yaml file for configuration values, and templated Kubernetes manifest files for a deployment and service.
Chart.yaml:
apiVersion: v2
name: my-app
version: 0.1.0
values.yaml:
replicaCount: 3
image:
repository: my-app
tag: "latest"
pullPolicy: IfNotPresent
service:
type: LoadBalancer
port: 80
templates/deployment.yaml and templates/service.yaml:
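These two manifests were not reproduced here, so the following is a minimal, hedged sketch consistent with the values.yaml above; the label scheme and the use of .Release.Name are illustrative choices, not the only valid ones:

# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - containerPort: {{ .Values.service.port }}

# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  type: {{ .Values.service.type }}
  selector:
    app: {{ .Release.Name }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.port }}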
Instructions:
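Place the files in a chart directory (Chart.yaml and values.yaml at the top level, manifests under templates/), then install; the release and directory names are illustrative:

helm install my-app ./my-app-chart
helm upgrade my-app ./my-app-chart --set replicaCount=5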
Notes:
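Run helm lint ./my-app-chart to catch template errors before installing, and helm template ./my-app-chart to render the manifests locally without deploying.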
This example creates a basic AWS network infrastructure, an EC2 instance, and an S3 bucket. Ensure you have the latest version of Terraform installed and configured with your AWS credentials.
AWS Terraform Template:
provider "aws" {
region = "us-west-2"
}
resource "aws_vpc" "example_vpc" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
tags = {
Name = "example-vpc"
}
}
resource "aws_subnet" "example_subnet" {
vpc_id = aws_vpc.example_vpc.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-west-2a"
tags = {
Name = "example-subnet"
}
}
resource "aws_internet_gateway" "example_igw" {
vpc_id = aws_vpc.example_vpc.id
tags = {
Name = "example-igw"
}
}
resource "aws_instance" "example_instance" {
ami = "ami-0c55b159cbfafe1f0" # Update with latest AMI
instance_type = "t2.micro"
subnet_id = aws_subnet.example_subnet.id
tags = {
Name = "example-instance"
}
}
resource "aws_s3_bucket" "example_bucket" {
bucket = "my-example-bucket" # Update with a unique bucket name
acl = "private"
}
Instructions:
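The standard workflow, run from the directory containing this configuration:

terraform init
terraform plan
terraform apply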
Notes:
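S3 bucket names are globally unique, so my-example-bucket must be changed before applying. The hard-coded AMI ID is region-specific; look up a current one for us-west-2. Run terraform destroy to tear everything down when finished.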
This example will focus on configuring an Apache web server on an Ubuntu-based system. Ensure you have Ansible installed and you can connect to your target server(s) via SSH.
- name: Setup Apache Web Server
hosts: your_web_servers # Replace with your server group or individual server
become: yes
tasks:
- name: Install Apache
apt:
name: apache2
state: present
update_cache: yes
- name: Start Apache and enable on boot
systemd:
name: apache2
enabled: yes
state: started
- name: Deploy a basic index.html
copy:
content: "<html><body><h1>Hello from Ansible</h1></body></html>"
dest: /var/www/html/index.html
Instructions:
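Save the playbook and run it against your inventory; the file names here are illustrative:

ansible-playbook -i inventory.ini apache_setup.yml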
Notes:
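The apt module is Debian/Ubuntu-specific; on RHEL-based hosts use the yum or dnf module instead. become: yes means the tasks run with elevated privileges, so the connecting user needs sudo rights.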
AWS CloudFormation Template: Basic Network Setup
This CloudFormation template will create the following resources: a VPC, a public subnet, an internet gateway attached to the VPC, and a security group allowing inbound HTTP and SSH.
CloudFormation YAML Template:
AWSTemplateFormatVersion: '2010-09-09'
Description: Basic Network Setup
Resources:
MyVPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: 10.0.0.0/16
EnableDnsSupport: true
EnableDnsHostnames: true
Tags:
- Key: Name
Value: MyVPC
MySubnet:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref MyVPC
CidrBlock: 10.0.1.0/24
MapPublicIpOnLaunch: true
Tags:
- Key: Name
Value: MySubnet
MyInternetGateway:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
- Key: Name
Value: MyInternetGateway
AttachGateway:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
VpcId: !Ref MyVPC
InternetGatewayId: !Ref MyInternetGateway
MySecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Allow HTTP and SSH access
VpcId: !Ref MyVPC
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: 80
ToPort: 80
CidrIp: 0.0.0.0/0
How to Deploy:
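One way to deploy with the AWS CLI; the file and stack names are illustrative:

aws cloudformation deploy \
  --template-file basic-network.yaml \
  --stack-name basic-network

Alternatively, upload the template through the CloudFormation console.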
Notes:
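The security group allows SSH (port 22) and HTTP (port 80) from 0.0.0.0/0, i.e., the entire internet; for anything beyond a demo, restrict port 22 to your own IP range. Deleting the stack removes all of these resources at once.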
This Jenkinsfile defines a CI/CD pipeline for a Java application using Maven. It consists of three main stages: Build (mvn clean package), Test (mvn test), and Deploy (a placeholder for your deployment steps).
This pipeline provides a basic structure for Java Maven projects, automating the process of building, testing, and preparing for deployment.
pipeline {
agent any
stages {
stage('Build') {
steps {
sh 'mvn clean package'
}
}
stage('Test') {
steps {
sh 'mvn test'
}
}
stage('Deploy') {
steps {
// Add deployment steps here
echo 'Deploying application...'
}
}
}
}
Instructions:
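Save this file as Jenkinsfile at the root of your repository, then create a Pipeline (or Multibranch Pipeline) job in Jenkins that points at the repository.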
Notes:
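The agent running the pipeline needs a JDK and Maven available on its PATH. Replace the placeholder echo in the Deploy stage with your real deployment steps.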
The .gitlab-ci.yml file defines a CI/CD pipeline specifically for a Node.js application, with three stages: build (npm install), test (npm run test), and deploy (a placeholder for your deployment scripts).
This configuration ensures that every push to the repository triggers an automated process for building, testing, and preparing the Node.js application for deployment.
stages:
- build
- test
- deploy
build_job:
stage: build
script:
- echo "Building the application..."
- npm install
test_job:
stage: test
script:
- echo "Running tests..."
- npm run test
deploy_job:
stage: deploy
script:
- echo "Deploying the application..."
# Add deployment scripts here
Instructions:
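Commit this file as .gitlab-ci.yml at the root of your repository; GitLab runs the pipeline automatically on every push.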
Notes:
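For reproducible builds, pin a runner image by adding a top-level image: entry (for example, image: node:20). The deploy job currently only echoes; add your real deployment scripts there.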
The GitHub Actions workflow is tailored for Python applications. On each push, a build job checks out the code, sets up Python, installs dependencies, and runs the unit tests; a separate deploy job runs once the build succeeds.
The deploy step is a placeholder for actual deployment commands and should be customized based on where and how the Python application is deployed.
name: Python CI/CD
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements.txt
- name: Run tests
run: |
python -m unittest discover -s tests
deploy:
runs-on: ubuntu-latest
needs: build
steps:
      - uses: actions/checkout@v4
- name: Deploy to Production
run: echo "Add deployment steps here"
Instructions:
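Save the workflow into your repository's workflow directory, then commit and push; the file name ci.yml is illustrative:

mkdir -p .github/workflows
# save the YAML above as .github/workflows/ci.yml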
Notes:
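needs: build ensures the deploy job runs only after a successful build. The workflow assumes a requirements.txt and a tests/ directory of unit tests; adjust those paths to your project layout.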
Prometheus Configuration File (prometheus.yml)
global:
scrape_interval: 15s
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
- job_name: 'my-application'
static_configs:
- targets: ['my-app-service:80']
Instructions:
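One way to run Prometheus with this file, using the official Docker image (the mount path is Prometheus's default config location):

docker run -d -p 9090:9090 \
  -v "$(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml" \
  prom/prometheus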
Notes:
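Each scrape target must expose metrics over HTTP at /metrics (the default metrics_path). my-app-service:80 is a placeholder; point it at your application's host and port, and note that a Dockerized Prometheus cannot reach localhost targets on the host without extra network configuration.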
Grafana Dashboard JSON Template
{ "__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "Prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__requires": [/.../],
"annotations": {/.../},
"editable": true, "gnetId": null,
"graphTooltip": 0,
"id": null,
"links": [],
"panels": [/.../],
"refresh": "10s",
"schemaVersion": 16,
"style": "dark",
"tags": ["prometheus"],
"templating": {/.../},
"time": {/.../},
"timepicker": {/.../},
"timezone": "",
"title": "My Dashboard",
"uid": null,
"version": 1
}
Instructions:
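In Grafana, go to Dashboards > Import, paste this JSON (or upload it as a file), and select your Prometheus data source when prompted.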
Notes:
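The /.../ entries are placeholders for omitted sections; the JSON will not import until they are replaced with real panel, templating, and time-range definitions.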
Elastic Stack (ELK): Configurations for Log Ingestion, Processing, and Visualization
ELK Stack Playground: https://kodekloud.com/playgrounds/playground-elk-stack
Filebeat Configuration (filebeat.yml)
filebeat.inputs:
- type: log
paths:
- /var/log/*.log
output.elasticsearch:
hosts: ["localhost:9200"]
Instructions:
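Run Filebeat in the foreground while testing, pointing it at this config:

filebeat -e -c filebeat.yml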
Notes:
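Filebeat needs read access to the files under /var/log. In Filebeat 7.16 and later, the log input type is deprecated in favor of filestream, so adjust the input type on newer versions.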
Logstash Configuration (logstash.conf)
input {
beats {
port => 5044
}
}
filter {
  # Add your filters here (Logstash configs use # for comments)
}
output {
elasticsearch {
hosts => ["localhost:9200"]
index => "logstash-%{+YYYY.MM.dd}"
}
}
Instructions:
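Start Logstash with this pipeline (path relative to the Logstash installation directory):

bin/logstash -f logstash.conf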
Notes:
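The index setting creates one index per day (logstash-YYYY.MM.dd), which keeps indices small and easy to curate. The beats input must listen on the same port Filebeat ships to (5044 by default).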
Kibana: Dashboard Template for Log Monitoring
Kibana Dashboard JSON Template
{
"objects": [
{
"attributes": {
"description": "",
"kibanaSavedObjectMeta": {
"searchSourceJSON": {
"filter": [],
"query": {
"language": "kuery",
"query": ""
}
}
},
"title": "ELK Log Monitoring",
"uiStateJSON": {},
"version": 1,
"visState": {
"aggs": [
/* Define aggregations and metrics here */
],
"params": {
/* Define visualization parameters */
},
"type": "histogram"
}
},
"id": "log-monitoring",
"type": "visualization",
"version": "7.10.0" // Use your Kibana version
}
],
"type": "dashboard",
"version": "7.10.0" // Use your Kibana version
}
Instructions:
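Import the saved objects via Stack Management > Saved Objects > Import in Kibana. Recent Kibana versions expect saved objects in NDJSON format, so you may need to adapt this template or export a dashboard from your own Kibana as a starting point.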
Notes:
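The // and /* */ comments mark placeholders and are not valid JSON; remove them and set the version fields to your Kibana version before importing.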
ELK Stack Configuration Guide for Web Server Logs
This comprehensive guide provides you with the templates and detailed steps necessary to deploy the ELK Stack for collecting, processing, storing, and visualizing web server logs. This solution leverages Elasticsearch, Logstash, Kibana, and Filebeat to create a powerful system for real-time log monitoring and analysis.
Elasticsearch Index Template Setup
Template Configuration (ElasticsearchIndexTemplate.json):
PUT _index_template/template_web_logs
{
"index_patterns": ["web-logs-*"],
"template": {
"settings": {
"number_of_shards": 1,
"number_of_replicas": 1
},
"mappings": {
"properties": {
"timestamp": { "type": "date" },
"log_level": { "type": "keyword" },
"message": { "type": "text" },
"ip": { "type": "ip" },
"response_time": { "type": "float" }
}
}
}
}
How to Deploy:
1. Replace template_web_logs with your template name (e.g., template_myapp_logs).
2. Save the JSON body (everything after the PUT line, which is Kibana Dev Tools syntax) as ElasticsearchIndexTemplate.json, replace localhost:9200 with your Elasticsearch server address, and load the template:
curl -X PUT "localhost:9200/_index_template/template_web_logs" -H 'Content-Type: application/json' -d @ElasticsearchIndexTemplate.json
Logstash Configuration
Configuration File (LogstashConfig.conf):
input {
beats {
port => 5044
}
}
filter {
grok {
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
}
}
output {
elasticsearch {
hosts => ["http://localhost:9200"]
index => "web-logs-%{+YYYY.MM.dd}"
user => "elastic"
password => "changeme"
}
}
How to Deploy:
1. Adjust hosts, user, and password to match your Elasticsearch details.
2. Place LogstashConfig.conf in Logstash's configuration directory.
3. Start Logstash: bin/logstash -f LogstashConfig.conf.
Kibana Visualization and Dashboards
Creating Visualizations:
1. Open Kibana in your browser (http://localhost:5601).
2. Create an index pattern using web-logs-* as the pattern.
Filebeat Configuration
Configuration File (filebeat.yml):
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/apache2/*.log # Adjust the path to your log files
fields:
log_type: apache_log
output.logstash:
hosts: ["localhost:5044"] # Your Logstash server address
How to Deploy:
1. Update filebeat.yml with your log paths and output destination.
2. Start Filebeat: sudo service filebeat start.
Final Steps and Verification
After deploying each component, verify the pipeline end to end: check that Elasticsearch is creating daily web-logs-* indices (curl localhost:9200/_cat/indices), watch the Logstash and Filebeat logs for errors, and confirm events appear in Kibana under the web-logs-* index pattern.
This guide provides a foundational setup. Tailor each configuration to fit your specific logging needs and infrastructure for an effective log monitoring and analysis solution.
Git Repository Essentials for a Python Project
This comprehensive guide outlines the essentials for setting up and maintaining a Git repository for a Python project. It includes a .gitignore file to exclude unnecessary files, a template for creating a README.md, guidelines for branch naming, and a pull request template to standardize contributions.
Basic .gitignore for a Python Project
A properly configured .gitignore file is crucial for keeping your repository clean by excluding temporary files, environment-specific configurations, and other non-essential files from being tracked by Git.
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*.so
# Environment files
.env
# Virtual environment
venv/
# IDE settings
.idea/
# Log files
*.log
Instructions: Save this content as .gitignore in the root of your Python project to automatically ignore common unnecessary files.
README Template
A well-documented README.md helps users and contributors understand, install, and use your project effectively.
# Project Title
## Description
Short description of the project.
## Installation
Steps to install the project.
## Usage
How to use the project.
## Contributing
Guidelines for contributing to the project.
## License
Specify the project license (e.g., MIT, GPL).
Instructions: Customize each section of this README.md template with your project details and save it in the root of your repository.
Simple Branch Naming Guidelines
Consistent branch naming helps organize and manage changes in your repository.
feature/<feature-name>
bugfix/<bug-name>
hotfix/<hotfix-name>
release/v<version>
Instructions: Adopt these naming conventions for branches in your project to maintain clarity and order.
Pull Request Template
A pull request template ensures that all contributions are consistent and provide the necessary information for review.
## Description
A brief summary of the changes.
## Type of Change
- [ ] New feature
- [ ] Bug fix
- [ ] Documentation update
## How Has This Been Tested?
Describe how you've tested the changes.
## Checklist
- [ ] I have followed the contribution guidelines.
- [ ] My changes do not generate new warnings.
Instructions: Save this template as .github/PULL_REQUEST_TEMPLATE.md in your repository. It will automatically populate the description field for new pull requests.
Enhancements and Best Practices
Keep documentation, such as the README.md, up to date with project changes and releases.
Bash/PowerShell: Scripts for Common System Administration Tasks
Bash Script for Basic System Updates (update_system.sh)
#!/bin/bash
# Update and upgrade system packages
echo "Updating and upgrading system packages..."
sudo apt-get update && sudo apt-get upgrade -y
# Clean up
echo "Cleaning up..."
sudo apt-get autoremove -y
echo "System update complete."
Instructions:
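Make the script executable and run it with root privileges:

chmod +x update_system.sh
sudo ./update_system.sh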
Notes:
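apt-get is Debian/Ubuntu-specific; on RHEL-based systems replace the package commands with dnf or yum equivalents. The -y flags make the run non-interactive, which suits cron jobs.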
Python Script for AWS S3 File Upload (s3_upload.py)
import boto3
from botocore.exceptions import NoCredentialsError
def upload_to_aws(local_file, bucket, s3_file):
s3 = boto3.client('s3')
try:
s3.upload_file(local_file, bucket, s3_file)
print(f"Upload Successful: {s3_file}")
return True
except FileNotFoundError:
print("The file was not found")
return False
except NoCredentialsError:
print("Credentials not available")
return False
uploaded = upload_to_aws('local_file.txt', 'mybucket', 's3_file.txt')
Instructions:
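Install the dependency and configure credentials before running the script:

pip install boto3
aws configure   # or supply credentials via environment variables / an IAM role
python s3_upload.py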
Notes:
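boto3 resolves credentials through the standard AWS chain: environment variables, ~/.aws/credentials, or an attached IAM role. local_file.txt, mybucket, and s3_file.txt are placeholders; replace them with your own file, bucket, and object key.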
Python Script for Downloading Files from AWS S3
s3_download.py:
import boto3
from botocore.exceptions import NoCredentialsError, ClientError
import logging
def download_file_from_s3(bucket, s3_object, local_file):
s3 = boto3.client('s3')
try:
s3.download_file(bucket, s3_object, local_file)
print(f"Downloaded {s3_object} from {bucket} to {local_file}")
except ClientError as e:
logging.error(e)
return False
except NoCredentialsError:
logging.error("Credentials not available")
return False
return True
# Example Usage
bucket_name = 'your-bucket-name'
s3_object_name = 'your-object-name'
local_file_path = 'path/to/save/file'
download_file_from_s3(bucket_name, s3_object_name, local_file_path)
Python Script to Automate EC2 Instance Creation
create_ec2_instance.py:
import boto3
def create_ec2_instance(image_id, instance_type, keypair_name):
ec2 = boto3.resource('ec2')
instance = ec2.create_instances(
ImageId=image_id,
MinCount=1,
MaxCount=1,
InstanceType=instance_type,
KeyName=keypair_name
)
print(f"EC2 Instance {instance[0].id} created")
# Example Usage
image_id = 'ami-12345'
# Replace with actual AMI ID
instance_type = 't2.micro'
keypair_name = 'your-keypair-name'
# Replace with your keypair name
create_ec2_instance(image_id, instance_type, keypair_name)
Python Script for Basic Data Processing
data_processing.py:
import pandas as pd

def load_and_process_data(file_path):
    # Load data
    df = pd.read_csv(file_path)
    # Data processing steps
    # Example: df = df.dropna()  # remove rows with missing values
    return df

# Example Usage
file_path = 'path/to/your/data.csv'
processed_data = load_and_process_data(file_path)
print(processed_data.head())