
Deploying Event-Driven Ansible

Introduction

Event-Driven Ansible (EDA) is a powerful automation framework that enables intelligent, automated responses to IT infrastructure events in real time. Built by Red Hat as an extension of Ansible Automation Platform, EDA transforms traditional “run and wait” automation into responsive, self-healing systems that react instantly to changes across your entire infrastructure stack.

At its core, EDA combines event sources (webhooks, Kafka streams, AWS CloudWatch, Prometheus alerts, system logs, and more) with Ansible rulebooks—declarative configurations that define conditions and automated responses. When specific events occur, EDA automatically triggers the appropriate Ansible playbooks, roles, or modules to remediate issues, scale resources, update configurations, or notify teams, all without human intervention.

Whether you’re building self-healing infrastructure, automating incident response, orchestrating complex deployments triggered by CI/CD pipelines, or creating sophisticated multi-cloud automation workflows, Event-Driven Ansible provides the intelligence layer that bridges monitoring, observability, and automated remediation. This guide walks you through deploying a production-ready EDA server on Klutch.sh, configuring event sources, writing effective rulebooks, and integrating with your existing Ansible infrastructure.

Key Features:

  • Multiple Event Sources: Built-in support for webhooks, Kafka, MQTT, alertmanagers, log watchers, AWS EventBridge, Azure Event Grid, and custom plugins
  • Declarative Rulebooks: YAML-based rulebooks that define event patterns and automated responses with Jinja2 templating
  • Ansible Integration: Seamless execution of existing Ansible playbooks, roles, and modules in response to events
  • Scalable Architecture: Distributed processing with support for multiple concurrent rulebooks and event streams
  • Condition Matching: Sophisticated pattern matching with logical operators, regex, and custom filters for complex event correlation
  • Action Varieties: Execute playbooks, run modules, make HTTP calls, set variables, or trigger external systems
  • Built-in Testing: Test mode for validating rulebooks and debugging event patterns before production deployment
  • PostgreSQL Backend: Persistent storage for event history, execution logs, and audit trails
  • REST API: Programmatic management of rulebooks, activations, and event sources
  • Web UI: Visual interface for monitoring active rulebooks, viewing event streams, and analyzing automation execution

Why Deploy Event-Driven Ansible on Klutch.sh

Deploying EDA on Klutch.sh provides several advantages for automation infrastructure:

  • Automated Deployment: Klutch.sh automatically detects your Dockerfile and builds EDA with zero manual configuration
  • Persistent Event Storage: Attach persistent volumes for Ansible collections, rulebook storage, and execution logs
  • Database Integration: Easy connection to managed PostgreSQL databases for EDA’s backend storage
  • Secure Environment Variables: Store API keys, webhook secrets, and credentials securely without committing to version control
  • HTTP Traffic: Access EDA’s web interface and REST API over HTTPS with automatic SSL certificates
  • Scalable Infrastructure: Handle thousands of concurrent events with automatic resource scaling
  • GitHub Integration: Automatic redeployment when you push updates to rulebooks or configuration
  • Custom Domains: Use your own domain for the EDA interface (automation.your-company.com)
  • High Availability: Deploy with built-in redundancy for mission-critical automation workflows
  • Monitoring Integration: EDA’s metrics and logs integrate seamlessly with your observability stack

Prerequisites

Before deploying Event-Driven Ansible on Klutch.sh, ensure you have:

  • A Klutch.sh account
  • A GitHub account for repository hosting
  • Docker installed locally for testing (optional but recommended)
  • Basic understanding of Ansible playbooks, inventory, and modules
  • Familiarity with YAML syntax and Ansible templating (Jinja2)
  • A PostgreSQL database for storing event history and execution logs (can be deployed on Klutch.sh)
  • (Optional) Existing Ansible Automation Platform or AWX instance for enterprise features
  • (Optional) Event sources like Kafka, Prometheus Alertmanager, or webhook providers
  • (Optional) Ansible collections installed for your specific automation needs

Understanding Event-Driven Ansible Architecture

Event-Driven Ansible follows a modular architecture designed for scalability and flexibility:

Core Components

EDA Server (Controller): The central orchestrator that manages rulebook activations, event processing, and action execution. Built on Python with FastAPI for the REST API and WebSockets for real-time updates.

Rulebook Engine: The heart of EDA, parsing and evaluating rulebooks written in YAML. Supports complex conditional logic, pattern matching, and multi-event correlation with stateful processing.

Event Sources: Pluggable modules that connect to external systems and stream events into EDA. Each source runs in its own Python process and communicates via internal message queues.

Ansible Runner: Executes Ansible playbooks and modules in isolated containers or processes, providing secure execution with resource limits and timeout controls.

PostgreSQL Database: Stores rulebook definitions, activation states, event history, execution logs, audit trails, and user authentication data.

Web UI: React-based interface for managing rulebooks, monitoring active automations, viewing event streams, and analyzing execution results.
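Custom event sources follow a simple contract: ansible-rulebook loads the plugin module and calls an async main(queue, args) entry point, and every dictionary the plugin puts on the queue becomes an event for rule evaluation. As a minimal sketch of that contract (the url and interval arguments are hypothetical values invented for this example, not options of any shipped ansible.eda plugin):

"""Minimal custom event source sketch for ansible-rulebook.

The async main(queue, args) entry point is the documented plugin
contract; `url` and `interval` are illustrative arguments only.
"""
import asyncio
from typing import Any, Dict

import aiohttp  # already pinned in requirements.txt


async def main(queue: asyncio.Queue, args: Dict[str, Any]) -> None:
    url = args.get("url", "http://localhost:8080/status")
    interval = int(args.get("interval", 30))
    async with aiohttp.ClientSession() as session:
        while True:
            async with session.get(url) as resp:
                body = await resp.json()
            # Each dictionary placed on the queue becomes an `event`
            # that rule conditions are evaluated against.
            await queue.put({"payload": body, "status": resp.status})
            await asyncio.sleep(interval)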

Event Flow

  1. Event Source captures events from external systems (webhook received, Kafka message consumed, alert triggered)
  2. Event Parser normalizes the event into a standard format with metadata (see the illustrative event below)
  3. Rulebook Engine evaluates all active rules against the incoming event
  4. Condition Matching checks if the event satisfies any rule conditions using logical operators and filters
  5. Action Execution triggers the configured response (run playbook, execute module, make API call)
  6. Ansible Runner executes the automation with proper inventory and variables
  7. Result Storage logs execution details, outcomes, and any errors to PostgreSQL
  8. Notification (optional) sends success/failure notifications via configured channels
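To make the flow concrete, here is the approximate shape of a webhook event as rule conditions see it; the exact metadata keys vary by source plugin and version, so treat this sketch as illustrative:

# Illustrative shape of a webhook event as rule conditions see it
# (metadata keys vary by plugin and version).
event = {
    "payload": {                     # the JSON body that was POSTed
        "alert_name": "high_cpu_usage",
        "severity": "critical",
        "hostname": "web-01",
    },
    "meta": {                        # added by the source plugin
        "endpoint": "webhook",
        "headers": {"Content-Type": "application/json"},
    },
}

# A condition such as:
#   event.payload.alert_name == "high_cpu_usage" and event.payload.severity == "critical"
# therefore matches this event.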

Port and Protocol Requirements

  • HTTP/HTTPS: Port 8000 (internal) for web interface and REST API
  • WebSocket: Same port for real-time event streaming to the UI
  • Database: PostgreSQL connection on port 5432 (or 8000 via Klutch.sh)

Preparing Your Repository

To deploy Event-Driven Ansible on Klutch.sh, create a GitHub repository with the necessary configuration files.

Step 1: Create Repository Structure

Create a new directory for your EDA deployment:

Terminal window
mkdir eda-klutch
cd eda-klutch
git init

Create the following directory structure:

eda-klutch/
├── Dockerfile
├── .dockerignore
├── .env.example
├── docker-compose.yml # For local testing only
├── requirements.txt
├── ansible.cfg
├── inventory/
│ └── hosts
├── rulebooks/
│ ├── main.yml
│ ├── webhook-automation.yml
│ ├── monitoring-alerts.yml
│ └── infrastructure-events.yml
├── playbooks/
│ ├── scale-up.yml
│ ├── remediate-alert.yml
│ └── notify-team.yml
├── collections/
│ └── requirements.yml
└── README.md

Step 2: Create the Dockerfile

Create a production-ready Dockerfile:

# Base image with Python 3.11
FROM python:3.11-slim-bullseye

# Set environment variables
ENV PYTHONUNBUFFERED=1 \
    DEBIAN_FRONTEND=noninteractive \
    EDA_HOME=/opt/eda \
    ANSIBLE_HOME=/opt/ansible

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    git \
    gcc \
    g++ \
    make \
    libpq-dev \
    libffi-dev \
    libssl-dev \
    openssh-client \
    curl \
    ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Create application directories
RUN mkdir -p ${EDA_HOME} ${ANSIBLE_HOME} \
    && mkdir -p /opt/ansible/collections \
    && mkdir -p /opt/ansible/rulebooks \
    && mkdir -p /opt/ansible/playbooks

# Set working directory
WORKDIR ${EDA_HOME}

# Copy requirements file
COPY requirements.txt .

# Install Python dependencies (requirements.txt already pins
# ansible-rulebook, ansible-runner, and ansible-core, so no separate
# install step is needed)
RUN pip install --no-cache-dir --upgrade pip setuptools wheel && \
    pip install --no-cache-dir -r requirements.txt

# Copy application files
COPY ansible.cfg /etc/ansible/ansible.cfg
COPY inventory/ /opt/ansible/inventory/
COPY rulebooks/ /opt/ansible/rulebooks/
COPY playbooks/ /opt/ansible/playbooks/
COPY collections/requirements.yml /opt/ansible/collections/

# Install Ansible collections
RUN ansible-galaxy collection install -r /opt/ansible/collections/requirements.yml -p /opt/ansible/collections

# Create non-root user
RUN useradd -r -u 1000 -m -d /home/eda -s /bin/bash eda && \
    chown -R eda:eda ${EDA_HOME} ${ANSIBLE_HOME}

# Switch to non-root user
USER eda

# Expose EDA server and webhook ports
EXPOSE 8000 5000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
    CMD curl -f http://localhost:8000/api/health || exit 1

# Start EDA server
CMD ["ansible-rulebook", "--rulebook", "/opt/ansible/rulebooks/main.yml", "--inventory", "/opt/ansible/inventory/hosts", "--verbose"]

Dockerfile Explanation:

  • Python 3.11: Official slim Python image for efficient container size
  • System Dependencies: Compilers and libraries needed for Python packages with C extensions
  • ansible-rulebook: Core EDA engine that processes rulebooks and events
  • ansible-runner: Executes Ansible playbooks in isolated environments
  • Non-root User: Security best practice for production deployments
  • Health Check: Monitors EDA server availability
  • Ports 8000 and 5000: Default EDA server port for the web UI and API, plus the webhook listener port

Step 3: Create Requirements File

Create requirements.txt with Python dependencies:

# Event-Driven Ansible Core
ansible-rulebook==1.0.0
ansible-runner==2.3.4
ansible-core==2.15.5
# Database
psycopg2-binary==2.9.9
sqlalchemy==2.0.23
# Event Sources
kafka-python==2.0.2
paho-mqtt==1.6.1
azure-eventhub==5.11.4
boto3==1.34.10
# Web Framework
fastapi==0.104.1
uvicorn[standard]==0.24.0
websockets==12.0
# Utilities
pyyaml==6.0.1
jinja2==3.1.2
watchdog==3.0.0
requests==2.31.0
aiohttp==3.9.1
python-multipart==0.0.6
# Monitoring
prometheus-client==0.19.0

Step 4: Create Ansible Configuration

Create ansible.cfg:

[defaults]
inventory = /opt/ansible/inventory/hosts
collections_path = /opt/ansible/collections
roles_path = /opt/ansible/roles
host_key_checking = False
retry_files_enabled = False
stdout_callback = yaml
bin_ansible_callbacks = True
callbacks_enabled = profile_tasks, timer
[persistent_connection]
command_timeout = 60
connect_timeout = 30
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
pipelining = True

Step 5: Create Inventory File

Create inventory/hosts:

[local]
localhost ansible_connection=local
[managed_hosts]
# Add your managed infrastructure here
# server1.example.com ansible_host=192.168.1.10
# server2.example.com ansible_host=192.168.1.11
[managed_hosts:vars]
ansible_user=ansible
ansible_ssh_private_key_file=/opt/ansible/.ssh/id_rsa

Step 6: Create Example Rulebook

Create rulebooks/webhook-automation.yml:

---
- name: Webhook-Triggered Infrastructure Automation
  hosts: localhost
  sources:
    - name: Listen for webhooks
      ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Scale up on high load alert
      condition: event.payload.alert_name == "high_cpu_usage" and event.payload.severity == "critical"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/scale-up.yml
          extra_vars:
            target_host: "{{ event.payload.hostname }}"
            alert_severity: "{{ event.payload.severity }}"
    - name: Remediate failed service
      condition: event.payload.event_type == "service_failure"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/remediate-service.yml
          extra_vars:
            service_name: "{{ event.payload.service }}"
            failed_host: "{{ event.payload.hostname }}"
    - name: Notify team on security event
      condition: event.payload.category == "security" and event.payload.priority == "high"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/notify-team.yml
          extra_vars:
            event_details: "{{ event.payload }}"
            notification_channel: "security-alerts"

Step 7: Create Example Playbook

Create playbooks/scale-up.yml:

---
- name: Scale Up Infrastructure
  hosts: "{{ target_host | default('localhost') }}"
  gather_facts: yes
  tasks:
    - name: Display alert information
      debug:
        msg: "Received {{ alert_severity }} alert for {{ inventory_hostname }}"
    - name: Check current CPU usage
      shell: top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1
      register: cpu_usage
    - name: Log CPU metrics
      debug:
        msg: "Current CPU usage: {{ cpu_usage.stdout }}%"
    - name: Trigger autoscaling (example with cloud API)
      uri:
        url: "{{ cloud_api_url }}/autoscale"
        method: POST
        body_format: json
        body:
          instance_id: "{{ instance_id }}"
          scale_action: "increase"
          reason: "high_cpu_usage"
        headers:
          Authorization: "Bearer {{ cloud_api_token }}"
      when: cpu_usage.stdout | float > 80
    - name: Wait for scaling operation
      pause:
        seconds: 30
    - name: Verify scaling completed
      debug:
        msg: "Scaling operation triggered successfully"

Step 8: Create Collections Requirements

Create collections/requirements.yml:

---
collections:
  - name: ansible.eda
    version: ">=1.0.0"
  - name: community.general
    version: ">=7.0.0"
  - name: ansible.posix
    version: ">=1.5.0"
  - name: community.docker
    version: ">=3.0.0"
  - name: amazon.aws
    version: ">=6.0.0"
  - name: azure.azcollection
    version: ">=1.15.0"

Step 9: Create Environment Configuration

Create .env.example:

# Event-Driven Ansible Environment Configuration
# Application Settings
EDA_SERVER_HOST=0.0.0.0
EDA_SERVER_PORT=8000
EDA_LOG_LEVEL=INFO
EDA_SECRET_KEY=your-secret-key-here-change-in-production
# Database Configuration (PostgreSQL)
POSTGRES_HOST=your-postgres-app.klutch.sh
POSTGRES_PORT=8000
POSTGRES_DB=eda_db
POSTGRES_USER=eda_user
POSTGRES_PASSWORD=your-secure-password
# Ansible Configuration
ANSIBLE_VAULT_PASSWORD=your-vault-password
ANSIBLE_COLLECTIONS_PATH=/opt/ansible/collections
ANSIBLE_ROLES_PATH=/opt/ansible/roles
# Event Sources
# Kafka Configuration
KAFKA_BOOTSTRAP_SERVERS=kafka-broker.example.com:9092
KAFKA_TOPIC=automation-events
KAFKA_GROUP_ID=eda-consumer-group
# Webhook Configuration
WEBHOOK_SECRET=your-webhook-secret
WEBHOOK_PORT=5000
# AWS Configuration (if using AWS event sources)
AWS_ACCESS_KEY_ID=your-aws-access-key
AWS_SECRET_ACCESS_KEY=your-aws-secret-key
AWS_REGION=us-east-1
# Azure Configuration (if using Azure event sources)
AZURE_CLIENT_ID=your-azure-client-id
AZURE_CLIENT_SECRET=your-azure-client-secret
AZURE_TENANT_ID=your-azure-tenant-id
# API Authentication
API_TOKEN=your-api-token-for-external-access
# SMTP (for email notifications from playbooks)
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USER=your-email@gmail.com
SMTP_PASSWORD=your-email-password
SMTP_FROM=noreply@your-domain.com
# Monitoring
ENABLE_METRICS=true
METRICS_PORT=9090

Step 10: Create Docker Ignore File

Create .dockerignore:

.git
.gitignore
.env
.env.local
*.md
docker-compose.yml
.vscode
.idea
__pycache__
*.pyc
*.pyo
*.pyd
.pytest_cache
.coverage
htmlcov
dist
build
*.egg-info
node_modules

Step 11: Create Docker Compose for Local Testing

Create docker-compose.yml:

version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    container_name: eda-postgres
    environment:
      POSTGRES_DB: eda_db
      POSTGRES_USER: eda_user
      POSTGRES_PASSWORD: eda_password
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U eda_user -d eda_db"]
      interval: 10s
      timeout: 5s
      retries: 5

  eda:
    build: .
    container_name: eda-server
    ports:
      - "8000:8000"
      - "5000:5000" # Webhook port
    environment:
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      POSTGRES_DB: eda_db
      POSTGRES_USER: eda_user
      POSTGRES_PASSWORD: eda_password
      EDA_SERVER_HOST: 0.0.0.0
      EDA_SERVER_PORT: 8000
      EDA_LOG_LEVEL: DEBUG
    volumes:
      - ./rulebooks:/opt/ansible/rulebooks
      - ./playbooks:/opt/ansible/playbooks
      - ./inventory:/opt/ansible/inventory
    depends_on:
      postgres:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/api/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

volumes:
  postgres_data:

Step 12: Create README

Create README.md:

# Event-Driven Ansible on Klutch.sh

Automated, intelligent infrastructure responses powered by Event-Driven Ansible.

## Features

- Real-time event processing from multiple sources
- Declarative rulebooks with sophisticated condition matching
- Seamless Ansible playbook execution
- PostgreSQL backend for persistence
- Web UI for monitoring and management
- REST API for programmatic control

## Local Development

Test locally with Docker Compose:

```bash
docker-compose up -d
```

Access EDA at: http://localhost:8000

## Production Deployment on Klutch.sh

### Required Environment Variables

**Database Configuration:**

- `POSTGRES_HOST`: PostgreSQL host (e.g., your-postgres-app.klutch.sh)
- `POSTGRES_PORT`: PostgreSQL port (8000 for Klutch.sh deployments)
- `POSTGRES_DB`: Database name (e.g., eda_db)
- `POSTGRES_USER`: Database user
- `POSTGRES_PASSWORD`: Database password

**Application Settings:**

- `EDA_SERVER_HOST`: 0.0.0.0
- `EDA_SERVER_PORT`: 8000
- `EDA_SECRET_KEY`: Secret key for session management

### Persistent Volumes

Attach persistent volumes for:

- Mount Path: `/opt/ansible/rulebooks` (rulebook storage)
- Mount Path: `/opt/ansible/playbooks` (playbook storage)
- Recommended Size: 5-20GB depending on usage

### Traffic Configuration

- Traffic Type: HTTP
- Internal Port: 8000

## Rulebook Development

Rulebooks are YAML files that define event patterns and automated responses:

```yaml
---
- name: My Automation Rulebook
  hosts: localhost
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Handle specific event
      condition: event.payload.type == "alert"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/respond.yml
```

## Testing Rulebooks

Send test webhook:

```bash
curl -X POST http://example-app.klutch.sh:5000/webhook \
  -H "Content-Type: application/json" \
  -d '{
    "type": "alert",
    "severity": "high",
    "message": "Test event"
  }'
```

## License

Apache License 2.0

Step 13: Initialize Git and Push to GitHub

Terminal window
# Add all files
git add .
# Commit
git commit -m "Initial Event-Driven Ansible deployment configuration"
# Add GitHub remote (replace with your repository URL)
git remote add origin https://github.com/yourusername/eda-klutch.git
# Push to GitHub
git branch -M main
git push -u origin main

Deploying to Klutch.sh

Now that your repository is prepared, follow these steps to deploy Event-Driven Ansible on Klutch.sh.

Prerequisites: Deploy PostgreSQL Database

Event-Driven Ansible requires PostgreSQL for storing event history and execution logs. Deploy it first:

  1. **Follow PostgreSQL Guide**

    Deploy PostgreSQL using our PostgreSQL deployment guide.

  2. **Create EDA Database**

    Connect to your PostgreSQL instance and create the database:

    CREATE DATABASE eda_db;
    CREATE USER eda_user WITH PASSWORD 'your-secure-password';
    GRANT ALL PRIVILEGES ON DATABASE eda_db TO eda_user;
  3. **Note Connection Details**

    You’ll need these for EDA configuration (a quick connectivity check follows the list):

    • Host: your-postgres-app.klutch.sh
    • Port: 8000
    • Database: eda_db
    • User: eda_user
    • Password: Your chosen password
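
    Before plugging these values into EDA, it can be worth a quick connectivity check. A minimal sketch using psycopg2 (already pinned in requirements.txt); the host and password are placeholders to replace with your own values:

    import psycopg2

    # Placeholders: substitute your actual Klutch.sh connection details
    conn = psycopg2.connect(
        host="your-postgres-app.klutch.sh",
        port=8000,
        dbname="eda_db",
        user="eda_user",
        password="your-secure-password",
        connect_timeout=10,
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version();")
        print(cur.fetchone()[0])  # e.g. "PostgreSQL 15.x ..."
    conn.close()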

Deployment Steps

  1. **Navigate to Klutch.sh Dashboard**

    Visit klutch.sh/app and log in to your account.

  2. **Create a New Project**

    Click “New Project” and give it a name like “Event-Driven Automation” to organize your deployment.

  3. **Create a New App**

    Click “New App” or “Create App” and select GitHub as your source.

  4. **Connect Your Repository**
    • Authenticate with GitHub if not already connected
    • Select your EDA repository from the list
    • Choose the main branch for deployment
  5. **Configure Application Settings**
    • App Name: Choose a unique name (e.g., eda-automation)
    • Traffic Type: Select HTTP for web interface and API access
    • Internal Port: Set to 8000 (EDA’s default port)
  6. **Set Environment Variables**

    Configure these environment variables in the Klutch.sh dashboard:

    Required - Database Configuration:

    POSTGRES_HOST=your-postgres-app.klutch.sh
    POSTGRES_PORT=8000
    POSTGRES_DB=eda_db
    POSTGRES_USER=eda_user
    POSTGRES_PASSWORD=your-secure-password

    Required - Application Settings:

    EDA_SERVER_HOST=0.0.0.0
    EDA_SERVER_PORT=8000
    EDA_LOG_LEVEL=INFO
    EDA_SECRET_KEY=generate-a-secure-random-key-here

    Optional - Event Source Configuration:

    KAFKA_BOOTSTRAP_SERVERS=your-kafka-broker:9092
    KAFKA_TOPIC=automation-events
    WEBHOOK_SECRET=your-webhook-secret

    Optional - Cloud Provider Credentials (if using cloud event sources):

    AWS_ACCESS_KEY_ID=your-aws-key
    AWS_SECRET_ACCESS_KEY=your-aws-secret
    AWS_REGION=us-east-1
  7. **Attach Persistent Volumes**

    Critical for storing rulebooks and execution history:

    Volume 1 - Rulebooks:

    • Click “Add Volume” in the Volumes section
    • Mount Path: /opt/ansible/rulebooks
    • Size: 5GB minimum, 10-20GB recommended for production

    Volume 2 - Playbooks:

    • Click “Add Volume” again
    • Mount Path: /opt/ansible/playbooks
    • Size: 5GB minimum, 10-20GB recommended

    These volumes store:

    • Rulebook definitions
    • Ansible playbooks and roles
    • Execution logs and history
    • Downloaded Ansible collections
  8. **Deploy Application**

    Click “Create” or “Deploy” to start the deployment. Klutch.sh will:

    • Automatically detect your Dockerfile
    • Build the Docker image with Python, Ansible, and EDA
    • Install all required Python packages and Ansible collections
    • Attach the persistent volumes
    • Start your EDA server container
    • Assign a URL for external access

    The first deployment takes 5-8 minutes as all dependencies and Ansible collections are installed.

  9. **Verify Deployment**

    Once deployed, your EDA instance will be available at:

    https://example-app.klutch.sh

    Test the deployment:

    Health Check:

    Terminal window
    curl https://example-app.klutch.sh/api/health

    Expected response:

    {
      "status": "healthy",
      "version": "1.0.0",
      "timestamp": "2024-12-17T10:00:00.000Z"
    }

    Database Connection:

    The health check endpoint also verifies database connectivity. If you see "status": "healthy", your PostgreSQL connection is working.

  10. **Activate Your First Rulebook**

    Create a simple webhook rulebook to test event processing:

    ---
    - name: Test Webhook Rulebook
      hosts: localhost
      sources:
        - ansible.eda.webhook:
            host: 0.0.0.0
            port: 5000
      rules:
        - name: Echo webhook events
          condition: event.payload is defined
          action:
            debug:
              msg: "Received event: {{ event.payload }}"

    Send a test webhook:

    Terminal window
    curl -X POST https://example-app.klutch.sh:5000/webhook \
    -H "Content-Type: application/json" \
    -d '{
    "test": "Hello from EDA",
    "timestamp": "2024-12-17T10:00:00Z"
    }'

    Check the EDA logs to verify the event was processed.

Getting Started with Event-Driven Ansible

After deploying EDA on Klutch.sh, follow these steps to create your first automation workflows.

Understanding Rulebooks

Rulebooks are the core of Event-Driven Ansible. They define:

  1. Event Sources: Where events come from (webhooks, Kafka, alerts, etc.)
  2. Conditions: Patterns that trigger automation (if event matches X, then…)
  3. Actions: What happens when conditions are met (run playbook, execute module, etc.)

Creating a Simple Webhook Automation

Create rulebooks/simple-webhook.yml:

---
- name: Simple Webhook Automation
  hosts: localhost
  sources:
    - name: Listen for webhooks
      ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Log all incoming webhooks
      condition: event.payload is defined
      action:
        run_playbook:
          name: /opt/ansible/playbooks/log-event.yml
          extra_vars:
            event_data: "{{ event }}"

Create the corresponding playbook playbooks/log-event.yml:

---
- name: Log Webhook Event
  hosts: localhost
  gather_facts: yes # needed for ansible_date_time below
  tasks:
    - name: Display event information
      debug:
        msg: "Received webhook event: {{ event_data }}"
    - name: Write to log file
      lineinfile:
        path: /var/log/eda-events.log
        line: "{{ ansible_date_time.iso8601 }} - {{ event_data | to_json }}"
        create: yes

Testing Your Rulebook

Send a test webhook to your EDA instance:

Terminal window
curl -X POST https://example-app.klutch.sh:5000/webhook \
-H "Content-Type: application/json" \
-d '{
"source": "monitoring",
"alert_name": "High CPU Usage",
"severity": "warning",
"hostname": "web-server-01",
"value": 85.5
}'

Advanced Rulebook Example: Monitoring Integration

Create rulebooks/monitoring-alerts.yml:

---
- name: Monitoring Alert Automation
  hosts: localhost
  sources:
    - name: Prometheus Alertmanager
      ansible.eda.alertmanager:
        host: 0.0.0.0
        port: 5001
  rules:
    - name: Critical CPU Alert - Scale Up
      condition: >
        event.alert.labels.alertname == "HighCPU" and
        event.alert.labels.severity == "critical" and
        event.alert.status == "firing"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/scale-infrastructure.yml
          extra_vars:
            target_instance: "{{ event.alert.labels.instance }}"
            alert_value: "{{ event.alert.annotations.value }}"
    - name: Memory Alert - Restart Services
      condition: >
        event.alert.labels.alertname == "HighMemory" and
        event.alert.labels.severity == "warning"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/restart-services.yml
          extra_vars:
            affected_host: "{{ event.alert.labels.instance }}"
    - name: Service Down - Auto Remediation
      condition: >
        event.alert.labels.alertname == "ServiceDown" and
        event.alert.status == "firing"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/remediate-service.yml
          extra_vars:
            service_name: "{{ event.alert.labels.job }}"
            down_since: "{{ event.alert.startsAt }}"
    - name: Alert Resolved - Notify Team
      condition: event.alert.status == "resolved"
      action:
        run_module:
          name: community.general.slack
          module_args:
            token: "{{ slack_token }}"
            channel: "#alerts"
            msg: "Alert {{ event.alert.labels.alertname }} has been resolved for {{ event.alert.labels.instance }}"

Kafka Event Source Example

For high-throughput event processing, use Kafka:

Create rulebooks/kafka-events.yml:

---
- name: Kafka Event Stream Processing
  hosts: localhost
  sources:
    - name: Application Events Stream
      ansible.eda.kafka:
        host: kafka-broker.example.com
        port: 9092
        topic: application-events
        group_id: eda-consumer-group
  rules:
    - name: Deployment Event - Update Configuration
      condition: >
        event.body.event_type == "deployment" and
        event.body.status == "completed"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/post-deployment.yml
          extra_vars:
            app_name: "{{ event.body.application }}"
            version: "{{ event.body.version }}"
            environment: "{{ event.body.environment }}"
    - name: Error Rate Spike - Enable Debug Mode
      condition: >
        event.body.metric == "error_rate" and
        event.body.value > 5.0
      action:
        run_playbook:
          name: /opt/ansible/playbooks/enable-debug.yml
          extra_vars:
            service: "{{ event.body.service }}"
            error_rate: "{{ event.body.value }}"

Conditional Logic and Filters

EDA supports sophisticated condition matching:

rules:
  # Simple equality
  - name: Exact match
    condition: event.payload.type == "alert"
  # Multiple conditions with AND
  - name: Multiple conditions
    condition: >
      event.payload.severity == "critical" and
      event.payload.environment == "production"
  # OR logic
  - name: Any of these conditions
    condition: >
      event.payload.type == "error" or
      event.payload.type == "critical"
  # Numeric comparisons
  - name: Threshold check
    condition: event.payload.cpu_usage > 80
  # String prefix match (regex anchored at the start of the string)
  - name: Starts with check
    condition: event.payload.hostname is match("web-")
  # In list check
  - name: Whitelist check
    condition: event.payload.region in ["us-east-1", "us-west-2"]
  # Regex matching
  - name: Pattern match
    condition: event.payload.message is regex("ERROR.*database")
  # Nested fields
  - name: Deep property access
    condition: event.payload.metadata.tags.env == "production"
  # Arithmetic comparison (conditions are not Jinja2 templates)
  - name: Count threshold
    condition: event.payload.count > 10

Action Types

EDA supports multiple action types for different automation needs:

Run Playbook

Execute a full Ansible playbook:

action:
  run_playbook:
    name: /opt/ansible/playbooks/remediate.yml
    extra_vars:
      target: "{{ event.payload.hostname }}"
      issue_type: "{{ event.payload.type }}"
    verbosity: 2

Run Module

Execute a single Ansible module:

action:
  run_module:
    name: ansible.builtin.service
    module_args:
      name: nginx
      state: restarted

Debug Output

Print information for troubleshooting:

action:
  debug:
    msg: "Event received: {{ event }}"

Set Fact

Store data for subsequent rule evaluations:

action:
  set_fact:
    # set_fact takes a `fact` dictionary; its keys become facts
    # visible to later rule evaluations
    fact:
      last_alert: "{{ event.payload }}"

Run Job Template

Execute Ansible Automation Platform job template (requires AAP integration):

action:
  run_job_template:
    name: "Deploy Application"
    organization: "My Org"
    job_args:
      extra_vars:
        version: "{{ event.payload.version }}"

Post Event

Send an event to another rulebook:

action:
  post_event:
    event:
      type: "processed_alert"
      original: "{{ event }}"
      timestamp: "{{ ansible_date_time.iso8601 }}"

API Integration Examples

Event-Driven Ansible exposes a REST API for programmatic management.

Python Client

import requests


class EDAClient:
    def __init__(self, base_url, api_token=None):
        self.base_url = base_url.rstrip('/')
        self.headers = {'Content-Type': 'application/json'}
        if api_token:
            self.headers['Authorization'] = f'Bearer {api_token}'

    def trigger_webhook(self, payload):
        """Send event to webhook endpoint"""
        url = f"{self.base_url}:5000/webhook"
        response = requests.post(url, json=payload, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def get_activations(self):
        """List active rulebooks"""
        url = f"{self.base_url}/api/activations"
        response = requests.get(url, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def create_activation(self, rulebook_name, extra_vars=None):
        """Activate a rulebook"""
        url = f"{self.base_url}/api/activations"
        data = {
            'rulebook': rulebook_name,
            'extra_vars': extra_vars or {}
        }
        response = requests.post(url, json=data, headers=self.headers)
        response.raise_for_status()
        return response.json()

    def get_events(self, activation_id=None, limit=100):
        """Retrieve event history"""
        url = f"{self.base_url}/api/events"
        params = {'limit': limit}
        if activation_id:
            params['activation_id'] = activation_id
        response = requests.get(url, params=params, headers=self.headers)
        response.raise_for_status()
        return response.json()


# Usage
client = EDAClient('https://example-app.klutch.sh', 'your-api-token')

# Trigger automation via webhook
event_data = {
    'source': 'monitoring',
    'alert_name': 'HighCPU',
    'severity': 'critical',
    'hostname': 'web-01',
    'value': 92.5
}
response = client.trigger_webhook(event_data)
print(f"Event triggered: {response}")

# List active rulebooks
activations = client.get_activations()
print(f"Active rulebooks: {len(activations)}")

# Retrieve event history
events = client.get_events(limit=50)
for event in events:
    print(f"Event: {event['type']} at {event['timestamp']}")

Node.js Client

const axios = require('axios');

class EDAClient {
  constructor(baseURL, apiToken = null) {
    this.baseURL = baseURL.replace(/\/+$/, '');
    this.client = axios.create({
      baseURL: this.baseURL,
      headers: apiToken ? { 'Authorization': `Bearer ${apiToken}` } : {}
    });
  }

  async triggerWebhook(payload) {
    // The webhook listener runs on its own port, so build the full URL
    // rather than relying on axios baseURL resolution.
    const response = await axios.post(`${this.baseURL}:5000/webhook`, payload);
    return response.data;
  }

  async getActivations() {
    const response = await this.client.get('/api/activations');
    return response.data;
  }

  async createActivation(rulebook, extraVars = {}) {
    const response = await this.client.post('/api/activations', {
      rulebook,
      extra_vars: extraVars
    });
    return response.data;
  }

  async stopActivation(activationId) {
    const response = await this.client.delete(`/api/activations/${activationId}`);
    return response.data;
  }

  async getEvents(activationId = null, limit = 100) {
    const params = { limit };
    if (activationId) params.activation_id = activationId;
    const response = await this.client.get('/api/events', { params });
    return response.data;
  }
}

// Usage
const eda = new EDAClient('https://example-app.klutch.sh', 'your-api-token');

(async () => {
  // Send event
  const event = await eda.triggerWebhook({
    type: 'deployment',
    app: 'web-api',
    version: '2.1.0',
    status: 'completed'
  });
  console.log('Event triggered:', event);

  // Get active rulebooks
  const activations = await eda.getActivations();
  console.log(`Active rulebooks: ${activations.length}`);

  // View recent events
  const events = await eda.getEvents(null, 20);
  events.forEach(e => {
    console.log(`${e.timestamp}: ${e.type}`);
  });
})();

Bash/curl Examples

#!/bin/bash
EDA_URL="https://example-app.klutch.sh"
API_TOKEN="your-api-token"
# Send webhook event
curl -X POST "${EDA_URL}:5000/webhook" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${API_TOKEN}" \
-d '{
"alert_name": "HighDiskUsage",
"severity": "warning",
"hostname": "db-server-01",
"disk_usage_percent": 87
}'
# List active rulebooks
curl -X GET "${EDA_URL}/api/activations" \
-H "Authorization: Bearer ${API_TOKEN}" | jq
# Get event history
curl -X GET "${EDA_URL}/api/events?limit=50" \
-H "Authorization: Bearer ${API_TOKEN}" | jq
# Create new activation
curl -X POST "${EDA_URL}/api/activations" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${API_TOKEN}" \
-d '{
"rulebook": "/opt/ansible/rulebooks/production.yml",
"extra_vars": {
"environment": "production",
"notify_slack": true
}
}'

Advanced Configuration

Multi-Rulebook Deployment

Run multiple rulebooks simultaneously for different automation domains:

Create rulebooks/main.yml:

---
# Infrastructure Monitoring
- import_rulebook: /opt/ansible/rulebooks/monitoring-alerts.yml
- import_rulebook: /opt/ansible/rulebooks/performance-tuning.yml
# Application Events
- import_rulebook: /opt/ansible/rulebooks/deployment-events.yml
- import_rulebook: /opt/ansible/rulebooks/error-tracking.yml
# Security Automation
- import_rulebook: /opt/ansible/rulebooks/security-incidents.yml
- import_rulebook: /opt/ansible/rulebooks/compliance-checks.yml

Environment-Specific Configuration

Use environment variables to customize behavior:

---
- name: Environment-Aware Automation
  hosts: localhost
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  vars:
    environment: "{{ lookup('env', 'DEPLOYMENT_ENV') | default('development') }}"
    notification_channel: "{{ lookup('env', 'SLACK_CHANNEL') | default('#alerts') }}"
  rules:
    - name: Production-only automation
      condition: >
        event.payload.type == "critical" and
        environment == "production"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/production-response.yml

Stateful Event Processing

Track state across multiple events:

---
- name: Stateful Alert Processing
  hosts: localhost
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Track alert frequency
      condition: event.payload.alert_name is defined
      action:
        set_fact:
          cacheable: true
          fact_name: "alert_count_{{ event.payload.alert_name }}"
          fact_value: "{{ (facts['alert_count_' + event.payload.alert_name] | default(0) | int) + 1 }}"
    - name: Escalate on repeated alerts
      condition: >
        event.payload.alert_name is defined and
        facts['alert_count_' + event.payload.alert_name] | default(0) | int > 3
      action:
        run_playbook:
          name: /opt/ansible/playbooks/escalate-alert.yml
          extra_vars:
            alert_name: "{{ event.payload.alert_name }}"
            occurrence_count: "{{ facts['alert_count_' + event.payload.alert_name] }}"

Event Throttling and Deduplication

Prevent automation storms with throttling:

---
- name: Throttled Automation
  hosts: localhost
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Rate-limited automation
      condition: event.payload.type == "scaling_event"
      throttle:
        once_within: 5 minutes # only execute once per window
        group_by_attributes:
          - event.payload.hostname
      action:
        run_playbook:
          name: /opt/ansible/playbooks/scale-infrastructure.yml

Production Best Practices

Security Hardening

Webhook Authentication:

Add authentication to webhook endpoints:

sources:
  - ansible.eda.webhook:
      host: 0.0.0.0
      port: 5000
      hmac_secret: "{{ lookup('env', 'WEBHOOK_SECRET') }}"
      hmac_algo: sha256
      hmac_header: X-Hub-Signature-256
      hmac_signature_prefix: "sha256="
      hmac_format: hex
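
Clients must then sign the raw request body with the shared secret so the source can verify it. A minimal sketch of the sending side in Python, matching the sha256/hex configuration above (the URL and secret are placeholders):

import hashlib
import hmac
import json

import requests

secret = b"your-webhook-secret"  # must match WEBHOOK_SECRET
body = json.dumps({"alert_name": "high_cpu_usage", "severity": "critical"}).encode()

# Hex digest of the raw body, prefixed to match hmac_signature_prefix
digest = hmac.new(secret, body, hashlib.sha256).hexdigest()

response = requests.post(
    "https://example-app.klutch.sh:5000/webhook",
    data=body,
    headers={
        "Content-Type": "application/json",
        "X-Hub-Signature-256": f"sha256={digest}",
    },
    timeout=10,
)
print(response.status_code)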

API Token Authentication:

Secure the EDA API with token-based auth:

# Environment variable
API_TOKEN=generate-secure-random-token-here

Ansible Vault for Secrets:

Store sensitive data in Ansible Vault:

Terminal window
ansible-vault encrypt_string 'my-secret-password' --name 'db_password'

Use in playbooks:

vars:
  db_password: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    ...encrypted content...

Performance Optimization

Concurrent Rulebook Processing:

Run multiple rulebooks in parallel:

# Use multiple worker processes
workers: 4

Event Batching:

Process events in batches for high-throughput scenarios:

sources:
  - ansible.eda.kafka:
      host: kafka-broker
      port: 9092
      topic: events
      batch_size: 100
      batch_timeout: 5

Async Playbook Execution:

Run playbooks asynchronously to avoid blocking:

action:
  run_playbook:
    name: /opt/ansible/playbooks/long-running.yml
    async: 3600
    poll: 0

Monitoring and Observability

Enable Prometheus Metrics:

ENABLE_METRICS=true
METRICS_PORT=9090

Access metrics at https://example-app.klutch.sh:9090/metrics
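
The built-in metrics cover event and action throughput. If you add your own instrumentation (for example in a custom source plugin or a wrapper process), the prometheus-client package from requirements.txt can expose additional series on the same port; a hedged sketch, with metric names invented for this example:

"""Expose custom automation metrics; metric names are illustrative."""
import time

from prometheus_client import Counter, Histogram, start_http_server

EVENTS_RECEIVED = Counter(
    "eda_events_received_total", "Events received", ["source"]
)
ACTION_DURATION = Histogram(
    "eda_action_duration_seconds", "Time spent executing actions"
)

start_http_server(9090)  # serve /metrics on METRICS_PORT

# Example instrumentation inside an event-processing loop:
EVENTS_RECEIVED.labels(source="webhook").inc()
with ACTION_DURATION.time():
    time.sleep(0.1)  # stand-in for running a playbook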

Structured Logging:

Configure detailed logging:

EDA_LOG_LEVEL=INFO
EDA_LOG_FORMAT=json

Event Audit Trail:

All events are automatically logged to PostgreSQL. Query event history:

SELECT
event_type,
source,
timestamp,
payload
FROM eda_events
WHERE timestamp > NOW() - INTERVAL '24 hours'
ORDER BY timestamp DESC
LIMIT 100;

Backup and Disaster Recovery

Database Backups:

Regularly backup PostgreSQL:

Terminal window
pg_dump -h your-postgres-app.klutch.sh -p 8000 -U eda_user eda_db > eda_backup.sql

Rulebook Version Control:

Keep rulebooks in Git with proper versioning:

Terminal window
git tag -a v1.0.0 -m "Production release"
git push origin v1.0.0

Configuration Backup:

Backup persistent volumes containing rulebooks and playbooks:

Terminal window
tar -czf eda-config-backup.tar.gz /opt/ansible/rulebooks /opt/ansible/playbooks

Scaling Strategies

Horizontal Scaling:

Deploy multiple EDA instances with load balancing:

  • Each instance processes events from a shared Kafka topic (see the consumer sketch below)
  • Use different consumer group IDs for redundancy
  • Share PostgreSQL database for centralized state
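
The load sharing comes from Kafka itself: consumers that join with the same group_id split the topic's partitions between them, while an instance using a distinct group_id receives a full copy of the stream. A minimal kafka-python sketch (the broker address is a placeholder):

import json

from kafka import KafkaConsumer  # kafka-python, pinned in requirements.txt

consumer = KafkaConsumer(
    "automation-events",
    bootstrap_servers=["kafka-broker.example.com:9092"],  # placeholder
    group_id="eda-consumer-group",  # same id across instances = load sharing
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="latest",
)

# Each partition is assigned to exactly one member of the group, so two
# EDA instances running this loop divide the event stream between them.
for message in consumer:
    print(message.partition, message.value)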

Vertical Scaling:

Increase resources for high-throughput scenarios:

  • More CPU cores for concurrent event processing
  • More memory for large rulebook sets and in-memory state
  • Faster storage for playbook execution

Event Source Sharding:

Distribute event sources across multiple EDA instances:

  • Instance 1: Monitoring alerts (Prometheus, Alertmanager)
  • Instance 2: Application events (Kafka streams)
  • Instance 3: Cloud provider events (AWS EventBridge, Azure Event Grid)

Troubleshooting

Issue: EDA Server Won’t Start

Symptoms: Container starts but application doesn’t respond

Possible Causes and Solutions:

  1. Database Connection Failed:

Check PostgreSQL connectivity:

Terminal window
# Test from Klutch.sh app logs
# Look for: "Database connection error" or "ECONNREFUSED"

Solution: Verify environment variables:

POSTGRES_HOST=your-postgres-app.klutch.sh
POSTGRES_PORT=8000
POSTGRES_DB=eda_db
POSTGRES_USER=eda_user
POSTGRES_PASSWORD=correct-password
  2. Ansible Collections Missing:

Ensure collections are installed during build. Check Dockerfile:

RUN ansible-galaxy collection install -r /opt/ansible/collections/requirements.yml
  3. Invalid Rulebook Syntax:

Validate rulebook YAML:

Terminal window
ansible-rulebook --check --rulebook /opt/ansible/rulebooks/main.yml

Issue: Events Not Triggering Actions

Symptoms: Webhooks received but playbooks don’t execute

Solutions:

  1. Check Condition Logic:

Add debug output to verify condition matching:

- name: Debug event
  condition: true # Matches all events
  action:
    debug:
      msg: "Event received: {{ event }}"
  2. Verify Event Structure:

Log the full event payload:

Terminal window
curl -X POST https://example-app.klutch.sh:5000/webhook \
-H "Content-Type: application/json" \
-d '{"test": true}' -v
  3. Check Playbook Path:

Ensure playbook exists and is accessible:

Terminal window
# In container
ls -la /opt/ansible/playbooks/

Issue: Playbook Execution Fails

Symptoms: Condition matches but playbook returns errors

Solutions:

  1. Test Playbook Manually:
Terminal window
ansible-playbook /opt/ansible/playbooks/test.yml -i /opt/ansible/inventory/hosts -vvv
  2. Check Inventory and Connectivity:
Terminal window
ansible -i /opt/ansible/inventory/hosts all -m ping
  3. Verify Extra Variables:

Ensure variables passed from rulebook are correct:

action:
  run_playbook:
    name: /opt/ansible/playbooks/test.yml
    extra_vars:
      target: "{{ event.payload.hostname }}"
    verbosity: 3 # Maximum verbosity for debugging

Issue: High Memory Usage

Symptoms: Container consuming excessive memory

Solutions:

  1. Limit Event History:

Configure retention policy:

DELETE FROM eda_events WHERE timestamp < NOW() - INTERVAL '7 days';
  2. Reduce Concurrent Rulebooks:

Limit active rulebooks to essential automation only.

  3. Optimize Playbooks:

Avoid gathering facts if not needed:

gather_facts: no
  4. Scale Container Resources:

Increase memory allocation in Klutch.sh dashboard.

Issue: Webhook Port Not Accessible

Symptoms: Cannot send webhooks to port 5000

Solutions:

  1. Verify Port Exposure:

Check Dockerfile:

EXPOSE 8000 5000
  2. Check Firewall Rules:

Ensure both ports are open in Klutch.sh configuration.

  3. Test Internal Connectivity:
Terminal window
# From inside container
curl http://localhost:5000/webhook -d '{"test":true}'

Issue: Database Connection Pool Exhausted

Symptoms: “too many clients” error from PostgreSQL

Solutions:

  1. Increase Connection Pool:

Configure PostgreSQL max connections:

ALTER SYSTEM SET max_connections = 100;
SELECT pg_reload_conf();
  2. Optimize Rulebook Queries:

Reduce database operations in frequently executed rulebooks.

  3. Connection Cleanup:

Ensure connections are properly closed after use, as in the pooling sketch below.
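
A bounded, shared connection pool addresses both the sizing and cleanup points. A sketch with SQLAlchemy (already in requirements.txt); the pool numbers are illustrative:

from sqlalchemy import create_engine, text

engine = create_engine(
    "postgresql+psycopg2://eda_user:password@your-postgres-app.klutch.sh:8000/eda_db",
    pool_size=5,         # steady-state connections per EDA instance
    max_overflow=5,      # short bursts above pool_size
    pool_pre_ping=True,  # replace dead connections instead of erroring
    pool_recycle=1800,   # recycle before server-side idle timeouts
)

# Context managers return connections to the pool automatically.
with engine.connect() as conn:
    count = conn.execute(text("SELECT count(*) FROM eda_events")).scalar_one()
    print(f"events stored: {count}")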

Integration Examples

Prometheus Alertmanager Integration

Configure Alertmanager to send alerts to EDA:

alertmanager.yml:

route:
  receiver: 'eda-webhook'
  group_by: ['alertname', 'cluster']
  group_wait: 10s
  group_interval: 5m
  repeat_interval: 4h

receivers:
  - name: 'eda-webhook'
    webhook_configs:
      - url: 'https://example-app.klutch.sh:5001/webhook' # matches the alertmanager source port below
        send_resolved: true

EDA rulebook for Prometheus:

---
- name: Prometheus Alert Automation
  hosts: localhost
  sources:
    - ansible.eda.alertmanager:
        host: 0.0.0.0
        port: 5001
  rules:
    - name: Handle firing alerts
      condition: event.alert.status == "firing"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/alert-response.yml
          extra_vars:
            alert: "{{ event.alert }}"

Kafka Integration

Stream application events from Kafka:

---
- name: Kafka Event Processing
  hosts: localhost
  sources:
    - ansible.eda.kafka:
        host: kafka-broker.example.com
        port: 9092
        topic: application-events
        group_id: eda-consumer
        offset: latest
        check_interval: 5
  rules:
    - name: Process deployment events
      condition: event.body.type == "deployment"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/post-deploy.yml

AWS EventBridge Integration

React to AWS infrastructure events:

---
- name: AWS Cloud Events
  hosts: localhost
  sources:
    - ansible.eda.aws_sqs_queue:
        region: us-east-1
        queue_url: https://sqs.us-east-1.amazonaws.com/123456789/eda-events
        delay_seconds: 5
  rules:
    - name: EC2 state change
      condition: event.detail.state == "shutting-down"
      action:
        run_playbook:
          name: /opt/ansible/playbooks/ec2-shutdown.yml

GitLab CI/CD Integration

Trigger automation from GitLab pipelines:

.gitlab-ci.yml:

deploy:
  stage: deploy
  script:
    - |
      curl -X POST https://example-app.klutch.sh:5000/webhook \
        -H "Content-Type: application/json" \
        -d "{
          \"event_type\": \"deployment\",
          \"project\": \"$CI_PROJECT_NAME\",
          \"branch\": \"$CI_COMMIT_BRANCH\",
          \"commit\": \"$CI_COMMIT_SHA\",
          \"status\": \"success\"
        }"

Additional Resources

  • PostgreSQL - Deploy PostgreSQL for EDA backend storage
  • Apache Kafka - Set up Kafka event streams for EDA
  • Flask - Build custom webhook receivers

You now have Event-Driven Ansible running on Klutch.sh! Your intelligent automation platform is ready to respond to infrastructure events in real time, execute sophisticated workflows, and create self-healing systems that automatically remediate issues across your entire technology stack. Configure rulebooks, integrate with your monitoring systems, and let EDA transform your operations from reactive to proactive.