Deploying Feedpushr

Introduction

Feedpushr is a powerful, open-source RSS feed aggregator and processor built with Go. It collects RSS and Atom feeds, transforms the content through customizable filters, and forwards entries to various output destinations including webhooks, messaging platforms, email services, and more. Think of it as a bridge between the world of RSS feeds and modern notification systems.

Unlike simple RSS readers, Feedpushr acts as a middleware layer that can aggregate feeds from hundreds of sources, deduplicate entries, apply transformations, and route content intelligently. It’s perfect for content aggregation services, automated content distribution, monitoring systems, or any workflow that needs to react to RSS feed updates in real-time.

Built on a plugin architecture, Feedpushr supports multiple output formats (HTTP webhooks, SMTP, Amazon SQS, Kafka, etc.) and includes built-in filters for content manipulation, making it highly extensible for custom workflows.

Key Features

  • Multi-Feed Aggregation - Monitor unlimited RSS and Atom feeds simultaneously
  • Plugin Architecture - Extensible output system supporting webhooks, email, messaging platforms, and queues
  • Content Filtering - Built-in filters for title manipulation, content extraction, and custom transformations
  • Deduplication - Intelligent entry tracking prevents duplicate notifications
  • Web UI - Built-in dashboard for feed management and monitoring
  • REST API - Programmatic feed and configuration management
  • PostgreSQL Support - Persistent storage for feed state and entries
  • Cache System - Redis integration for high-performance caching
  • Metrics & Monitoring - Prometheus metrics endpoint for observability
  • OPML Support - Import/export feed collections
  • Rate Limiting - Control feed polling frequency per source
  • Content Transformation - Modify feed entries before forwarding
  • Multi-Output - Send the same feed to multiple destinations
  • Authentication - Built-in basic authentication for web UI and API
  • Docker-Ready - Official Docker images with minimal configuration

Use Cases

Content Aggregation Platforms: Build custom news aggregators or content discovery platforms by collecting feeds from multiple sources and presenting them through your own interface.

Automated Marketing: Monitor industry blogs, competitor content, or news sources and automatically distribute relevant content to your marketing channels via webhooks.

Development Team Notifications: Aggregate feeds from GitHub releases, Stack Overflow tags, or technical blogs and push updates to Slack, Discord, or Microsoft Teams.

Media Monitoring: Track news mentions, blog posts, or social media RSS feeds and forward matched entries to your monitoring dashboard or alerting system.

Content Distribution Networks: Collect content from various sources, transform it, and redistribute it to multiple platforms or subscribers automatically.

Research & Analysis: Aggregate academic feeds, preprint servers, or journal publications and forward them to data analysis pipelines for processing.

Why Deploy Feedpushr on Klutch.sh?

  • One-Click PostgreSQL Integration - Connect to managed PostgreSQL instances for persistent feed state without managing database infrastructure
  • Redis Caching - Integrate with Redis for high-performance caching of feed entries and deduplication
  • HTTP Traffic Routing - Expose the web UI and REST API over HTTPS with automatic SSL certificates
  • Environment Variables - Securely configure database connections, output plugins, and authentication credentials
  • Persistent Volumes - Optional local storage for file-based outputs or configuration backups
  • Automatic Deployments - Git-based deployments ensure your Feedpushr configuration updates automatically
  • Resource Scaling - Adjust CPU and memory allocation as your feed monitoring grows
  • Built-in Monitoring - Prometheus metrics endpoint works seamlessly with monitoring tools
  • Webhook-Friendly - Reliable outbound HTTP connections for webhook integrations
  • Zero Downtime Updates - Deploy configuration changes without interrupting feed monitoring

Prerequisites

Understanding Feedpushr Architecture

Before deploying, it’s helpful to understand how Feedpushr works:

  1. Feed Polling: Feedpushr periodically polls configured RSS/Atom feeds based on defined intervals
  2. Entry Processing: New entries are extracted, deduplicated, and passed through configured filters
  3. Output Distribution: Processed entries are forwarded to configured output plugins (webhooks, email, etc.); a minimal receiver sketch follows this list
  4. State Management: Feed states and processed entries are stored in PostgreSQL or in-memory (not recommended for production)
  5. Web Interface: A built-in UI on port 8080 provides feed management and monitoring
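
To make step 3 concrete, the sketch below shows a minimal endpoint that could receive entries from the HTTP output plugin. It assumes only that entries arrive as JSON POST requests; the actual field names depend on your output and template configuration, so treat "title" and "link" here as placeholders.

# Minimal receiver sketch for the HTTP output plugin (assumes JSON POST bodies;
# the "title"/"link" field names are illustrative and depend on your configuration).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class FeedpushrWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            entry = json.loads(body)
            print("Received entry:", entry.get("title"), entry.get("link"))
        except json.JSONDecodeError:
            print("Received non-JSON payload:", body[:200])
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9000), FeedpushrWebhook).serve_forever()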

Preparing Your Repository

Step 1: Create Project Structure

Create a new directory for your Feedpushr deployment:

mkdir feedpushr-deployment
cd feedpushr-deployment

Step 2: Create the Dockerfile

Feedpushr provides official Docker images, but we’ll create a custom Dockerfile for better control over configuration:

# Use the official Feedpushr image
FROM ncarlier/feedpushr:latest

# Set the working directory
WORKDIR /data

# Expose the web UI and API port
EXPOSE 8080

# Health check to ensure the service is responding
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:8080/healthz || exit 1

# Default command (can be overridden with environment variables)
CMD ["feedpushr"]

Step 3: Create Configuration Directory Structure

While Feedpushr can be configured entirely via environment variables, you may want to create a basic directory structure:

mkdir -p config plugins

Step 4: Create Example Configuration File (Optional)

Create config/filters.json for content transformation examples:

{
  "filters": [
    {
      "name": "Title Prefix Filter",
      "condition": "Title.Contains(\"Breaking\")",
      "actions": [
        {
          "type": "prefix",
          "value": "🚨 BREAKING: "
        }
      ]
    },
    {
      "name": "Content Truncation",
      "condition": "Content.Length > 500",
      "actions": [
        {
          "type": "truncate",
          "field": "content",
          "length": 500
        }
      ]
    }
  ]
}

Step 5: Create Sample Output Configuration

Create config/outputs.json to define where feed entries should be sent:

{
  "outputs": [
    {
      "alias": "webhook-endpoint",
      "plugin": "http",
      "enabled": true,
      "config": {
        "url": "https://your-webhook-endpoint.com/feeds",
        "method": "POST",
        "headers": {
          "Content-Type": "application/json",
          "Authorization": "Bearer YOUR_API_KEY"
        }
      }
    },
    {
      "alias": "slack-channel",
      "plugin": "http",
      "enabled": true,
      "config": {
        "url": "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK",
        "method": "POST",
        "format": "slack"
      }
    }
  ]
}
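
Since a malformed JSON file is an easy mistake to ship, a quick local check like the following (purely optional, not part of Feedpushr) can confirm that the config files parse before you commit them:

# Optional local check (not part of Feedpushr): ensure the config JSON files parse.
import json
import pathlib

for path in sorted(pathlib.Path("config").glob("*.json")):
    with open(path) as f:
        json.load(f)  # raises json.JSONDecodeError if the file is malformed
    print(f"{path}: OK")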

Step 6: Create Environment Variables Template

Create .env.example to document required environment variables:

# Database Configuration (PostgreSQL recommended for production)
DB_DRIVER=postgres
DB_DSN=postgres://username:password@postgres-host:5432/feedpushr?sslmode=require
# Redis Cache (optional but recommended)
CACHE_DRIVER=redis
CACHE_DSN=redis://:password@redis-host:6379/0
# Authentication (highly recommended)
AUTHN=true
AUTHN_USERNAME=admin
AUTHN_PASSWORD=your-secure-password
# Server Configuration
LISTEN_ADDR=:8080
PUBLIC_URL=https://example-app.klutch.sh
# Feed Polling Configuration
DEFAULT_TIMEOUT=5m
MAX_FEED_AGGREGATORS=10
# Output Plugin Configuration
OUTPUT_PLUGINS=http,stdout
# Logging
LOG_LEVEL=info
LOG_FORMAT=json
# Metrics
METRICS_ENABLED=true
# OPML Import (optional)
# OPML_FILE=/data/feeds.opml
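
If you mirror these variables locally before deploying, a small pre-flight check such as this hypothetical script can catch missing values early (the list of required names is an assumption based on the template above):

# Hypothetical pre-flight check: fail fast if a required variable is unset or empty.
import os
import sys

REQUIRED = ["DB_DRIVER", "DB_DSN", "AUTHN_USERNAME", "AUTHN_PASSWORD", "PUBLIC_URL"]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    sys.exit(f"Missing environment variables: {', '.join(missing)}")
print("All required environment variables are set.")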

Step 7: Create Sample OPML File (Optional)

Create feeds.opml to import an initial set of feeds:

<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head>
    <title>Feedpushr Initial Feeds</title>
  </head>
  <body>
    <outline text="Technology" title="Technology">
      <outline type="rss" text="TechCrunch" title="TechCrunch" xmlUrl="https://techcrunch.com/feed/" />
      <outline type="rss" text="Hacker News" title="Hacker News" xmlUrl="https://news.ycombinator.com/rss" />
      <outline type="rss" text="The Verge" title="The Verge" xmlUrl="https://www.theverge.com/rss/index.xml" />
    </outline>
    <outline text="Development" title="Development">
      <outline type="rss" text="GitHub Blog" title="GitHub Blog" xmlUrl="https://github.blog/feed/" />
      <outline type="rss" text="Dev.to" title="Dev.to" xmlUrl="https://dev.to/feed" />
    </outline>
  </body>
</opml>

Step 8: Create README

Create README.md with deployment notes:

# Feedpushr Deployment

RSS feed aggregator and processor deployed on Klutch.sh.

## Features

- Multi-feed aggregation
- Webhook output support
- PostgreSQL persistence
- Redis caching
- Web UI for management

## Configuration

Set these environment variables in Klutch.sh:

- `DB_DSN`: PostgreSQL connection string
- `CACHE_DSN`: Redis connection string (optional)
- `AUTHN_USERNAME`: Admin username
- `AUTHN_PASSWORD`: Admin password
- `OUTPUT_PLUGINS`: Comma-separated list of enabled outputs

## Accessing the UI

Visit https://your-app.klutch.sh and log in with your admin credentials.

## API Documentation

API docs available at: https://your-app.klutch.sh/swagger

Step 9: Create .gitignore

.env
*.log
data/
cache/
config/local.json
*.pem
*.key

Step 10: Initialize Git Repository

git init
git add .
git commit -m "Initial Feedpushr setup"

Push to your GitHub repository:

git remote add origin https://github.com/yourusername/feedpushr-deployment.git
git branch -M main
git push -u origin main

Deploying on Klutch.sh

  1. Log in to Klutch.sh
    Navigate to klutch.sh/app and sign in to your account.
  2. Create a New App
    Click New App and select Deploy from GitHub. Choose your Feedpushr repository.
  3. Configure Build Settings
    Klutch.sh will automatically detect your Dockerfile. No additional build configuration is needed.
  4. Set Traffic Type
    Select HTTP traffic since Feedpushr serves a web UI and API. The internal port should be set to 8080.
  5. Configure Database Connection (Recommended)
    If using PostgreSQL for persistence:
    • Deploy a PostgreSQL database on Klutch.sh first
    • Add environment variable DB_DRIVER with value postgres
    • Add DB_DSN with your PostgreSQL connection string:
      postgres://username:password@your-postgres.klutch.sh:8000/feedpushr?sslmode=disable
    • Mark DB_DSN as sensitive to hide the value
  6. Configure Redis Cache (Optional)
    For better performance with high feed volumes:
    • Deploy a Redis instance on Klutch.sh
    • Add environment variable CACHE_DRIVER with value redis
    • Add CACHE_DSN with your Redis connection string:
      redis://:password@your-redis.klutch.sh:8000/0
    • Mark CACHE_DSN as sensitive
  7. Configure Authentication
    Add the following environment variables to secure your Feedpushr instance:
    • AUTHN = true
    • AUTHN_USERNAME = Your desired admin username
    • AUTHN_PASSWORD = A strong password (mark as sensitive)
  8. Configure Server Settings
    Add these essential environment variables:
    • LISTEN_ADDR = :8080
    • PUBLIC_URL = https://your-app.klutch.sh (replace with your actual app URL)
    • LOG_LEVEL = info
    • LOG_FORMAT = json
  9. Configure Output Plugins
    Enable the output plugins you need:
    • OUTPUT_PLUGINS = http,stdout (or your preferred plugins)
  10. Attach Persistent Volume (Optional)
    If you want to store OPML files or local data:
    • In the Volumes section, click Add Volume
    • Set mount path: /data
    • Set size: 1GB (adjust based on needs)
  11. Deploy the App
    Click Deploy. Klutch.sh will build your Docker image and start your Feedpushr instance.
  12. Verify Deployment
    Once deployed, visit https://your-app.klutch.sh to access the Feedpushr web UI and log in with the credentials you configured. A quick scripted smoke test is sketched just after this list.
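
The following is a convenience sketch of such a smoke test using the /healthz endpoint and the REST API covered later in this guide; replace the URL and credentials with your own:

# Post-deploy smoke test sketch: checks /healthz and an authenticated API call.
import requests

BASE_URL = "https://your-app.klutch.sh"   # replace with your app URL
AUTH = ("admin", "your-secure-password")  # the AUTHN_* credentials you configured

health = requests.get(f"{BASE_URL}/healthz", timeout=10)
print("Health check:", health.status_code)  # expect 200

feeds = requests.get(f"{BASE_URL}/v1/feeds", auth=AUTH, timeout=10)
print("Feeds API:", feeds.status_code)      # expect 200 and a JSON list of feeds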

Configuration and Management

Database Options

Feedpushr supports multiple database backends:

PostgreSQL (Recommended for Production):

DB_DRIVER=postgres
DB_DSN=postgres://user:pass@host:port/dbname?sslmode=require

In-Memory (Not Recommended for Production):

DB_DRIVER=memory
# No DSN needed - data is lost on restart

BoltDB (Single-File Database):

DB_DRIVER=bolt
DB_DSN=/data/feedpushr.db
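
Credentials containing special characters (:, @, /) must be percent-encoded inside the DSN. A small illustrative helper for building the PostgreSQL DSN shown above:

# Illustrative helper for building the PostgreSQL DSN; percent-encoding the
# credentials avoids broken DSNs when the password contains ':', '@' or '/'.
from urllib.parse import quote

def build_postgres_dsn(user, password, host, port, dbname, sslmode="require"):
    return (
        f"postgres://{quote(user, safe='')}:{quote(password, safe='')}"
        f"@{host}:{port}/{dbname}?sslmode={sslmode}"
    )

print(build_postgres_dsn("feedpushr", "p@ss:word!", "your-postgres.klutch.sh", 8000, "feedpushr"))
# postgres://feedpushr:p%40ss%3Aword%21@your-postgres.klutch.sh:8000/feedpushr?sslmode=require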

Cache Configuration

Redis (Recommended):

CACHE_DRIVER=redis
CACHE_DSN=redis://:password@host:port/db

In-Memory Cache:

CACHE_DRIVER=memory
# No DSN needed

Output Plugin Configuration

Feedpushr supports multiple output plugins. Configure them via environment variables or the web UI:

HTTP Webhook Output:

{
  "alias": "my-webhook",
  "plugin": "http",
  "config": {
    "url": "https://api.example.com/webhook",
    "method": "POST",
    "headers": {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_TOKEN"
    }
  }
}

Available Output Plugins:

  • http - Send to HTTP webhooks
  • stdout - Log to console (debugging)
  • kafka - Apache Kafka integration
  • amazonsqs - AWS SQS queue
  • smtp - Email notifications

Enable plugins with the OUTPUT_PLUGINS environment variable:

OUTPUT_PLUGINS=http,stdout,kafka

Filter Configuration

Filters transform feed entries before they reach outputs. Common filters include:

Title Transformation:

{
  "name": "Add Emoji Prefix",
  "condition": "Category.Contains(\"Breaking\")",
  "actions": [
    {
      "type": "prefix",
      "field": "title",
      "value": "🔥 "
    }
  ]
}

Content Truncation:

{
  "name": "Truncate Long Content",
  "condition": "Content.Length > 1000",
  "actions": [
    {
      "type": "truncate",
      "field": "content",
      "length": 500,
      "suffix": "... [Read more]"
    }
  ]
}

URL Extraction:

{
  "name": "Extract First Image",
  "actions": [
    {
      "type": "extract",
      "field": "image",
      "pattern": "<img src=\"([^\"]+)\""
    }
  ]
}
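
Filters like the one above are easier to get right if you test the regular expression separately first. A quick, standalone check of the image-extraction pattern (the sample HTML is made up):

# Standalone check (not Feedpushr code) of the image-extraction pattern used above.
import re

pattern = r'<img src="([^"]+)"'
html = '<p>Intro text</p><img src="https://example.com/cover.jpg" alt="cover">'

match = re.search(pattern, html)
if match:
    print(match.group(1))  # https://example.com/cover.jpg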

Accessing the Web Interface

Once deployed, access the Feedpushr web UI at https://your-app.klutch.sh.

Dashboard Features

Feed Management:

  • Add new RSS/Atom feeds
  • Configure polling intervals
  • Set feed-specific tags and categories
  • Enable/disable individual feeds
  • View feed status and last update times

Output Configuration:

  • Create and manage output destinations
  • Test outputs with sample data
  • View output delivery statistics
  • Configure retry policies

Filter Management:

  • Create custom content filters
  • Test filters against sample entries
  • Chain multiple filters together
  • Enable/disable filters per feed

Monitoring:

  • View feed polling statistics
  • Check entry processing rates
  • Monitor output delivery success rates
  • View error logs and failed deliveries

Using the REST API

Feedpushr provides a comprehensive REST API for programmatic management:

Add a Feed:

curl -X POST https://example-app.klutch.sh/v1/feeds \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "title": "TechCrunch",
    "xmlUrl": "https://techcrunch.com/feed/",
    "tags": ["technology", "news"]
  }'

List All Feeds:

curl -u admin:password https://example-app.klutch.sh/v1/feeds

Get Feed Entries:

curl -u admin:password https://example-app.klutch.sh/v1/feeds/1/entries

Delete a Feed:

curl -X DELETE -u admin:password https://example-app.klutch.sh/v1/feeds/1

Import OPML:

curl -X POST https://example-app.klutch.sh/v1/opml \
  -u admin:password \
  -F "file=@feeds.opml"

Export OPML:

curl -u admin:password https://example-app.klutch.sh/v1/opml > feeds.opml

API Authentication

All API requests require HTTP Basic Authentication. Use the username and password configured in your AUTHN_USERNAME and AUTHN_PASSWORD environment variables.

For programmatic access, include credentials in the request:

JavaScript/Node.js:

const axios = require('axios');

const feedpushrAPI = axios.create({
  baseURL: 'https://example-app.klutch.sh',
  auth: {
    username: 'admin',
    password: 'your-password'
  }
});

// Add a feed
const addFeed = async () => {
  const response = await feedpushrAPI.post('/v1/feeds', {
    title: 'GitHub Blog',
    xmlUrl: 'https://github.blog/feed/',
    tags: ['development', 'github']
  });
  console.log('Feed added:', response.data);
};

// List all feeds
const listFeeds = async () => {
  const response = await feedpushrAPI.get('/v1/feeds');
  console.log('Feeds:', response.data);
};

Python:

import requests
from requests.auth import HTTPBasicAuth

BASE_URL = 'https://example-app.klutch.sh'
auth = HTTPBasicAuth('admin', 'your-password')

# Add a feed
response = requests.post(
    f'{BASE_URL}/v1/feeds',
    auth=auth,
    json={
        'title': 'Dev.to',
        'xmlUrl': 'https://dev.to/feed',
        'tags': ['development', 'community']
    }
)
print('Feed added:', response.json())

# List all feeds
response = requests.get(f'{BASE_URL}/v1/feeds', auth=auth)
print('Feeds:', response.json())

Go:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type Feed struct {
	Title  string   `json:"title"`
	XMLUrl string   `json:"xmlUrl"`
	Tags   []string `json:"tags"`
}

func main() {
	client := &http.Client{}
	baseURL := "https://example-app.klutch.sh"

	// Add a feed
	feed := Feed{
		Title:  "Hacker News",
		XMLUrl: "https://news.ycombinator.com/rss",
		Tags:   []string{"tech", "news"},
	}
	jsonData, _ := json.Marshal(feed)

	req, _ := http.NewRequest("POST", baseURL+"/v1/feeds", bytes.NewBuffer(jsonData))
	req.SetBasicAuth("admin", "your-password")
	req.Header.Set("Content-Type", "application/json")

	resp, _ := client.Do(req)
	defer resp.Body.Close()
	fmt.Println("Feed added:", resp.Status)
}

cURL with Auth Token:

# Create base64 encoded credentials
AUTH=$(echo -n "admin:your-password" | base64)

# Make authenticated request
curl -H "Authorization: Basic $AUTH" \
  https://example-app.klutch.sh/v1/feeds

Advanced Configuration

Webhook Output Configuration

Configure webhook outputs to send feed entries to external services:

Slack Integration:

# Add via API
curl -X POST https://example-app.klutch.sh/v1/outputs \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "alias": "slack-notifications",
    "plugin": "http",
    "config": {
      "url": "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK",
      "method": "POST",
      "format": "slack"
    }
  }'

Discord Integration:

curl -X POST https://example-app.klutch.sh/v1/outputs \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "alias": "discord-feed",
    "plugin": "http",
    "config": {
      "url": "https://discord.com/api/webhooks/YOUR/WEBHOOK",
      "method": "POST",
      "format": "discord"
    }
  }'

Custom Webhook with Transformation:

curl -X POST https://example-app.klutch.sh/v1/outputs \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "alias": "custom-api",
    "plugin": "http",
    "config": {
      "url": "https://api.example.com/feeds/ingest",
      "method": "POST",
      "headers": {
        "Authorization": "Bearer YOUR_API_TOKEN",
        "Content-Type": "application/json"
      },
      "template": "{\"title\": \"{{.Title}}\", \"url\": \"{{.Link}}\", \"content\": \"{{.Description}}\", \"published\": \"{{.PublishedParsed}}\"}"
    }
  }'

Feed Polling Configuration

Control how frequently feeds are polled:

# Default timeout for feed fetching
DEFAULT_TIMEOUT=5m
# Maximum number of concurrent feed aggregators
MAX_FEED_AGGREGATORS=10
# Default polling interval (per feed, can be overridden)
DEFAULT_POLLING_INTERVAL=5m

Individual feeds can have custom polling intervals set via the UI or API:

curl -X PUT https://example-app.klutch.sh/v1/feeds/1 \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "hubUrl": "https://example.com/feed.xml",
    "polling": {
      "interval": "10m"
    }
  }'

Monitoring and Metrics

Feedpushr exposes Prometheus metrics at /metrics:

# Enable metrics
METRICS_ENABLED=true
# Access metrics endpoint
curl https://example-app.klutch.sh/metrics

Key Metrics:

  • feedpushr_feed_total - Total number of configured feeds
  • feedpushr_feed_status - Feed status (0=disabled, 1=enabled)
  • feedpushr_entries_total - Total entries processed
  • feedpushr_output_sent_total - Total outputs sent
  • feedpushr_output_errors_total - Total output errors
  • feedpushr_http_requests_total - Total HTTP requests
  • feedpushr_http_request_duration_seconds - Request duration histogram

Health Check Endpoint:

curl https://example-app.klutch.sh/healthz

Returns HTTP 200 if the service is healthy.
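
For an ad-hoc look at the counters listed above without a full Prometheus setup, a small script like this works; the metric names follow the list above and may vary between Feedpushr versions:

# Quick look at selected Feedpushr counters (illustrative; names may differ by version).
import requests

METRICS_URL = "https://example-app.klutch.sh/metrics"
WATCHED = ("feedpushr_output_sent_total", "feedpushr_output_errors_total")

# Add auth=("admin", "your-password") below if your instance protects /metrics.
text = requests.get(METRICS_URL, timeout=10).text
for line in text.splitlines():
    # Prometheus text format: "<name>{labels} <value>"; comment lines start with '#'.
    if line.startswith(WATCHED):
        print(line)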

Custom Filter Examples

Filter by Keywords:

{
  "name": "Kubernetes Only",
  "condition": "Title.Contains(\"Kubernetes\") || Description.Contains(\"k8s\")",
  "actions": [
    {
      "type": "keep"
    }
  ]
}

Add Custom Fields:

{
  "name": "Add Source Tag",
  "actions": [
    {
      "type": "set",
      "field": "custom.source",
      "value": "TechCrunch"
    }
  ]
}

HTML to Markdown:

{
  "name": "Convert HTML to Markdown",
  "actions": [
    {
      "type": "html2text",
      "field": "description"
    }
  ]
}

Extract First Paragraph:

{
  "name": "Extract First Paragraph",
  "actions": [
    {
      "type": "extract",
      "field": "summary",
      "source": "description",
      "pattern": "<p>(.*?)</p>",
      "index": 0
    }
  ]
}

Production Best Practices

Security

  • Enable Authentication - Always set AUTHN=true and use strong passwords
  • Use HTTPS - Klutch.sh provides automatic SSL certificates
  • Secure Database Connections - Use SSL/TLS for PostgreSQL connections when possible
  • Rotate Credentials - Periodically update database passwords and API tokens
  • Restrict Webhook Access - Use API tokens or IP whitelisting for webhook endpoints
  • Review Feed Sources - Only add feeds from trusted sources to prevent XSS or malicious content

Performance Optimization

  • Use Redis Caching - Significantly improves performance with many feeds
  • Tune Polling Intervals - Set longer intervals for infrequently updated feeds
  • Limit Concurrent Aggregators - Adjust MAX_FEED_AGGREGATORS based on resources
  • Database Connection Pooling - PostgreSQL automatically handles connection pooling
  • Monitor Memory Usage - Increase container memory if processing many large feeds
  • Output Batching - Some plugins support batching to reduce API calls

Reliability

  • Use PostgreSQL - Persistent database prevents data loss on restarts
  • Attach Persistent Volumes - Store OPML backups and configuration files
  • Configure Retry Logic - Most output plugins support automatic retries
  • Monitor Failed Outputs - Regularly check the UI for failed deliveries
  • Set Up Alerts - Use Prometheus metrics to alert on errors or downtime
  • Regular OPML Backups - Export feed configurations periodically

Monitoring

  • Enable Prometheus Metrics - Set METRICS_ENABLED=true
  • Monitor Feed Status - Check for feeds failing to fetch
  • Track Output Errors - Alert on high output error rates
  • Review Logs Regularly - Use LOG_FORMAT=json for structured logging
  • Set Up Health Checks - Use /healthz endpoint for monitoring tools

Backup and Recovery

Export OPML Regularly:

# Export all feeds to OPML
curl -u admin:password https://example-app.klutch.sh/v1/opml > backup-$(date +%Y%m%d).opml
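
If you prefer a script you can schedule (cron, CI, etc.), this minimal sketch does the same export with basic error handling; the URL and credentials are placeholders:

# Minimal OPML backup script (illustrative), suitable for a scheduled job.
import datetime
import sys

import requests

BASE_URL = "https://example-app.klutch.sh"
AUTH = ("admin", "your-password")

resp = requests.get(f"{BASE_URL}/v1/opml", auth=AUTH, timeout=30)
if resp.status_code != 200:
    sys.exit(f"OPML export failed: HTTP {resp.status_code}")

filename = f"backup-{datetime.date.today():%Y%m%d}.opml"
with open(filename, "wb") as f:
    f.write(resp.content)
print(f"Wrote {filename} ({len(resp.content)} bytes)")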

Backup Database: If using PostgreSQL, use standard PostgreSQL backup tools:

pg_dump -h your-postgres.klutch.sh -p 8000 -U username feedpushr > backup.sql

Restore from OPML:

curl -X POST https://example-app.klutch.sh/v1/opml \
  -u admin:password \
  -F "file=@backup-20250120.opml"

Troubleshooting

Feeds Not Updating

Issue: Feeds are not polling or updating entries.

Solutions:

  • Check the feed URL is accessible: curl -I https://feed-url.com/feed.xml
  • Verify polling interval is not too long
  • Check Feedpushr logs for fetch errors: View logs in Klutch.sh dashboard
  • Ensure MAX_FEED_AGGREGATORS is not set too low
  • Verify the feed is enabled in the UI
  • Check if the feed URL has changed or requires authentication

Outputs Not Sending

Issue: Feed entries are being processed but not reaching output destinations.

Solutions:

  • Test the webhook URL manually with curl
  • Check output configuration in the UI
  • Verify authentication tokens/API keys are correct
  • Review output error logs in the dashboard
  • Ensure the output plugin is enabled in OUTPUT_PLUGINS
  • Check webhook endpoint rate limits
  • Test outputs with the "Test Output" button in the UI

Database Connection Errors

Issue: Feedpushr cannot connect to PostgreSQL or Redis.

Solutions:

  • Verify DB_DSN connection string format is correct
  • Check PostgreSQL is deployed and accessible on port 8000
  • Ensure database credentials are correct
  • Test connection manually: psql "postgres://user:pass@host:8000/dbname"
  • Check if SSL mode needs to be adjusted (?sslmode=disable or ?sslmode=require)
  • Verify Redis connection string for cache issues

Authentication Not Working

Issue: Cannot log in to the web UI.

Solutions:

  • Verify AUTHN is set to true
  • Check AUTHN_USERNAME and AUTHN_PASSWORD are set
  • Clear browser cookies and cache
  • Try accessing via incognito/private browsing
  • Check Feedpushr logs for authentication errors

High Memory Usage

Issue: Feedpushr container is using excessive memory.

Solutions:

  • Enable Redis caching to offload memory pressure
  • Reduce MAX_FEED_AGGREGATORS to limit concurrent processing
  • Increase polling intervals for less frequently updated feeds
  • Review filters for memory-intensive operations
  • Consider splitting feeds across multiple Feedpushr instances
  • Increase container memory allocation in Klutch.sh

Duplicate Entries

Issue: Same feed entries are being sent multiple times.

Solutions:

  • Ensure PostgreSQL persistence is enabled (not using in-memory database)
  • Verify Redis cache is connected properly
  • Check if feed GUID/IDs are unique and consistent
  • Review feed source for changing GUIDs on republish
  • Don't restart the service too frequently (state needs time to persist)

API Rate Limiting

Issue: Receiving 429 errors from webhook destinations.

Solutions:

  • Implement batching in your output configuration
  • Increase polling intervals to reduce output frequency
  • Use filters to reduce the number of forwarded entries
  • Configure rate limiting in the output plugin settings
  • Split high-volume feeds across multiple outputs

Example Configurations

Content Aggregation Service

Monitor multiple tech news sources and send to a custom API:

Environment Variables:

DB_DRIVER=postgres
DB_DSN=postgres://user:pass@postgres.klutch.sh:8000/feedpushr
CACHE_DRIVER=redis
CACHE_DSN=redis://:pass@redis.klutch.sh:8000/0
OUTPUT_PLUGINS=http
DEFAULT_POLLING_INTERVAL=10m

Feeds (via OPML or UI): for example, the TechCrunch, Hacker News, and The Verge feeds from the sample OPML above.

Output Configuration:

{
  "alias": "content-api",
  "plugin": "http",
  "config": {
    "url": "https://api.mysite.com/content/ingest",
    "method": "POST",
    "headers": {
      "Authorization": "Bearer YOUR_TOKEN",
      "Content-Type": "application/json"
    },
    "template": "{\"title\": \"{{.Title}}\", \"url\": \"{{.Link}}\", \"content\": \"{{.Description}}\", \"published\": \"{{.PublishedParsed}}\", \"source\": \"{{.Meta.FeedTitle}}\"}"
  }
}

Development Team Notifications

Monitor GitHub releases, Stack Overflow, and tech blogs, forward to Slack:

Environment Variables:

DB_DRIVER=postgres
DB_DSN=postgres://user:pass@postgres.klutch.sh:8000/devteam
CACHE_DRIVER=redis
CACHE_DSN=redis://:pass@redis.klutch.sh:8000/1
OUTPUT_PLUGINS=http
DEFAULT_POLLING_INTERVAL=15m

Feeds: GitHub release feeds, Stack Overflow tag feeds, and the technical blogs your team follows.

Output Configuration:

{
  "alias": "slack-devteam",
  "plugin": "http",
  "config": {
    "url": "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK",
    "method": "POST",
    "format": "slack"
  }
}

Filter Configuration:

{
  "name": "Add Emoji to Releases",
  "condition": "Title.Contains(\"Release\") || Title.Contains(\"v1.\")",
  "actions": [
    {
      "type": "prefix",
      "field": "title",
      "value": "🚀 "
    }
  ]
}

Media Monitoring Service

Track news mentions and keyword-based content, send to multiple destinations:

Environment Variables:

DB_DRIVER=postgres
DB_DSN=postgres://user:pass@postgres.klutch.sh:8000/media_monitor
CACHE_DRIVER=redis
CACHE_DSN=redis://:pass@redis.klutch.sh:8000/2
OUTPUT_PLUGINS=http,smtp
DEFAULT_POLLING_INTERVAL=5m

Feeds:

  • Google News (custom keyword search)
  • Reddit keyword search feeds
  • Industry-specific news sites

Multiple Outputs:

[
  {
    "alias": "urgent-alerts",
    "plugin": "http",
    "config": {
      "url": "https://api.pagerduty.com/incidents",
      "method": "POST",
      "headers": {
        "Authorization": "Token token=YOUR_TOKEN",
        "Content-Type": "application/json"
      }
    }
  },
  {
    "alias": "daily-digest",
    "plugin": "smtp",
    "config": {
      "host": "smtp.example.com",
      "port": 587,
      "username": "alerts@example.com",
      "password": "your-password",
      "from": "alerts@example.com",
      "to": "team@example.com"
    }
  }
]

Keyword Filter:

{
  "name": "High Priority Keywords",
  "condition": "Title.Contains(\"breaking\") || Title.Contains(\"urgent\") || Description.Contains(\"your-company-name\")",
  "actions": [
    {
      "type": "route",
      "output": "urgent-alerts"
    }
  ]
}

Migration and Scaling

Migrating from Another Feed Aggregator

From FreshRSS/TinyTinyRSS:

  1. Export OPML from your current feed reader
  2. Import OPML via Feedpushr UI or API
  3. Configure outputs to replace the reading workflow
  4. Test with a few feeds before full migration

From Zapier/IFTTT:

  1. List all RSS-based Zaps/Applets
  2. Add corresponding feeds to Feedpushr
  3. Configure HTTP outputs to match your Zapier actions
  4. Test outputs thoroughly before disabling Zapier workflows

Scaling for High Volume

For handling hundreds of feeds:

  • Use Redis - Essential for caching and deduplication at scale
  • Increase Resources - Scale container CPU and memory in Klutch.sh
  • Tune Aggregators - Increase MAX_FEED_AGGREGATORS to 20-50
  • Database Optimization - Use a dedicated PostgreSQL instance with sufficient resources
  • Output Batching - Configure outputs to batch entries when possible
  • Monitor Performance - Use Prometheus metrics to identify bottlenecks

For thousands of feeds, consider:

  • Running multiple Feedpushr instances with different feed sets
  • Using a load balancer for the API
  • Implementing feed prioritization based on update frequency

Additional Resources

  • Feedpushr source code and documentation: https://github.com/ncarlier/feedpushr
  • Official Docker image: ncarlier/feedpushr on Docker Hub
  • Klutch.sh dashboard: klutch.sh/app

You now have a production-ready Feedpushr deployment on Klutch.sh! Monitor feeds through the web UI, configure outputs to match your workflow, and let Feedpushr handle the heavy lifting of RSS aggregation and distribution. For questions or issues, check the troubleshooting section or reach out to the Klutch.sh community.