Deploying Feedpushr
Introduction
Feedpushr is a powerful, open-source RSS feed aggregator and processor built with Go. It collects RSS and Atom feeds, transforms the content through customizable filters, and forwards entries to various output destinations including webhooks, messaging platforms, email services, and more. Think of it as a bridge between the world of RSS feeds and modern notification systems.
Unlike simple RSS readers, Feedpushr acts as a middleware layer that can aggregate feeds from hundreds of sources, deduplicate entries, apply transformations, and route content intelligently. It’s perfect for content aggregation services, automated content distribution, monitoring systems, or any workflow that needs to react to RSS feed updates in real-time.
Built on a plugin architecture, Feedpushr supports multiple output formats (HTTP webhooks, SMTP, Amazon SQS, Kafka, etc.) and includes built-in filters for content manipulation, making it highly extensible for custom workflows.
Key Features
- Multi-Feed Aggregation - Monitor unlimited RSS and Atom feeds simultaneously
- Plugin Architecture - Extensible output system supporting webhooks, email, messaging platforms, and queues
- Content Filtering - Built-in filters for title manipulation, content extraction, and custom transformations
- Deduplication - Intelligent entry tracking prevents duplicate notifications
- Web UI - Built-in dashboard for feed management and monitoring
- REST API - Programmatic feed and configuration management
- PostgreSQL Support - Persistent storage for feed state and entries
- Cache System - Redis integration for high-performance caching
- Metrics & Monitoring - Prometheus metrics endpoint for observability
- OPML Support - Import/export feed collections
- Rate Limiting - Control feed polling frequency per source
- Content Transformation - Modify feed entries before forwarding
- Multi-Output - Send the same feed to multiple destinations
- Authentication - Built-in basic authentication for web UI and API
- Docker-Ready - Official Docker images with minimal configuration
Use Cases
Content Aggregation Platforms: Build custom news aggregators or content discovery platforms by collecting feeds from multiple sources and presenting them through your own interface.
Automated Marketing: Monitor industry blogs, competitor content, or news sources and automatically distribute relevant content to your marketing channels via webhooks.
Development Team Notifications: Aggregate feeds from GitHub releases, Stack Overflow tags, or technical blogs and push updates to Slack, Discord, or Microsoft Teams.
Media Monitoring: Track news mentions, blog posts, or social media RSS feeds and forward matched entries to your monitoring dashboard or alerting system.
Content Distribution Networks: Collect content from various sources, transform it, and redistribute it to multiple platforms or subscribers automatically.
Research & Analysis: Aggregate academic feeds, preprint servers, or journal publications and forward them to data analysis pipelines for processing.
Why Deploy Feedpushr on Klutch.sh?
- One-Click PostgreSQL Integration - Connect to managed PostgreSQL instances for persistent feed state without managing database infrastructure
- Redis Caching - Integrate with Redis for high-performance caching of feed entries and deduplication
- HTTP Traffic Routing - Expose the web UI and REST API over HTTPS with automatic SSL certificates
- Environment Variables - Securely configure database connections, output plugins, and authentication credentials
- Persistent Volumes - Optional local storage for file-based outputs or configuration backups
- Automatic Deployments - Git-based deployments ensure your Feedpushr configuration updates automatically
- Resource Scaling - Adjust CPU and memory allocation as your feed monitoring grows
- Built-in Monitoring - Prometheus metrics endpoint works seamlessly with monitoring tools
- Webhook-Friendly - Reliable outbound HTTP connections for webhook integrations
- Zero Downtime Updates - Deploy configuration changes without interrupting feed monitoring
Prerequisites
- A Klutch.sh account
- Git installed locally
- Basic understanding of RSS/Atom feeds
- Optional: PostgreSQL database for persistent storage (recommended for production)
- Optional: Redis instance for caching and performance optimization
Understanding Feedpushr Architecture
Before deploying, it’s helpful to understand how Feedpushr works:
- Feed Polling: Feedpushr periodically polls configured RSS/Atom feeds based on defined intervals
- Entry Processing: New entries are extracted, deduplicated, and passed through configured filters
- Output Distribution: Processed entries are forwarded to configured output plugins (webhooks, email, etc.)
- State Management: Feed states and processed entries are stored in PostgreSQL or in-memory (not recommended for production)
- Web Interface: A built-in UI on port 8080 provides feed management and monitoring
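The first four stages form a simple loop: poll, deduplicate, filter, forward, remember. The sketch below is an illustrative Python model of that loop only, not Feedpushr's actual code; the entry fields and the filter/output callables are hypothetical.

```python
import hashlib

def entry_id(entry):
    """Derive a stable ID from the entry GUID (or the link as a fallback)."""
    key = entry.get("guid") or entry["link"]
    return hashlib.sha1(key.encode()).hexdigest()

def process(entries, seen, filters, outputs):
    """Deduplicate new entries, apply filters in order, then fan out to outputs."""
    delivered = []
    for entry in entries:
        eid = entry_id(entry)
        if eid in seen:        # deduplication: skip already-processed entries
            continue
        seen.add(eid)          # state management: remember the entry
        for f in filters:      # entry processing: run each filter in order
            entry = f(entry)
        for out in outputs:    # output distribution: fan out to every plugin
            out(entry)
        delivered.append(entry)
    return delivered

# Example run with one filter and one collecting "output"
seen = set()
sent = []
prefix = lambda e: {**e, "title": "NEW: " + e["title"]}
entries = [
    {"guid": "a1", "title": "First post", "link": "https://example.com/1"},
    {"guid": "a1", "title": "First post", "link": "https://example.com/1"},  # duplicate
    {"guid": "b2", "title": "Second post", "link": "https://example.com/2"},
]
process(entries, seen, [prefix], [sent.append])
print([e["title"] for e in sent])  # → ['NEW: First post', 'NEW: Second post']
```

The key design point this models is why persistent state matters: if `seen` lives only in memory, every restart replays old entries to your outputs.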
Preparing Your Repository
Step 1: Create Project Structure
Create a new directory for your Feedpushr deployment:
```bash
mkdir feedpushr-deployment
cd feedpushr-deployment
```
Step 2: Create the Dockerfile
Feedpushr provides official Docker images, but we’ll create a custom Dockerfile for better control over configuration:
```dockerfile
# Use the official Feedpushr image
FROM ncarlier/feedpushr:latest

# Set the working directory
WORKDIR /data

# Expose the web UI and API port
EXPOSE 8080

# Health check to ensure the service is responding
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:8080/healthz || exit 1

# Default command (can be overridden with environment variables)
CMD ["feedpushr"]
```
Step 3: Create Configuration Directory Structure
While Feedpushr can be configured entirely via environment variables, you may want to create a basic directory structure:
```bash
mkdir -p config plugins
```
Step 4: Create Example Configuration File (Optional)
Create config/filters.json for content transformation examples:
```json
{
  "filters": [
    {
      "name": "Title Prefix Filter",
      "condition": "Title.Contains(\"Breaking\")",
      "actions": [
        { "type": "prefix", "value": "🚨 BREAKING: " }
      ]
    },
    {
      "name": "Content Truncation",
      "condition": "Content.Length > 500",
      "actions": [
        { "type": "truncate", "field": "content", "length": 500 }
      ]
    }
  ]
}
```
Step 5: Create Sample Output Configuration
Create config/outputs.json to define where feed entries should be sent:
```json
{
  "outputs": [
    {
      "alias": "webhook-endpoint",
      "plugin": "http",
      "enabled": true,
      "config": {
        "url": "https://your-webhook-endpoint.com/feeds",
        "method": "POST",
        "headers": {
          "Content-Type": "application/json",
          "Authorization": "Bearer YOUR_API_KEY"
        }
      }
    },
    {
      "alias": "slack-channel",
      "plugin": "http",
      "enabled": true,
      "config": {
        "url": "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK",
        "method": "POST",
        "format": "slack"
      }
    }
  ]
}
```
Step 6: Create Environment Variables Template
Create .env.example to document required environment variables:
```bash
# Database Configuration (PostgreSQL recommended for production)
DB_DRIVER=postgres
DB_DSN=postgres://username:password@postgres-host:5432/feedpushr?sslmode=require

# Redis Cache (optional but recommended)
CACHE_DRIVER=redis
CACHE_DSN=redis://:password@redis-host:6379/0

# Authentication (highly recommended)
AUTHN=true
AUTHN_USERNAME=admin
AUTHN_PASSWORD=your-secure-password

# Server Configuration
LISTEN_ADDR=:8080
PUBLIC_URL=https://example-app.klutch.sh

# Feed Polling Configuration
DEFAULT_TIMEOUT=5m
MAX_FEED_AGGREGATORS=10

# Output Plugin Configuration
OUTPUT_PLUGINS=http,stdout

# Logging
LOG_LEVEL=info
LOG_FORMAT=json

# Metrics
METRICS_ENABLED=true

# OPML Import (optional)
# OPML_FILE=/data/feeds.opml
```
Step 7: Create Sample OPML File (Optional)
Create feeds.opml to import an initial set of feeds:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head>
    <title>Feedpushr Initial Feeds</title>
  </head>
  <body>
    <outline text="Technology" title="Technology">
      <outline type="rss" text="TechCrunch" title="TechCrunch" xmlUrl="https://techcrunch.com/feed/" />
      <outline type="rss" text="Hacker News" title="Hacker News" xmlUrl="https://news.ycombinator.com/rss" />
      <outline type="rss" text="The Verge" title="The Verge" xmlUrl="https://www.theverge.com/rss/index.xml" />
    </outline>
    <outline text="Development" title="Development">
      <outline type="rss" text="GitHub Blog" title="GitHub Blog" xmlUrl="https://github.blog/feed/" />
      <outline type="rss" text="Dev.to" title="Dev.to" xmlUrl="https://dev.to/feed" />
    </outline>
  </body>
</opml>
```
Step 8: Create README
Create README.md with deployment notes:
# Feedpushr Deployment
RSS feed aggregator and processor deployed on Klutch.sh.
## Features
- Multi-feed aggregation
- Webhook output support
- PostgreSQL persistence
- Redis caching
- Web UI for management
## Configuration
Set these environment variables in Klutch.sh:
- `DB_DSN`: PostgreSQL connection string
- `CACHE_DSN`: Redis connection string (optional)
- `AUTHN_USERNAME`: Admin username
- `AUTHN_PASSWORD`: Admin password
- `OUTPUT_PLUGINS`: Comma-separated list of enabled outputs
## Accessing the UI
Visit https://your-app.klutch.sh and log in with your admin credentials.
## API Documentation
API docs available at: https://your-app.klutch.sh/swagger
Step 9: Create .gitignore
```
.env
*.log
data/
cache/
config/local.json
*.pem
*.key
```
Step 10: Initialize Git Repository
```bash
git init
git add .
git commit -m "Initial Feedpushr setup"
```
Push to your GitHub repository:
```bash
git remote add origin https://github.com/yourusername/feedpushr-deployment.git
git branch -M main
git push -u origin main
```
Deploying on Klutch.sh
1. Log in to Klutch.sh
   Navigate to klutch.sh/app and sign in to your account.
2. Create a New App
   Click New App and select Deploy from GitHub. Choose your Feedpushr repository.
3. Configure Build Settings
   Klutch.sh will automatically detect your Dockerfile. No additional build configuration is needed.
4. Set Traffic Type
   Select HTTP traffic since Feedpushr serves a web UI and API. The internal port should be set to `8080`.
5. Configure Database Connection (Recommended)
   If using PostgreSQL for persistence:
   - Deploy a PostgreSQL database on Klutch.sh first
   - Add environment variable `DB_DRIVER` with value `postgres`
   - Add `DB_DSN` with your PostgreSQL connection string: `postgres://username:password@your-postgres.klutch.sh:8000/feedpushr?sslmode=disable`
   - Mark `DB_DSN` as sensitive to hide the value
6. Configure Redis Cache (Optional)
   For better performance with high feed volumes:
   - Deploy a Redis instance on Klutch.sh
   - Add environment variable `CACHE_DRIVER` with value `redis`
   - Add `CACHE_DSN` with your Redis connection string: `redis://:password@your-redis.klutch.sh:8000/0`
   - Mark `CACHE_DSN` as sensitive
7. Configure Authentication
   Add the following environment variables to secure your Feedpushr instance:
   - `AUTHN=true`
   - `AUTHN_USERNAME`: your desired admin username
   - `AUTHN_PASSWORD`: a strong password (mark as sensitive)
8. Configure Server Settings
   Add these essential environment variables:
   - `LISTEN_ADDR=:8080`
   - `PUBLIC_URL=https://your-app.klutch.sh` (replace with your actual app URL)
   - `LOG_LEVEL=info`
   - `LOG_FORMAT=json`
9. Configure Output Plugins
   Enable the output plugins you need: `OUTPUT_PLUGINS=http,stdout` (or your preferred plugins).
10. Attach Persistent Volume (Optional)
    If you want to store OPML files or local data:
    - In the Volumes section, click Add Volume
    - Set mount path: `/data`
    - Set size: `1GB` (adjust based on needs)
11. Deploy the App
    Click Deploy. Klutch.sh will build your Docker image and start your Feedpushr instance.
12. Verify Deployment
    Once deployed, visit `https://your-app.klutch.sh` to access the Feedpushr web UI. Log in with the credentials you configured.
Configuration and Management
Database Options
Feedpushr supports multiple database backends:
PostgreSQL (Recommended for Production):
```bash
DB_DRIVER=postgres
DB_DSN=postgres://user:pass@host:port/dbname?sslmode=require
```
In-Memory (Not Recommended for Production):
```bash
DB_DRIVER=memory
# No DSN needed - data is lost on restart
```
BoltDB (Single-File Database):
```bash
DB_DRIVER=bolt
DB_DSN=/data/feedpushr.db
```
Cache Configuration
Redis (Recommended):
```bash
CACHE_DRIVER=redis
CACHE_DSN=redis://:password@host:port/db
```
In-Memory Cache:
```bash
CACHE_DRIVER=memory
# No DSN needed
```
Output Plugin Configuration
Feedpushr supports multiple output plugins. Configure them via environment variables or the web UI:
HTTP Webhook Output:
```json
{
  "alias": "my-webhook",
  "plugin": "http",
  "config": {
    "url": "https://api.example.com/webhook",
    "method": "POST",
    "headers": {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_TOKEN"
    }
  }
}
```
Available Output Plugins:
- `http` - Send to HTTP webhooks
- `stdout` - Log to console (debugging)
- `kafka` - Apache Kafka integration
- `amazonsqs` - AWS SQS queues
- `smtp` - Email notifications
Enable plugins with the OUTPUT_PLUGINS environment variable:
```bash
OUTPUT_PLUGINS=http,stdout,kafka
```
Filter Configuration
Filters transform feed entries before they reach outputs. Common filters include:
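Conceptually, each filter is a condition plus a list of actions applied to matching entries. The following Python sketch models that idea only; the field names and action types loosely mirror the JSON examples in this section but are not Feedpushr's real filter engine.

```python
def apply_filter(entry, condition, actions):
    """Apply a list of actions to an entry when the condition matches."""
    if not condition(entry):
        return entry
    for action in actions:
        kind = action["type"]
        field = action.get("field", "title")
        if kind == "prefix":
            entry[field] = action["value"] + entry[field]
        elif kind == "truncate":
            limit = action["length"]
            if len(entry[field]) > limit:
                entry[field] = entry[field][:limit] + action.get("suffix", "")
    return entry

entry = {"title": "Breaking: service outage", "content": "x" * 600}
entry = apply_filter(
    entry,
    condition=lambda e: "Breaking" in e["title"],
    actions=[
        {"type": "prefix", "field": "title", "value": "🚨 "},
        {"type": "truncate", "field": "content", "length": 500, "suffix": "..."},
    ],
)
print(entry["title"])         # → 🚨 Breaking: service outage
print(len(entry["content"]))  # → 503
```

Because actions run in order, a prefix added by one filter is visible to the next, which is what makes filter chaining useful.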
Title Transformation:
```json
{
  "name": "Add Emoji Prefix",
  "condition": "Category.Contains(\"Breaking\")",
  "actions": [
    { "type": "prefix", "field": "title", "value": "🔥 " }
  ]
}
```
Content Truncation:
```json
{
  "name": "Truncate Long Content",
  "condition": "Content.Length > 1000",
  "actions": [
    { "type": "truncate", "field": "content", "length": 500, "suffix": "... [Read more]" }
  ]
}
```
URL Extraction:
```json
{
  "name": "Extract First Image",
  "actions": [
    { "type": "extract", "field": "image", "pattern": "<img src=\"([^\"]+)\"" }
  ]
}
```
Accessing the Web Interface
Once deployed, access the Feedpushr web UI at https://your-app.klutch.sh.
Dashboard Features
Feed Management:
- Add new RSS/Atom feeds
- Configure polling intervals
- Set feed-specific tags and categories
- Enable/disable individual feeds
- View feed status and last update times
Output Configuration:
- Create and manage output destinations
- Test outputs with sample data
- View output delivery statistics
- Configure retry policies
Filter Management:
- Create custom content filters
- Test filters against sample entries
- Chain multiple filters together
- Enable/disable filters per feed
Monitoring:
- View feed polling statistics
- Check entry processing rates
- Monitor output delivery success rates
- View error logs and failed deliveries
Using the REST API
Feedpushr provides a comprehensive REST API for programmatic management:
Add a Feed:
```bash
curl -X POST https://example-app.klutch.sh/v1/feeds \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "title": "TechCrunch",
    "xmlUrl": "https://techcrunch.com/feed/",
    "tags": ["technology", "news"]
  }'
```
List All Feeds:
```bash
curl -u admin:password https://example-app.klutch.sh/v1/feeds
```
Get Feed Entries:
```bash
curl -u admin:password https://example-app.klutch.sh/v1/feeds/1/entries
```
Delete a Feed:
```bash
curl -X DELETE -u admin:password https://example-app.klutch.sh/v1/feeds/1
```
Import OPML:
```bash
curl -X POST https://example-app.klutch.sh/v1/opml \
  -u admin:password \
  -F "file=@feeds.opml"
```
Export OPML:
```bash
curl -u admin:password https://example-app.klutch.sh/v1/opml > feeds.opml
```
API Authentication
All API requests require HTTP Basic Authentication. Use the username and password configured in your AUTHN_USERNAME and AUTHN_PASSWORD environment variables.
For programmatic access, include credentials in the request:
JavaScript/Node.js:
```javascript
const axios = require('axios');

const feedpushrAPI = axios.create({
  baseURL: 'https://example-app.klutch.sh',
  auth: {
    username: 'admin',
    password: 'your-password'
  }
});

// Add a feed
const addFeed = async () => {
  const response = await feedpushrAPI.post('/v1/feeds', {
    title: 'GitHub Blog',
    xmlUrl: 'https://github.blog/feed/',
    tags: ['development', 'github']
  });
  console.log('Feed added:', response.data);
};

// List all feeds
const listFeeds = async () => {
  const response = await feedpushrAPI.get('/v1/feeds');
  console.log('Feeds:', response.data);
};
```
Python:
```python
import requests
from requests.auth import HTTPBasicAuth

BASE_URL = 'https://example-app.klutch.sh'
auth = HTTPBasicAuth('admin', 'your-password')

# Add a feed
response = requests.post(
    f'{BASE_URL}/v1/feeds',
    auth=auth,
    json={
        'title': 'Dev.to',
        'xmlUrl': 'https://dev.to/feed',
        'tags': ['development', 'community']
    }
)
print('Feed added:', response.json())

# List all feeds
response = requests.get(f'{BASE_URL}/v1/feeds', auth=auth)
print('Feeds:', response.json())
```
Go:
```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type Feed struct {
	Title  string   `json:"title"`
	XMLUrl string   `json:"xmlUrl"`
	Tags   []string `json:"tags"`
}

func main() {
	client := &http.Client{}
	baseURL := "https://example-app.klutch.sh"

	// Add a feed
	feed := Feed{
		Title:  "Hacker News",
		XMLUrl: "https://news.ycombinator.com/rss",
		Tags:   []string{"tech", "news"},
	}

	jsonData, _ := json.Marshal(feed)
	req, _ := http.NewRequest("POST", baseURL+"/v1/feeds", bytes.NewBuffer(jsonData))
	req.SetBasicAuth("admin", "your-password")
	req.Header.Set("Content-Type", "application/json")

	resp, _ := client.Do(req)
	defer resp.Body.Close()

	fmt.Println("Feed added:", resp.Status)
}
```
cURL with Auth Token:
```bash
# Create base64 encoded credentials
AUTH=$(echo -n "admin:your-password" | base64)

# Make authenticated request
curl -H "Authorization: Basic $AUTH" \
  https://example-app.klutch.sh/v1/feeds
```
Advanced Configuration
Webhook Output Configuration
Configure webhook outputs to send feed entries to external services:
Slack Integration:
```bash
# Add via API
curl -X POST https://example-app.klutch.sh/v1/outputs \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "alias": "slack-notifications",
    "plugin": "http",
    "config": {
      "url": "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK",
      "method": "POST",
      "format": "slack"
    }
  }'
```
Discord Integration:
```bash
curl -X POST https://example-app.klutch.sh/v1/outputs \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "alias": "discord-feed",
    "plugin": "http",
    "config": {
      "url": "https://discord.com/api/webhooks/YOUR/WEBHOOK",
      "method": "POST",
      "format": "discord"
    }
  }'
```
Custom Webhook with Transformation:
```bash
curl -X POST https://example-app.klutch.sh/v1/outputs \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "alias": "custom-api",
    "plugin": "http",
    "config": {
      "url": "https://api.example.com/feeds/ingest",
      "method": "POST",
      "headers": {
        "Authorization": "Bearer YOUR_API_TOKEN",
        "Content-Type": "application/json"
      },
      "template": "{\"title\": \"{{.Title}}\", \"url\": \"{{.Link}}\", \"content\": \"{{.Description}}\", \"published\": \"{{.PublishedParsed}}\"}"
    }
  }'
```
Feed Polling Configuration
Control how frequently feeds are polled:
```bash
# Default timeout for feed fetching
DEFAULT_TIMEOUT=5m

# Maximum number of concurrent feed aggregators
MAX_FEED_AGGREGATORS=10

# Default polling interval (per feed, can be overridden)
DEFAULT_POLLING_INTERVAL=5m
```
Individual feeds can have custom polling intervals set via the UI or API:
```bash
curl -X PUT https://example-app.klutch.sh/v1/feeds/1 \
  -u admin:password \
  -H "Content-Type: application/json" \
  -d '{
    "hubUrl": "https://example.com/feed.xml",
    "polling": { "interval": "10m" }
  }'
```
Monitoring and Metrics
Feedpushr exposes Prometheus metrics at /metrics:
```bash
# Enable metrics
METRICS_ENABLED=true

# Access metrics endpoint
curl https://example-app.klutch.sh/metrics
```
Key Metrics:
- `feedpushr_feed_total` - Total number of configured feeds
- `feedpushr_feed_status` - Feed status (0=disabled, 1=enabled)
- `feedpushr_entries_total` - Total entries processed
- `feedpushr_output_sent_total` - Total outputs sent
- `feedpushr_output_errors_total` - Total output errors
- `feedpushr_http_requests_total` - Total HTTP requests
- `feedpushr_http_request_duration_seconds` - Request duration histogram
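If you want to consume these counters outside Prometheus, the plain-text exposition format is easy to scrape directly. The sketch below handles only simple, un-labeled metric lines; the sample values are invented for illustration.

```python
def parse_metrics(text):
    """Parse simple Prometheus exposition lines into {metric: value}.

    Note: this sketch ignores labels; metrics with {label="..."} syntax
    would need a real parser or client library.
    """
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP feedpushr_feed_total Total number of configured feeds
# TYPE feedpushr_feed_total gauge
feedpushr_feed_total 12
feedpushr_output_errors_total 3
"""
m = parse_metrics(sample)
error_rate_alert = m["feedpushr_output_errors_total"] > 0
print(m["feedpushr_feed_total"], error_rate_alert)  # → 12.0 True
```

In practice you would point a Prometheus server or an `prometheus_client`-style library at the `/metrics` endpoint instead; this only shows that the format is plain text and trivially machine-readable.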
Health Check Endpoint:
```bash
curl https://example-app.klutch.sh/healthz
```
Returns HTTP 200 if the service is healthy.
Custom Filter Examples
Filter by Keywords:
```json
{
  "name": "Kubernetes Only",
  "condition": "Title.Contains(\"Kubernetes\") || Description.Contains(\"k8s\")",
  "actions": [
    { "type": "keep" }
  ]
}
```
Add Custom Fields:
```json
{
  "name": "Add Source Tag",
  "actions": [
    { "type": "set", "field": "custom.source", "value": "TechCrunch" }
  ]
}
```
HTML to Markdown:
```json
{
  "name": "Convert HTML to Markdown",
  "actions": [
    { "type": "html2text", "field": "description" }
  ]
}
```
Extract First Paragraph:
```json
{
  "name": "Extract First Paragraph",
  "actions": [
    { "type": "extract", "field": "summary", "source": "description", "pattern": "<p>(.*?)</p>", "index": 0 }
  ]
}
```
Production Best Practices
Security
- Enable Authentication - Always set `AUTHN=true` and use strong passwords
- Use HTTPS - Klutch.sh provides automatic SSL certificates
- Secure Database Connections - Use SSL/TLS for PostgreSQL connections when possible
- Rotate Credentials - Periodically update database passwords and API tokens
- Restrict Webhook Access - Use API tokens or IP whitelisting for webhook endpoints
- Review Feed Sources - Only add feeds from trusted sources to prevent XSS or malicious content
Performance Optimization
- Use Redis Caching - Significantly improves performance with many feeds
- Tune Polling Intervals - Set longer intervals for infrequently updated feeds
- Limit Concurrent Aggregators - Adjust `MAX_FEED_AGGREGATORS` based on available resources
- Database Connection Pooling - Feedpushr's database driver pools PostgreSQL connections automatically
- Monitor Memory Usage - Increase container memory if processing many large feeds
- Output Batching - Some plugins support batching to reduce API calls
Reliability
- Use PostgreSQL - Persistent database prevents data loss on restarts
- Attach Persistent Volumes - Store OPML backups and configuration files
- Configure Retry Logic - Most output plugins support automatic retries
- Monitor Failed Outputs - Regularly check the UI for failed deliveries
- Set Up Alerts - Use Prometheus metrics to alert on errors or downtime
- Regular OPML Backups - Export feed configurations periodically
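OPML backups can also be produced or verified programmatically, which is handy for automating the periodic export recommended above. A standard-library sketch (the feed list is illustrative):

```python
import xml.etree.ElementTree as ET

def feeds_to_opml(feeds, title="Feedpushr Backup"):
    """Build an OPML 2.0 document from a list of {title, xmlUrl} dicts."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for feed in feeds:
        ET.SubElement(body, "outline", type="rss",
                      text=feed["title"], title=feed["title"],
                      xmlUrl=feed["xmlUrl"])
    return ET.tostring(opml, encoding="unicode")

feeds = [
    {"title": "GitHub Blog", "xmlUrl": "https://github.blog/feed/"},
    {"title": "Dev.to", "xmlUrl": "https://dev.to/feed"},
]
doc = feeds_to_opml(feeds)

# Round-trip check: parse the document back and count the outlines
parsed = ET.fromstring(doc)
print(len(parsed.findall("./body/outline")))  # → 2
```

The same round-trip pattern works for validating an exported backup before relying on it for recovery.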
Monitoring
- Enable Prometheus Metrics - Set `METRICS_ENABLED=true`
- Monitor Feed Status - Check for feeds failing to fetch
- Track Output Errors - Alert on high output error rates
- Review Logs Regularly - Use `LOG_FORMAT=json` for structured logging
- Set Up Health Checks - Use the `/healthz` endpoint for monitoring tools
Backup and Recovery
Export OPML Regularly:
```bash
# Export all feeds to OPML
curl -u admin:password https://example-app.klutch.sh/v1/opml > backup-$(date +%Y%m%d).opml
```
Backup Database: If using PostgreSQL, use standard PostgreSQL backup tools:
```bash
pg_dump -h your-postgres.klutch.sh -p 8000 -U username feedpushr > backup.sql
```
Restore from OPML:
```bash
curl -X POST https://example-app.klutch.sh/v1/opml \
  -u admin:password \
  -F "file=@backup-20250120.opml"
```
Troubleshooting
Feeds Not Updating
Issue: Feeds are not polling or updating entries.
Solutions:
- Check the feed URL is accessible: `curl -I https://feed-url.com/feed.xml`
- Verify the polling interval is not too long
- Check Feedpushr logs for fetch errors: view logs in the Klutch.sh dashboard
- Ensure `MAX_FEED_AGGREGATORS` is not set too low
- Verify the feed is enabled in the UI
- Check if the feed URL has changed or requires authentication
Outputs Not Sending
Issue: Feed entries are being processed but not reaching output destinations.
Solutions:
- Test the webhook URL manually with curl
- Check output configuration in the UI
- Verify authentication tokens/API keys are correct
- Review output error logs in the dashboard
- Ensure the output plugin is enabled in `OUTPUT_PLUGINS`
- Check webhook endpoint rate limits
- Test outputs with the "Test Output" button in the UI
Database Connection Errors
Issue: Feedpushr cannot connect to PostgreSQL or Redis.
Solutions:
- Verify the `DB_DSN` connection string format is correct
- Check PostgreSQL is deployed and accessible on port 8000
- Ensure database credentials are correct
- Test the connection manually: `psql "postgres://user:pass@host:8000/dbname"`
- Check if SSL mode needs to be adjusted (`?sslmode=disable` or `?sslmode=require`)
- Verify the Redis connection string for cache issues
Authentication Not Working
Issue: Cannot log in to the web UI.
Solutions:
- Verify `AUTHN` is set to `true`
- Check `AUTHN_USERNAME` and `AUTHN_PASSWORD` are set
- Clear browser cookies and cache
- Try accessing via incognito/private browsing
- Check Feedpushr logs for authentication errors
High Memory Usage
Issue: Feedpushr container is using excessive memory.
Solutions:
- Enable Redis caching to offload memory pressure
- Reduce `MAX_FEED_AGGREGATORS` to limit concurrent processing
- Increase polling intervals for less frequently updated feeds
- Review filters for memory-intensive operations
- Consider splitting feeds across multiple Feedpushr instances
- Increase container memory allocation in Klutch.sh
Duplicate Entries
Issue: Same feed entries are being sent multiple times.
Solutions:
- Ensure PostgreSQL persistence is enabled (not using in-memory database)
- Verify Redis cache is connected properly
- Check if feed GUID/IDs are unique and consistent
- Review feed source for changing GUIDs on republish
- Don't restart the service too frequently (state needs time to persist)
API Rate Limiting
Issue: Receiving 429 errors from webhook destinations.
Solutions:
- Implement batching in your output configuration
- Increase polling intervals to reduce output frequency
- Use filters to reduce the number of forwarded entries
- Configure rate limiting in the output plugin settings
- Split high-volume feeds across multiple outputs
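When the destination cannot be changed, one complementary approach is a client-side limiter in front of whatever consumes your webhooks. This is a minimal token-bucket sketch, not a Feedpushr feature:

```python
import time

class TokenBucket:
    """Allow roughly `rate` requests per second with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)  # ~5 req/s, burst of 2
results = [bucket.allow() for _ in range(3)]
print(results)  # → [True, True, False]
```

Requests that return `False` would be queued or retried later instead of being sent, which keeps you under the destination's 429 threshold.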
Example Configurations
Content Aggregation Service
Monitor multiple tech news sources and send to a custom API:
Environment Variables:
```bash
DB_DRIVER=postgres
DB_DSN=postgres://user:pass@postgres.klutch.sh:8000/feedpushr
CACHE_DRIVER=redis
CACHE_DSN=redis://:pass@redis.klutch.sh:8000/0
OUTPUT_PLUGINS=http
DEFAULT_POLLING_INTERVAL=10m
```
Feeds (via OPML or UI):
- TechCrunch: https://techcrunch.com/feed/
- The Verge: https://www.theverge.com/rss/index.xml
- Ars Technica: https://arstechnica.com/feed/
- Wired: https://www.wired.com/feed/rss
Output Configuration:
```json
{
  "alias": "content-api",
  "plugin": "http",
  "config": {
    "url": "https://api.mysite.com/content/ingest",
    "method": "POST",
    "headers": {
      "Authorization": "Bearer YOUR_TOKEN",
      "Content-Type": "application/json"
    },
    "template": "{\"title\": \"{{.Title}}\", \"url\": \"{{.Link}}\", \"content\": \"{{.Description}}\", \"published\": \"{{.PublishedParsed}}\", \"source\": \"{{.Meta.FeedTitle}}\"}"
  }
}
```
Development Team Notifications
Monitor GitHub releases, Stack Overflow, and tech blogs, forward to Slack:
Environment Variables:
```bash
DB_DRIVER=postgres
DB_DSN=postgres://user:pass@postgres.klutch.sh:8000/devteam
CACHE_DRIVER=redis
CACHE_DSN=redis://:pass@redis.klutch.sh:8000/1
OUTPUT_PLUGINS=http
DEFAULT_POLLING_INTERVAL=15m
```
Feeds:
- GitHub Kubernetes Releases: https://github.com/kubernetes/kubernetes/releases.atom
- Stack Overflow Kubernetes: https://stackoverflow.com/feeds/tag/kubernetes
- Dev.to Kubernetes: https://dev.to/feed/tag/kubernetes
Output Configuration:
```json
{
  "alias": "slack-devteam",
  "plugin": "http",
  "config": {
    "url": "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK",
    "method": "POST",
    "format": "slack"
  }
}
```
Filter Configuration:
```json
{
  "name": "Add Emoji to Releases",
  "condition": "Title.Contains(\"Release\") || Title.Contains(\"v1.\")",
  "actions": [
    { "type": "prefix", "field": "title", "value": "🚀 " }
  ]
}
```
Media Monitoring Service
Track news mentions and keyword-based content, send to multiple destinations:
Environment Variables:
```bash
DB_DRIVER=postgres
DB_DSN=postgres://user:pass@postgres.klutch.sh:8000/media_monitor
CACHE_DRIVER=redis
CACHE_DSN=redis://:pass@redis.klutch.sh:8000/2
OUTPUT_PLUGINS=http,smtp
DEFAULT_POLLING_INTERVAL=5m
```
Feeds:
- Google News (custom keyword search)
- Reddit keyword search feeds
- Industry-specific news sites
Multiple Outputs:
```json
[
  {
    "alias": "urgent-alerts",
    "plugin": "http",
    "config": {
      "url": "https://api.pagerduty.com/incidents",
      "method": "POST",
      "headers": {
        "Authorization": "Token token=YOUR_TOKEN",
        "Content-Type": "application/json"
      }
    }
  },
  {
    "alias": "daily-digest",
    "plugin": "smtp",
    "config": {
      "host": "smtp.example.com",
      "port": 587,
      "username": "alerts@example.com",
      "password": "your-password",
      "from": "alerts@example.com",
      "to": "team@example.com"
    }
  }
]
```
Keyword Filter:
```json
{
  "name": "High Priority Keywords",
  "condition": "Title.Contains(\"breaking\") || Title.Contains(\"urgent\") || Description.Contains(\"your-company-name\")",
  "actions": [
    { "type": "route", "output": "urgent-alerts" }
  ]
}
```
Migration and Scaling
Migrating from Another Feed Aggregator
From FreshRSS/TinyTinyRSS:
- Export OPML from your current feed reader
- Import OPML via Feedpushr UI or API
- Configure outputs to replace the reading workflow
- Test with a few feeds before full migration
From Zapier/IFTTT:
- List all RSS-based Zaps/Applets
- Add corresponding feeds to Feedpushr
- Configure HTTP outputs to match your Zapier actions
- Test outputs thoroughly before disabling Zapier workflows
Scaling for High Volume
For handling hundreds of feeds:
- Use Redis - Essential for caching and deduplication at scale
- Increase Resources - Scale container CPU and memory in Klutch.sh
- Tune Aggregators - Increase `MAX_FEED_AGGREGATORS` to 20-50
- Database Optimization - Use a dedicated PostgreSQL instance with sufficient resources
- Output Batching - Configure outputs to batch entries when possible
- Monitor Performance - Use Prometheus metrics to identify bottlenecks
For thousands of feeds, consider:
- Running multiple Feedpushr instances with different feed sets
- Using a load balancer for the API
- Implementing feed prioritization based on update frequency
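One simple way to split feeds across multiple instances is deterministic hashing of the feed URL, so each feed is always polled by exactly one instance and no coordination is needed. A sketch of the idea (the sharding scheme is an assumption on my part, not a Feedpushr feature):

```python
import hashlib

def assign_instance(feed_url, num_instances):
    """Deterministically shard a feed across N instances by URL hash."""
    digest = hashlib.md5(feed_url.encode()).hexdigest()
    return int(digest, 16) % num_instances

feeds = [
    "https://techcrunch.com/feed/",
    "https://news.ycombinator.com/rss",
    "https://github.blog/feed/",
]
# Each URL maps to the same instance on every run, on every machine
shards = {url: assign_instance(url, 3) for url in feeds}
print(shards)
```

Adding or removing instances reshuffles assignments, so pair this with an OPML export per shard when you change the instance count.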
Additional Resources
- Feedpushr GitHub Repository
- Official Documentation
- PostgreSQL Deployment Guide
- Redis Deployment Guide
- Klutch.sh Deployment Concepts
- Persistent Volumes Guide
- Networking and Traffic Routing
- RSS Specification
- RSS Feed Validator
You now have a production-ready Feedpushr deployment on Klutch.sh! Monitor feeds through the web UI, configure outputs to match your workflow, and let Feedpushr handle the heavy lifting of RSS aggregation and distribution. For questions or issues, check the troubleshooting section or reach out to the Klutch.sh community.