# Deploying a Digiface App

## Introduction
Digiface is an advanced facial recognition and detection platform that leverages cutting-edge AI and machine learning to provide accurate, real-time face identification and analysis. Designed for enterprises, educational institutions, security systems, and attendance management, Digiface offers a powerful solution for identity verification, access control, and automated attendance tracking.
Digiface stands out with its:
- **AI-Powered Face Recognition**: State-of-the-art deep learning models for accurate face detection and identification
- **Real-Time Processing**: Lightning-fast recognition with sub-second response times
- **Multi-Face Detection**: Simultaneously detect and identify multiple faces in a single frame
- **Liveness Detection**: Advanced anti-spoofing technology to prevent photo and video attacks
- **Face Encoding & Storage**: Efficient face embedding storage for fast matching and retrieval
- **Attendance Tracking**: Automated check-in/check-out system with detailed reports and analytics
- **Access Control Integration**: Seamless integration with door locks, turnstiles, and security systems
- **Video Stream Processing**: Real-time face detection from IP cameras and video feeds
- **Face Database Management**: Comprehensive tools for enrolling, updating, and managing face profiles
- **RESTful API**: Complete API for integration with existing systems and custom applications
- **Privacy-First Design**: GDPR-compliant with encrypted storage and configurable data retention
- **Multi-Tenant Support**: Isolated environments for different organizations or departments
- **Analytics Dashboard**: Detailed insights into recognition accuracy, attendance patterns, and system usage
- **Webhooks & Notifications**: Real-time alerts for face detection events and system notifications
This comprehensive guide walks you through deploying Digiface on Klutch.sh using Docker, covering installation, database setup, model configuration, persistent storage, and production best practices for facial recognition systems.
## Prerequisites
Before you begin deploying Digiface, ensure you have the following:
- A Klutch.sh account
- A GitHub account with a repository for your Digiface project
- Docker installed locally for testing (optional but recommended)
- Basic understanding of Docker, machine learning, and facial recognition systems
- Sufficient computational resources (CPU with AVX support or GPU recommended)
- A domain name for your Digiface instance (recommended for production)
## Project Structure
Here’s the recommended project structure for a Digiface deployment:
```
digiface/
├── Dockerfile
├── healthcheck.js
├── package.json
├── tsconfig.json
├── .dockerignore
├── .gitignore
├── src/
│   ├── index.ts
│   ├── config/
│   │   ├── database.ts
│   │   └── face-recognition.ts
│   ├── models/
│   │   ├── User.ts
│   │   ├── FaceEncoding.ts
│   │   ├── AttendanceRecord.ts
│   │   └── RecognitionEvent.ts
│   ├── services/
│   │   ├── faceDetection.service.ts
│   │   ├── faceRecognition.service.ts
│   │   ├── attendance.service.ts
│   │   └── storage.service.ts
│   ├── routes/
│   │   ├── face.routes.ts
│   │   ├── attendance.routes.ts
│   │   ├── user.routes.ts
│   │   └── analytics.routes.ts
│   ├── middleware/
│   │   ├── auth.middleware.ts
│   │   ├── upload.middleware.ts
│   │   └── validation.middleware.ts
│   ├── python/
│   │   └── face_recognition_cli.py
│   └── utils/
│       ├── image-processing.ts
│       ├── face-encoding.ts
│       └── logger.ts
├── models/
│   ├── shape_predictor_68_face_landmarks.dat
│   └── dlib_face_recognition_resnet_model_v1.dat
└── uploads/
    ├── faces/
    └── temp/
```

## Step 1: Create the Dockerfile
Create a Dockerfile in the root of your project. This Dockerfile sets up a Node.js environment with Python for face recognition libraries:
```dockerfile
# Use Node.js 20 with Python support for face recognition
FROM nikolaik/python-nodejs:python3.11-nodejs20-alpine

# Install system dependencies for face recognition
RUN apk add --no-cache \
    build-base \
    cmake \
    openblas-dev \
    lapack-dev \
    jpeg-dev \
    libpng-dev \
    tiff-dev \
    ffmpeg \
    git \
    wget

# Set working directory
WORKDIR /app

# Install Python dependencies for face recognition
RUN pip3 install --no-cache-dir \
    face_recognition \
    opencv-python-headless \
    numpy \
    Pillow \
    scipy

# Copy package files
COPY package*.json ./
COPY tsconfig.json ./

# Install all dependencies (TypeScript and other dev dependencies
# are required for the build step below)
RUN npm ci

# Copy application source
COPY . .

# Build TypeScript application
RUN npm run build

# Remove dev dependencies now that the build is done
RUN npm prune --production

# Create directories for uploads and models
RUN mkdir -p /app/uploads/faces /app/uploads/temp /app/models

# Download face recognition models if not present
RUN if [ ! -f /app/models/shape_predictor_68_face_landmarks.dat ]; then \
        wget -O /app/models/shape_predictor_68_face_landmarks.dat.bz2 \
            http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2 && \
        bzip2 -d /app/models/shape_predictor_68_face_landmarks.dat.bz2; \
    fi

RUN if [ ! -f /app/models/dlib_face_recognition_resnet_model_v1.dat ]; then \
        wget -O /app/models/dlib_face_recognition_resnet_model_v1.dat.bz2 \
            http://dlib.net/files/dlib_face_recognition_resnet_model_v1.dat.bz2 && \
        bzip2 -d /app/models/dlib_face_recognition_resnet_model_v1.dat.bz2; \
    fi

# Set permissions
RUN chmod -R 755 /app/uploads /app/models

# Expose application port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
    CMD node healthcheck.js || exit 1
# Start the application
CMD ["node", "dist/index.js"]
```
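The `HEALTHCHECK` instruction expects a `healthcheck.js` script in the project root, which this guide has not defined yet. A minimal sketch that probes the `/health` endpoint created in Step 2 could look like this:

```javascript
// healthcheck.js — exits 0 when /health answers with HTTP 200, 1 otherwise
const http = require('http');

const req = http.get(
  { host: 'localhost', port: process.env.PORT || 3000, path: '/health', timeout: 5000 },
  (res) => process.exit(res.statusCode === 200 ? 0 : 1)
);

req.on('error', () => process.exit(1));
req.on('timeout', () => {
  req.destroy();
  process.exit(1);
});
```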
## Step 2: Create the Application

Create the main application file `src/index.ts`:
```typescript
import express, { Express, Request, Response } from 'express';
import cors from 'cors';
import helmet from 'helmet';
import morgan from 'morgan';
import { Pool } from 'pg';
import dotenv from 'dotenv';
import faceRoutes from './routes/face.routes';
import attendanceRoutes from './routes/attendance.routes';
import userRoutes from './routes/user.routes';
import analyticsRoutes from './routes/analytics.routes';
import { setupDatabase } from './config/database';
import { initializeFaceRecognition } from './config/face-recognition';
import { logger } from './utils/logger';

dotenv.config();

const app: Express = express();
const PORT = parseInt(process.env.PORT || '3000', 10);

// Database connection
export const db = new Pool({
  host: process.env.DB_HOST || 'localhost',
  port: parseInt(process.env.DB_PORT || '5432', 10),
  database: process.env.DB_NAME || 'digiface',
  user: process.env.DB_USER || 'postgres',
  password: process.env.DB_PASSWORD,
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 10000,
});

// Middleware
app.use(helmet({
  contentSecurityPolicy: {
    directives: {
      defaultSrc: ["'self'"],
      imgSrc: ["'self'", "data:", "blob:"],
      scriptSrc: ["'self'"],
      styleSrc: ["'self'", "'unsafe-inline'"],
    },
  },
}));
app.use(cors({
  origin: process.env.CORS_ORIGIN || '*',
  credentials: true,
}));
app.use(morgan('combined'));
app.use(express.json({ limit: '50mb' }));
app.use(express.urlencoded({ extended: true, limit: '50mb' }));

// Static file serving for uploads
app.use('/uploads', express.static('uploads'));

// API Routes
app.use('/api/face', faceRoutes);
app.use('/api/attendance', attendanceRoutes);
app.use('/api/users', userRoutes);
app.use('/api/analytics', analyticsRoutes);

// Health check endpoint
// (unused parameters are prefixed with _ to satisfy noUnusedParameters)
app.get('/health', async (_req: Request, res: Response) => {
  try {
    await db.query('SELECT 1');
    res.status(200).json({
      status: 'healthy',
      timestamp: new Date().toISOString(),
      uptime: process.uptime(),
      database: 'connected',
    });
  } catch (error) {
    res.status(503).json({
      status: 'unhealthy',
      timestamp: new Date().toISOString(),
      database: 'disconnected',
    });
  }
});

// Root endpoint
app.get('/', (_req: Request, res: Response) => {
  res.json({
    name: 'Digiface API',
    version: '1.0.0',
    description: 'Advanced facial recognition platform',
    endpoints: {
      health: '/health',
      face: '/api/face',
      attendance: '/api/attendance',
      users: '/api/users',
      analytics: '/api/analytics',
    },
  });
});

// 404 handler
app.use((_req: Request, res: Response) => {
  res.status(404).json({
    error: 'Not Found',
    message: 'The requested resource does not exist',
  });
});

// Error handler (the four-argument signature is what marks this as an
// error-handling middleware in Express)
app.use((err: any, _req: Request, res: Response, _next: any) => {
  logger.error('Unhandled error:', err);
  res.status(err.status || 500).json({
    error: 'Internal Server Error',
    message: process.env.NODE_ENV === 'production' ? 'An error occurred' : err.message,
  });
});

// Initialize application
async function startServer() {
  try {
    // Setup database
    await setupDatabase(db);
    logger.info('Database initialized successfully');

    // Initialize face recognition models
    await initializeFaceRecognition();
    logger.info('Face recognition models loaded successfully');

    // Start server
    app.listen(PORT, () => {
      logger.info(`Digiface server running on port ${PORT}`);
      logger.info(`Environment: ${process.env.NODE_ENV || 'development'}`);
    });
  } catch (error) {
    logger.error('Failed to start server:', error);
    process.exit(1);
  }
}

// Handle graceful shutdown
process.on('SIGTERM', async () => {
  logger.info('SIGTERM received, closing server gracefully');
  await db.end();
  process.exit(0);
});

process.on('SIGINT', async () => {
  logger.info('SIGINT received, closing server gracefully');
  await db.end();
  process.exit(0);
});

startServer();
```

## Step 3: Create the Database Configuration
Create src/config/database.ts to set up the database schema:
```typescript
import { Pool } from 'pg';
import { logger } from '../utils/logger';

export async function setupDatabase(db: Pool): Promise<void> {
  // Use a dedicated client so BEGIN/COMMIT run on a single connection;
  // pool.query() may hand each statement to a different pooled connection
  const client = await db.connect();
  try {
    await client.query('BEGIN');

    // Users table
    await client.query(`
      CREATE TABLE IF NOT EXISTS users (
        id SERIAL PRIMARY KEY,
        username VARCHAR(255) UNIQUE NOT NULL,
        email VARCHAR(255) UNIQUE NOT NULL,
        full_name VARCHAR(255) NOT NULL,
        employee_id VARCHAR(100),
        department VARCHAR(100),
        role VARCHAR(50) DEFAULT 'user',
        is_active BOOLEAN DEFAULT true,
        photo_url VARCHAR(500),
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);

    // Face encodings table
    await client.query(`
      CREATE TABLE IF NOT EXISTS face_encodings (
        id SERIAL PRIMARY KEY,
        user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
        encoding BYTEA NOT NULL,
        encoding_version VARCHAR(20) DEFAULT '1.0',
        quality_score FLOAT,
        image_url VARCHAR(500),
        is_primary BOOLEAN DEFAULT false,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);

    // At most one primary encoding per user. PostgreSQL does not accept
    // a WHERE clause on a UNIQUE table constraint, so this must be a
    // partial unique index instead.
    await client.query(`
      CREATE UNIQUE INDEX IF NOT EXISTS idx_face_encodings_one_primary
      ON face_encodings(user_id) WHERE is_primary = true
    `);

    // Attendance records table
    await client.query(`
      CREATE TABLE IF NOT EXISTS attendance_records (
        id SERIAL PRIMARY KEY,
        user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
        check_in_time TIMESTAMP,
        check_out_time TIMESTAMP,
        status VARCHAR(50) DEFAULT 'present',
        location VARCHAR(255),
        device_id VARCHAR(100),
        confidence_score FLOAT,
        image_url VARCHAR(500),
        notes TEXT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);

    // Recognition events table
    await client.query(`
      CREATE TABLE IF NOT EXISTS recognition_events (
        id SERIAL PRIMARY KEY,
        user_id INTEGER REFERENCES users(id) ON DELETE SET NULL,
        event_type VARCHAR(50) NOT NULL,
        confidence_score FLOAT,
        location VARCHAR(255),
        device_id VARCHAR(100),
        image_url VARCHAR(500),
        metadata JSONB,
        recognized BOOLEAN DEFAULT false,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);

    // Access logs table
    await client.query(`
      CREATE TABLE IF NOT EXISTS access_logs (
        id SERIAL PRIMARY KEY,
        user_id INTEGER REFERENCES users(id) ON DELETE SET NULL,
        access_point VARCHAR(255) NOT NULL,
        action VARCHAR(50) NOT NULL,
        granted BOOLEAN DEFAULT false,
        reason VARCHAR(255),
        confidence_score FLOAT,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);

    // API tokens table
    await client.query(`
      CREATE TABLE IF NOT EXISTS api_tokens (
        id SERIAL PRIMARY KEY,
        user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
        token VARCHAR(500) UNIQUE NOT NULL,
        name VARCHAR(255),
        permissions JSONB,
        last_used_at TIMESTAMP,
        expires_at TIMESTAMP,
        created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);

    // System settings table
    await client.query(`
      CREATE TABLE IF NOT EXISTS system_settings (
        id SERIAL PRIMARY KEY,
        key VARCHAR(255) UNIQUE NOT NULL,
        value JSONB NOT NULL,
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
      )
    `);

    // Create indexes for better performance
    await client.query(`
      CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
      CREATE INDEX IF NOT EXISTS idx_users_employee_id ON users(employee_id);
      CREATE INDEX IF NOT EXISTS idx_face_encodings_user_id ON face_encodings(user_id);
      CREATE INDEX IF NOT EXISTS idx_attendance_user_id ON attendance_records(user_id);
      CREATE INDEX IF NOT EXISTS idx_attendance_date ON attendance_records(check_in_time);
      CREATE INDEX IF NOT EXISTS idx_recognition_events_user_id ON recognition_events(user_id);
      CREATE INDEX IF NOT EXISTS idx_recognition_events_created_at ON recognition_events(created_at);
      CREATE INDEX IF NOT EXISTS idx_access_logs_user_id ON access_logs(user_id);
      CREATE INDEX IF NOT EXISTS idx_access_logs_created_at ON access_logs(created_at);
    `);

    // Create trigger for updating updated_at timestamp
    await client.query(`
      CREATE OR REPLACE FUNCTION update_updated_at_column()
      RETURNS TRIGGER AS $$
      BEGIN
        NEW.updated_at = CURRENT_TIMESTAMP;
        RETURN NEW;
      END;
      $$ language 'plpgsql';
    `);

    await client.query(`
      DROP TRIGGER IF EXISTS update_users_updated_at ON users;
      CREATE TRIGGER update_users_updated_at
        BEFORE UPDATE ON users
        FOR EACH ROW
        EXECUTE FUNCTION update_updated_at_column();
    `);

    await client.query('COMMIT');
    logger.info('Database schema created successfully');
  } catch (error) {
    await client.query('ROLLBACK');
    logger.error('Database setup failed:', error);
    throw error;
  } finally {
    client.release();
  }
}
```

## Step 4: Create Face Recognition Service
Create src/services/faceRecognition.service.ts:
```typescript
import { spawn } from 'child_process';
import path from 'path';
import { logger } from '../utils/logger';

export interface FaceEncoding {
  encoding: number[];
  boundingBox: {
    top: number;
    right: number;
    bottom: number;
    left: number;
  };
  landmarks: any;
  confidence: number;
}

export class FaceRecognitionService {
  private pythonPath: string = 'python3';
  // The build script copies src/python into dist/, so this resolves to
  // dist/python/face_recognition_cli.py at runtime
  private scriptPath: string = path.join(__dirname, '..', 'python', 'face_recognition_cli.py');

  async detectFaces(imagePath: string): Promise<FaceEncoding[]> {
    return new Promise((resolve, reject) => {
      const pythonProcess = spawn(this.pythonPath, [
        this.scriptPath,
        'detect',
        imagePath,
      ]);

      let outputData = '';
      let errorData = '';

      pythonProcess.stdout.on('data', (data) => {
        outputData += data.toString();
      });

      pythonProcess.stderr.on('data', (data) => {
        errorData += data.toString();
      });

      pythonProcess.on('close', (code) => {
        if (code !== 0) {
          logger.error('Face detection failed:', errorData);
          reject(new Error(`Face detection failed: ${errorData}`));
          return;
        }

        try {
          const result = JSON.parse(outputData);
          resolve(result.faces || []);
        } catch (error) {
          logger.error('Failed to parse face detection output:', error);
          reject(error);
        }
      });
    });
  }

  async generateEncoding(imagePath: string): Promise<FaceEncoding | null> {
    const faces = await this.detectFaces(imagePath);

    if (faces.length === 0) {
      logger.warn('No faces detected in image');
      return null;
    }

    if (faces.length > 1) {
      logger.warn('Multiple faces detected, using the first one');
    }

    return faces[0];
  }

  async compareFaces(
    encoding1: number[],
    encoding2: number[],
    tolerance: number = 0.6
  ): Promise<{ match: boolean; distance: number }> {
    return new Promise((resolve, reject) => {
      const pythonProcess = spawn(this.pythonPath, [
        this.scriptPath,
        'compare',
        JSON.stringify(encoding1),
        JSON.stringify(encoding2),
        tolerance.toString(),
      ]);

      let outputData = '';
      let errorData = '';

      pythonProcess.stdout.on('data', (data) => {
        outputData += data.toString();
      });

      pythonProcess.stderr.on('data', (data) => {
        errorData += data.toString();
      });

      pythonProcess.on('close', (code) => {
        if (code !== 0) {
          logger.error('Face comparison failed:', errorData);
          reject(new Error(`Face comparison failed: ${errorData}`));
          return;
        }

        try {
          const result = JSON.parse(outputData);
          resolve({
            match: result.match,
            distance: result.distance,
          });
        } catch (error) {
          logger.error('Failed to parse face comparison output:', error);
          reject(error);
        }
      });
    });
  }

  async recognizeFace(
    imagePath: string,
    knownEncodings: { userId: number; encoding: number[] }[]
  ): Promise<{ userId: number; confidence: number } | null> {
    const faces = await this.detectFaces(imagePath);

    if (faces.length === 0) {
      return null;
    }

    const unknownEncoding = faces[0].encoding;
    let bestMatch: { userId: number; confidence: number } | null = null;
    let minDistance = Infinity;

    for (const known of knownEncodings) {
      const comparison = await this.compareFaces(
        unknownEncoding,
        known.encoding
      );

      if (comparison.match && comparison.distance < minDistance) {
        minDistance = comparison.distance;
        const confidence = (1 - comparison.distance) * 100;
        bestMatch = {
          userId: known.userId,
          confidence: Math.round(confidence * 100) / 100,
        };
      }
    }

    return bestMatch;
  }

  async validateImageQuality(imagePath: string): Promise<{
    valid: boolean;
    qualityScore: number;
    issues: string[];
  }> {
    return new Promise((resolve, reject) => {
      const pythonProcess = spawn(this.pythonPath, [
        this.scriptPath,
        'validate',
        imagePath,
      ]);

      let outputData = '';
      let errorData = '';

      pythonProcess.stdout.on('data', (data) => {
        outputData += data.toString();
      });

      pythonProcess.stderr.on('data', (data) => {
        errorData += data.toString();
      });

      pythonProcess.on('close', (code) => {
        if (code !== 0) {
          logger.error('Image validation failed:', errorData);
          reject(new Error(`Image validation failed: ${errorData}`));
          return;
        }

        try {
          const result = JSON.parse(outputData);
          resolve(result);
        } catch (error) {
          logger.error('Failed to parse validation output:', error);
          reject(error);
        }
      });
    });
  }
}

export const faceRecognitionService = new FaceRecognitionService();
```

## Step 5: Create Python Face Recognition Script
Create `src/python/face_recognition_cli.py`. The filename deliberately differs from the `face_recognition` library: if the script itself were named `face_recognition.py`, Python would resolve `import face_recognition` to the script instead of the library.
```python
#!/usr/bin/env python3
import sys
import json
import face_recognition
import numpy as np
import cv2


def detect_faces(image_path):
    """Detect faces in an image and return encodings"""
    try:
        image = face_recognition.load_image_file(image_path)
        face_locations = face_recognition.face_locations(image)
        face_encodings = face_recognition.face_encodings(image, face_locations)
        face_landmarks = face_recognition.face_landmarks(image, face_locations)

        faces = []
        for encoding, location, landmarks in zip(face_encodings, face_locations, face_landmarks):
            top, right, bottom, left = location

            # Calculate confidence based on face size relative to the image
            face_width = right - left
            face_height = bottom - top
            face_area = face_width * face_height
            image_area = image.shape[0] * image.shape[1]
            confidence = min((face_area / image_area) * 100, 100)

            faces.append({
                'encoding': encoding.tolist(),
                'boundingBox': {
                    'top': top,
                    'right': right,
                    'bottom': bottom,
                    'left': left
                },
                'landmarks': {k: [[int(p[0]), int(p[1])] for p in v] for k, v in landmarks.items()},
                'confidence': round(confidence, 2)
            })

        return {'faces': faces, 'count': len(faces)}
    except Exception as e:
        return {'error': str(e), 'faces': []}


def compare_faces(encoding1_str, encoding2_str, tolerance=0.6):
    """Compare two face encodings"""
    try:
        encoding1 = np.array(json.loads(encoding1_str))
        encoding2 = np.array(json.loads(encoding2_str))

        distance = face_recognition.face_distance([encoding1], encoding2)[0]
        match = distance <= tolerance

        return {
            'match': bool(match),
            'distance': float(distance),
            'tolerance': tolerance
        }
    except Exception as e:
        return {'error': str(e)}


def validate_image(image_path):
    """Validate image quality for face recognition"""
    try:
        image = cv2.imread(image_path)

        issues = []
        quality_score = 100

        # Check image size
        height, width = image.shape[:2]
        if width < 200 or height < 200:
            issues.append('Image resolution too low (minimum 200x200)')
            quality_score -= 30

        # Check brightness
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        brightness = np.mean(gray)
        if brightness < 50:
            issues.append('Image too dark')
            quality_score -= 20
        elif brightness > 200:
            issues.append('Image too bright')
            quality_score -= 20

        # Check blur via variance of the Laplacian
        laplacian_var = cv2.Laplacian(gray, cv2.CV_64F).var()
        if laplacian_var < 100:
            issues.append('Image too blurry')
            quality_score -= 25

        # Check if a face is detected
        face_locations = face_recognition.face_locations(face_recognition.load_image_file(image_path))
        if len(face_locations) == 0:
            issues.append('No face detected')
            quality_score -= 40
        elif len(face_locations) > 1:
            issues.append('Multiple faces detected')
            quality_score -= 15

        return {
            'valid': len(issues) == 0 or quality_score >= 50,
            'qualityScore': max(0, quality_score),
            'issues': issues
        }
    except Exception as e:
        return {'error': str(e), 'valid': False, 'qualityScore': 0, 'issues': ['Failed to process image']}


def main():
    if len(sys.argv) < 2:
        print(json.dumps({'error': 'No command specified'}))
        sys.exit(1)

    command = sys.argv[1]

    if command == 'detect':
        if len(sys.argv) < 3:
            print(json.dumps({'error': 'No image path specified'}))
            sys.exit(1)
        result = detect_faces(sys.argv[2])
        print(json.dumps(result))

    elif command == 'compare':
        if len(sys.argv) < 4:
            print(json.dumps({'error': 'Missing encodings'}))
            sys.exit(1)
        tolerance = float(sys.argv[4]) if len(sys.argv) > 4 else 0.6
        result = compare_faces(sys.argv[2], sys.argv[3], tolerance)
        print(json.dumps(result))

    elif command == 'validate':
        if len(sys.argv) < 3:
            print(json.dumps({'error': 'No image path specified'}))
            sys.exit(1)
        result = validate_image(sys.argv[2])
        print(json.dumps(result))

    else:
        print(json.dumps({'error': f'Unknown command: {command}'}))
        sys.exit(1)


if __name__ == '__main__':
    main()
```

## Step 6: Create Package Configuration
Create a package.json file:
{ "name": "digiface", "version": "1.0.0", "description": "Advanced facial recognition platform", "main": "dist/index.js", "scripts": { "build": "tsc", "start": "node dist/index.js", "dev": "ts-node src/index.ts", "test": "jest" }, "dependencies": { "express": "^4.18.2", "cors": "^2.8.5", "helmet": "^7.1.0", "morgan": "^1.10.0", "pg": "^8.11.3", "dotenv": "^16.3.1", "multer": "^1.4.5-lts.1", "bcryptjs": "^2.4.3", "jsonwebtoken": "^9.0.2", "sharp": "^0.33.0", "uuid": "^9.0.1", "date-fns": "^3.0.6", "joi": "^17.11.0" }, "devDependencies": { "@types/express": "^4.17.21", "@types/cors": "^2.8.17", "@types/morgan": "^1.9.9", "@types/pg": "^8.10.9", "@types/multer": "^1.4.11", "@types/bcryptjs": "^2.4.6", "@types/jsonwebtoken": "^9.0.5", "@types/uuid": "^9.0.7", "@types/node": "^20.10.6", "typescript": "^5.3.3", "ts-node": "^10.9.2" }, "engines": { "node": ">=20.0.0", "npm": ">=10.0.0" }}Step 7: Create TypeScript Configuration
Create a tsconfig.json file:
{ "compilerOptions": { "target": "ES2022", "module": "commonjs", "lib": ["ES2022"], "outDir": "./dist", "rootDir": "./src", "strict": true, "esModuleInterop": true, "skipLibCheck": true, "forceConsistentCasingInFileNames": true, "resolveJsonModule": true, "moduleResolution": "node", "declaration": true, "declarationMap": true, "sourceMap": true, "removeComments": true, "noUnusedLocals": true, "noUnusedParameters": true, "noImplicitReturns": true, "noFallthroughCasesInSwitch": true }, "include": ["src/**/*"], "exclude": ["node_modules", "dist", "**/*.test.ts"]}Step 8: Create Environment Configuration
Create a .env.example file with the necessary environment variables:
```bash
# Server Configuration
NODE_ENV=production
PORT=3000

# Database Configuration
DB_HOST=your-database-host
DB_PORT=5432
DB_NAME=digiface
DB_USER=postgres
DB_PASSWORD=your-secure-password

# Face Recognition Configuration
FACE_RECOGNITION_TOLERANCE=0.6
MIN_FACE_SIZE=100
MAX_FACES_PER_IMAGE=10
LIVENESS_DETECTION_ENABLED=true

# Storage Configuration
UPLOAD_DIR=/app/uploads
MODEL_DIR=/app/models
MAX_UPLOAD_SIZE=10485760
ALLOWED_IMAGE_TYPES=image/jpeg,image/png,image/jpg

# Security Configuration
JWT_SECRET=your-jwt-secret-key-change-this
JWT_EXPIRES_IN=24h
API_RATE_LIMIT=100

# CORS Configuration
CORS_ORIGIN=*

# Attendance Configuration
AUTO_CHECKOUT_HOURS=12
ATTENDANCE_TIMEZONE=UTC

# Notification Configuration
WEBHOOK_URL=
WEBHOOK_SECRET=
ENABLE_NOTIFICATIONS=false

# Performance Configuration
MAX_WORKERS=4
CACHE_ENABLED=true
CACHE_TTL=3600

# Logging Configuration
LOG_LEVEL=info
LOG_TO_FILE=true
LOG_DIR=/app/logs
```
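`JWT_SECRET` must be long and unguessable. One simple way to generate a suitable value (an example, not the only option) is:

```bash
# Generate a random 32-byte, base64-encoded secret
openssl rand -base64 32
```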
## Step 9: Push to GitHub

Initialize a Git repository and push your code:
```bash
git init
git add .
git commit -m "Initial Digiface deployment setup"
git branch -M main
git remote add origin https://github.com/yourusername/digiface.git
git push -u origin main
```

## Step 10: Deploy PostgreSQL Database on Klutch.sh
1. Navigate to your Klutch.sh dashboard

2. Click **Create New App** and select your GitHub repository

3. Configure the database deployment:
   - **Name**: `digiface-db`
   - **Source**: Select `postgres:15-alpine` from Docker Hub
   - **Traffic Type**: Select **TCP** (PostgreSQL requires TCP traffic)
   - **Internal Port**: `5432` (default PostgreSQL port)
   - **External Port**: Your database will be accessible on port `8000`

4. Add a **Persistent Volume** for database storage:
   - **Mount Path**: `/var/lib/postgresql/data`
   - **Size**: `20GB` (adjust based on expected data volume)

5. Set environment variables for PostgreSQL:
   - `POSTGRES_DB`: `digiface`
   - `POSTGRES_USER`: `postgres`
   - `POSTGRES_PASSWORD`: `your-secure-password` (use a strong password)

6. Click **Deploy** and wait for the database to be ready

7. Note your database connection details (you can verify them with the `psql` check below):
   - **Host**: `your-database-app.klutch.sh`
   - **Port**: `8000` (external access)
   - **Database**: `digiface`
   - **User**: `postgres`
   - **Password**: Your configured password
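Before deploying the application, you can confirm the database is reachable with `psql` (assuming it is installed locally; you will be prompted for the password):

```bash
psql -h your-database-app.klutch.sh -p 8000 -U postgres -d digiface -c 'SELECT version();'
```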
## Step 11: Deploy Digiface Application on Klutch.sh
1. Return to your Klutch.sh dashboard

2. Click **Create New App** and select your Digiface GitHub repository

3. Configure the application deployment:
   - **Name**: `digiface-app`
   - **Traffic Type**: Select **HTTP** (web application)
   - **Internal Port**: `3000` (Digiface runs on port 3000)

4. Add **Persistent Volumes** for face data and models:

   **Volume 1 - Face Images:**
   - **Mount Path**: `/app/uploads`
   - **Size**: `50GB` (stores enrolled face photos and recognition images)

   **Volume 2 - Recognition Models:**
   - **Mount Path**: `/app/models`
   - **Size**: `5GB` (stores pre-trained face recognition models)

   **Volume 3 - Application Logs:**
   - **Mount Path**: `/app/logs`
   - **Size**: `10GB` (stores recognition events and audit logs)

5. Configure environment variables:
   - `NODE_ENV`: `production`
   - `PORT`: `3000`
   - `DB_HOST`: `your-database-app.klutch.sh` (from Step 10)
   - `DB_PORT`: `8000`
   - `DB_NAME`: `digiface`
   - `DB_USER`: `postgres`
   - `DB_PASSWORD`: Your database password
   - `JWT_SECRET`: Generate a secure random string (see Step 8)
   - `FACE_RECOGNITION_TOLERANCE`: `0.6`
   - `MIN_FACE_SIZE`: `100`
   - `CORS_ORIGIN`: `*` (or your specific domain)

6. Click **Deploy**

7. Klutch.sh will automatically:
   - Detect your Dockerfile in the repository root
   - Build the Docker image with Node.js and Python dependencies
   - Download face recognition models during the build
   - Deploy your application with the configured volumes
   - Assign a URL like `digiface-app.klutch.sh`

8. Monitor the build logs to ensure the face recognition models are downloaded successfully

9. Once deployed, verify the deployment by visiting `https://digiface-app.klutch.sh/health` (see the example request below)
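A healthy instance returns the JSON produced by the `/health` handler from Step 2; for example:

```bash
curl https://digiface-app.klutch.sh/health
# Expected shape (values will differ):
# {"status":"healthy","timestamp":"2025-12-15T10:30:00.000Z","uptime":123.45,"database":"connected"}
```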
## Step 12: Initial Setup and Configuration
After deployment, perform the initial setup:
1. **Create Admin User**: Use the API to create your first admin user:

   ```bash
   curl -X POST https://digiface-app.klutch.sh/api/users/register \
     -H "Content-Type: application/json" \
     -d '{
       "username": "admin",
       "email": "admin@example.com",
       "full_name": "System Administrator",
       "password": "secure-password",
       "role": "admin"
     }'
   ```

2. **Verify Face Recognition**: Test the face detection endpoint:

   ```bash
   curl -X POST https://digiface-app.klutch.sh/api/face/detect \
     -H "Authorization: Bearer YOUR_JWT_TOKEN" \
     -F "image=@path/to/test-photo.jpg"
   ```

3. **Enroll Test User**: Register a face for testing:

   ```bash
   curl -X POST https://digiface-app.klutch.sh/api/face/enroll \
     -H "Authorization: Bearer YOUR_JWT_TOKEN" \
     -F "userId=1" \
     -F "image=@path/to/user-photo.jpg"
   ```

4. **Test Face Recognition**: Verify recognition works:

   ```bash
   curl -X POST https://digiface-app.klutch.sh/api/face/recognize \
     -H "Authorization: Bearer YOUR_JWT_TOKEN" \
     -F "image=@path/to/verification-photo.jpg"
   ```

5. **Configure System Settings**: Adjust recognition parameters as needed through the settings API (a sketch follows this list)
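The guide does not define the settings routes, but given the `system_settings` table from Step 3, an update call might look like the following. The `/api/settings` path and payload are illustrative assumptions, not part of the code above:

```bash
# Hypothetical settings update — adjust the route to match your implementation
curl -X PUT https://digiface-app.klutch.sh/api/settings/face_recognition_tolerance \
  -H "Authorization: Bearer YOUR_JWT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"value": 0.55}'
```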
## API Reference

### Face Recognition Endpoints

#### Detect Faces
```
POST /api/face/detect
Content-Type: multipart/form-data
```

Parameters:
- `image`: Image file (JPEG/PNG)

Response:

```json
{
  "faces": [
    {
      "boundingBox": { "top": 100, "right": 300, "bottom": 400, "left": 100 },
      "confidence": 95.5,
      "landmarks": { "left_eye": [[x, y]], ... }
    }
  ],
  "count": 1
}
```

#### Enroll Face
```
POST /api/face/enroll
Authorization: Bearer TOKEN
Content-Type: multipart/form-data
```

Parameters:
- `userId`: User ID
- `image`: Face image file
- `isPrimary`: boolean (optional)

Response:

```json
{
  "success": true,
  "encodingId": 123,
  "qualityScore": 87.5,
  "message": "Face enrolled successfully"
}
```

#### Recognize Face
```
POST /api/face/recognize
Authorization: Bearer TOKEN
Content-Type: multipart/form-data
```

Parameters:
- `image`: Image file to recognize

Response:

```json
{
  "recognized": true,
  "user": {
    "id": 1,
    "name": "John Doe",
    "employeeId": "EMP001"
  },
  "confidence": 92.3,
  "timestamp": "2025-12-15T10:30:00Z"
}
```

### Attendance Endpoints
#### Check In
```
POST /api/attendance/checkin
Authorization: Bearer TOKEN
Content-Type: multipart/form-data
```

Parameters:
- `image`: Face image for recognition
- `location`: Check-in location (optional)
- `deviceId`: Device identifier (optional)

Response:

```json
{
  "success": true,
  "attendanceId": 456,
  "user": { "id": 1, "name": "John Doe" },
  "checkInTime": "2025-12-15T09:00:00Z",
  "confidence": 94.2
}
```

#### Check Out
```
POST /api/attendance/checkout
Authorization: Bearer TOKEN
```

Parameters:
- `attendanceId`: Attendance record ID
- `image`: Face image for verification (optional)

Response:

```json
{
  "success": true,
  "attendanceId": 456,
  "checkOutTime": "2025-12-15T17:30:00Z",
  "hoursWorked": 8.5
}
```

#### Get Attendance Records
```
GET /api/attendance/records?userId=1&startDate=2025-12-01&endDate=2025-12-15
Authorization: Bearer TOKEN
```

Response:

```json
{
  "records": [
    {
      "id": 456,
      "userId": 1,
      "checkInTime": "2025-12-15T09:00:00Z",
      "checkOutTime": "2025-12-15T17:30:00Z",
      "status": "present",
      "hoursWorked": 8.5
    }
  ],
  "totalRecords": 15
}
```

### Analytics Endpoints
#### Get Recognition Stats
```
GET /api/analytics/recognition-stats?period=7d
Authorization: Bearer TOKEN
```

Response:

```json
{
  "totalRecognitions": 1523,
  "successfulRecognitions": 1487,
  "failedRecognitions": 36,
  "averageConfidence": 91.5,
  "uniqueUsers": 45
}
```

## Production Best Practices
### Security Hardening
- **Enable HTTPS**: Always use SSL/TLS in production
- **Strong Authentication**: Implement multi-factor authentication for admin access
- **API Rate Limiting**: Protect endpoints from abuse (see the sketch after this list)
- **Data Encryption**: Encrypt face encodings at rest
- **Access Control**: Implement role-based access control (RBAC)
- **Regular Audits**: Monitor access logs and recognition events
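One way to honor the `API_RATE_LIMIT` variable from Step 8 is the `express-rate-limit` package. A minimal sketch (the package is not in the package.json above, so install it first with `npm install express-rate-limit`):

```typescript
// In src/index.ts, before the API routes are mounted
import rateLimit from 'express-rate-limit';

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,                                // 15-minute window
  max: parseInt(process.env.API_RATE_LIMIT || '100', 10),  // requests per IP per window
  standardHeaders: true,                                   // send RateLimit-* response headers
  legacyHeaders: false,                                    // omit legacy X-RateLimit-* headers
});

// Count every /api request against the limit
app.use('/api', apiLimiter);
```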
### Performance Optimization
- **Image Preprocessing**: Optimize images before processing
- **Caching**: Cache face encodings for faster recognition (a sketch follows this list)
- **Batch Processing**: Process multiple faces in parallel
- **Database Indexing**: Ensure proper indexes on frequently queried columns
- **CDN**: Use a CDN for serving face images
- **Connection Pooling**: Configure database connection pools appropriately
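As a sketch of the caching idea, the snippet below keeps primary face encodings in memory with a TTL taken from `CACHE_TTL`, so `recognizeFace` does not hit the `face_encodings` table on every request. The file name and function names are illustrative, and it assumes encodings were stored as JSON-serialized arrays at enrollment:

```typescript
// src/services/encodingCache.service.ts (sketch)
import { db } from '../index';

interface CachedEncoding {
  userId: number;
  encoding: number[];
}

let cache: CachedEncoding[] | null = null;
let loadedAt = 0;
const TTL_MS = parseInt(process.env.CACHE_TTL || '3600', 10) * 1000;

export async function loadKnownEncodings(): Promise<CachedEncoding[]> {
  if (cache && Date.now() - loadedAt < TTL_MS) return cache;

  // BYTEA columns come back as Buffers; this assumes the enrollment code
  // wrote each encoding as a JSON-serialized number array
  const result = await db.query(
    'SELECT user_id, encoding FROM face_encodings WHERE is_primary = true'
  );
  cache = result.rows.map((row) => ({
    userId: row.user_id,
    encoding: JSON.parse(row.encoding.toString()),
  }));
  loadedAt = Date.now();
  return cache;
}

// Call after enrolling or deleting a face so the next lookup reloads
export function invalidateEncodingCache(): void {
  cache = null;
}
```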
### Data Management
- **Regular Backups**: Automate database and volume backups
- **Data Retention**: Implement policies for purging old recognition events (see the cleanup sketch below)
- **GDPR Compliance**: Provide data export and deletion capabilities
- **Audit Logging**: Maintain comprehensive logs of all recognition events
- **Face Quality**: Implement quality checks before enrollment
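As a sketch of a retention policy, a periodic job could prune `recognition_events` older than a configurable window (the 90-day default here is an example value, not from the original guide):

```typescript
// src/services/retention.service.ts (sketch)
import { db } from '../index';

export async function pruneOldRecognitionEvents(retentionDays = 90): Promise<number> {
  const result = await db.query(
    `DELETE FROM recognition_events
     WHERE created_at < NOW() - make_interval(days => $1)`,
    [retentionDays]
  );
  return result.rowCount ?? 0;
}

// Run once a day; wire this into your scheduler of choice
setInterval(() => {
  pruneOldRecognitionEvents().catch((err) =>
    console.error('Retention job failed:', err)
  );
}, 24 * 60 * 60 * 1000);
```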
### Monitoring and Alerting

Set up monitoring for the following (a sample failure-rate query is sketched after the list):
- Recognition accuracy and confidence scores
- API response times and error rates
- Database performance and connection pool status
- Storage usage for face images and models
- Failed recognition attempts and potential security issues
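As one concrete example, the recognition failure rate over a recent window can be computed straight from the `recognition_events` table defined in Step 3:

```typescript
// src/services/monitoring.service.ts (sketch)
import { db } from '../index';

// Fraction of recognition events in the last N hours that failed
export async function recognitionFailureRate(hours = 24): Promise<number> {
  const result = await db.query(
    `SELECT
       COUNT(*) FILTER (WHERE recognized = false)::float
         / GREATEST(COUNT(*), 1) AS failure_rate
     FROM recognition_events
     WHERE created_at > NOW() - make_interval(hours => $1)`,
    [hours]
  );
  return result.rows[0].failure_rate;
}
```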
## Troubleshooting
### Face Not Recognized

**Symptoms**: Recognition fails with low confidence or no match

**Solutions**:
- Check image quality using the validation endpoint
- Ensure good lighting conditions when capturing images
- Verify face encodings are properly stored in the database
- Adjust the `FACE_RECOGNITION_TOLERANCE` setting (lower = stricter)
- Re-enroll the user with higher quality images
- Check for multiple faces in the image
### Slow Recognition Performance

**Symptoms**: Recognition takes several seconds

**Solutions**:
- Reduce number of enrolled users per batch comparison
- Implement face encoding caching
- Optimize database queries with proper indexing
- Consider GPU acceleration for large-scale deployments
- Use image preprocessing to reduce file sizes
- Implement parallel processing for multiple recognition requests
### Database Connection Issues

**Symptoms**: Connection errors or timeouts

**Solutions**:
- Verify database credentials in environment variables
- Check that the database service is running (host `your-database-app.klutch.sh`)
- Ensure database port `8000` is accessible
- Verify the persistent volume is properly mounted
- Check database connection pool settings
- Review database logs for errors
### Model Loading Failures

**Symptoms**: Application fails to start with model errors

**Solutions**:
- Verify model files were downloaded during build
- Check that the `/app/models` volume has sufficient space
- Re-deploy to trigger a fresh model download
- Verify model file permissions
- Check build logs for download errors
### High Memory Usage

**Symptoms**: Application crashes or becomes unresponsive

**Solutions**:
- Reduce the `MAX_FACES_PER_IMAGE` setting
- Implement image size limits
- Tune Node.js garbage collection and heap limits (see the example below)
- Optimize face encoding storage
- Consider increasing container resources
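One way to keep the V8 heap inside the container's memory limit is to cap it explicitly; the 2048 MB value below is illustrative and should match your container size:

```bash
# Cap the Node.js heap at 2 GB via NODE_OPTIONS
NODE_OPTIONS=--max-old-space-size=2048 node dist/index.js
```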
### Upload Failures

**Symptoms**: Image uploads fail or time out

**Solutions**:
- Check the `MAX_UPLOAD_SIZE` setting
- Verify the `/app/uploads` volume has free space
- Ensure `ALLOWED_IMAGE_TYPES` includes your format
- Check file permissions on the upload directory
- Verify the multer middleware configuration
## Scaling Your Digiface Deployment
As your facial recognition needs grow, consider:
- **Horizontal Scaling**: Deploy multiple instances behind a load balancer
- **Database Optimization**: Use PostgreSQL read replicas for analytics queries
- **Separate Processing Workers**: Offload recognition to dedicated worker services
- **GPU Acceleration**: Add GPU support for faster inference
- **Redis Caching**: Cache face encodings and recognition results
- **Message Queue**: Use RabbitMQ or Redis for async recognition jobs
- **Microservices**: Split attendance, recognition, and analytics into separate services
## Conclusion
You’ve successfully deployed Digiface on Klutch.sh! Your facial recognition platform is now ready to handle face detection, recognition, and attendance tracking with enterprise-grade reliability.
For more deployment guides and platform documentation, visit the Klutch.sh documentation.