A primary consideration when deploying any host is how the system is controlled and managed. Modern systems are composed of multiple layers, and each layer must be designed with clear ownership, responsibility, and failure behavior.
Some components are treated as independent systems, while others are designed to operate as part of a larger system. This guide describes those boundaries and the design decisions that follow from them.
Hardware and Virtualization Layer
The system begins with physical hardware running Proxmox as the virtualization platform.
Proxmox is responsible for hardware abstraction and virtual machine lifecycle management.
The project workload runs inside a virtual machine.
The virtual machine uses Debian Linux as its operating system.
This separation allows hardware concerns to be isolated from application and operating system concerns.
Operating System Layer
The operating system is Debian, using systemd for service management.
At this layer, we define and configure:
Core operating system components
Service management and startup behavior
Logging and failure handling
This layer provides the foundation for all higher-level systems.
Application and Service Stack
On top of the operating system, the following components are deployed:
Database system: PostgreSQL
Application framework: Django
Networking and reverse proxy: NGINX
Each component introduces its own runtime environment:
Operating system environment (Debian + systemd)
Database environment (PostgreSQL)
Application environment (Django)
These environments must be clearly separated and managed to avoid unintended coupling.
Network and Bastion Design (NGINX)
NGINX is configured as a bastion host. Its design goal is to be the component least susceptible to unacceptable failure conditions.
Key design principles:
Expose the minimum required surface area
Enforce strict access controls
Fail securely when behavior deviates from expected norms
If abnormal or non-conformant behavior is detected, the system must default to a secure failure state.
Information Planes
The system is designed around three logical “planes” of information:
Control Plane: System configuration and orchestration
Administrative Plane: Monitoring, logging, and maintenance access
User Plane: Application access and user-facing functionality
Where these planes intersect, explicit control mechanisms must be defined. These intersections are where misconfigurations and privilege escalation most often occur.
Control relationships are documented using Mermaid diagrams embedded in Markdown, with arrows indicating direction and authority of control.
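For instance, a minimal control-plane diagram for the layers described in this guide might be sketched as follows (labels are illustrative):

```mermaid
graph TD
    Proxmox -->|manages VM lifecycle| Debian[Debian VM]
    Debian -->|boots| systemd
    systemd -->|supervises| NGINX
    systemd -->|supervises| PostgreSQL
    systemd -->|supervises| Django[Django application]
    NGINX -->|mediates user-plane traffic| Django
```

Arrow direction indicates which component holds authority over the other.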
System Provisioning
System provisioning follows a deterministic process:
Install system components using `apt`
Update the Bash profile as required
Execute deployment and configuration scripts
Each step should be repeatable and auditable.
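The three steps above can be sketched as a script that prints an auditable plan before anything runs. This is a minimal sketch, not the project's actual provisioning tooling: the package names, profile path, and script names are assumptions.

```shell
#!/usr/bin/env bash
# Provisioning sketch: declare the steps, print them for the audit trail,
# then (optionally) execute them in order. All specifics are illustrative.
set -euo pipefail

STEPS=(
  "apt-get update && apt-get install -y nginx postgresql python3-venv"  # 1. install components via apt
  "install -m 0644 ./bash_profile /root/.profile"                       # 2. update the Bash profile
  "./deploy.sh && ./configure.sh"                                       # 3. deployment and configuration scripts
)

# Print the plan first so every run leaves an auditable record.
plan() {
  local i=1 s
  for s in "${STEPS[@]}"; do
    printf 'step %d: %s\n' "$i" "$s"
    i=$((i + 1))
  done
}

plan
# To execute for real (requires root):
#   for s in "${STEPS[@]}"; do bash -c "$s"; done
```

Because the step list is data, re-running the script yields the same ordered plan, which is what makes the process repeatable and auditable.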
Storage and Filesystem Design
Disk partitioning and filesystem configuration are critical design considerations.
Filesystem Requirements
Mount points must be correctly defined and consistently applied.
The filesystem in use must be the intended one (Debian defaults to `ext4`).
Mount options must align with security and performance goals:
Recommended options:
`noatime` or `relatime` – reduce disk writes from file access time updates
`noexec` – prevent execution of binaries
`nodev` – disallow device files
`nosuid` – disable setuid/setgid bits
Note: While these options improve security on web servers, some Unix tools may assume default behaviors. Validate compatibility before deployment.
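A matching `/etc/fstab` entry might look like the following; the device name and mount point are illustrative, not part of the actual layout:

```
# /etc/fstab – illustrative entry for a web content mount
/dev/sdb1  /var/www  ext4  defaults,noatime,nodev,nosuid,noexec  0  2
```

As the note above says, validate that nothing under the mount needs to execute binaries before enabling `noexec`.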
Web Application File Permissions
External access to the application is performed through the www-data user.
Requirements:
`www-data` must have read access to application scripts and static assets
Application files are typically located in `/var/www`
Example commands:
```shell
chown -R www-data:www-data /var/www   # Set owner and group
chmod -R a-w /var/www                 # Remove write permissions for all users
```
This ensures the web server can read content but cannot modify it.
Database Layer Considerations
The database layer uses PostgreSQL, introducing an additional logical boundary.
This results in three distinct operational environments:
Operating system environment
PostgreSQL environment
Django application environment
Each environment must:
Enforce least privilege
Log failures independently
Fail in a controlled and diagnosable manner
Failure and Observability
Failures in any environment must provide:
Sufficient logging for diagnosis
Clear separation of responsibility
A defined path for administrative investigation and remediation
The system should fail securely, not silently, and always leave an audit trail for administrators.
Design Outcomes
This architecture is designed to achieve:
Deterministic startup and shutdown behavior
Clear control-plane ownership
Reduced attack surface through socket-based IPC
Predictable failure modes
Auditable, layered observability
NGINX is designed as the network-facing policy boundary. It assumes responsibility for:
Client connection handling
Protocol normalization
Request validation and limits
Rate limiting and abuse mitigation
The application itself is never exposed directly to the network.
Security and Information Exposure Controls
NGINX configuration explicitly minimizes information disclosure:
Server version suppression
Header sanitization
Conservative timeouts to prevent resource exhaustion
These controls are intentionally placed at the ingress layer, where enforcement is centralized and consistent.
Observability and Telemetry Design
Logging is treated as a first-class design concern:
Structured access logs include request timing and upstream metadata
Error logging is scoped to actionable severity levels
Request identifiers propagate through the request lifecycle
This enables correlation between:
Client requests
NGINX behavior
Gunicorn response timing
Application-level failures
Rate Limiting and Resource Protection
NGINX enforces coarse-grained protection mechanisms:
Per-IP request rate limits
Connection limits
These controls act as early rejection mechanisms, preserving application capacity for legitimate traffic.
Configuration Boundary Management
Configuration responsibilities are intentionally separated:
systemd units define process behavior
Gunicorn configuration defines application runtime characteristics
NGINX configuration defines network policy and traffic shaping
This separation reduces blast radius when changes are required and supports independent evolution of each layer.
Runtime Architecture Overview
The application runtime is composed of three primary execution layers:
Process supervision and lifecycle management — systemd
Application execution environment — Gunicorn (WSGI server)
Network ingress and request mediation — NGINX
Each layer is independently managed and communicates through explicit, minimal interfaces, primarily UNIX domain sockets.
systemd as the Control Plane
systemd functions as the control plane for the application runtime. It owns:
Process startup and shutdown semantics
Restart and reload behavior
Resource isolation boundaries
Socket lifecycle (when socket activation is enabled)
Gunicorn is not treated as a long-running, self-managed daemon; instead, it is subordinate to systemd, which enforces deterministic behavior during boot, reload, and failure conditions.
Gunicorn Service Design
Execution Context
The Gunicorn service defines a constrained execution context:
User / Group Separation
The process runs under a defined user and group.
Group membership (
www-data) is aligned with NGINX to enable controlled socket access.
Working Directory
The service is explicitly rooted at the application directory, eliminating reliance on implicit paths.
Virtual Environment Binding
Gunicorn is executed from a pinned Python virtual environment, ensuring dependency immutability.
This design prevents environmental drift and reduces ambiguity in dependency resolution.
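The execution context above can be expressed as a systemd unit. This is a sketch under stated assumptions: the service user, application directory, and virtual environment path are illustrative; only the `www-data` group, the socket path, and `PrivateTmp` come from this guide.

```ini
# gunicorn.service sketch – names and paths are assumptions
[Unit]
Description=Gunicorn WSGI server
After=network.target

[Service]
User=app
Group=www-data                       # aligned with NGINX for socket access
WorkingDirectory=/srv/app            # explicit root, no implicit paths
ExecStart=/srv/app/venv/bin/gunicorn \
    --workers 3 \
    --bind unix:/run/gunicorn.sock \
    app.wsgi:application             # pinned virtual environment binary
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```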
Process Model and Concurrency
Gunicorn uses a pre-fork worker model, with worker count defined explicitly. This choice:
Avoids thread-safety assumptions in Django
Provides predictable CPU and memory scaling characteristics
Allows backpressure to propagate naturally to NGINX under load
Worker tuning is considered an operational concern but is intentionally exposed in configuration to make capacity planning explicit.
Inter-Process Communication
Gunicorn binds to a UNIX domain socket rather than a TCP port:
Reduces attack surface
Eliminates unnecessary network stack traversal
Enables filesystem-based access control
The socket path (/run/gunicorn.sock) exists in a volatile runtime filesystem, reinforcing the expectation that the service is recreated cleanly on reboot.
Socket Activation Design
When enabled, systemd socket activation introduces a decoupled startup model:
systemd owns the socket
Gunicorn is started on-demand when traffic arrives
NGINX becomes the effective trigger for application startup
This design improves:
Boot-time determinism
Failure recovery behavior
Resource utilization during idle periods
The socket becomes a stable contract, while the service remains ephemeral.
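Under socket activation, the socket contract lives in its own unit. A minimal sketch, assuming the socket path from above and `www-data` group ownership:

```ini
# gunicorn.socket sketch – systemd owns the socket; the service stays ephemeral
[Unit]
Description=Gunicorn socket

[Socket]
ListenStream=/run/gunicorn.sock
SocketUser=www-data
SocketMode=0660

[Install]
WantedBy=sockets.target
```

With this in place, the first connection proxied by NGINX triggers the start of the matching service unit.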
Lifecycle and Failure Semantics
The service unit explicitly defines lifecycle behavior:
Graceful reloads via signal-based worker recycling
Bounded shutdown time to avoid hung processes
Mixed kill mode to ensure workers terminate correctly
Temporary filesystem isolation (PrivateTmp) further limits unintended state leakage between service restarts.
Failures are expected to:
Be visible to systemd
Emit structured logs
Leave sufficient state for post-failure diagnosis
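The lifecycle behavior above maps to a handful of unit settings. Values here are illustrative, not the project's actual configuration:

```ini
# Lifecycle sketch for the Gunicorn service unit
[Service]
ExecReload=/bin/kill -s HUP $MAINPID   # graceful worker recycling
KillMode=mixed                         # ensure workers terminate correctly
TimeoutStopSec=10                      # bounded shutdown time
PrivateTmp=true                        # no state leakage between restarts
Restart=on-failure                     # failures stay visible to systemd
```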
1. Operating System Baseline
NGINX security begins before NGINX is installed. The OS establishes the trust boundary and enforcement mechanisms NGINX depends on.
Mandatory Design Requirements
Minimal OS footprint
Patch fully
Install only required packages
Enable unattended security updates
Dedicated service identity
Create `nginx:nginx` system user
No shell, no home directory
Prevents lateral movement if compromised
System-wide umask: `027`
Enforces least-privilege file creation (CIS guidance)
Correct time
Chrony or NTP enabled
Accurate timestamps are mandatory for incident response
Host firewall
Allow only:
TCP 80/443
TCP 22 from admin IP ranges
File integrity monitoring
AIDE (or equivalent)
Monitor:
`/etc/nginx/**`
TLS certificates
systemd overrides
Strongly Recommended
Partition isolation
Separate mounts for:
`/var/log`
`/var/www`
`/var/cache/nginx`
Mandatory Access Control
SELinux (Enforcing) or AppArmor
Treat as non-optional in regulated environments
2. Installation and File Permission Model
This layer defines static trust boundaries: who owns configuration, who can read content, and who can write runtime state.
Mandatory Controls
Trusted source
Install NGINX from vendor or pinned trusted repository
Version pinning prevents silent behavior changes
Strict ownership and permissions
| Path | Owner | Mode | Rationale |
|---|---|---|---|
| Config directory (`/etc/nginx`) | root:root | 750 | Config readable only by root |
| Config files | root:root | 640 | Prevent accidental disclosure |
| Log directory | root:adm or root:nginx | 750 | Logs readable, not writable |
| Logs | root:adm | 640 | Prevent tampering |
| Web roots | deploy user | r-x for nginx | No runtime writes |
Remove defaults
No example sites
No autoindex
No default server blocks
Recommended
Secrets isolation
JWT keys, credentials, upstream secrets:
Store in root-only directories
Load via environment or read-only includes
Never store secrets in web roots
3. systemd Hardening (Service Containment)
systemd is the local control plane for NGINX. The service must assume compromise and restrict blast radius.
Design Goals
Drop privileges early
Prevent filesystem modification
Restrict syscalls and kernel interaction
Limit writable paths explicitly
Hardened Override Design
/etc/systemd/system/nginx.service.d/hardening.conf
Key design choices:
`CapabilityBoundingSet=CAP_NET_BIND_SERVICE` – only capability needed to bind 80/443
`ProtectSystem=strict` – root filesystem becomes read-only
`ReadWritePaths` – explicit allowlist for logs and cache
`NoNewPrivileges=true` – prevent privilege escalation
`SystemCallFilter` – restrict syscall surface area
`RestrictAddressFamilies` – only AF_INET, AF_INET6, AF_UNIX
This transforms NGINX into a contained service, not a general-purpose process.
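Assembled into the override file named above, the choices look roughly like this; the `ReadWritePaths` list and syscall group are illustrative and must match the actual log and cache locations:

```ini
# /etc/systemd/system/nginx.service.d/hardening.conf – sketch
[Service]
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
ProtectSystem=strict
ReadWritePaths=/var/log/nginx /var/cache/nginx
NoNewPrivileges=true
SystemCallFilter=@system-service
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
```

Apply with `systemctl daemon-reload && systemctl restart nginx`, then verify the service still starts before relying on the restrictions.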
4. Core NGINX Security Configuration
This layer defines runtime behavior and protocol enforcement.
Process Model
```nginx
user nginx;
worker_processes auto;
```
Dedicated user
CPU-scaled workers
No root runtime
Information Disclosure Controls
Disable server tokens
Remove server headers
Avoid revealing build or module details
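In configuration terms, version suppression is a one-line directive; removing the `Server` header entirely requires a third-party module, shown here as an assumption:

```nginx
server_tokens off;               # hide version in headers and error pages
# more_clear_headers Server;     # only if ngx_http_headers_more is installed
```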
Logging and Observability
Structured logging includes:
Client IP
Request details
Upstream timing
Request ID
Design intent:
Enable forensic reconstruction
Correlate client behavior with backend latency
Support SIEM ingestion
Resource Abuse Controls
Tight timeouts
Conservative keepalive limits
Explicit request size limits
These prevent:
Slowloris attacks
Connection exhaustion
Memory abuse
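A sketch of these controls in the `http` context; every value is illustrative and should be tuned against real traffic:

```nginx
client_header_timeout 10s;    # slow-header (Slowloris) defense
client_body_timeout   10s;    # slow-body defense
send_timeout          10s;    # stalled-client defense
keepalive_timeout     15s;    # conservative keepalive
client_max_body_size  1m;     # explicit request size limit
```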
Reverse Proxy Defaults
Proxy headers explicitly propagate:
Original host
Client IP
TLS state
This avoids ambiguity inside upstream applications and prevents spoofing.
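A proxy location sketch using the UNIX socket from the runtime design; the header set is the conventional one for host, client IP, and TLS state:

```nginx
location / {
    proxy_pass http://unix:/run/gunicorn.sock;
    proxy_set_header Host              $host;                        # original host
    proxy_set_header X-Real-IP         $remote_addr;                 # client IP
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;   # proxy chain
    proxy_set_header X-Forwarded-Proto $scheme;                      # TLS state
}
```

Because NGINX sets these headers itself, a client-supplied value cannot spoof them past the ingress.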
Rate Limiting
Per-IP request and connection limits act as:
First-layer DoS mitigation
Signal amplification for IDS/WAF tools
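A minimal sketch, assuming illustrative zone names, rates, and limits; the `*_zone` directives belong in the `http` context:

```nginx
# http context: shared-memory zones keyed by client address
limit_req_zone  $binary_remote_addr zone=per_ip:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_per_ip:10m;

server {
    limit_req  zone=per_ip burst=20 nodelay;   # per-IP request rate limit
    limit_conn conn_per_ip 10;                 # per-IP connection limit
}
```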
5. Per-Virtual-Host (vhost) Design
Each vhost is a security boundary.
TLS Design
TLS 1.2 / 1.3 only
Modern cipher suites
Session tickets disabled
OCSP stapling enabled
HSTS enforced after validation
TLS is treated as non-negotiable transport security, not an optimization.
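These TLS decisions translate to a short directive block; the HSTS max-age is illustrative and should only be enabled after validation, as noted above:

```nginx
ssl_protocols TLSv1.2 TLSv1.3;
ssl_session_tickets off;
ssl_stapling on;
ssl_stapling_verify on;            # requires a resolver directive
add_header Strict-Transport-Security "max-age=31536000" always;
```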
HTTP Security Headers
Headers enforce browser-side protections:
MIME sniffing disabled
Clickjacking prevention
Strict referrer handling
Permissions lockdown
CSP as application firewall-in-browser
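A header block sketch; the CSP policy in particular is application-specific and the value shown is only a starting assumption:

```nginx
add_header X-Content-Type-Options nosniff always;                  # MIME sniffing disabled
add_header X-Frame-Options DENY always;                            # clickjacking prevention
add_header Referrer-Policy strict-origin-when-cross-origin always; # strict referrer handling
add_header Permissions-Policy "geolocation=(), camera=()" always;  # permissions lockdown
add_header Content-Security-Policy "default-src 'self'" always;    # firewall-in-browser
```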
Access Controls
Rate limits per vhost
Connection limits
No directory listings
Explicit root and index
Unknown hosts must fail closed (404 / default deny).
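Fail-closed behavior for unknown hosts can be implemented with a catch-all default server; note the TLS listener still needs a certificate (even a self-signed placeholder) to accept the handshake:

```nginx
server {
    listen 80  default_server;
    listen 443 ssl default_server;
    server_name _;          # matches any host not claimed by a vhost
    return 444;             # nginx-specific: close without responding
}
```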
6. Certificates and Key Management
Mandatory Controls
Private keys:
Owned by `root:root`
Mode `600`
Stored in `/etc/ssl/private`
Strong key types:
ECDSA P-256/P-384 preferred
RSA ≥2048 where required
OCSP stapling enabled
Certificate rotation monitored
TLS material is treated as tier-0 secrets.
7. Logging, Rotation, and Monitoring
Design Requirements
Logs must:
Preserve permissions
Rotate predictably
Signal NGINX without restart
USR1-based log rotation avoids dropped connections.
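A logrotate sketch of this pattern; retention and schedule are illustrative, and the PID path must match the actual service configuration:

```
# /etc/logrotate.d/nginx – sketch
/var/log/nginx/*.log {
    weekly
    rotate 12
    compress
    delaycompress
    missingok
    create 0640 root adm          # preserve permissions on new files
    sharedscripts
    postrotate
        # USR1 tells NGINX to reopen log files without a restart
        [ -f /run/nginx.pid ] && kill -USR1 "$(cat /run/nginx.pid)"
    endscript
}
```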
Centralization
Logs must be forwarded to:
SIEM
Central logging platform
NGINX logs are security telemetry, not diagnostics only.
8. Access Control and Authentication
Mandatory
Avoid HTTP Basic Auth where possible
If used:
Rate-limit aggressively
Pair with fail2ban
Restrict admin paths:
IP allowlists
mTLS where feasible
9. Content, Uploads, and Temporary Data
NGINX must not write arbitrarily.
Explicit temp paths
Owned by nginx
`750` permissions
Prefer mount options: `nodev`, `nosuid`, `noexec`
Uploads are:
Size-limited
Type-validated
Treated as untrusted input
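The write-path restrictions above correspond to explicit temp-path directives; the directory layout is illustrative and must sit on the hardened mount:

```nginx
client_body_temp_path /var/cache/nginx/client_temp;   # request bodies / uploads
proxy_temp_path       /var/cache/nginx/proxy_temp;    # buffered upstream responses
client_max_body_size  10m;                            # size-limit uploads
```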
10. Mandatory Access Control (SELinux / AppArmor)
Choose one, enforce it.
SELinux
Enforcing mode
Minimal booleans enabled
Correct file contexts
AppArmor
Enforced nginx profile
Explicit read/write allowances
MAC is a last-line containment mechanism, not optional hardening.
11. Optional but High-Value Controls
WAF (ModSecurity v3 + OWASP CRS)
Bot throttling
mTLS for admin or API paths
Subresource Integrity
Immutable static content
Content-addressed deploys
12. Verification and Continuous Assurance
Manual Validation
Config test and reload
Socket inspection
TLS inspection
Header validation
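Header validation in particular is easy to script. A minimal sketch: the function below checks a captured response header block (e.g. from `curl -sI`) against an illustrative required set; the header list is an assumption, not a complete policy.

```shell
# check_headers: given an HTTP response header block as $1, report which
# required security headers are missing. Returns non-zero if any are absent.
check_headers() {
  local headers=$1 h missing=""
  for h in X-Content-Type-Options X-Frame-Options Strict-Transport-Security; do
    printf '%s\n' "$headers" | grep -qi "^$h:" || missing="$missing $h"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "ok"
}

# Sample captured headers (in practice: curl -sI https://your-host)
sample='HTTP/1.1 200 OK
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
Strict-Transport-Security: max-age=31536000'

check_headers "$sample"   # prints "ok"
```

Run against each vhost after every configuration change; a non-zero exit makes the check usable in CI or a cron-driven assurance job.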
Automated Controls
Weekly CIS/STIG scans
Monthly TLS scans
Quarterly restore testing
Security posture is maintained through recurring verification, not one-time setup.