Introduction: The Invisible Bottleneck in Server Performance
You’ve optimized CPU usage. RAM looks stable. Your bandwidth is nowhere near the threshold. Still, your application crashes, logs stop writing, or MySQL won’t restart. Sound familiar? You may be hitting a server limit that’s often overlooked: File Descriptor (FD) exhaustion.
This small but critical value governs how many files, sockets, and other I/O resources your Linux system can keep open at any given time. In a modern hosting environment, especially one running a mix of web servers, databases, caching layers, cron jobs, and containers, file descriptor usage can skyrocket without warning.
At SupportSages, we’ve helped numerous clients prevent or recover from such unexpected failures by proactively tuning system-level limits. In this blog, we’ll explore what file descriptors are, why they matter, and how we help you stay ahead of this silent killer.
What Are File Descriptors and Why Do They Matter?
A file descriptor is a reference to an open file, socket, or pipe in a Linux system. Every process uses FDs to interact with resources:
- Web servers like Apache or LiteSpeed use them for handling multiple HTTP requests.
- MySQL and PHP-FPM use them to access logs, configuration files, and user sockets.
- Backup services and logging daemons can consume hundreds of descriptors over time.
The problem? Linux sets a default limit, often as low as 1024 or 4096 per process. Once a process exceeds this limit, it can no longer open files or sockets, leading to errors, crashes, or service downtime.
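You can see these per-process limits directly, without any extra tooling. As a quick sketch, the kernel exposes them under `/proc`, and the shell’s `ulimit` builtin reads the same values:

```shell
# Show the soft and hard "open files" limits for the current shell
grep "Max open files" /proc/self/limits

# The same values via the shell builtin
ulimit -Sn   # soft limit (what the process hits first)
ulimit -Hn   # hard limit (ceiling a non-root process may raise to)
```

The soft limit is what triggers "Too many open files" (EMFILE) errors; a process may raise itself up to the hard limit, but no further without root.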
Real-World Impact: What Can Go Wrong?
Let’s say you’re running a server during a traffic spike. PHP-FPM opens more sockets. Logs fill up. Backup services kick in. Suddenly, the server becomes unresponsive. You check CPU and memory; nothing is out of place. The real issue? File descriptor limits maxed out silently.
At SupportSages, we’ve helped prevent such outages by integrating monitoring tools like Netdata and Zabbix that actively watch FD usage across services.
Ideal Open File Descriptor Limits
There’s no universal number; it depends on your workloads. But for hosting providers, here are safe baseline recommendations:
| Use Case | Recommended Limit |
|---|---|
| Basic Shared Hosting Server | 65,535 |
| High-Traffic Web Server | 131,072+ |
| MySQL/MariaDB Server | 100,000–250,000 |
| cPanel/CyberPanel Node | 100,000+ |
| VPS Host Node (KVM/OpenVZ) | 500,000+ |
You should also adjust the kernel-wide maximum via:
```
fs.file-max = 2097152
```
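Before raising the kernel-wide maximum, it’s worth checking where you stand. The kernel reports both the ceiling and current allocation under `/proc`:

```shell
# Current kernel-wide ceiling
cat /proc/sys/fs/file-max

# Usage snapshot, three fields:
#   <allocated> <allocated-but-unused> <maximum>
cat /proc/sys/fs/file-nr
```

If the first field of `file-nr` is approaching `file-max`, the whole system, not just one process, is close to exhaustion.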
How Are Ideal Open File Descriptor Limits Determined?
There is no strict universal number for file descriptor limits because the ideal value depends heavily on the server’s purpose and workload. Hosting environments vary widely, but experienced sysadmins generally size limits based on:
| Server Role | Estimation Logic | Typical Range |
|---|---|---|
| Web Server (Apache, Nginx) | 2–3 file descriptors per active connection | 65,000 – 150,000 |
| Database Server (MySQL/MariaDB) | 2–10 descriptors per database session and table | 100,000 – 250,000 |
| cPanel/CyberPanel Hosting Node | One descriptor per open connection, mail, cron, logs, etc. | 100,000+ |
| VPS Hypervisor (KVM/OpenVZ) | Sum of guest VMs' needs + safety margin | 500,000+ |
| Mail Servers (Postfix, Dovecot) | 1–2 per active mail connection/session | 65,000+ |
Modern Linux kernels allow millions of open descriptors system-wide without performance penalties, so it's generally safe to provision higher values proactively.
Rule of thumb:
- High concurrency = higher limits needed.
- Low concurrency = defaults might survive, but are risky under sudden traffic spikes.
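The estimation logic above can be turned into quick shell arithmetic. The connection count, per-connection FD cost, and safety factor below are illustrative assumptions, not measurements, so plug in your own numbers:

```shell
# Back-of-envelope FD sizing (illustrative values)
peak_connections=20000   # expected concurrent connections at peak
fds_per_conn=3           # descriptors per connection (client + upstream + log)
safety_factor=2          # headroom for logs, cron jobs, backups

limit=$(( peak_connections * fds_per_conn * safety_factor ))
echo "Suggested nofile limit: $limit"   # 120000 for these inputs
```

Round the result up to the nearest common value (65,535, 131,072, 262,144) so your configs stay recognizable.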
How to Check Current Limits
Per-user or per-process limit:
```
ulimit -n
```
System-wide settings:
```
cat /proc/sys/fs/file-max
```
Check current usage:
```
lsof | wc -l
```
You can also track usage per process:
```
lsof -u www-data
```
Or monitor over time with:
```
watch -n 2 'lsof | wc -l'
```
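Note that `lsof | wc -l` overstates usage, because `lsof` prints one line per FD per thread. For a more accurate per-process count, you can read `/proc` directly; this loop is a sketch that ranks processes by open descriptors (counts show as 0 for processes you lack permission to inspect):

```shell
# Count open FDs per process straight from /proc
for p in /proc/[0-9]*; do
    n=$(ls "$p/fd" 2>/dev/null | wc -l)          # one entry per open FD
    echo "$n ${p##*/} $(cat "$p/comm" 2>/dev/null)"  # count, PID, process name
done | sort -rn | head
```

Run it as root to see all processes; the top entries are your FD-heavy candidates for tuning.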
How to Increase File Descriptor Limits
1. Temporary Change (for current session):
```
ulimit -n 65535
```
This resets after logout or reboot.
2. Persistent Change (per user):
Edit /etc/security/limits.conf:
```
www-data soft nofile 65535
www-data hard nofile 65535
```
Then ensure PAM limits are enabled in /etc/pam.d/common-session:
```
session required pam_limits.so
```
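A change in limits.conf only applies to new sessions, so it’s worth verifying rather than assuming. As a sketch (using the `www-data` example user from above; the `sudo` check requires root):

```shell
# Confirm pam_limits is actually wired into the session stack
grep pam_limits /etc/pam.d/common-session

# Verify the limit from a fresh session for the target user
sudo -u www-data sh -c 'ulimit -n'
```

If the second command still prints the old value, the service is likely started by systemd, which bypasses PAM limits entirely; see the service-specific settings below.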
3. System-wide Kernel Limit:
Edit /etc/sysctl.conf:
```
fs.file-max = 2097152
```
Then apply:
```
sysctl -p
```
You can also set it temporarily (until reboot) via:
```
echo 2097152 > /proc/sys/fs/file-max
```
4. Service-specific Settings:
For systemd services (e.g., NGINX, MySQL, PHP-FPM), create or edit the service unit:
```
sudo systemctl edit nginx
```
Then add:
```
[Service]
LimitNOFILE=65535
```
Reload and restart:
```
systemctl daemon-reload
systemctl restart nginx
```
Repeat for any service prone to heavy I/O: MariaDB, Redis, Elasticsearch, etc.
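After restarting, confirm the limit actually reached the running process rather than just the unit file. A quick sketch, again using nginx as the example service:

```shell
# What systemd will apply to the unit
systemctl show nginx --property=LimitNOFILE

# What the running process actually got
grep "Max open files" /proc/"$(pidof -s nginx)"/limits
```

The `/proc` check is the authoritative one: it reads the limits of the live process, so a mismatch means the service was not restarted after the override.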
Our Solution: Proactive Detection and Tuning
We offer tailored Server Management and Monitoring Services that include:
- Identifying FD-heavy applications.
- Tuning kernel parameters and systemd limits.
- Integrating alerts for FD usage per user or process.
- Preventing cascading failures from unnoticed resource exhaustion.
We also incorporate this tuning into our DevOps as a Service offering, especially for high-load or production-critical environments.
Common Services Affected by FD Limits
- PHP-FPM: Too many open sockets or files = 502/504 gateway errors.
- MySQL: Sudden crashes or failed restarts due to too many open tables or connections.
- rsyslog/logrotate: Logging halts silently, making troubleshooting harder.
- LiteSpeed/OpenLiteSpeed: High concurrency setups require generous FD settings.
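Tools like Netdata and Zabbix can watch FD usage for you, but even a small cron-driven script catches system-wide exhaustion early. This is a minimal sketch; the 80% threshold is an arbitrary example, and the `echo` stands in for whatever alerting channel you use:

```shell
#!/bin/sh
# Alert when system-wide FD usage crosses a threshold (sketch)
threshold=80   # percent of fs.file-max; illustrative value

# /proc/sys/fs/file-nr fields: allocated, allocated-but-unused, maximum
read allocated _unused max < /proc/sys/fs/file-nr
pct=$(( allocated * 100 / max ))

if [ "$pct" -ge "$threshold" ]; then
    echo "WARNING: FDs at ${pct}% of fs.file-max ($allocated/$max)"
fi
```

Dropped into cron every few minutes, this surfaces the "silent" exhaustion described above before services start failing.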
We also include FD tuning as part of our Performance Optimization Services, ensuring your stack is ready for unexpected load.
Final Thoughts: Don't Wait for an Outage to Take Action
You don’t have to wait for logs to stop or for a panic restart to realize you’re out of file descriptors. Let SupportSages proactively monitor, manage, and tune your systems so these hidden limits never catch you off guard.
Talk to our engineers to learn more about our server health audits and infrastructure optimization strategies.
Let us handle your infrastructure while you focus on growing your business.