
MTProxy Optimization: Tuning the Server for High Loads

Complete guide to tuning an MTProxy server: WORKERS, sysctl (including BBR), file descriptor limits, load monitoring, and connection profiling.

The basic startup of MTProxy in Docker from the guide works. But if hundreds or thousands of users connect to your proxy, you will start noticing symptoms: slow media loading, delays, connection drops. This is not an MTProxy bug — it is simply what happens to any server when it reaches system limits.

This article is a complete tuning guide. Apply it as your load grows.

Diagnostics: Identifying the Bottleneck

Before tweaking anything, find the root cause of the problem.

Basic commands for diagnostics:

# Real-time container stats
docker stats mtproxy

# TCP connection state
ss -s

# Number of established connections
ss -tn state established | wc -l

# Number of TIME-WAIT connections (should be < 10000)
ss -tn state time-wait | wc -l

# Error logs
docker logs --tail 100 mtproxy 2>&1 | grep -i error
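The TIME-WAIT check above can be wrapped in a small helper that prints a warning when the count crosses the 10 000 guideline. This is a hypothetical sketch, not part of MTProxy; the threshold is the rule of thumb from the comment above and should be tuned for your server.

```shell
#!/bin/sh
# check_time_wait: warn when TIME-WAIT sockets exceed a threshold.
# (Hypothetical helper; 10000 is the guideline used in this article.)
check_time_wait() {
  tw="$1"
  threshold="${2:-10000}"
  if [ "$tw" -gt "$threshold" ]; then
    echo "WARN: $tw TIME-WAIT sockets (limit $threshold) - consider tcp_tw_reuse / tcp_fin_timeout"
  else
    echo "OK: $tw TIME-WAIT sockets"
  fi
}

# On a live server, feed it the real count:
# check_time_wait "$(ss -tn state time-wait | wc -l)"
```

Run it from the same cron job you use for stats collection if you want automatic alerts.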

Setting WORKERS

WORKERS is the number of MTProxy working processes. The official implementation (C) is highly efficient and handles about 60,000 connections per worker.

Server            Simultaneous users   WORKERS
1 vCPU, 512 MB    up to 1000           1
1 vCPU, 1 GB      up to 3000           1–2
2 vCPU, 2 GB      up to 10 000         2
4+ vCPU, 4+ GB    up to 50 000+        = CPU count
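The sizing table can be expressed as a small helper. This is a sketch of the heuristic above, not an official formula: `pick_workers` is a hypothetical function that picks a worker count from the CPU count and expected users, never exceeding the number of CPUs.

```shell
#!/bin/sh
# pick_workers CPUS USERS: suggest a WORKERS value per the sizing table.
# (Hypothetical helper following this article's heuristic.)
pick_workers() {
  cpus="$1"
  users="$2"
  if [ "$users" -le 1000 ]; then
    want=1
  elif [ "$users" -le 3000 ]; then
    want=2
  else
    want="$cpus"          # at high load, scale to CPU count
  fi
  [ "$want" -gt "$cpus" ] && want="$cpus"   # never more workers than CPUs
  [ "$want" -lt 1 ] && want=1
  echo "$want"
}

# e.g. on a live server:
# pick_workers "$(nproc)" 5000
```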

Changing WORKERS in Docker:

docker stop mtproxy && docker rm mtproxy

docker run -d \
  --name mtproxy \
  --restart always \
  -p 443:443 \
  -e SECRET="YOUR_SECRET" \
  -e WORKERS=4 \
  -e TAG="YOUR_TAG" \
  -v proxy-config:/data \
  telegrammessenger/proxy:latest

Linux Kernel Optimization (sysctl)

MTProxy handles a large number of concurrent TCP connections, and the default Linux settings constrain it. Add the following to /etc/sysctl.conf:

# Number of open file descriptors
fs.file-max = 1000000

# Incoming connections queue size
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 8192

# Managing TIME-WAIT connections
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15

# Local port range
net.ipv4.ip_local_port_range = 1024 65000

# TCP buffers for high traffic
net.core.rmem_max = 134217728
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728

# Congestion control algorithm (BBR, kernel 4.9+)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

Apply changes:

sysctl -p

What each parameter does

tcp_max_syn_backlog = 8192 — The maximum queue of half-open (SYN-received) connections. If this queue overflows under high load, new connection attempts are dropped.

tcp_tw_reuse = 1 — Allows reusing TIME-WAIT sockets for new outgoing connections. Reduces the accumulation of "stuck" connections.

tcp_fin_timeout = 15 — Reduces the wait for connection closure from the default 60 seconds to 15. Frees resources faster.

BBR (Bottleneck Bandwidth and RTT) — Congestion control algorithm by Google. Instead of the reactive, loss-based approach (cutting the rate after a packet loss), BBR builds a model of the path's bandwidth and round-trip time and paces traffic accordingly. It is particularly effective on links with sporadic packet loss — typical for users in restricted regions.

Check if BBR is applied:

sysctl net.ipv4.tcp_congestion_control
# Should return: net.ipv4.tcp_congestion_control = bbr
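Before enabling BBR, it is worth confirming the running kernel actually offers it. A hedged sketch: `has_algo` is a hypothetical helper that checks a space-separated algorithm list, fed from the /proc mirror of net.ipv4.tcp_available_congestion_control.

```shell
#!/bin/sh
# has_algo LIST NAME: true if NAME appears in the space-separated LIST.
# (Hypothetical helper, not part of MTProxy.)
has_algo() {
  case " $1 " in
    *" $2 "*) return 0 ;;
    *) return 1 ;;
  esac
}

# On a live server, read the kernel's list and test for bbr:
avail="$(cat /proc/sys/net/ipv4/tcp_available_congestion_control 2>/dev/null)"
if has_algo "$avail" bbr; then
  echo "BBR is available"
else
  echo "BBR not offered - kernel 4.9+ needed (try: modprobe tcp_bbr)"
fi
```

If BBR is missing on an older kernel, loading the tcp_bbr module or upgrading the kernel is required before the sysctl setting takes effect.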

File Descriptor Limits

Every TCP connection in Linux requires an open file descriptor. By default, the limit is 1024. For thousands of connections, you need to raise it.

System limit (add to /etc/security/limits.conf):

* soft nofile 1000000
* hard nofile 1000000
root soft nofile 1000000
root hard nofile 1000000

For the Docker container, add the flag to the run command:

docker run -d \
  --name mtproxy \
  --restart always \
  -p 443:443 \
  --ulimit nofile=1000000:1000000 \
  -e SECRET="YOUR_SECRET" \
  -e WORKERS=2 \
  -v proxy-config:/data \
  telegrammessenger/proxy:latest

Check the current limit of the process:

docker exec mtproxy cat /proc/1/limits | grep "open files"
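Since every connection consumes a descriptor, it helps to compare the current connection count against the limit before users hit it. A hedged sketch: `fd_headroom` is a hypothetical helper that warns once connections reach roughly 80% of the limit (the headroom margin is an assumption, left for logs and upstream sockets).

```shell
#!/bin/sh
# fd_headroom LIMIT CONNS: warn when CONNS nears the descriptor LIMIT.
# (Hypothetical helper; the 80% margin is this sketch's assumption.)
fd_headroom() {
  limit="$1"
  conns="$2"
  if [ "$conns" -ge $((limit * 8 / 10)) ]; then
    echo "WARN: $conns connections vs descriptor limit $limit"
  else
    echo "OK: $conns/$limit descriptors in use"
  fi
}

# Live usage:
# limit="$(docker exec mtproxy sh -c 'ulimit -n')"
# fd_headroom "$limit" "$(ss -tn state established | wc -l)"
```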

Load Monitoring

For a public proxy, basic monitoring is recommended. The simplest approach is a cron job:

# Every 5 minutes write stats to a file
*/5 * * * * echo "$(date): connections=$(ss -tn state established | wc -l), tw=$(ss -tn state time-wait | wc -l)" >> /var/log/mtproxy-stats.log
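The stats file produced by this cron job is easy to summarize later. A sketch, assuming the exact log format written above: `peak_connections` is a hypothetical helper that extracts the highest connection count seen.

```shell
#!/bin/sh
# peak_connections FILE: print the highest connections= value in the log.
# (Hypothetical helper; expects lines like "<date>: connections=123, tw=45".)
peak_connections() {
  sed -n 's/.*connections=\([0-9]*\).*/\1/p' "$1" | sort -n | tail -1
}

# usage: peak_connections /var/log/mtproxy-stats.log
```

Knowing the peak helps when deciding whether to raise WORKERS or move to a larger server.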

For more detailed monitoring, consider mtg (Go implementation of MTProxy) — it has built-in metrics in Prometheus format that can be connected to Grafana.