Logging
By default, Nginx logs every request that hits the server, which consumes extra I/O cycles and CPU and thereby reduces server performance. You can avoid this overhead by disabling the access log with the following directive:
access_log off;
If you still need access logging, you can enable buffering so that Nginx writes log entries in batches instead of performing a separate write operation for every request:
access_log /var/log/nginx/access.log main buffer=16k;
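Optionally, buffering can be combined with a flush interval so that buffered entries are written out at least every few seconds; the interval below is an illustrative value, not one from this article:
access_log /var/log/nginx/access.log main buffer=16k flush=5s;  # 5s flush interval is an assumed value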
After making configuration changes, restart Nginx to apply them:
sudo service nginx restart
Multi-Accept
The multi_accept directive controls how worker processes accept new connections. It is set to off by default, which means a worker process accepts only one new connection at a time. Setting it to on allows each worker process to accept all new connections at once.
worker_connections 1024;
multi_accept on;
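For context, both directives belong in the events block of nginx.conf; the sketch below uses an illustrative worker_connections value:
events {
    worker_connections 1024;  # maximum simultaneous connections per worker process
    multi_accept on;          # let each worker accept all pending connections at once
}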
Cache for Static Files
Caching reduces bandwidth usage and improves server performance. Static files, which change only occasionally, are good candidates: setting a long expiry time lets clients cache them and retrieve pages faster.
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ { expires 365d; }
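As an optional extension (an assumption, not part of the original snippet), you can also send an explicit Cache-Control header and skip access logging for these static assets:
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 365d;                                     # browser cache lifetime
    add_header Cache-Control "public, no-transform";  # explicit caching policy for clients and proxies
    access_log off;                                   # skip logging for static assets
}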
Cache for Dynamic Files
For every request to a dynamic page, a new HTML response is generated, which has a measurable impact on server performance. Caching a copy of the generated HTML reduces the number of pages that have to be regenerated.
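A common way to do this in Nginx is the proxy_cache module; the sketch below is illustrative, and the cache path, zone name dyncache, upstream address, and validity times are assumed values rather than recommendations from this article:
# Defined in the http context: on-disk cache location and shared memory zone.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=dyncache:10m max_size=1g inactive=60m use_temp_path=off;

server {
    location / {
        proxy_cache dyncache;              # use the zone defined above
        proxy_cache_valid 200 301 10m;     # keep successful responses for 10 minutes
        proxy_cache_valid 404 1m;          # cache 404 responses briefly
        proxy_pass http://127.0.0.1:8080;  # assumed upstream application server
    }
}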
Optimizing SSL/TLS
Securing the web application with SSL/TLS also affects server performance, because an initial handshake must negotiate encryption keys for every new connection.
You can reduce this overhead using the following techniques; an example configuration follows the list.
Session caching:
The ssl_session_cache directive caches the parameters used to secure new connections, so returning clients can resume a session instead of negotiating a new one.
Session tickets (IDs):
These store information about a particular SSL/TLS session for reuse, so returning clients can skip a full handshake when establishing new connections.
OCSP Stapling:
This also shortens the handshake by having the server cache and deliver the certificate's revocation status, so the client does not have to query the certificate authority itself.
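Putting the three techniques together, a minimal sketch looks like the following; the cache size, timeout, and resolver address are assumed values:
ssl_session_cache shared:SSL:10m;  # shared cache; roughly 4,000 sessions per megabyte
ssl_session_timeout 10m;           # how long a cached session may be reused
ssl_session_tickets on;            # issue session tickets to clients
ssl_stapling on;                   # enable OCSP stapling
ssl_stapling_verify on;            # verify the stapled OCSP response
resolver 8.8.8.8;                  # DNS resolver used to reach the OCSP responder (assumed address)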
Socket Sharding
Socket sharding creates a separate socket listener for each worker process instead of relying on a single listening socket shared by all worker processes; the kernel then distributes incoming connections across the socket listeners. To enable socket sharding, add the reuseport parameter to the listen directive.
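For example (the port and server name below are placeholders):
server {
    listen 80 reuseport;      # kernel distributes connections across per-worker listening sockets
    server_name example.com;  # placeholder server name
}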
The optimizations above are recommended as a starting point for medium-traffic sites; for high-traffic sites, the values must be increased further according to your requirements. Keep in mind that server performance depends on continuous monitoring and tuning: the values given here are for demonstration only and are not fixed. As the traffic your site receives changes, the parameters should be adjusted accordingly.
My server has 32 GB RAM and 16 vCPUs (BLR) running Nginx, but my site is working very slowly. The server configuration is all default values. What should the Nginx configuration be to improve the speed?
Site speed depends on server resources and configuration. The default values are more than enough for good site speed on hardware like yours. If your website receives heavy traffic, adjust the default values based on its needs. Also check your website template for uncached resources, and verify there are no misconfigurations. If you still have any queries, please let us know at info@adoltech.com.