I have a fairly simple AWS Elastic Beanstalk setup (Rails on Puma, with NGINX) and get intermittent but quite regular 504 Gateway Time-out errors on the client (typically 10 seconds after making the request). The requests that fail are quite simple, so they should not be timing out. NGINX acts as a reverse proxy to the Puma app server running on the same instance, and provides TLS termination. There are two Beanstalk-managed EC2 instances with this setup, behind the standard Beanstalk-configured AWS application load balancer (ALB). The load balancer also terminates the TLS connection before passing it on to NGINX.

I have followed the AWS troubleshooting guide. All this really confirms is that the monitoring shows HTTPCode_ELB_5XX counts, which indicates the 504 error originated from the load balancer. No 50x errors are coming back from NGINX. I have checked the ALB logs, and they show that the 504 error occurs, but it is unrelated to any specific target. The request never makes it to a target backend, although the monitoring does show target connection errors. The ALB configuration has an idle timeout of 60 seconds. I have a specific NGINX server config, with the only really related setting (within the server block) being ssl_session_timeout 5m. The EC2s show only about 50% memory utilization and low CPU. I have also spent some time looking at related discussions, which led me to look into keepalives, etc., and Wireshark.

In summary, NGINX shows that the client (the ALB) closed a connection. Soon after, sometimes I'll get a 504 in the browser. This is typically 10 seconds later, and corresponds with the ALB not being able to get a response from a target. But there is no reason why the targets should not respond, and they are not even being touched. It is almost like a security-groups issue very occasionally blocking a request, but those either work or they don't. The app servers are already prewarmed by the time these errors occur, and there are no CPU spikes when the failures happen. I know that the application runs fine on a single-instance Beanstalk without an ALB in front of it, so we can rule out app performance for this discussion. Any help or suggestions would be much appreciated.

Update: I believe that this is due to Puma completely rejecting connections while it is initially starting up. Any requests to NGINX during this window instantly fail, since Puma is simply not listening yet. Future requests that create new worker threads may well be slow, but won't fail, because Puma is already listening and accepting connections. Prewarming may possibly make this worse, since the time to get all the threads up may slow the initial startup on resource-constrained servers.

You can see this more clearly in development if you change the development.rb entries to config.cache_classes = true, which makes it closer to production mode in terms of loading. Start the Rails server and immediately request a page: if you check the browser developer console for network requests, you'll see a net::ERR_CONNECTION_REFUSED status at any time up until Puma says it is listening ("* Listening on ..."). For my app, which does a lot of dynamic loading and class creation on initial startup, this takes more than 10 seconds.

This doesn't solve the issue, but it does at least help me stop chasing red herrings. Now I need to work out whether there is a way to get Puma to start listening faster, or for NGINX to wait in some way for Puma to accept connections, rather than telling clients that there is an issue and breaking the load balancer.

More generally, 504 Gateway Timeout errors on Nginx can be generated for a number of reasons on the backend connection that is serving the content. This is a pretty common error, most often caused by the PHP max execution time limit or by the FastCGI read timeout settings. As Wikipedia puts it, a 504 Gateway Timeout means the server was acting as a gateway or proxy and did not receive a timely response from the upstream server. In a previous post, I wrote about How to Fix 502 Bad Gateway Error on Nginx. The most common renderings of this error are "504 Gateway Timeout", "HTTP 504", and "Gateway Timeout Error".

For Nginx as Proxy (php-fpm disabled)

To apply settings globally, increase the proxy timeout values by adding a configuration file under /etc/nginx/conf.d/ and restarting the 'nginx' service:

# cat /etc/nginx/conf.d/nf
proxy_connect_timeout 600;

If you are only able to increase timeout settings per domain, it can be done this way: Plesk > Subscriptions > my. > Websites & Domains > Web Server Settings – add the lines to the Additional Nginx directives.

For Nginx + FastCGI (php-fpm enabled)

1. Increase the max_execution_time setting: Plesk > Subscriptions > Websites & Domains > PHP Settings – set max_execution_time = 300.
2. Change the request_terminate_timeout parameter (commented out by default) in the /etc/php-fpm.d/ config file: request_terminate_timeout = 300.
3. Add the fastcgi_read_timeout variable inside the 'nginx' virtual host configuration.
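The global proxy-timeout file is truncated in this post (only proxy_connect_timeout 600 survives). A typical version of such a file might look like the sketch below; the companion directives and their 600-second values are my assumption, not taken from the original:

```nginx
# Hypothetical reconstruction of the truncated /etc/nginx/conf.d/ timeout file.
# proxy_connect_timeout comes from the post; the rest are common companions.
proxy_connect_timeout 600;
proxy_send_timeout    600;
proxy_read_timeout    600;
send_timeout          600;
```

After saving the file, restart the 'nginx' service as the post describes so the new timeouts take effect.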
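The post names the fastcgi_read_timeout variable but does not show it in place. Inside a domain's 'nginx' virtual host (or the Plesk Additional Nginx directives box), it would sit in the PHP location block, roughly as follows; the location pattern and the php-fpm socket path are assumptions for illustration:

```nginx
# Sketch only: pattern and socket path vary per server/Plesk setup.
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm.sock;  # assumed socket path
    fastcgi_read_timeout 300;                 # match max_execution_time = 300
}
```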
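The development.rb change mentioned in the Puma discussion can be sketched as a config fragment like this; only config.cache_classes = true comes from the text, while config.eager_load = true is my assumption, since it is what usually makes boot-time class loading behave like production:

```ruby
# config/environments/development.rb (sketch, not the author's exact file)
Rails.application.configure do
  # Cache classes as production does, so slow initial loading is visible.
  config.cache_classes = true
  # Assumption: eager-load the app at boot to surface the startup cost.
  config.eager_load = true
end
```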