i’ve been planning for a while to move away from apache2 as a reverse proxy in front of java appservs and start using nginx. i had two reasons: nginx handles a much larger number of concurrent connections [something we can make use of if we start using long-poll/comet], and nginx gives more control over buffering of the back-end response – e.g. it can close the connection to the back-end as soon as possible, rather than wait till the response is delivered to the client.
modifications in debian’s default /etc/nginx/nginx.conf:
.. worker_processes 2; # the proxy machine has two cores so we may as well make use of them ..
and our own vhost definition in /etc/nginx/sites-available [symlinked to sites-enabled]:
upstream backend-http {
    server 10.0.0.2:8080 max_fails=3 fail_timeout=60s weight=10;        # primary server
    server 10.0.0.3:8080 max_fails=3 fail_timeout=60s weight=1 backup;  # backup, to be used only if the primary appserv does not respond
}

server {
    proxy_connect_timeout 5s;
    client_max_body_size 20M;

    listen 80;   ## listen for ipv4
    server_name some.hostname.com;

    access_log /var/log/nginx/some.hostname.com-access.log;

    if ($request_method !~ ^(GET|HEAD|POST)$ ) {
        return 444;
    }

    root /var/www/;

    location / {
        root /var/www;
        index index.html index.htm;
    }

    location /myApp {
        proxy_pass http://backend-http/myApp;
        proxy_set_header Forwarded $proxy_add_x_forwarded_for;
        proxy_next_upstream error timeout http_500;
    }

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 9;
    gzip_proxied any;
    gzip_types text/xml text/plain;

    server_tokens off;
}
once the change was done i encountered two problems [so far].
problem #1 – not exactly related to nginx, more to the old apache2 proxy. one of the servlets did not set the content-type header, but apache2 was kind enough to add it to the response before forwarding it back to the client. this kept the client application happy until we switched to nginx.. at that moment the client application broke. a one-line fix in the backend code and we were back in business.
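for illustration, a minimal sketch of what that kind of one-line fix looks like in a servlet – the class name and content type are made up, the real servlet is different:

// hypothetical servlet: set the content-type explicitly instead of relying on the proxy to add it
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SomeServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/xml; charset=UTF-8"); // the one-line fix
        resp.getWriter().write("<result>ok</result>");
    }
}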
problem #2 – a more complicated thing. from time to time i saw errors in /var/log/nginx/error.log:
2012/01/04 06:28:08 [error] 4287#0: *14531 sendfile() failed (104: Connection reset by peer) while sending request to upstream, client: 194.6.7.8, server: some.hostname.com, request: "POST /myApp/Servlet HTTP/1.1", upstream: "http://10.0.0.2:8080/myApp/Servlet", host: "some.hostname.com"
2012/01/04 07:12:37 [error] 4283#0: *28417 sendfile() failed (32: Broken pipe) while sending request to upstream, client: 123.3.45.41, server: some.hostname.com, request: "POST /myApp/Servlet HTTP/1.1", upstream: "http://10.0.0.2:8080/myApp/Servlet", host: "some.hostname.com"
i’ve done some reading and some sniffing – as it turns out, nginx expects the back-end servlet to start sending its response only after the whole POST body has been retrieved; in some circumstances my back-end was sending the response much earlier – just after parsing the request headers. nginx did not forward that back to the client, but rather returned “HTTP/1.1 502 Bad Gateway”. i modified the code [of my backend application] to read the whole POST body before sending any response body – this made nginx and the clients happy.
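a minimal sketch of the kind of change this means in a servlet – names and details are hypothetical, the real in-house code differs; the point is simply to drain the request input stream before writing anything to the response:

// hypothetical servlet: consume the whole POST body before producing output,
// so nginx has finished forwarding the request before we start the response
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AnotherServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // read (and here simply discard) the whole request body first;
        // a real application would parse or buffer it
        InputStream in = req.getInputStream();
        byte[] buf = new byte[8192];
        while (in.read(buf) != -1) {
            // keep draining until EOF
        }
        // only now start generating the response
        resp.setContentType("text/xml; charset=UTF-8");
        resp.getWriter().write("<result>ok</result>");
    }
}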
that’s how the communication between nginx and the back-end looked before the change:
after the change in the back-end code:
Hi!
You wrote: “i’ve modified the code to read whole POST body before sending any response BODY”
Do you have a code example and could you tell me which config file(s) you edited?
Thanks in advance!
Regards from Frankfurt!
hello; probably i was not very clear. i modified the code of my own application that generates the responses, so that it reads the whole POST body before generating any output. so it’s not a generic solution, rather a custom change in in-house software.