HTTP 500 vs 502 — Internal Server Error vs Bad Gateway Comparison

HTTP 500 vs 502: a 500 means the application itself failed; a 502 means a reverse proxy could not get a valid response from the upstream. Learn the debugging workflow for each.

Quick Cheat Sheet

Aspect              | 500 Internal Server Error     | 502 Bad Gateway
--------------------|-------------------------------|-----------------------------------------------
Who failed?         | The application itself        | The upstream behind a proxy
Issued by           | The origin server             | A reverse proxy / gateway / load balancer
Likely cause        | Code bug, unhandled exception | Backend dead, network issue, garbled response
Where to look first | Application logs              | Proxy logs first, then backend

What Each Code Means

500 Internal Server Error (RFC 9110 § 15.6.1) is the catch-all "something blew up inside my code" response. An unhandled exception, a divide by zero, a missing template — anything where the application itself failed to produce a valid response.
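
As a concrete sketch (using Node's built-in http module; the handler, bug, and port are made up for illustration), the application itself still answers the request, so both the 500 and the stack trace come from the app:

    import http from "node:http";

    const server = http.createServer((_req, res) => {
      try {
        // Simulate an application bug: dereferencing a value that is undefined.
        const template: { render: () => string } | undefined = undefined;
        res.end(template!.render()); // throws TypeError at runtime
      } catch (err) {
        // The application produces the response itself: a 500, not a 502.
        console.error("unhandled exception:", err); // stack trace lands in app logs
        res.writeHead(500, { "Content-Type": "text/plain" });
        res.end("Internal Server Error");
      }
    });

    server.listen(3000);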

502 Bad Gateway (RFC 9110 § 15.6.3) is fundamentally different: it's sent by a proxy/gateway (Nginx, ALB, Cloudflare, API Gateway) when it received an invalid response from the upstream server, or couldn't reach it at all.
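
To make "invalid response" concrete, here is a sketch of an upstream a proxy cannot parse (the port and payload are made up). Any reverse proxy forwarding to this server gets no valid status line, so it must answer the client with a 502 of its own:

    import net from "node:net";

    // An "upstream" that accepts TCP connections but does not speak HTTP.
    // A proxy forwarding here has no status line to relay and returns 502.
    const broken = net.createServer((socket) => {
      socket.end("hello, this is not HTTP\r\n"); // no "HTTP/1.1 200 OK"
    });

    broken.listen(3000);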

The Mental Model

If your architecture is:

Browser → Nginx → Node.js app

  • Node throws an exception, returns 500 → user sees 500 from Nginx, your app logs the stack trace.
  • Node process crashed, Nginx can't connect → user sees 502 from Nginx, your app has no logs at all for this request.

The presence (500) or absence (502) of application logs is often the fastest way to tell them apart.
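
A toy proxy makes the split mechanical. This is a sketch only; the host, ports, and error handling are simplified assumptions, not how Nginx works internally:

    import http from "node:http";

    // Toy reverse proxy in front of an app assumed to listen on port 3000.
    const proxy = http.createServer((clientReq, clientRes) => {
      const upstream = http.request(
        { host: "127.0.0.1", port: 3000, path: clientReq.url, method: clientReq.method },
        (upstreamRes) => {
          // Upstream answered: relay whatever it said, a 500 included.
          clientRes.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
          upstreamRes.pipe(clientRes);
        }
      );
      upstream.on("error", () => {
        // Connection refused / reset / timeout: the proxy manufactures the 502,
        // and the app never saw the request, hence no app logs.
        clientRes.writeHead(502, { "Content-Type": "text/plain" });
        clientRes.end("Bad Gateway");
      });
      clientReq.pipe(upstream);
    });

    proxy.listen(8080);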

Common Causes of 502

  • Node/Python/Ruby process died (segfault, OOM kill, crash loop)
  • Upstream returned malformed HTTP (response truncated, bad status line)
  • Connection refused on the upstream port (process not listening)
  • Connection reset mid-response
  • TLS handshake failure between proxy and backend

Common Causes of 500

  • Unhandled exceptions in route handlers
  • Database query errors that aren't caught (see the side-by-side sketch after this list)
  • Template rendering failures
  • Out-of-memory in the application, but the process recovers
  • Misconfigured environment variables
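
The first two causes are worth seeing side by side, because the same bug can yield either status code depending on whether it is caught. A sketch assuming an Express app (route paths and the error are hypothetical):

    import express from "express";

    const app = express();

    // Caught: the app logs the error and answers 500; the process survives.
    app.get("/caught", async (_req, res) => {
      try {
        throw new Error("db query failed"); // stand-in for an uncaught driver error
      } catch (err) {
        console.error(err);
        res.status(500).send("Internal Server Error");
      }
    });

    // Escaped: Express 4 does not catch async rejections. On modern Node the
    // unhandled rejection can kill the process, after which the proxy in
    // front starts answering 502 for every request.
    app.get("/escaped", async () => {
      throw new Error("db query failed");
    });

    app.listen(3000);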

Debugging Workflow

For 500:

  1. Tail your application logs and search for the request ID (a request-ID middleware sketch follows this list)
  2. Look for the exception/stack trace
  3. Reproduce locally with the same input
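
Step 1 assumes every request carries an ID. If yours don't, a small middleware closes the gap. This sketch assumes Express, and the header name is just a common convention:

    import express from "express";
    import { randomUUID } from "node:crypto";

    const app = express();

    // Tag each request with an ID and echo it back, so the 500 a user reports
    // can be grepped straight out of the application logs.
    app.use((req, res, next) => {
      const id = req.header("x-request-id") ?? randomUUID();
      res.locals.requestId = id;
      res.setHeader("x-request-id", id);
      next();
    });

    // Example route whose bug will be caught by the handler below.
    app.get("/boom", () => {
      throw new Error("template rendering failed");
    });

    // Centralized error handler: log the stack with the ID, then answer 500.
    app.use((err: Error, _req: express.Request, res: express.Response, _next: express.NextFunction) => {
      console.error(`[${res.locals.requestId}]`, err.stack);
      res.status(500).send("Internal Server Error");
    });

    app.listen(3000);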

For 502:

  1. Check proxy access logs first — is the upstream reachable? (a direct-probe sketch follows this list)
  2. Check upstream process status (is it running? memory pressure? CPU?)
  3. Look at upstream logs for the request — if there are none, the proxy never reached you
  4. Check for TCP-level issues with tcpdump, and read the proxy's error log, which usually names the exact failure (connection refused, timeout, bad header)
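
For steps 1–3 it often helps to bypass the proxy entirely and talk to the upstream port yourself. A raw-socket probe sketch (host, port, and path are assumptions) that distinguishes "refused" from "unreachable" from "garbled":

    import net from "node:net";

    // Connect straight to the upstream, skipping the proxy:
    //   ECONNREFUSED  -> nothing is listening (process dead or wrong port)
    //   timeout       -> network / firewall problem between proxy and backend
    //   garbled bytes -> the upstream speaks, but not valid HTTP
    const socket = net.connect({ host: "127.0.0.1", port: 3000 }, () => {
      socket.write("GET /healthz HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n");
    });

    socket.setTimeout(3000, () => {
      console.error("probe timed out");
      socket.destroy();
    });
    socket.on("data", (chunk) => process.stdout.write(chunk)); // inspect the raw status line
    socket.on("error", (err) => console.error("probe failed:", err.message));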

CDN-Specific Notes

  • Cloudflare uses 520-526 for various proxy-failure subtypes. 502 from Cloudflare specifically means "upstream server returned an invalid response."
  • AWS ALB returns 502 when a target closes the connection unexpectedly or sends a response the load balancer can't parse; a target group with no healthy targets typically surfaces as 503 instead.
  • Vercel returns 502 when serverless functions crash before producing a response.

Why It Matters

Misdiagnosing 500 vs 502 wastes hours: you'll grep app logs for a 502 and find nothing, then assume the bug is somewhere else. Always check which component issued the response first.

Real-World Use Case

If your Vercel-hosted Next.js Route Handler hits an unhandled rejection in async code, you'll see 500 with the error in Vercel logs. If your Lambda function crashes during cold start before invoke, you'll see 502 from API Gateway with nothing in your function logs — the failure is in the gateway-to-Lambda integration.
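
For the first case, very little code is needed to reproduce it. In this hypothetical Route Handler the awaited rejection escapes, the framework converts it to a 500, and the stack trace lands in the function logs:

    // app/api/report/route.ts — hypothetical path and upstream URL.
    export async function GET(): Promise<Response> {
      const upstream = await fetch("https://example.com/data.json");
      if (!upstream.ok) {
        // Never caught here: the framework surfaces it as a 500 response.
        throw new Error(`upstream responded ${upstream.status}`);
      }
      return Response.json(await upstream.json());
    }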
