The HTTP/1.1 Protocol
HTTP/1.1 is a text-based, request-response protocol that operates over TCP. It has been the backbone of the World Wide Web since 1997, when RFC 2068 first specified it; the specification was revised in RFC 2616 (1999) and again in RFC 7230-7235 (2014). Despite the rise of HTTP/2 and HTTP/3, understanding HTTP/1.1 is essential because its message format and semantics form the foundation for all later versions.
Request Format
An HTTP request is a sequence of ASCII text lines:
METHOD SP REQUEST-URI SP HTTP-VERSION CRLF
Header-Name: Header-Value CRLF
Header-Name: Header-Value CRLF
CRLF
[optional body]
SP is a single space character. CRLF is \r\n (carriage return + line feed). The blank line (double CRLF) separates headers from the body. Example:
GET /index.html HTTP/1.1\r\n
Host: www.example.com\r\n
Accept: text/html\r\n
\r\n
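The grammar above is simple enough to serialize by hand. A minimal sketch in Python (build_request is a hypothetical helper, not a standard-library function):

```python
def build_request(method, uri, headers):
    """Serialize an HTTP/1.1 request: request line, header lines, blank line."""
    lines = [f"{method} {uri} HTTP/1.1"]
    lines += [f"{name}: {value}" for name, value in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"  # empty line terminates the headers

raw = build_request("GET", "/index.html",
                    {"Host": "www.example.com", "Accept": "text/html"})
```

Writing `raw` to a connected TCP socket (encoded as ASCII) reproduces the example request byte for byte.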
Response Format
HTTP-VERSION SP STATUS-CODE SP REASON-PHRASE CRLF
Header-Name: Header-Value CRLF
CRLF
[optional body]
Example:
HTTP/1.1 200 OK\r\n
Content-Type: text/html\r\n
Content-Length: 1234\r\n
\r\n
<html>...</html>
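Parsing goes the other way: split on the first blank line, then take the status line and headers apart. A rough sketch (parse_response is a hypothetical helper; real parsers must also handle folded headers, chunked bodies, and malformed input):

```python
def parse_response(raw: bytes):
    """Split a raw HTTP/1.1 response into (status_code, headers, body)."""
    head, _, body = raw.partition(b"\r\n\r\n")   # blank line ends the headers
    lines = head.decode("ascii").split("\r\n")
    version, status, reason = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return int(status), headers, body

raw = (b"HTTP/1.1 200 OK\r\n"
       b"Content-Type: text/html\r\n"
       b"Content-Length: 16\r\n"
       b"\r\n"
       b"<html>...</html>")
status, headers, body = parse_response(raw)
```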
Persistent Connections
In HTTP/1.0, each request required a new TCP connection (expensive: TCP handshake + slow start). HTTP/1.1 makes connections persistent by default — no header is required, although clients often still send Connection: keep-alive for compatibility with HTTP/1.0 servers, where persistence was opt-in. Multiple requests and responses can be sent over the same TCP connection sequentially. The connection is closed when either side sends Connection: close or after a timeout.
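Reuse has a framing consequence: responses arrive back-to-back on one socket, so the client needs Content-Length (or chunked encoding) to find each message boundary. A simplified splitter, assuming every response carries a Content-Length header:

```python
def split_responses(stream: bytes):
    """Split concatenated responses on a persistent connection.

    With keep-alive, responses arrive back-to-back on one socket; the
    client uses Content-Length to locate each message boundary.
    """
    out = []
    while stream:
        head, _, rest = stream.partition(b"\r\n\r\n")
        length = 0
        for line in head.split(b"\r\n")[1:]:
            name, _, value = line.partition(b":")
            if name.lower() == b"content-length":
                length = int(value)
        out.append((head, rest[:length]))      # one complete response
        stream = rest[length:]                  # the next one starts here
    return out

stream = (b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello"
          b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nworld")
parts = split_responses(stream)
```

Without a correct Content-Length, the client cannot tell where one response ends and the next begins — which is why that header matters so much on persistent connections.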
Pipelining and Head-of-Line Blocking
HTTP/1.1 introduced pipelining: the client can send multiple requests without waiting for each response. However, the server must respond in the same order the requests were received. If the first request is slow (e.g., a database query), all subsequent responses are delayed behind it — this is head-of-line (HOL) blocking. In practice, most browsers disabled pipelining because of HOL blocking and buggy proxy servers.
Chunked Transfer Encoding
When the server does not know the total response size in advance (e.g., streaming data or dynamically generated content), it uses chunked transfer encoding (Transfer-Encoding: chunked). The body is sent as a series of chunks, each preceded by its size in hexadecimal. A zero-length chunk signals the end:
4\r\n
Wiki\r\n
7\r\n
pedia i\r\n
0\r\n
\r\n
This avoids buffering the entire response in memory before sending.
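The decoding loop is mechanical: read a hex size line, take that many bytes, skip the trailing CRLF, and stop at the zero-length chunk. A minimal decoder (ignoring chunk extensions and trailers, which the full spec allows):

```python
def decode_chunked(body: bytes) -> bytes:
    """Decode a Transfer-Encoding: chunked body: hex size, CRLF, data, CRLF."""
    out, pos = b"", 0
    while True:
        nl = body.index(b"\r\n", pos)
        size = int(body[pos:nl], 16)          # chunk size in hexadecimal
        if size == 0:                         # zero-length chunk ends the body
            return out
        start = nl + 2
        out += body[start:start + size]
        pos = start + size + 2                # skip the chunk's trailing CRLF

data = b"4\r\nWiki\r\n7\r\npedia i\r\n0\r\n\r\n"
decode_chunked(data)  # b'Wikipedia i'
```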
Status Code Classes
HTTP responses carry a three-digit status code organized into five classes:
- 1xx Informational — the request was received, processing continues. 100 Continue tells the client to proceed with sending the body.
- 2xx Success — the request was successfully received and processed. 200 OK, 201 Created, 204 No Content.
- 3xx Redirection — further action is needed. 301 Moved Permanently, 302 Found (temporary redirect), 304 Not Modified (conditional GET).
- 4xx Client Error — the request is malformed or unauthorized. 400 Bad Request, 401 Unauthorized, 403 Forbidden, 404 Not Found, 429 Too Many Requests.
- 5xx Server Error — the server failed to process a valid request. 500 Internal Server Error, 502 Bad Gateway, 503 Service Unavailable, 504 Gateway Timeout.
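Because the class is just the first digit, mapping a code to its class is a one-liner; a small illustrative helper:

```python
def status_class(code: int) -> str:
    """Map a three-digit HTTP status code to its class name."""
    classes = {1: "Informational", 2: "Success", 3: "Redirection",
               4: "Client Error", 5: "Server Error"}
    return classes[code // 100]   # first digit selects the class

status_class(404)  # 'Client Error'
```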
HTTP/1.1 Request and Response in Practice
A typical browser interaction with HTTP/1.1:
The browser opens a TCP connection to port 80 (or 443 for HTTPS), then sends:
GET /api/users/42 HTTP/1.1
Host: api.example.com
Accept: application/json
Connection: keep-alive
The server responds:
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 68
Connection: keep-alive
{"id": 42, "name": "Alice", "email": "alice@example.com"}
Because Connection: keep-alive is set, the browser reuses the same TCP connection for subsequent requests (e.g., fetching images, stylesheets, scripts).
Head-of-line blocking example:
Suppose the browser pipelines three requests on one connection:
GET /slow-query   (takes 2 seconds)
GET /style.css    (takes 5 ms)
GET /logo.png     (takes 10 ms)
Even though style.css and logo.png are ready almost immediately, they cannot be sent until the /slow-query response finishes. The browser stalls for 2 seconds.
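The stall can be quantified with a tiny simulation, using the illustrative timings above. Even if the server finishes the responses in parallel, in-order delivery means each response is held until every earlier one has been sent:

```python
# Per-request service times (seconds) in pipelined request order.
ready = {"/slow-query": 2.000, "/style.css": 0.005, "/logo.png": 0.010}

delivered, last = {}, 0.0
for path, t in ready.items():    # dict preserves the request order
    last = max(last, t)          # held back behind every earlier response
    delivered[path] = last
# /style.css is ready after 5 ms but delivered only after 2 s
```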
Workaround: browsers open several parallel TCP connections per hostname — typically 6 today (RFC 2616 originally recommended a limit of 2, later relaxed). Developers also used domain sharding — serving assets from img1.example.com, img2.example.com, etc. — to increase parallelism. HTTP/2 eliminated this need with multiplexing.