18 changes: 9 additions & 9 deletions docs/source/api.rst
@@ -202,7 +202,7 @@ A basic HTTP request/response cycle looks like this:
* and a :class:`EndOfMessage` event.

And once that's finished, both sides either close the connection, or
-they go back to the top and re-use it for another request/response
+they go back to the top and reuse it for another request/response
cycle.

To coordinate this interaction, the h11 :class:`Connection` object
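
As an aside (not part of this PR's diff), the request/response cycle described in the hunk above maps onto a short client-side sketch using h11's public API; the host, target, and body below are invented for illustration:

    import h11

    conn = h11.Connection(our_role=h11.CLIENT)
    wire_bytes = b""
    wire_bytes += conn.send(h11.Request(
        method="POST",
        target="/upload",
        headers=[("Host", "example.com"), ("Content-Length", "5")],
    ))
    wire_bytes += conn.send(h11.Data(data=b"hello"))
    wire_bytes += conn.send(h11.EndOfMessage())
    # wire_bytes now holds everything to write to the socket; the reply comes
    # back via conn.receive_data(...) followed by conn.next_event().
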
@@ -281,7 +281,7 @@ what's needed to handle the basic request/response cycle:

* Server sending a :class:`Response` directly from :data:`IDLE`: This
is used for error responses, when the client's request never arrived
-(e.g. 408 Request Timed Out) or was unparseable gibberish (400 Bad
+(e.g. 408 Request Timed Out) or was unparsable gibberish (400 Bad
Request) and thus didn't register with our state machine as a real
:class:`Request`.

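A hedged server-side sketch of the "Response directly from IDLE" case above (not part of this PR; the garbage request bytes are invented):

    import h11

    conn = h11.Connection(our_role=h11.SERVER)
    conn.receive_data(b"this is not a valid request line\r\n\r\n")
    try:
        conn.next_event()
    except h11.RemoteProtocolError:
        # The gibberish never registered as a real Request, so we answer 400
        # straight from IDLE and then close the connection.
        body = b"Bad Request"
        out = conn.send(h11.Response(status_code=400, headers=[
            ("Content-Type", "text/plain"),
            ("Content-Length", str(len(body))),
            ("Connection", "close"),
        ]))
        out += conn.send(h11.Data(data=body))
        out += conn.send(h11.EndOfMessage())
        # write `out` to the socket, then close it
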
@@ -471,7 +471,7 @@ There are four cases where these exceptions might be raised:

* When calling :meth:`Connection.start_next_cycle`
(:exc:`LocalProtocolError`): This indicates that the connection is
-not ready to be re-used, because one or both of the peers are not in
+not ready to be reused, because one or both of the peers are not in
the :data:`DONE` state. The :class:`Connection` object remains
usable, and you can try again later.

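For the start_next_cycle bullet just above, a minimal sketch of that failure mode (deliberately calling it too early, while both peers are still IDLE rather than DONE):

    import h11

    conn = h11.Connection(our_role=h11.CLIENT)
    try:
        conn.start_next_cycle()
    except h11.LocalProtocolError:
        # Not ready to be reused yet; the Connection object itself is still
        # usable, so finish the current cycle and try again later.
        pass
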
@@ -507,7 +507,7 @@ the remote peer. But sometimes, for one reason or another, this
doesn't actually happen.

Here's a concrete example. Suppose you're using h11 to implement an
-HTTP client that keeps a pool of connections so it can re-use them
+HTTP client that keeps a pool of connections so it can reuse them
when possible (see :ref:`keepalive-and-pipelining`). You take a
connection from the pool, and start to do a large upload... but then
for some reason this gets cancelled (maybe you have a GUI and a user
@@ -518,7 +518,7 @@ successfully sent the full request, because you passed an
you didn't, because you never sent the resulting bytes. And then –
here's the really tricky part! – if you're not careful, you might
think that it's OK to put this connection back into the connection
-pool and re-use it, because h11 is telling you that a full
+pool and reuse it, because h11 is telling you that a full
request/response cycle was completed. But this is wrong; in fact you
have to close this connection and open a new one.

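A hedged sketch of the pooling rule this passage is warning about (the pool, sock, and data_chunks names are invented; it assumes the Request headers were already sent): h11 saying DONE is necessary but not sufficient, so only recycle a connection whose bytes actually reached the wire.

    import h11

    def finish_upload(conn, sock, pool, data_chunks):
        fully_sent = False
        try:
            for chunk in data_chunks:
                sock.sendall(conn.send(h11.Data(data=chunk)))
            sock.sendall(conn.send(h11.EndOfMessage()))
            fully_sent = True  # every byte conn.send() produced was written
        finally:
            if fully_sent and conn.our_state is h11.DONE:
                pool.put(conn, sock)  # hypothetical pool API; safe to reuse
            else:
                sock.close()  # h11's state machine alone can't detect this case
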
@@ -605,10 +605,10 @@ cases:

.. _keepalive-and-pipelining:

-Re-using a connection: keep-alive and pipelining
+Reusing a connection: keep-alive and pipelining
------------------------------------------------

-HTTP/1.1 allows a connection to be re-used for multiple
+HTTP/1.1 allows a connection to be reused for multiple
request/response cycles (also known as "keep-alive"). This can make
things faster by letting us skip the costly connection setup, but it
does create some complexities: we have to keep track of whether a
@@ -635,7 +635,7 @@ these cases -- h11 will automatically add this header when
necessary. Just worry about setting it when it's actually something
that you're actively choosing.

-If you want to re-use a connection, you have to wait until both the
+If you want to reuse a connection, you have to wait until both the
request and the response have been completed, bringing both the client
and server to the :data:`DONE` state. Once this has happened, you can
explicitly call :meth:`Connection.start_next_cycle` to reset both
@@ -651,7 +651,7 @@ skip past the :data:`DONE` state directly to the :data:`MUST_CLOSE` or
thing you can legally do is to close this connection and make a new
one.

-HTTP/1.1 also allows for a more aggressive form of connection re-use,
+HTTP/1.1 also allows for a more aggressive form of connection reuse,
in which a client sends multiple requests in quick succession, and
then waits for the responses to stream back in order
("pipelining"). This is generally considered to have been a bad idea,
2 changes: 1 addition & 1 deletion docs/source/basic-usage.rst
@@ -286,7 +286,7 @@ For some servers, we'd have to stop here, because they require a new
connection for every request/response. But, this server is smarter
than that -- it supports `keep-alive
<https://en.wikipedia.org/wiki/HTTP_persistent_connection>`_, so we
-can re-use this connection to send another request. There's a few ways
+can reuse this connection to send another request. There's a few ways
we can tell. First, if it didn't, then it would have closed the
connection already, and we would have gotten a
:class:`ConnectionClosed` event on our last call to
2 changes: 1 addition & 1 deletion docs/source/changes.rst
@@ -293,7 +293,7 @@ Backwards compatible changes:
originates locally (which produce a 500 status code) versus errors
caused by remote misbehavior (which produce a 4xx status code).

-* Changed the :data:`PRODUCT_ID` from ``h11/<verson>`` to
+* Changed the :data:`PRODUCT_ID` from ``h11/<version>`` to
``python-h11/<version>``. (This is similar to what requests uses,
and much more searchable than plain h11.)

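As a small aside on the entry above, h11.PRODUCT_ID is a plain string, so the renamed value is easy to inspect and to reuse as a User-Agent/Server token (the version shown is illustrative):

    import h11

    print(h11.PRODUCT_ID)  # something like "python-h11/0.14.0"
    headers = [("Host", "example.com"), ("User-Agent", h11.PRODUCT_ID)]
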
6 changes: 3 additions & 3 deletions examples/trio-server.py
@@ -41,7 +41,7 @@
#
# - The error handling policy here is somewhat crude as well. It handles a lot
# of cases perfectly, but there are corner cases where the ideal behavior is
-# more debateable. For example, if a client starts uploading a large
+# more debatable. For example, if a client starts uploading a large
# request, uses 100-Continue, and we send an error response, then we'll shut
# down the connection immediately (for well-behaved clients) or after
# spending TIMEOUT seconds reading and discarding their upload (for
@@ -54,7 +54,7 @@
# and wasting your bandwidth if this is what it takes to guarantee that they
# see your error response. Up to you, really.
#
-# - Another example of a debateable choice: if a response handler errors out
+# - Another example of a debatable choice: if a response handler errors out
# without having done *anything* -- hasn't started responding, hasn't read
# the request body -- then this connection actually is salvagable, if the
# server sends an error response + reads and discards the request body. This
@@ -262,7 +262,7 @@ async def http_serve(stream):
return
else:
try:
-wrapper.info("trying to re-use connection")
+wrapper.info("trying to reuse connection")
wrapper.conn.start_next_cycle()
except h11.ProtocolError:
states = wrapper.conn.states
4 changes: 2 additions & 2 deletions h11/_headers.py
@@ -44,7 +44,7 @@
# message"
# (and there are several headers where the order indicates a preference)
#
-# Multiple occurences of the same header:
+# Multiple occurrences of the same header:
# "A sender MUST NOT generate multiple header fields with the same field name
# in a message unless either the entire field value for that header field is
# defined as a comma-separated list [or the header is Set-Cookie which gets a
@@ -81,7 +81,7 @@ class Headers(Sequence[Tuple[bytes, bytes]]):

Internally we actually store the representation as three-tuples,
including both the raw original casing, in order to preserve casing
-over-the-wire, and the lowercased name, for case-insensitive comparisions.
+over-the-wire, and the lowercased name, for case-insensitive comparisons.

r = Request(
method="GET",
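
The docstring touched above describes the three-tuple storage; a short sketch of the observable behaviour, assuming the public Request/Headers API (with raw_items() returning the original casing):

    import h11

    req = h11.Request(
        method="GET",
        target="/",
        headers=[("Host", "example.com"), ("X-Custom-Header", "hi")],
    )
    print(list(req.headers))        # roughly [(b'host', b'example.com'), (b'x-custom-header', b'hi')]
    print(req.headers.raw_items())  # original casing, as it would go over the wire
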
2 changes: 1 addition & 1 deletion h11/tests/test_connection.py
@@ -652,7 +652,7 @@ def setup() -> ConnectionPair:
for conn in p.conns:
assert conn.states == {CLIENT: DONE, SERVER: SEND_BODY}
p.send(SERVER, EndOfMessage())
-# Check that re-use is still allowed after a denial
+# Check that reuse is still allowed after a denial
for conn in p.conns:
conn.start_next_cycle()

2 changes: 1 addition & 1 deletion h11/tests/test_state.py
@@ -239,7 +239,7 @@ def test_ConnectionState_reuse() -> None:
with pytest.raises(LocalProtocolError):
cs.start_next_cycle()

-# Succesful protocol switch
+# Successful protocol switch

cs = ConnectionState()
cs.process_client_switch_proposal(_SWITCH_UPGRADE)