Name Description Size
__init__.py hpack: HTTP/2 header encoding for Python. 568
exceptions.py hyper/http20/exceptions: This defines exceptions used in the HTTP/2 portion of hyper. 974
hpack.py hpack/hpack: Implements the HPACK header compression algorithm as specified by the IETF in RFC 7541. 22683
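A minimal usage sketch of the encoder/decoder pair this module provides, assuming the package's public Encoder and Decoder classes and their encode()/decode() methods; the headers shown are purely illustrative::

    from hpack import Decoder, Encoder

    encoder = Encoder()
    decoder = Decoder()

    # Encode an ordered set of headers into an HPACK header block fragment.
    block = encoder.encode([
        (':method', 'GET'),
        (':path', '/'),
        ('user-agent', 'example-client'),
    ])

    # A decoder with its own dynamic table turns the block back into pairs.
    print(decoder.decode(block))

Each side keeps its own dynamic table, so headers repeated across successive calls compress progressively better.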
huffman.py hpack/huffman_decoder: An implementation of a bitwise prefix tree specially built for decoding Huffman-coded content where we already know the Huffman table. 2443
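For illustration, a toy bitwise prefix tree of the kind described here might look like the sketch below; the node layout and function names are hypothetical, not the module's actual API::

    class Node:
        """One node of a binary prefix tree; leaves carry a decoded symbol."""
        __slots__ = ("children", "symbol")

        def __init__(self):
            self.children = [None, None]
            self.symbol = None

    def insert(root, code, length, symbol):
        """Register a Huffman code (the low `length` bits of `code`) for `symbol`."""
        node = root
        for shift in range(length - 1, -1, -1):
            bit = (code >> shift) & 1
            if node.children[bit] is None:
                node.children[bit] = Node()
            node = node.children[bit]
        node.symbol = symbol

    def decode(root, bits):
        """Walk the tree bit by bit, emitting a symbol at every leaf reached."""
        out, node = [], root
        for bit in bits:
            node = node.children[bit]
            if node.symbol is not None:
                out.append(node.symbol)
                node = root
        return out

Because the Huffman table is fixed by the HTTP/2 specification, such a tree can be built once at import time and reused for every decode.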
huffman_constants.py hpack/huffman_constants: Defines the constant Huffman table. This takes up an upsetting amount of space, but c'est la vie. 4643
huffman_table.py hpack/huffman_table: This implementation of a Huffman decoding table for HTTP/2 is essentially a Python port of the work originally done for nghttp2's Huffman decoding. For this reason, while this file is made available under the MIT license like the rest of this module, it is undoubtedly a derivative work of the nghttp2 file ``nghttp2_hd_huffman_data.c``, obtained from https://github.com/tatsuhiro-t/nghttp2/ at commit d2b55ad1a245e1d1964579fa3fac36ebf3939e72. That work is made available under the Apache 2.0 license under the following terms:

    Copyright (c) 2013 Tatsuhiro Tsujikawa

    Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

    The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

The essence of this approach is that it builds a finite state machine out of 4-bit nibbles of Huffman-coded data. The input function passes 4 bits' worth of data to the state machine each time, and the machine uses those 4 bits together with its accumulated state to process the input.

For efficiency, the states, transitions, and result values of the state machine are represented in memory as one long flat list of three-tuples. The list is enormously long and hard to read as an in-memory representation, but it is laid out here in a way that is intended to be *somewhat* clearer. Essentially, the list is structured as 256 collections of 16 entries (one for each nibble) of three-tuples. Each collection is called a "node", and the zeroth collection is called the "root node".

The state machine tracks a single value: the "state" byte. For each nibble passed to the state machine, it multiplies the "state" byte by 16 and adds the numerical value of the nibble. This number is the index into the large flat list. The three-tuple found at that index consists of three values:

- a new state value, used for subsequent decoding;
- a collection of flags, used to determine whether data is emitted and whether the state machine is complete;
- the byte value to emit, if emitting a byte is required.

The flags are consulted, a byte is emitted if necessary, and the next nibble is processed. This continues until the state machine believes it has completely Huffman-decoded the data (a sketch of this loop follows this entry).

This approach has relatively little indirection and therefore performs relatively well, particularly on implementations like PyPy where the cost of Python-level loops is not too expensive. The total number of loop iterations is 4x the number of bytes passed to the decoder. 168580
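A minimal sketch of the decode loop described in the entry above, assuming `table` is the flat 256-node by 16-entry list of (next_state, flags, byte) three-tuples and that the flag bit values below match the layout used when the table was generated; the constant names are placeholders for the module's real definitions::

    HUFFMAN_COMPLETE = 0x01     # decoding may legally stop in this state
    HUFFMAN_EMIT_SYMBOL = 0x02  # this transition produces an output byte
    HUFFMAN_FAIL = 0x04         # the input is not valid Huffman-coded data

    def decode_huffman(data, table):
        """Decode Huffman-coded bytes by feeding the FSM one nibble at a time."""
        state = 0
        flags = HUFFMAN_COMPLETE  # an empty input is trivially complete
        decoded = bytearray()

        for byte in data:
            # High nibble first, then low nibble: two FSM steps per input byte.
            for nibble in ((byte >> 4) & 0x0F, byte & 0x0F):
                state, flags, output_byte = table[state * 16 + nibble]
                if flags & HUFFMAN_FAIL:
                    raise ValueError("invalid Huffman-coded data")
                if flags & HUFFMAN_EMIT_SYMBOL:
                    decoded.append(output_byte)

        if not flags & HUFFMAN_COMPLETE:
            raise ValueError("truncated Huffman-coded data")
        return bytes(decoded)

The only per-nibble work is one list lookup, one tuple unpack, and two flag tests, which is what keeps the loop cheap on PyPy and tolerable on CPython.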
struct.py hpack/struct: Contains structures for representing header fields with associated metadata. 1050
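The kind of structure this refers to can be sketched as a thin tuple subclass carrying an indexability flag; the class names echo hpack's public HeaderTuple and NeverIndexedHeaderTuple, but the exact layout shown here is an assumption::

    class HeaderTuple(tuple):
        """A (name, value) pair that may be added to the dynamic table."""
        __slots__ = ()
        indexable = True

        def __new__(cls, name, value):
            return super().__new__(cls, (name, value))

    class NeverIndexedHeaderTuple(HeaderTuple):
        """A (name, value) pair that must never be indexed, e.g. sensitive data."""
        __slots__ = ()
        indexable = False

Keeping the flag on the type rather than on each instance lets these pairs behave exactly like plain tuples everywhere else.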
table.py Calculates the size of a single table entry. This size is mostly irrelevant to us and is defined specifically to accommodate memory management in lower-level implementations. The 32 extra bytes are considered the "maximum" overhead that would be required to represent each entry in the table. See RFC 7541, Section 4.1. 9635
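A minimal sketch of the calculation this entry describes, following RFC 7541 Section 4.1; the function name is illustrative::

    def table_entry_size(name, value):
        """Size of one header table entry: the octet lengths of its name and
        value plus the 32 bytes of per-entry overhead assumed by RFC 7541."""
        return 32 + len(name) + len(value)

    # For example, ('custom-key', 'custom-header') occupies
    # 10 + 13 + 32 = 55 bytes of table space, matching RFC 7541's worked example.
    assert table_entry_size(b'custom-key', b'custom-header') == 55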