# Benchmarks

## Methodology
- Benchmarking is done using the bombardier benchmarking tool.
- Benchmarks are run on a dedicated machine with a base Debian 11 installation.
- Each framework is contained within its own Docker container, running on a dedicated CPU core (using the `cset shield` command and the `--cpuset-cpus` option for Docker).
- Tests for the frameworks are written to make them as comparable as possible while completing the same tasks (you can see them here).
- Each application is run using uvicorn with one worker and uvloop (see the launch sketch after this list).
- Test data has been randomly generated and is imported from a shared module.
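
The benchmark harness itself is not reproduced in this document, but the server setup described above boils down to something like the following sketch. The module path `main:app` and the trivial `index` handler are placeholders for illustration, not the actual benchmark applications:

```python
# main.py -- hypothetical stand-in for one of the benchmarked applications.
import uvicorn
from starlite import Starlite, get


@get("/")
def index() -> dict:
    # Placeholder handler; the real test endpoints are described under "Results".
    return {"hello": "world"}


app = Starlite(route_handlers=[index])

if __name__ == "__main__":
    # One worker process and the uvloop event loop, as described above.
    uvicorn.run("main:app", host="127.0.0.1", port=8000, workers=1, loop="uvloop")
```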
## Results

**Note:** If a result is missing for a specific framework, that means either:

- it does not support this functionality (this will be mentioned in the test description), or
- more than 0.1% of responses were dropped / erroneous.
### JSON
Serializing a dictionary into JSON
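
As a rough illustration of this kind of test (not the actual benchmark code), a Starlite handler could look like the sketch below. The path and payload are placeholders standing in for the randomly generated test data:

```python
from starlite import Starlite, get


@get("/json")
def serialize_dict() -> dict:
    # Placeholder payload; the real tests import randomly generated data
    # from a shared module.
    return {"id": 1, "name": "example", "tags": ["alpha", "beta", "gamma"]}


app = Starlite(route_handlers=[serialize_dict])
```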
### Files

### Path and query parameter handling
All responses return “No Content” (see the sketch after this list).

- No params: no path parameters
- Path params: a single path parameter, coerced into an integer
- Query params: a single query parameter, coerced into an integer
- Mixed params: a path and a query parameter, each coerced into an integer
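
These tests might be structured roughly as follows. This is a sketch rather than the actual benchmark code: the paths and handler names are made up, path parameters use Starlite's `{name:int}` syntax, and query parameters are plain handler arguments typed as `int`:

```python
from starlite import Starlite, get


@get("/no-params", status_code=204)
def no_params() -> None:
    return None


@get("/path-params/{param:int}", status_code=204)
def path_params(param: int) -> None:
    return None


@get("/query-params", status_code=204)
def query_params(value: int) -> None:
    return None


@get("/mixed-params/{path_param:int}", status_code=204)
def mixed_params(path_param: int, value: int) -> None:
    return None


app = Starlite(route_handlers=[no_params, path_params, query_params, mixed_params])
```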
### Dependency injection
(not supported by Starlette; a Starlite sketch of nested dependencies follows the list below)

- Resolving 3 nested synchronous dependencies
- Resolving 3 nested asynchronous dependencies (only supported by Starlite and FastAPI)
- Resolving 3 nested synchronous and 3 nested asynchronous dependencies (only supported by Starlite and FastAPI)
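
To illustrate what “nested” means here, the following hedged sketch declares three synchronous dependencies in Starlite, each resolving the previous one by parameter name. The dependency names, values, and route are invented for illustration:

```python
from starlite import Provide, Starlite, get


def first_dependency() -> int:
    return 1


def second_dependency(first: int) -> int:
    # Resolved from the "first" dependency by parameter name.
    return first + 1


def third_dependency(second: int) -> int:
    # Resolved from the "second" dependency, completing the three-level nesting.
    return second + 1


@get("/sync-dependencies")
def sync_dependencies_handler(third: int) -> dict:
    return {"value": third}


app = Starlite(
    route_handlers=[sync_dependencies_handler],
    dependencies={
        "first": Provide(first_dependency),
        "second": Provide(second_dependency),
        "third": Provide(third_dependency),
    },
)
```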
### Modifying responses
All responses return “No Content”
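
The exact modifications exercised by these tests are not detailed here; as one hedged example of the general idea, a handler that returns an empty response with a custom header attached might look like the following (the path and header name are made up):

```python
from starlite import Response, Starlite, get


@get("/modified")
def modified_response() -> Response:
    # Empty 204 response with an illustrative custom header attached.
    return Response(
        content=None,
        status_code=204,
        headers={"X-Custom-Header": "benchmark"},
    )


app = Starlite(route_handlers=[modified_response])
```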
### Plaintext
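
A minimal, hypothetical Starlite handler for a plaintext response could look like the following (the path and string are placeholders):

```python
from starlite import MediaType, Starlite, get


@get("/plaintext", media_type=MediaType.TEXT)
def plaintext() -> str:
    # Placeholder string returned as text/plain.
    return "Hello, world!"


app = Starlite(route_handlers=[plaintext])
```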
## Interpreting the results
These results, like those of nearly all benchmarks, should be interpreted with caution. A high score in a test does not necessarily translate into high performance of your application in your use case. For almost any test, you can probably write an app that performs better or worse at a comparable task in your scenario.
While the tests were designed to simulate somewhat realistic scenarios, they can never give an exact representation of a real-world application, where, aside from the workload, many other factors come into play. These tests were mainly written to be used internally for Starlite development, to help us locate the source of some performance regressions we were experiencing.