
I'm not feeling the async pressure

written on Wednesday, January 1, 2020

Async is all the rage. Async Python, async Rust, go, node, .NET, pick your favorite ecosystem and it will have some async going. How well this async business works depends quite a lot on the ecosystem and the runtime of the language but overall it has some nice benefits. It makes one thing really simple: to await an operation that can take some time to finish. It makes it so simple that it creates innumerable new ways to blow one's foot off. The one that I want to discuss is the one where you don't realize you're blowing your foot off until the system starts overloading, and that's the topic of back pressure management. A related term in protocol design is flow control.

What’s Back Pressure

There are many explanations for back pressure and a great one is Backpressure explained – the resisted flow of data through software, which I recommend reading. So instead of going into detail about what back pressure is I just want to give a very short definition and explanation for it: back pressure is resistance that opposes the flow of data through a system. Back pressure sounds quite negative – who does not imagine a bathtub overflowing due to a clogged pipe – but it's here to save your day.

The setup we’re dealing with here is more or less the same in all cases: we have a system composed of different components into a pipeline and that pipeline has to accept a certain number of incoming messages.

You could imagine this like you would model luggage delivery at airports. Luggage arrives, gets sorted, loaded into the aircraft and finally unloaded. At any point an individual piece of luggage is thrown together with other luggage into containers for transportation. When a container is full it will need to be picked up. When no containers are left that's a natural example of back pressure. Now the person that would want to throw luggage into a container can't because there is no container. A decision has to be made now. One option is to wait: that's often referred to as queueing or buffering. The other option is to throw away some luggage until a container arrives – this is called dropping. That sounds bad, but we will get into why this is sometimes important later. However there is another thing that plays into this. Imagine the person tasked with putting luggage into a container does not receive a container for an extended period of time (say a week). Eventually, if they did not end up throwing luggage away, they will have an awful lot of luggage standing around. Eventually the amount of luggage they will have to sort through will be so enormous that they run out of physical space to store the luggage. At that point they are better off telling the airport not to accept any more incoming luggage until their container issue is resolved. This is commonly referred to as flow control and is a crucial aspect of networking.
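
To make the three options concrete, here is a minimal sketch in Python, with an asyncio.Queue standing in for the container (the queue size of 10 is an arbitrary assumption for illustration):

import asyncio

# The bounded queue plays the role of the luggage container.
queue = asyncio.Queue(maxsize=10)

async def buffer_it(item):
    # Option 1: queueing/buffering. If the queue is full this suspends
    # until a slot frees up, which also pushes back on our own caller.
    await queue.put(item)

def drop_it(item):
    # Option 2: dropping. If the queue is full the item is thrown away.
    try:
        queue.put_nowait(item)
    except asyncio.QueueFull:
        pass  # the luggage is lost

def is_ready():
    # Option 3: flow control. Signal upstream to stop sending for now.
    return not queue.full()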

All these processing pipelines are normally scaled for a certain number of messages (or in this case luggage) per time period. If the number exceeds this – or worst of all – if the pipeline stalls, terrible things can happen. An example of this in the real world was the London Heathrow Terminal 5 opening where 42,000 bags failed to be routed correctly over 10 days because the IT infrastructure did not work correctly. They had to cancel more than 500 flights and for a while airlines chose to only permit carry-on luggage.

Back Pressure is Important

What we learn from the Heathrow disaster is that being able to communicate back pressure is crucial. In real life as well as in computing, time is always finite. Eventually someone gives up waiting on something. In particular even if internally something would wait forever, externally it wouldn't.

A real world example of this: if your bag is supposed to be going via London Heathrow to your destination in Paris, but you will only be there for 7 days, then it is completely pointless for your luggage to arrive there with a 10 day delay. In fact you want your luggage to be re-routed back to your home airport.

It's in fact better to admit defeat – that you're overloaded – than to pretend that you're operational and keep buffering up forever, because at some point it will only make matters worse.

So why is back pressure all of a sudden a topic to discuss when we wrote thread based software for years and it did not seem to come up? A combination of many factors, some of which just make it easy to shoot yourself in the foot.

Bad Defaults

To understand why back pressure matters in async code I want to give you a seemingly simple piece of code with Python’s asyncio that showcases a handful of situations where we accidentally forgot about back pressure:

from asyncio import start_server, run

async def on_client_connected(reader, writer):
    while True:
        data = await reader.readline()
        if not data:
            break
        writer.write(data)

async def server():
    srv = await start_server(on_client_connected, '127.0.0.1', 8888)
    async with srv:
        await srv.serve_forever()

run(server())

If you are new to the concept of async / await just imagine that at any point where await is called, the function suspends until the expression resolves. Here the start_server function that is provided by Python's asyncio system runs a hidden accept loop. It listens on a socket and spawns an independent task running the on_client_connected function for each socket that connects.

Now this looks pretty straightforward. You could remove all the await and async keywords and you end up with code that looks very similar to how you would write code with threads.

However that hides one very crucial issue which is the root of all our issues here: function calls that do not have an await in front of them. In threaded code any function can yield. In async code only async functions can. This means for instance that the writer.write method cannot block. So how does this work? It will try to write the data right into the operating system's socket buffer, which is non-blocking. However what happens if the buffer is full and the socket would block? In the threading case we could just block here, which would be ideal because it means we're applying some back pressure. However because there are no threads here we can't do that. So we're left with buffering or dropping data. Because dropping data would be pretty terrible, Python instead chooses to buffer. Now what happens if someone sends a lot of data in but does not read? Well in that case the buffer will grow and grow and grow. This API deficiency is why the Python documentation says not to use write at all on its own but to follow up with drain:

writer.write(data)
await writer.drain()

Drain will drain some excess on the buffer. It will not cause the entire buffer to flush out, but just enough to prevent things from running out of control. So why is write not doing an implicit drain? Well it's a massive API oversight and I'm not exactly sure how it happened.
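
A common way to avoid forgetting the second call is to wrap the pair in a tiny helper; a sketch (the helper name is mine):

async def send(writer, data):
    # write() only buffers; drain() is what applies the back pressure
    # by suspending us while the peer is not keeping up.
    writer.write(data)
    await writer.drain()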

An important aspect here is that most sockets are based on TCP, and TCP has built-in flow control. A writer will only write as fast as the reader is willing to accept (give or take some buffering involved). This is hidden from you entirely as a developer because not even the BSD socket libraries expose this implicit flow control handling.
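
You can observe this implicit flow control without a network at all. A minimal sketch, using a local socket pair instead of a real TCP connection (the buffering behavior is analogous):

import socket

# Keep sending while the other end never reads. Once the kernel buffers
# are full, a blocking socket would block here (back pressure); in
# non-blocking mode we get BlockingIOError instead.
a, b = socket.socketpair()
a.setblocking(False)

sent = 0
try:
    while True:
        sent += a.send(b"x" * 65536)
except BlockingIOError:
    print(f"kernel buffers full after {sent} bytes; the writer must wait")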

So did we fix our back pressure issue here? Well let's see how this whole thing would look like in a threading world. In a threading world our code most likely would have had a fixed number of threads running and the accept loop would have waited for a thread to become available to take over the request. In our async example however we now have an unbounded number of connections we're willing to handle. This means we're willing to accept a very high number of connections even if it means that the system would potentially overload. In this very simple example this is probably less of an issue but imagine what would happen if we were to do some database access.

Picture a database connection pool that will give out up to 50 connections. What good is it to accept 10000 connections when most of them will bottleneck on that connection pool?
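
For contrast, here is a sketch of the bounded threading version alluded to above, where a plain semaphore makes the accept loop itself wait once all workers are busy (the host, port, and the limit of 50 are assumptions for illustration):

import socket
import threading

limit = threading.Semaphore(50)

def handle(conn):
    try:
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                conn.sendall(data)  # blocks if the peer stops reading
    finally:
        limit.release()  # free the slot for the accept loop

def serve():
    with socket.create_server(("127.0.0.1", 8888)) as srv:
        while True:
            limit.acquire()  # back pressure: stop accepting at the limit
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,)).start()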

Waiting vs Waiting to Wait

So this finally leads me to where I wanted to go in the first place. In most async systems, and definitely in most of what I encountered in Python, even if you fix all the socket level buffering behavior you end up in a world where you chain a bunch of async functions together with no regard for back pressure.

If we take our database connection pool example, let's say there are only 50 connections available. This means at most we can have 50 concurrent database sessions for our code. So let's say we want to let 4 times as many requests be processed, as we're expecting that a lot of what the application does is independent of the database. One way to go about it would be to make a semaphore with 200 tokens and to acquire one at the beginning. If we're out of tokens we would start waiting for the semaphore to release a token.

But hold on. Now we're back to queueing! We're just queueing a bit earlier. If we were to severely overload the system now we would queue all the way at the beginning. So now everybody would wait for the maximum amount of time they are willing to wait and then give up. Worse: the server might still process these requests for a while until it realizes the client has disappeared and is no longer interested in the response.

So instead of waiting straight away we would want some feedback. Imagine you're in a post office and you are drawing a ticket from a machine that tells you when it's your turn. This ticket gives you a pretty good indication of how long you will have to wait. If the waiting time is too long you can decide to abandon your ticket and head out to try again later. Note that the waiting time you have until it's your turn at the post office is independent of the waiting time you have for your request (for instance because someone needs to fetch your parcel, check documents and collect a signature).

So here is the naive version where we can only notice we're waiting:

from asyncio import Semaphore

semaphore = Semaphore(200)

async def handle_request(request):
    await semaphore.acquire()
    try:
        return generate_response(request)
    finally:
        semaphore.release()

For the caller of the handle_request async function we can only see that we're waiting and nothing is happening. We can't see if we're waiting because we're overloaded or if we're waiting because generating the response just takes so long. We're basically endlessly buffering here until the server will finally run out of memory and crash.

The reason for this is that we have no communication channel for back pressure. So how would we go about fixing this? One option is to add a layer of indirection. Now here unfortunately asyncio's semaphore is of no use because it only lets us wait. But let's imagine we could ask the semaphore how many tokens are left, then we could do something like this:

from hypothetical_asyncio.sync import Semaphore, Service

semaphore = Semaphore(200)

class RequestHandlerService(Service):
    async def handle(self, request):
        await semaphore.acquire()
        try:
            return generate_response(request)
        finally:
            semaphore.release()

    @property
    def is_ready(self):
        return semaphore.tokens_available()

Now we have changed the system somewhat. We now have a RequestHandlerService which has a bit more information. In particular it has the concept of readiness. The service can be asked if it's ready. That operation is inherently non blocking and a best estimate. It has to be, because we're inherently racy here.

The caller would now turn from this:

response = await handle_request(request)
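Into something like this (a sketch against the hypothetical service above; the Response type and the choice of a 503 status code are assumptions about the surrounding web framework):

request_handler = RequestHandlerService()
if not request_handler.is_ready:
    # Overloaded: fail fast and tell the client, instead of queueing.
    response = Response(status_code=503)
else:
    response = await request_handler.handle(request)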
