I am expecting a large request from a client to my server with ~10MB of data per request.
Because this data is quite complex to process and there is a hard upper time limit by which the calculations have to be finished, I was wondering whether it is possible to start processing the data before it has been completely transferred.
My idea was to hand each new "chunk" of data to a distributed system, which would then process it asynchronously.
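To make the idea concrete, here is a minimal sketch in Python of the pattern I have in mind: read the request body in chunks and submit each chunk to a worker pool as soon as it arrives, instead of buffering the full ~10MB first. The names `process_chunk` and `process_stream` are hypothetical placeholders, and a `BytesIO` stands in for the real network stream.

```python
import concurrent.futures
import io

CHUNK_SIZE = 4  # bytes per chunk; tiny here for illustration

def process_chunk(chunk: bytes) -> int:
    # Placeholder for the expensive per-chunk computation.
    return sum(chunk)

def process_stream(stream: io.BufferedIOBase) -> int:
    # Submit each chunk to a worker pool as soon as it is read,
    # so processing overlaps with the remaining transfer.
    with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
        futures = []
        while True:
            chunk = stream.read(CHUNK_SIZE)
            if not chunk:
                break
            futures.append(pool.submit(process_chunk, chunk))
        # Combine the partial results once all chunks have finished.
        return sum(f.result() for f in futures)

# Simulate an incoming request body with an in-memory stream.
result = process_stream(io.BytesIO(b"\x01\x02\x03\x04\x05"))
```

This only works if the computation can be expressed over independent chunks (or partial results that can be merged later); whether that holds depends on the actual calculation.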