Aerospike handles batch requests using either inline processing or out-of-line processing.
This document explains the workflow differences between standard GET operations, out-of-line batch processing, and inline batch processing.
## get() Operation

A single-record lookup follows this sequence:
```
client → service thread (port 3000)
  → read-start (demarshal + reserve partition)
  → read-local (read from SSD/memory)
  → read-response (send data to client)
client ← TCP buffer ← service thread
```
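As a point of reference, here is a minimal sketch of the client-side call that exercises this path, using the Aerospike Java client. The host, namespace (`test`), set (`users`), and key are placeholder assumptions.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;

public class SingleGet {
    public static void main(String[] args) {
        // Connect to the service port (3000) that the service threads listen on.
        try (AerospikeClient client = new AerospikeClient("127.0.0.1", 3000)) {
            Key key = new Key("test", "users", "user-1");
            // One request, one service thread: read-start → read-local → read-response.
            Record record = client.get(null, key); // null = default read policy
            System.out.println(record);
        }
    }
}
```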
## Out-of-Line Batch Processing

In out-of-line mode, the individual record fetches in a batch request are distributed across different service threads:
```
client → service-thread-1 (3000)
  → batch-sub-prestart (demarshal, split batch-get into multiple get requests)
  → batch-sub-start timer starts for record-1
  → batch-sub-start timer starts for record-2
  → record-1 → service-thread-2
      → batch-sub-start (reserve partition, stop timer)
      → batch-sub-read-local
      → put response into batch response buffer
  → record-2 → service-thread-3
      → batch-sub-start (reserve partition, stop timer)
      → batch-sub-read-local
      → put response into batch response buffer
  → batch-thread
      → batch-sub-response (send batch response to client)
```
- **Parallel execution:** each record is processed by a different service thread.
- Batch requests are split into individual GETs to maximize concurrency.
- A separate batch-thread aggregates the responses before sending them to the client.
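The batch call below is a hedged sketch of what triggers this flow, again with the Aerospike Java client; the keys and names are illustrative, and the inline/out-of-line decision is made by the server rather than selected through this API.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.policy.BatchPolicy;

public class BatchGet {
    public static void main(String[] args) {
        try (AerospikeClient client = new AerospikeClient("127.0.0.1", 3000)) {
            // One batch request arriving at a single service thread...
            Key[] keys = {
                new Key("test", "users", "user-1"), // record-1, may fan out to another thread
                new Key("test", "users", "user-2")  // record-2, may fan out to another thread
            };
            // The server splits this into per-record sub-transactions (batch-sub-*)
            // and aggregates their results into one response.
            Record[] records = client.get(new BatchPolicy(), keys);
            for (Record r : records) {
                System.out.println(r); // a null entry means that key was not found
            }
        }
    }
}
```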
## Inline Batch Processing

In inline mode, a single service thread handles the entire batch request:
```
client → service-thread-1 (3000) → does all the work above, without handing off to service-thread-*
```
- Faster for small batch sizes, since no inter-thread handover occurs.
- Lower latency, as there's no overhead of splitting requests.
- Limited by single-threaded execution, so not ideal for large batches.
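To find where the crossover lies for a given workload, one could time small and large batches against the same cluster. The sketch below is rough and illustrative rather than a rigorous benchmark; all names and sizes are assumptions.

```java
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Key;
import com.aerospike.client.policy.BatchPolicy;

public class BatchSizing {
    // Time a single batch read of `size` keys (one round trip).
    static long timeBatchNanos(AerospikeClient client, int size) {
        Key[] keys = new Key[size];
        for (int i = 0; i < size; i++) {
            keys[i] = new Key("test", "users", "user-" + i);
        }
        long start = System.nanoTime();
        client.get(new BatchPolicy(), keys);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        try (AerospikeClient client = new AerospikeClient("127.0.0.1", 3000)) {
            // Small batches avoid inter-thread handover; large batches benefit
            // from fan-out across service threads.
            System.out.println("batch of 2:    " + timeBatchNanos(client, 2) + " ns");
            System.out.println("batch of 1000: " + timeBatchNanos(client, 1000) + " ns");
        }
    }
}
```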
## Summary

| Mode | Description | Pros | Cons |
|---|---|---|---|
| Standard GET | Single-record fetch | Simple, fast | No parallelism |
| Out-of-Line Batch | Splits requests across threads | High concurrency | Higher overhead |
| Inline Batch | Single thread does all work | Low latency | Not optimal for large batches |