as mentioned in the previous ABCIx post, we would like to support packing transactions based on their priority (which makes Ethereum-style first-price-auction transaction inclusion possible). to do this, several changes have to be made on the mempool side; this post briefly covers them and the steps needed going forward. hopefully this can also help tendermint devs if similar changes are to be implemented in the future.
one method needs to be added to support iterating transactions based on their priorities:
```go
// GetNextTxBytes returns a transaction whose Bytes and Gas are less than
// remainBytes and remainGas, and which has the highest priority lower
// than the priority of starter.
GetNextTxBytes(remainBytes int64, remainGas int64, starter []byte) ([]byte, error)
```
as the comment explains, this method tries to fetch the transaction that meets the tx size and gas requirements while having the highest priority among those whose priority is lower than the previously returned transaction's.
(a separate note: this method will be used in the `CreateBlock` ABCIx API to collect transactions based on their priorities.)
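to make that loop concrete, here is a toy sketch of how a `CreateBlock`-style loop could drain the mempool via this method. everything here (the `Tx` struct, the slice-backed mempool, the naive linear scan, the strictly-lower-priority bound) is our own illustrative assumption, not the actual mempool code:

```go
package main

import "fmt"

// Tx is a stand-in for a mempool transaction; the real mempool stores raw
// tx bytes plus metadata, but priority/size/gas are what matter here.
type Tx struct {
	Priority int64
	Size     int64
	Gas      int64
}

// mempool is a toy stand-in backed by a plain slice.
type mempool struct{ txs []Tx }

// GetNextTxBytes mimics the proposed API on the toy data: return the
// highest-priority tx that fits remainBytes/remainGas and whose priority
// is strictly lower than starter's (nil starter means no upper bound).
func (m *mempool) GetNextTxBytes(remainBytes, remainGas int64, starter *Tx) *Tx {
	var best *Tx
	for i := range m.txs {
		tx := &m.txs[i]
		if tx.Size > remainBytes || tx.Gas > remainGas {
			continue
		}
		if starter != nil && tx.Priority >= starter.Priority {
			continue
		}
		if best == nil || tx.Priority > best.Priority {
			best = tx
		}
	}
	return best
}

// collectTxs is the CreateBlock-style loop: keep asking for the next best
// tx, shrinking the remaining byte/gas budget, until nothing fits.
func collectTxs(m *mempool, maxBytes, maxGas int64) []Tx {
	var out []Tx
	var starter *Tx
	for {
		tx := m.GetNextTxBytes(maxBytes, maxGas, starter)
		if tx == nil {
			return out
		}
		out = append(out, *tx)
		maxBytes -= tx.Size
		maxGas -= tx.Gas
		starter = tx
	}
}

func main() {
	m := &mempool{txs: []Tx{{Priority: 5, Size: 10, Gas: 10}, {Priority: 3, Size: 10, Gas: 10}, {Priority: 8, Size: 10, Gas: 10}}}
	fmt.Println(collectTxs(m, 25, 100)) // highest priorities first, within budget
}
```

note that a strictly-lower-priority bound would skip ties; the real design breaks ties by insertion time and tx hash, as discussed later in this post.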
during our first iteration, we are still relying on tendermint's linked list (the `CList` used in the existing `clist_mempool` lib) to implement priority-based iteration, which requires scanning the whole list twice (first to find the element matching `starter`, then to find the desired element to return), resulting in O(n) time complexity.
the second step is to optimize the data structure to support more efficient retrieval, so we started looking for balanced-tree libraries in Go (alas, we miss `BTreeMap` from Rust's standard library). we did find several used in large-scale projects (due to the forum's limitations I'll post links in the next comment), but realized they lack certain customizability we need for our use cases. so we decided to turn to our own left-leaning red-black tree (LLRB) implementation and modify it accordingly. (update: we will probably look into google's BTree implementation later to see if it fits our uses.)
the idea is to store transactions keyed by priority, insertion time and tx hash. then every time `GetNextTxBytes` is called, we only need O(log n) time to get the desired tx. the trade-off shows up in insertion and deletion, and we are actively profiling the performance differences. right now we have finished implementing the tree but are still working on the mempool based on the LLRB.
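the composite key described above could look something like this (field names and the `Less` method are our own sketch, not the actual implementation):

```go
package main

import (
	"bytes"
	"fmt"
)

// txKey sketches the composite key the tree is ordered by: priority first
// (descending), then insertion order (ascending, so FIFO within a single
// priority), then tx hash as a final tiebreaker so every key is unique.
type txKey struct {
	Priority int64
	Nonce    int64 // insertion sequence number
	Hash     []byte
}

// Less defines the in-tree order; an in-order traversal of a tree keyed
// this way yields txs from highest to lowest priority.
func (a txKey) Less(b txKey) bool {
	if a.Priority != b.Priority {
		return a.Priority > b.Priority
	}
	if a.Nonce != b.Nonce {
		return a.Nonce < b.Nonce
	}
	return bytes.Compare(a.Hash, b.Hash) < 0
}

func main() {
	hi := txKey{Priority: 9, Nonce: 2, Hash: []byte{1}}
	lo := txKey{Priority: 3, Nonce: 1, Hash: []byte{2}}
	fmt.Println(hi.Less(lo)) // true: higher priority sorts first
}
```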
a further optimization is to store the iterator's state inside the tree; in theory, getting the next tx then only needs amortized constant time (iterating the whole tree is basically an in-order traversal).
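the amortized-constant-time claim can be seen in a stack-based in-order iterator: every tree edge is pushed and popped exactly once over a full traversal, so n calls to `Next` do O(n) total work. below is a minimal sketch over a plain (unbalanced) BST, since balancing doesn't change the traversal; the names are ours:

```go
package main

import "fmt"

// node is a minimal BST node standing in for an LLRB node (the color bit
// is omitted; it doesn't affect traversal).
type node struct {
	key         int
	left, right *node
}

// iterator holds the traversal state (a stack of ancestors) so each call
// to Next does amortized O(1) work.
type iterator struct {
	stack []*node
}

func newIterator(root *node) *iterator {
	it := &iterator{}
	it.pushLeft(root)
	return it
}

// pushLeft descends along left children, stacking every node visited.
func (it *iterator) pushLeft(n *node) {
	for ; n != nil; n = n.left {
		it.stack = append(it.stack, n)
	}
}

// Next returns the next key in sorted order, or false when exhausted.
func (it *iterator) Next() (int, bool) {
	if len(it.stack) == 0 {
		return 0, false
	}
	n := it.stack[len(it.stack)-1]
	it.stack = it.stack[:len(it.stack)-1]
	it.pushLeft(n.right)
	return n.key, true
}

// insert is a plain BST insert; enough to demonstrate the iterator.
func insert(n *node, key int) *node {
	if n == nil {
		return &node{key: key}
	}
	if key < n.key {
		n.left = insert(n.left, key)
	} else {
		n.right = insert(n.right, key)
	}
	return n
}

func main() {
	var root *node
	for _, k := range []int{3, 1, 4, 2} {
		root = insert(root, k)
	}
	it := newIterator(root)
	for k, ok := it.Next(); ok; k, ok = it.Next() {
		fmt.Println(k) // prints 1, 2, 3, 4 in order
	}
}
```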
we will keep updating this post with implementation progress and profiling results.