Parent Log:
http://ci.aztec-labs.com/308e04cfcea07bcf
Command: 1b54ea671a21576c:ISOLATE=1:NAME=p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts
Commit:
https://github.com/AztecProtocol/aztec-packages/commit/30660269b33bab8cca354c41659533acf4d48e07
Env: REF_NAME=gh-readonly-queue/next/pr-15026-d96baf1c44329e8b2e3a432ad803f702f5184a62 CURRENT_VERSION=0.87.6 CI_FULL=1
Date: Fri Jun 13 09:35:24 UTC 2025
System: ARCH=amd64 CPUS=128 MEM=493Gi HOSTNAME=pr-15026_amd64_x1-full
Resources: CPU_LIST=0-127 CPUS=2 MEM=8g TIMEOUT=600s
History:
http://ci.aztec-labs.com/list/history_0a54840dde01048b_next
09:35:24 +++ id -u
09:35:24 +++ id -g
09:35:24 ++ docker run -d --name p2p_src_mem_pools_tx_pool_aztec_kv_tx_pool.test.ts --net=none --cpuset-cpus=0-127 --cpus=2 --memory=8g --user 1000:1000 -v/home/aztec-dev:/home/aztec-dev --mount type=tmpfs,target=/tmp,tmpfs-size=1g --workdir /home/aztec-dev/aztec-packages -e HOME -e VERBOSE -e GIT_CONFIG_GLOBAL=/home/aztec-dev/aztec-packages/build-images/src/home/.gitconfig -e FORCE_COLOR=true -e CPUS -e MEM aztecprotocol/build:3.0 /bin/bash -c 'timeout -v 600s bash -c '\''yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts'\'''
09:35:25 + cid=b33d3765738330bf7d01d8446441e906c4a98557eb9cee4518aca4a776b3cdd9
09:35:25 + set +x
09:35:28 [09:35:28.410] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:35:28 [09:35:28.420] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:35:28 [09:35:28.820] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:35:28 [09:35:28.822] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:35:28 [09:35:28.921] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:35:28 [09:35:28.923] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:35:29 [09:35:29.069] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:35:29 [09:35:29.076] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:35:29 [09:35:29.187] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:35:29 [09:35:29.189] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:35:29 [09:35:29.283] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:35:29 [09:35:29.284] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:35:29 [09:35:29.460] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:35:29 [09:35:29.462] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:35:29 [09:35:29.609] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:35:29 [09:35:29.611] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:35:29 [09:35:29.779] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:35:29 [09:35:29.782] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:29 [09:36:29.359] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:29 [09:36:29.361] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:29 [09:36:29.546] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:29 [09:36:29.548] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:29 [09:36:29.731] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:29 [09:36:29.733] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:29 [09:36:29.735] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:29 [09:36:29.737] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:30 [09:36:30.105] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:30 [09:36:30.107] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:30 [09:36:30.110] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:30 [09:36:30.112] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:30 [09:36:30.113] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
09:36:30 [09:36:30.537] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:30 [09:36:30.539] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:30 [09:36:30.540] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:30 [09:36:30.541] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:30 [09:36:30.542] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":1000}
09:36:30 [09:36:30.542] INFO: p2p:tx_pool Allowing tx pool size to grow above limit {"maxTxPoolSize":1000,"txPoolOverflowFactor":1.5}
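The two log lines above configure a pool limit of 1000 txs with an overflow factor of 1.5. A minimal sketch of how such a policy could behave, assuming the pool is allowed to grow to maxTxPoolSize * txPoolOverflowFactor before low-priority txs are evicted back down to maxTxPoolSize. This is an illustration only, not the actual Aztec implementation; the class, field, and method names here are invented for the example.

```typescript
// Hypothetical sketch of a bounded tx pool with an overflow factor.
// Not the real p2p:tx_pool code; all names are assumptions.
interface PendingTx {
  hash: string;
  priorityFee: number; // higher fee = higher priority, evicted last
}

class BoundedTxPool {
  private txs: PendingTx[] = [];

  constructor(
    private readonly maxTxPoolSize: number,
    private readonly overflowFactor: number,
  ) {}

  add(tx: PendingTx): void {
    this.txs.push(tx);
    // Eviction only triggers once the pool exceeds the inflated limit...
    if (this.txs.length > this.maxTxPoolSize * this.overflowFactor) {
      // ...then the lowest-priority txs are dropped until the pool is
      // back at the base limit.
      this.txs.sort((a, b) => b.priorityFee - a.priorityFee);
      this.txs.length = this.maxTxPoolSize;
    }
  }

  size(): number {
    return this.txs.length;
  }
}

const pool = new BoundedTxPool(1000, 1.5);
for (let i = 0; i < 1500; i++) {
  pool.add({ hash: `0x${i.toString(16)}`, priorityFee: i });
}
console.log(pool.size()); // 1500: exactly at the inflated limit, no eviction yet
pool.add({ hash: "0xoverflow", priorityFee: 9999 });
console.log(pool.size()); // 1000: eviction trimmed back to maxTxPoolSize
```

This kind of hysteresis avoids evicting on every insert near the limit, which matches the "respects the overflow factor configured" test further down in this log.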
09:36:31 [09:36:31.138] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:31 [09:36:31.139] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:31 [09:36:31.335] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:31 [09:36:31.336] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:31 [09:36:31.542] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:31 [09:36:31.544] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:31 [09:36:31.692] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:31 [09:36:31.693] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:31 [09:36:31.835] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:31 [09:36:31.836] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:31 [09:36:31.983] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:31 [09:36:31.987] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:31 [09:36:31.989] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:31 [09:36:31.991] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:31 [09:36:31.992] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
09:36:32 [09:36:32.224] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:32 [09:36:32.226] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:32 [09:36:32.230] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:36:32 [09:36:32.231] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:36:32 [09:36:32.232] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
09:36:32 PASS src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts (66.524 s)
09:36:32   KV TX pool
09:36:32     ✓ Adds txs to the pool as pending (414 ms)
09:36:32     ✓ Removes txs from the pool (99 ms)
09:36:32     ✓ Marks txs as mined (146 ms)
09:36:32     ✓ Marks txs as pending after being mined (119 ms)
09:36:32     ✓ Only marks txs as pending if they are known (96 ms)
09:36:32     ✓ Returns all transactions in the pool (176 ms)
09:36:32     ✓ Returns all txHashes in the pool (148 ms)
09:36:32     ✓ Returns txs by their hash (170 ms)
09:36:32     ✓ Returns a large number of transactions by their hash (59577 ms)
09:36:32     ✓ Returns whether or not txs exist (188 ms)
09:36:32     ✓ Returns pending tx hashes sorted by priority (184 ms)
09:36:32     ✓ Returns archived txs and purges archived txs once the archived tx limit is reached (371 ms)
09:36:32     ✓ Evicts low priority txs to satisfy the pending tx size limit (434 ms)
09:36:32     ✓ respects the overflow factor configured (600 ms)
09:36:32     ✓ Evicts txs with nullifiers that are already included in the mined block (195 ms)
09:36:32     ✓ Evicts txs with an insufficient fee payer balance after a block is mined (205 ms)
09:36:32     ✓ Evicts txs with a max block number lower than or equal to the mined block (151 ms)
09:36:32     ✓ Evicts txs with invalid archive roots after a reorg (143 ms)
09:36:32     ✓ Evicts txs with invalid fee payer balances after a reorg (140 ms)
09:36:32     ✓ Does not evict low priority txs marked as non-evictable (249 ms)
09:36:32     ✓ Evicts low priority txs after block is mined (306 ms)
09:36:32
09:36:32 Test Suites: 1 passed, 1 total
09:36:32 Tests:       21 passed, 21 total
09:36:32 Snapshots:   0 total
09:36:32 Time:        66.598 s
09:36:32 Ran all test suites matching /p2p\/src\/mem_pools\/tx_pool\/aztec_kv_tx_pool.test.ts/i.
09:36:32 Force exiting Jest: Have you considered using `--detectOpenHandles` to detect async operations that kept running after all tests finished?
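The force-exit warning above means something kept the Node event loop alive after the suite finished, so Jest had to terminate the process itself. A minimal sketch of the kind of "open handle" that causes this, assuming a timer created during a test and never cleared; `jest --detectOpenHandles` would report such a handle by name instead of silently force-exiting. The function name here is invented for illustration.

```typescript
// Hypothetical illustration of an open handle that keeps Node alive.
// An uncleared interval is a classic cause of Jest's force-exit warning.
function leakyStart(): NodeJS.Timeout {
  // If test teardown forgets to clear this, the process never exits on its own.
  return setInterval(() => {}, 60_000);
}

const handle = leakyStart();
clearInterval(handle); // proper teardown releases the handle
console.log("handle cleared");
```

In a real suite the fix is usually to close stores, timers, and sockets in `afterAll`, or (as a last resort) run Jest with `--detectOpenHandles` to find the culprit.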