Parent Log: http://ci.aztec-labs.com/338dc61726cce170
Command: bbc5a7ea91d19a83:ISOLATE=1:NAME=p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts
Commit: https://github.com/AztecProtocol/aztec-packages/commit/4800d08570523bc1b2a9e8ec0dfb09e326f4689a
Env: REF_NAME=gh-readonly-queue/next/pr-14900-d3bba2d69dbc070d51bcd50607354193573876ba CURRENT_VERSION=0.87.6 CI_FULL=1
Date: Fri Jun 13 13:33:04 UTC 2025
System: ARCH=amd64 CPUS=128 MEM=493Gi HOSTNAME=pr-14900_amd64_x4-full
Resources: CPU_LIST=0-127 CPUS=2 MEM=8g TIMEOUT=600s
History: http://ci.aztec-labs.com/list/history_0a54840dde01048b_next
13:33:04 +++ id -u
13:33:04 +++ id -g
13:33:04 ++ docker run -d --name p2p_src_mem_pools_tx_pool_aztec_kv_tx_pool.test.ts --net=none --cpuset-cpus=0-127 --cpus=2 --memory=8g --user 1000:1000 -v/home/aztec-dev:/home/aztec-dev --mount type=tmpfs,target=/tmp,tmpfs-size=1g --workdir /home/aztec-dev/aztec-packages -e HOME -e VERBOSE -e GIT_CONFIG_GLOBAL=/home/aztec-dev/aztec-packages/build-images/src/home/.gitconfig -e FORCE_COLOR=true -e CPUS -e MEM aztecprotocol/build:3.0 /bin/bash -c 'timeout -v 600s bash -c '\''yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts'\'''
13:33:04 + cid=02734fb0c25969f9ef7e91a6749911cc9dcf3298f18d472b5b393f1b1cbd1547
13:33:04 + set +x
13:33:08 [13:33:08.555] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:33:08 [13:33:08.566] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:33:08 [13:33:08.975] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:33:08 [13:33:08.977] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:33:09 [13:33:09.074] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:33:09 [13:33:09.076] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:33:09 [13:33:09.226] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:33:09 [13:33:09.228] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:33:09 [13:33:09.350] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:33:09 [13:33:09.353] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:33:09 [13:33:09.446] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:33:09 [13:33:09.449] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:33:09 [13:33:09.626] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:33:09 [13:33:09.628] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:33:09 [13:33:09.760] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:33:09 [13:33:09.763] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:33:09 [13:33:09.901] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:33:09 [13:33:09.903] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:12 [13:34:12.862] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:12 [13:34:12.872] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:13 [13:34:13.045] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:13 [13:34:13.048] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:13 [13:34:13.239] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:13 [13:34:13.244] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:13 [13:34:13.248] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:13 [13:34:13.253] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:13 [13:34:13.613] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:13 [13:34:13.615] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:13 [13:34:13.618] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:13 [13:34:13.620] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:13 [13:34:13.621] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
13:34:14 [13:34:14.002] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:14 [13:34:14.003] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:14 [13:34:14.004] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:14 [13:34:14.005] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:14 [13:34:14.006] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":1000}
13:34:14 [13:34:14.007] INFO: p2p:tx_pool Allowing tx pool size to grow above limit {"maxTxPoolSize":1000,"txPoolOverflowFactor":1.5}
13:34:14 [13:34:14.634] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:14 [13:34:14.635] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:14 [13:34:14.808] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:14 [13:34:14.809] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:15 [13:34:15.013] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:15 [13:34:15.014] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:15 [13:34:15.176] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:15 [13:34:15.177] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:15 [13:34:15.321] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:15 [13:34:15.322] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:15 [13:34:15.459] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:15 [13:34:15.460] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:15 [13:34:15.462] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:15 [13:34:15.463] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:15 [13:34:15.464] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
13:34:15 [13:34:15.666] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:15 [13:34:15.667] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:15 [13:34:15.669] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
13:34:15 [13:34:15.673] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
13:34:15 [13:34:15.674] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
13:34:16 PASS src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts (70.44 s)
13:34:16 KV TX pool
13:34:16   ✓ Adds txs to the pool as pending (424 ms)
13:34:16   ✓ Removes txs from the pool (99 ms)
13:34:16   ✓ Marks txs as mined (152 ms)
13:34:16   ✓ Marks txs as pending after being mined (124 ms)
13:34:16   ✓ Only marks txs as pending if they are known (95 ms)
13:34:16   ✓ Returns all transactions in the pool (180 ms)
13:34:16   ✓ Returns all txHashes in the pool (133 ms)
13:34:16   ✓ Returns txs by their hash (140 ms)
13:34:16   ✓ Returns a large number of transactions by their hash (62958 ms)
13:34:16   ✓ Returns whether or not txs exist (184 ms)
13:34:16   ✓ Returns pending tx hashes sorted by priority (194 ms)
13:34:16   ✓ Returns archived txs and purges archived txs once the archived tx limit is reached (373 ms)
13:34:16   ✓ Evicts low priority txs to satisfy the pending tx size limit (389 ms)
13:34:16   ✓ respects the overflow factor configured (631 ms)
13:34:16   ✓ Evicts txs with nullifiers that are already included in the mined block (173 ms)
13:34:16   ✓ Evicts txs with an insufficient fee payer balance after a block is mined (201 ms)
13:34:16   ✓ Evicts txs with a max block number lower than or equal to the mined block (167 ms)
13:34:16   ✓ Evicts txs with invalid archive roots after a reorg (144 ms)
13:34:16   ✓ Evicts txs with invalid fee payer balances after a reorg (136 ms)
13:34:16   ✓ Does not evict low priority txs marked as non-evictable (206 ms)
13:34:16   ✓ Evicts low priority txs after block is mined (330 ms)
13:34:16
13:34:16 Test Suites: 1 passed, 1 total
13:34:16 Tests: 21 passed, 21 total
13:34:16 Snapshots: 0 total
13:34:16 Time: 70.54 s
13:34:16 Ran all test suites matching /p2p\/src\/mem_pools\/tx_pool\/aztec_kv_tx_pool.test.ts/i.
13:34:16 Force exiting Jest: Have you considered using `--detectOpenHandles` to detect async operations that kept running after all tests finished?
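
The force-exit message above is Jest's standard warning that some async operation was still running when the suite finished. --detectOpenHandles is a real Jest flag, so a hedged way to chase this down locally, assuming the suite can be invoked through the package's own Jest binary via yarn (the CI run above goes through run_test.sh instead), would be:

  # Hypothetical local reproduction, not taken from this log: re-run the same
  # suite with open-handle detection so Jest reports which async operations
  # were still pending after the tests finished.
  cd yarn-project/p2p
  yarn jest src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts --detectOpenHandles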