Parent Log: http://ci.aztec-labs.com/5dedeee27fb005f9
Command: 1f475c5d66a44412:ISOLATE=1:NAME=p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts
Commit: https://github.com/AztecProtocol/aztec-packages/commit/e56baa7f24bac54baf9e2f22f6f33ae6fa8b8c0f
Env: REF_NAME=gh-readonly-queue/next/pr-14891-76ca48a2187e3506bb464eae574e49476c2876ca CURRENT_VERSION=0.87.6 CI_FULL=0
Date: Fri Jun 13 19:22:05 UTC 2025
System: ARCH=arm64 CPUS=64 MEM=247Gi HOSTNAME=pr-14891_arm64_a1-fast
Resources: CPU_LIST=0-63 CPUS=2 MEM=8g TIMEOUT=600s
History: http://ci.aztec-labs.com/list/history_0a54840dde01048b_next
19:22:05 +++ id -u
19:22:05 +++ id -g
19:22:05 ++ docker run -d --name p2p_src_mem_pools_tx_pool_aztec_kv_tx_pool.test.ts --net=none --cpuset-cpus=0-63 --cpus=2 --memory=8g --user 1000:1000 -v/home/aztec-dev:/home/aztec-dev --mount type=tmpfs,target=/tmp,tmpfs-size=1g --workdir /home/aztec-dev/aztec-packages -e HOME -e VERBOSE -e GIT_CONFIG_GLOBAL=/home/aztec-dev/aztec-packages/build-images/src/home/.gitconfig -e FORCE_COLOR=true -e CPUS -e MEM aztecprotocol/build:3.0 /bin/bash -c 'timeout -v 600s bash -c '\''yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts'\'''
19:22:05 + cid=f058f6c81f7acca575a54ef3f28579c642faa513b1c05ea7ba049e8c19f0b8e4
19:22:05 + set +x
19:22:08 [19:22:08.155] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:22:08 [19:22:08.164] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:22:08 [19:22:08.495] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:22:08 [19:22:08.496] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:22:08 [19:22:08.582] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:22:08 [19:22:08.583] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:22:08 [19:22:08.691] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:22:08 [19:22:08.692] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:22:08 [19:22:08.783] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:22:08 [19:22:08.785] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:22:08 [19:22:08.866] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:22:08 [19:22:08.867] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:22:09 [19:22:09.013] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:22:09 [19:22:09.014] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:22:09 [19:22:09.126] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:22:09 [19:22:09.127] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:22:09 [19:22:09.257] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:22:09 [19:22:09.258] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:03 [19:23:03.571] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:03 [19:23:03.572] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:03 [19:23:03.735] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:03 [19:23:03.736] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:03 [19:23:03.878] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:03 [19:23:03.879] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:03 [19:23:03.880] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:03 [19:23:03.881] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:04 [19:23:04.163] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:04 [19:23:04.164] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:04 [19:23:04.166] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:04 [19:23:04.167] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:04 [19:23:04.168] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
19:23:04 [19:23:04.519] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:04 [19:23:04.520] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:04 [19:23:04.521] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:04 [19:23:04.522] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:04 [19:23:04.523] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":1000}
19:23:04 [19:23:04.523] INFO: p2p:tx_pool Allowing tx pool size to grow above limit {"maxTxPoolSize":1000,"txPoolOverflowFactor":1.5}
19:23:05 [19:23:05.075] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:05 [19:23:05.076] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:05 [19:23:05.227] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:05 [19:23:05.228] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:05 [19:23:05.385] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:05 [19:23:05.386] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:05 [19:23:05.505] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:05 [19:23:05.506] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:05 [19:23:05.639] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:05 [19:23:05.640] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:05 [19:23:05.771] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:05 [19:23:05.772] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:05 [19:23:05.774] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:05 [19:23:05.775] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:05 [19:23:05.776] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
19:23:05 [19:23:05.962] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:05 [19:23:05.963] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:05 [19:23:05.966] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
19:23:05 [19:23:05.967] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
19:23:05 [19:23:05.968] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
19:23:06 PASS src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts (60.066 s)
19:23:06 KV TX pool
19:23:06 ✓ Adds txs to the pool as pending (342 ms)
19:23:06 ✓ Removes txs from the pool (86 ms)
19:23:06 ✓ Marks txs as mined (109 ms)
19:23:06 ✓ Marks txs as pending after being mined (92 ms)
19:23:06 ✓ Only marks txs as pending if they are known (83 ms)
19:23:06 ✓ Returns all transactions in the pool (147 ms)
19:23:06 ✓ Returns all txHashes in the pool (112 ms)
19:23:06 ✓ Returns txs by their hash (132 ms)
19:23:06 ✓ Returns a large number of transactions by their hash (54312 ms)
19:23:06 ✓ Returns whether or not txs exist (164 ms)
19:23:06 ✓ Returns pending tx hashes sorted by priority (143 ms)
19:23:06 ✓ Returns archived txs and purges archived txs once the archived tx limit is reached (286 ms)
19:23:06 ✓ Evicts low priority txs to satisfy the pending tx size limit (355 ms)
19:23:06 ✓ respects the overflow factor configured (556 ms)
19:23:06 ✓ Evicts txs with nullifiers that are already included in the mined block (151 ms)
19:23:06 ✓ Evicts txs with an insufficient fee payer balance after a block is mined (157 ms)
19:23:06 ✓ Evicts txs with a max block number lower than or equal to the mined block (121 ms)
19:23:06 ✓ Evicts txs with invalid archive roots after a reorg (133 ms)
19:23:06 ✓ Evicts txs with invalid fee payer balances after a reorg (129 ms)
19:23:06 ✓ Does not evict low priority txs marked as non-evictable (192 ms)
19:23:06 ✓ Evicts low priority txs after block is mined (265 ms)
19:23:06
19:23:06 Test Suites: 1 passed, 1 total
19:23:06 Tests: 21 passed, 21 total
19:23:06 Snapshots: 0 total
19:23:06 Time: 60.133 s
19:23:06 Ran all test suites matching /p2p\/src\/mem_pools\/tx_pool\/aztec_kv_tx_pool.test.ts/i.
19:23:06 Force exiting Jest: Have you considered using `--detectOpenHandles` to detect async operations that kept running after all tests finished?
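The force-exit warning means something was still holding an async handle when the suite finished (plausibly one of the LMDB stores opened during the run, though the log does not identify it). A hedged sketch of how open-handle detection could be enabled for a rerun; the config file shown is illustrative, not the repository's actual Jest setup, but both options are standard Jest settings:

// Illustrative jest.config.ts, not the actual config in aztec-packages.
import type { Config } from 'jest';

const config: Config = {
  detectOpenHandles: true, // report async operations still open after the tests finish
  forceExit: false,        // let the open handle surface instead of force-exiting
};

export default config;

// Equivalent one-off CLI run (standard Jest flag):
//   yarn jest p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts --detectOpenHandles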