Parent Log: http://ci.aztec-labs.com/7b3e6e8a2abdd89b
Command: 9e91553d04f5b22d:ISOLATE=1:NAME=p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts
Commit: https://github.com/AztecProtocol/aztec-packages/commit/10badd24359b04680068afd9ca24407383374db1
Env: REF_NAME=gh-readonly-queue/next/pr-15019-7d223783d91db15002a09abc1b52d1455eb3e3da CURRENT_VERSION=0.87.6 CI_FULL=0
Date: Mon Jun 16 11:10:08 UTC 2025
System: ARCH=arm64 CPUS=64 MEM=247Gi HOSTNAME=pr-15019_arm64_a1-fast
Resources: CPU_LIST=0-63 CPUS=2 MEM=8g TIMEOUT=600s
History: http://ci.aztec-labs.com/list/history_0a54840dde01048b_next
11:10:08 +++ id -u
11:10:08 +++ id -g
11:10:08 ++ docker run -d --name p2p_src_mem_pools_tx_pool_aztec_kv_tx_pool.test.ts --net=none --cpuset-cpus=0-63 --cpus=2 --memory=8g --user 1000:1000 -v/home/aztec-dev:/home/aztec-dev --mount type=tmpfs,target=/tmp,tmpfs-size=1g --workdir /home/aztec-dev/aztec-packages -e HOME -e VERBOSE -e GIT_CONFIG_GLOBAL=/home/aztec-dev/aztec-packages/build-images/src/home/.gitconfig -e FORCE_COLOR=true -e CPUS -e MEM aztecprotocol/build:3.0 /bin/bash -c 'timeout -v 600s bash -c '\''yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts'\'''
11:10:08 + cid=ecc4fffa1099f564cd5270c24da0c487d363ae9b1bc97fb3bb47f46cf012ff5f
11:10:08 + set +x
11:10:11 [11:10:11.148] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:10:11 [11:10:11.156] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:10:11 [11:10:11.451] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:10:11 [11:10:11.452] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:10:11 [11:10:11.532] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:10:11 [11:10:11.533] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:10:11 [11:10:11.661] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:10:11 [11:10:11.662] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:10:11 [11:10:11.752] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:10:11 [11:10:11.754] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:10:11 [11:10:11.834] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:10:11 [11:10:11.835] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:10:11 [11:10:11.982] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:10:11 [11:10:11.983] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:10:12 [11:10:12.093] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:10:12 [11:10:12.094] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:10:12 [11:10:12.224] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:10:12 [11:10:12.225] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:06 [11:11:06.525] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:06 [11:11:06.527] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:06 [11:11:06.689] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:06 [11:11:06.690] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:06 [11:11:06.832] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:06 [11:11:06.833] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:06 [11:11:06.834] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:06 [11:11:06.835] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:07 [11:11:07.115] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:07 [11:11:07.116] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:07 [11:11:07.118] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:07 [11:11:07.118] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:07 [11:11:07.119] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
11:11:07 [11:11:07.468] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:07 [11:11:07.469] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:07 [11:11:07.470] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:07 [11:11:07.471] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:07 [11:11:07.472] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":1000}
11:11:07 [11:11:07.472] INFO: p2p:tx_pool Allowing tx pool size to grow above limit {"maxTxPoolSize":1000,"txPoolOverflowFactor":1.5}
11:11:08 [11:11:08.026] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:08 [11:11:08.027] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:08 [11:11:08.177] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:08 [11:11:08.179] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:08 [11:11:08.335] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:08 [11:11:08.336] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:08 [11:11:08.456] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:08 [11:11:08.457] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:08 [11:11:08.588] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:08 [11:11:08.589] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:08 [11:11:08.717] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:08 [11:11:08.718] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:08 [11:11:08.720] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:08 [11:11:08.721] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:08 [11:11:08.722] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
11:11:08 [11:11:08.910] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:08 [11:11:08.911] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:08 [11:11:08.912] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
11:11:08 [11:11:08.914] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
11:11:08 [11:11:08.914] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
11:11:09 PASS src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts (60.002 s)
11:11:09 KV TX pool
11:11:09 ✓ Adds txs to the pool as pending (305 ms)
11:11:09 ✓ Removes txs from the pool (80 ms)
11:11:09 ✓ Marks txs as mined (129 ms)
11:11:09 ✓ Marks txs as pending after being mined (91 ms)
11:11:09 ✓ Only marks txs as pending if they are known (82 ms)
11:11:09 ✓ Returns all transactions in the pool (148 ms)
11:11:09 ✓ Returns all txHashes in the pool (111 ms)
11:11:09 ✓ Returns txs by their hash (130 ms)
11:11:09 ✓ Returns a large number of transactions by their hash (54300 ms)
11:11:09 ✓ Returns whether or not txs exist (164 ms)
11:11:09 ✓ Returns pending tx hashes sorted by priority (143 ms)
11:11:09 ✓ Returns archived txs and purges archived txs once the archived tx limit is reached (283 ms)
11:11:09 ✓ Evicts low priority txs to satisfy the pending tx size limit (352 ms)
11:11:09 ✓ respects the overflow factor configured (558 ms)
11:11:09 ✓ Evicts txs with nullifiers that are already included in the mined block (152 ms)
11:11:09 ✓ Evicts txs with an insufficient fee payer balance after a block is mined (156 ms)
11:11:09 ✓ Evicts txs with a max block number lower than or equal to the mined block (121 ms)
11:11:09 ✓ Evicts txs with invalid archive roots after a reorg (132 ms)
11:11:09 ✓ Evicts txs with invalid fee payer balances after a reorg (127 ms)
11:11:09 ✓ Does not evict low priority txs marked as non-evictable (192 ms)
11:11:09 ✓ Evicts low priority txs after block is mined (266 ms)
11:11:09
11:11:09 Test Suites: 1 passed, 1 total
11:11:09 Tests: 21 passed, 21 total
11:11:09 Snapshots: 0 total
11:11:09 Time: 60.069 s
11:11:09 Ran all test suites matching /p2p\/src\/mem_pools\/tx_pool\/aztec_kv_tx_pool.test.ts/i.
11:11:09 Force exiting Jest: Have you considered using `--detectOpenHandles` to detect async operations that kept running after all tests finished?
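Note: the closing warning means some async handles (presumably the LMDB-backed kv-store instances started throughout the run) were still open when the suite finished, so Jest had to force-exit; re-running the file with the `--detectOpenHandles` flag it mentions lists the offending handles. A minimal, self-contained sketch of the pattern and the cleanup that avoids it, using a plain timer rather than anything from this repo:

```ts
// Illustration only (not aztec-packages code) of the kind of open handle
// behind Jest's force-exit warning: an async resource created in a test and
// never released keeps the Node event loop alive.
import { afterAll, describe, expect, it } from '@jest/globals';

describe('open handle example', () => {
  let interval: ReturnType<typeof setInterval>;

  it('starts a background task', () => {
    interval = setInterval(() => {}, 1_000); // this handle outlives the test
    expect(interval).toBeDefined();
  });

  // Releasing the handle lets Jest exit cleanly; without this, running the
  // file with `--detectOpenHandles` would point at the setInterval above.
  afterAll(() => clearInterval(interval));
});
```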