Parent Log:
http://ci.aztec-labs.com/1c420a977384450c
Command: 426cee6a32f1da3c:ISOLATE=1:NAME=p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts
Commit:
https://github.com/AztecProtocol/aztec-packages/commit/d96baf1c44329e8b2e3a432ad803f702f5184a62
Env: REF_NAME=gh-readonly-queue/next/pr-15025-26c5a39fe03723d11540f721293d7aebd1f478d9 CURRENT_VERSION=0.87.6 CI_FULL=1
Date: Fri Jun 13 09:32:58 UTC 2025
System: ARCH=amd64 CPUS=128 MEM=493Gi HOSTNAME=pr-15025_amd64_x3-full
Resources: CPU_LIST=0-127 CPUS=2 MEM=8g TIMEOUT=600s
History:
http://ci.aztec-labs.com/list/history_0a54840dde01048b_next
09:32:58 +++ id -u
09:32:58 +++ id -g
09:32:58 ++ docker run -d --name p2p_src_mem_pools_tx_pool_aztec_kv_tx_pool.test.ts --net=none --cpuset-cpus=0-127 --cpus=2 --memory=8g --user 1000:1000 -v/home/aztec-dev:/home/aztec-dev --mount type=tmpfs,target=/tmp,tmpfs-size=1g --workdir /home/aztec-dev/aztec-packages -e HOME -e VERBOSE -e GIT_CONFIG_GLOBAL=/home/aztec-dev/aztec-packages/build-images/src/home/.gitconfig -e FORCE_COLOR=true -e CPUS -e MEM aztecprotocol/build:3.0 /bin/bash -c 'timeout -v 600s bash -c '\''yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts'\'''
09:32:58 + cid=8590ce82cdb8700d5fb8be746903d7fc6e9454497fee356fe2bab0cc42e6a873
09:32:58 + set +x
09:33:02 [09:33:02.592] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:33:02 [09:33:02.603] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:33:03 [09:33:03.058] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:33:03 [09:33:03.062] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:33:03 [09:33:03.176] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:33:03 [09:33:03.210] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:33:03 [09:33:03.371] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:33:03 [09:33:03.373] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:33:03 [09:33:03.500] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:33:03 [09:33:03.503] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:33:03 [09:33:03.602] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:33:03 [09:33:03.604] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:33:03 [09:33:03.782] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:33:03 [09:33:03.783] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:33:03 [09:33:03.914] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:33:03 [09:33:03.916] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:04 [09:33:04.083] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:33:04 [09:33:04.084] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:06 [09:34:06.859] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:06 [09:34:06.862] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:07 [09:34:07.030] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:07 [09:34:07.032] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:07 [09:34:07.208] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:07 [09:34:07.210] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:07 [09:34:07.213] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:07 [09:34:07.215] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:07 [09:34:07.569] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:07 [09:34:07.572] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:07 [09:34:07.575] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:07 [09:34:07.577] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:07 [09:34:07.579] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
09:34:08 [09:34:08.024] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:08 [09:34:08.026] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:08 [09:34:08.029] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:08 [09:34:08.030] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:08 [09:34:08.032] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":1000}
09:34:08 [09:34:08.032] INFO: p2p:tx_pool Allowing tx pool size to grow above limit {"maxTxPoolSize":1000,"txPoolOverflowFactor":1.5}
09:34:08 [09:34:08.707] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:08 [09:34:08.709] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:08 [09:34:08.876] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:08 [09:34:08.877] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:09 [09:34:09.071] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:09 [09:34:09.073] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:09 [09:34:09.196] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:09 [09:34:09.197] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:09 [09:34:09.379] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:09 [09:34:09.380] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:09 [09:34:09.548] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:09 [09:34:09.550] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:09 [09:34:09.552] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:09 [09:34:09.553] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:09 [09:34:09.560] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
09:34:09 [09:34:09.794] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:09 [09:34:09.797] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:09 [09:34:09.799] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
09:34:09 [09:34:09.801] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
09:34:09 [09:34:09.802] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
09:34:10 PASS src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts (70.591 s)
09:34:10 KV TX pool
09:34:10 ✓ Adds txs to the pool as pending (469 ms)
09:34:10 ✓ Removes txs from the pool (117 ms)
09:34:10 ✓ Marks txs as mined (196 ms)
09:34:10 ✓ Marks txs as pending after being mined (128 ms)
09:34:10 ✓ Only marks txs as pending if they are known (103 ms)
09:34:10 ✓ Returns all transactions in the pool (179 ms)
09:34:10 ✓ Returns all txHashes in the pool (131 ms)
09:34:10 ✓ Returns txs by their hash (168 ms)
09:34:10 ✓ Returns a large number of transactions by their hash (62773 ms)
09:34:10 ✓ Returns whether or not txs exist (172 ms)
09:34:10 ✓ Returns pending tx hashes sorted by priority (177 ms)
09:34:10 ✓ Returns archived txs and purges archived txs once the archived tx limit is reached (360 ms)
09:34:10 ✓ Evicts low priority txs to satisfy the pending tx size limit (454 ms)
09:34:10 ✓ respects the overflow factor configured (682 ms)
09:34:10 ✓ Evicts txs with nullifiers that are already included in the mined block (168 ms)
09:34:10 ✓ Evicts txs with an insufficient fee payer balance after a block is mined (190 ms)
09:34:10 ✓ Evicts txs with a max block number lower than or equal to the mined block (128 ms)
09:34:10 ✓ Evicts txs with invalid archive roots after a reorg (182 ms)
09:34:10 ✓ Evicts txs with invalid fee payer balances after a reorg (168 ms)
09:34:10 ✓ Does not evict low priority txs marked as non-evictable (243 ms)
09:34:10 ✓ Evicts low priority txs after block is mined (335 ms)
09:34:10
09:34:10 Test Suites: 1 passed, 1 total
09:34:10 Tests: 21 passed, 21 total
09:34:10 Snapshots: 0 total
09:34:10 Time: 70.675 s
09:34:10 Ran all test suites matching /p2p\/src\/mem_pools\/tx_pool\/aztec_kv_tx_pool.test.ts/i.
09:34:10 Force exiting Jest: Have you considered using `--detectOpenHandles` to detect async operations that kept running after all tests finished?