Parent Log: http://ci.aztec-labs.com/dc2fa81d7e6e38ef
Command: 5f41952f8889ac40:ISOLATE=1:NAME=p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts
Commit: https://github.com/AztecProtocol/aztec-packages/commit/1650b3d017a2a4fa5182c8787e79cc108e24e1f4
Env: REF_NAME=gh-readonly-queue/next/pr-14990-77a00686be4080a71b03d68671da6c8b270b62aa CURRENT_VERSION=0.87.6 CI_FULL=1
Date: Thu Jun 12 10:49:27 UTC 2025
System: ARCH=amd64 CPUS=128 MEM=493Gi HOSTNAME=pr-14990_amd64_x1-full
Resources: CPU_LIST=0-127 CPUS=2 MEM=8g TIMEOUT=600s
History: http://ci.aztec-labs.com/list/history_0a54840dde01048b_next
10:49:27 +++ id -u
10:49:27 +++ id -g
10:49:27 ++ docker run -d --name p2p_src_mem_pools_tx_pool_aztec_kv_tx_pool.test.ts --net=none --cpuset-cpus=0-127 --cpus=2 --memory=8g --user 1000:1000 -v/home/aztec-dev:/home/aztec-dev --mount type=tmpfs,target=/tmp,tmpfs-size=1g --workdir /home/aztec-dev/aztec-packages -e HOME -e VERBOSE -e GIT_CONFIG_GLOBAL=/home/aztec-dev/aztec-packages/build-images/src/home/.gitconfig -e FORCE_COLOR=true -e CPUS -e MEM aztecprotocol/build:3.0 /bin/bash -c 'timeout -v 600s bash -c '\''yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts'\'''
10:49:27 + cid=324412b917f432255338ad3d9fc359acbda2b901e38e4dabeb883c7e61dff6d0
10:49:27 + set +x
10:49:31 [10:49:31.781] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:49:31 [10:49:31.792] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:49:32 [10:49:32.213] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:49:32 [10:49:32.216] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:49:32 [10:49:32.354] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:49:32 [10:49:32.358] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:49:32 [10:49:32.522] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:49:32 [10:49:32.524] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:49:32 [10:49:32.663] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:49:32 [10:49:32.665] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:49:32 [10:49:32.769] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:49:32 [10:49:32.770] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:49:32 [10:49:32.961] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:49:32 [10:49:32.964] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:49:33 [10:49:33.119] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:49:33 [10:49:33.121] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:49:33 [10:49:33.294] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:49:33 [10:49:33.296] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:35 [10:50:35.474] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:35 [10:50:35.477] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:35 [10:50:35.666] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:35 [10:50:35.670] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:35 [10:50:35.849] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:35 [10:50:35.852] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:35 [10:50:35.860] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:35 [10:50:35.862] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:36 [10:50:36.194] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:36 [10:50:36.198] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:36 [10:50:36.205] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:36 [10:50:36.209] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:36 [10:50:36.211] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
10:50:36 [10:50:36.656] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:36 [10:50:36.658] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:36 [10:50:36.661] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:36 [10:50:36.662] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:36 [10:50:36.663] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":1000}
10:50:36 [10:50:36.663] INFO: p2p:tx_pool Allowing tx pool size to grow above limit {"maxTxPoolSize":1000,"txPoolOverflowFactor":1.5}
10:50:37 [10:50:37.303] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:37 [10:50:37.305] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:37 [10:50:37.501] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:37 [10:50:37.503] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:37 [10:50:37.693] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:37 [10:50:37.694] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:37 [10:50:37.828] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:37 [10:50:37.829] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:37 [10:50:37.999] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:38 [10:50:38.001] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:38 [10:50:38.171] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:38 [10:50:38.174] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:38 [10:50:38.176] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:38 [10:50:38.177] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:38 [10:50:38.178] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
10:50:38 [10:50:38.431] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:38 [10:50:38.436] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:38 [10:50:38.441] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
10:50:38 [10:50:38.448] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
10:50:38 [10:50:38.450] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
10:50:38 PASS src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts (70.171 s)
10:50:38 KV TX pool
10:50:38   ✓ Adds txs to the pool as pending (435 ms)
10:50:38   ✓ Removes txs from the pool (102 ms)
10:50:38   ✓ Marks txs as mined (167 ms)
10:50:38   ✓ Marks txs as pending after being mined (140 ms)
10:50:38   ✓ Only marks txs as pending if they are known (105 ms)
10:50:38   ✓ Returns all transactions in the pool (190 ms)
10:50:38   ✓ Returns all txHashes in the pool (160 ms)
10:50:38   ✓ Returns txs by their hash (174 ms)
10:50:38   ✓ Returns a large number of transactions by their hash (62178 ms)
10:50:38   ✓ Returns whether or not txs exist (192 ms)
10:50:38   ✓ Returns pending tx hashes sorted by priority (183 ms)
10:50:38   ✓ Returns archived txs and purges archived txs once the archived tx limit is reached (342 ms)
10:50:38   ✓ Evicts low priority txs to satisfy the pending tx size limit (465 ms)
10:50:38   ✓ respects the overflow factor configured (647 ms)
10:50:38   ✓ Evicts txs with nullifiers that are already included in the mined block (197 ms)
10:50:38   ✓ Evicts txs with an insufficient fee payer balance after a block is mined (189 ms)
10:50:38   ✓ Evicts txs with a max block number lower than or equal to the mined block (138 ms)
10:50:38   ✓ Evicts txs with invalid archive roots after a reorg (171 ms)
10:50:38   ✓ Evicts txs with invalid fee payer balances after a reorg (171 ms)
10:50:38   ✓ Does not evict low priority txs marked as non-evictable (259 ms)
10:50:38   ✓ Evicts low priority txs after block is mined (345 ms)
10:50:38
10:50:38 Test Suites: 1 passed, 1 total
10:50:38 Tests: 21 passed, 21 total
10:50:38 Snapshots: 0 total
10:50:38 Time: 70.267 s
10:50:38 Ran all test suites matching /p2p\/src\/mem_pools\/tx_pool\/aztec_kv_tx_pool.test.ts/i.
10:50:38 Force exiting Jest: Have you considered using `--detectOpenHandles` to detect async operations that kept running after all tests finished?
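The final "Force exiting Jest" line is Jest's standard warning that some async handle was still open when the run ended; it does not affect the pass/fail result above. If it needed chasing down, one way would be to rerun this single suite with the flag the warning names (the exact invocation below is an assumption about how the workspace is laid out; only --detectOpenHandles itself comes from the log):

    cd yarn-project/p2p && yarn jest src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts --detectOpenHandles

With that flag, Jest reports which operations (timers, sockets, open stores, and so on) kept the process alive instead of force-exiting.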