Parent Log: http://ci.aztec-labs.com/e0d14622d89aaee4
Command: 55a5c1e5cd3f3ca6:ISOLATE=1:NAME=p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts
Commit: https://github.com/AztecProtocol/aztec-packages/commit/28bf32bfc2566751e8baa0a9125a9db20d474bee
Env: REF_NAME=gh-readonly-queue/next/pr-15154-9071986bfe3af58c70d2c80c10f523e22bfe4cb4 CURRENT_VERSION=0.87.6 CI_FULL=1
Date: Fri Jun 20 15:25:08 UTC 2025
System: ARCH=amd64 CPUS=128 MEM=493Gi HOSTNAME=pr-15154_amd64_x3-full
Resources: CPU_LIST=0-127 CPUS=2 MEM=8g TIMEOUT=600s
History: http://ci.aztec-labs.com/list/history_0a54840dde01048b_next
15:25:09 +++ id -u
15:25:09 +++ id -g
15:25:09 ++ docker run -d --name p2p_src_mem_pools_tx_pool_aztec_kv_tx_pool.test.ts --net=none --cpuset-cpus=0-127 --cpus=2 --memory=8g --user 1000:1000 -v/home/aztec-dev:/home/aztec-dev --mount type=tmpfs,target=/tmp,tmpfs-size=1g --workdir /home/aztec-dev/aztec-packages -e HOME -e VERBOSE -e GIT_CONFIG_GLOBAL=/home/aztec-dev/aztec-packages/build-images/src/home/.gitconfig -e FORCE_COLOR=true -e CPUS -e MEM aztecprotocol/build:3.0 /bin/bash -c 'timeout -v 600s bash -c '\''yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts'\'''
15:25:09 + cid=aa521515a2ffb275710ec9b5ea13e635caf00bd8f5a60bf17d56f15643a2e382
15:25:09 + set +x
15:25:14 [15:25:14.031] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:25:14 [15:25:14.042] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:25:14 [15:25:14.468] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:25:14 [15:25:14.469] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:25:14 [15:25:14.572] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:25:14 [15:25:14.574] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:25:14 [15:25:14.742] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:25:14 [15:25:14.744] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:25:14 [15:25:14.871] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:25:14 [15:25:14.875] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:25:14 [15:25:14.990] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:25:14 [15:25:14.992] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:25:15 [15:25:15.210] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:25:15 [15:25:15.212] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:25:15 [15:25:15.385] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:25:15 [15:25:15.386] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:25:15 [15:25:15.577] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:25:15 [15:25:15.579] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:31 [15:26:31.316] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:31 [15:26:31.319] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:31 [15:26:31.550] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:31 [15:26:31.552] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:31 [15:26:31.758] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:31 [15:26:31.760] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:31 [15:26:31.763] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:31 [15:26:31.765] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:32 [15:26:32.209] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:32 [15:26:32.214] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:32 [15:26:32.217] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:32 [15:26:32.218] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:32 [15:26:32.220] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
15:26:32 [15:26:32.737] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:32 [15:26:32.739] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:32 [15:26:32.741] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:32 [15:26:32.744] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:32 [15:26:32.745] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":1000}
15:26:32 [15:26:32.746] INFO: p2p:tx_pool Allowing tx pool size to grow above limit {"maxTxPoolSize":1000,"txPoolOverflowFactor":1.5}
15:26:33 [15:26:33.263] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:33 [15:26:33.264] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:33 [15:26:33.408] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:33 [15:26:33.409] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:33 [15:26:33.583] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:33 [15:26:33.584] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:33 [15:26:33.702] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:33 [15:26:33.703] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:33 [15:26:33.832] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:33 [15:26:33.833] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:33 [15:26:33.977] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:33 [15:26:33.978] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:33 [15:26:33.980] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:33 [15:26:33.981] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:33 [15:26:33.982] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
15:26:34 [15:26:34.195] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:34 [15:26:34.196] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:34 [15:26:34.198] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
15:26:34 [15:26:34.199] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
15:26:34 [15:26:34.200] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
15:26:34 PASS src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts (83.735 s)
15:26:34   KV TX pool
15:26:34     Adds txs to the pool as pending (456 ms)
15:26:34     Removes txs from the pool (105 ms)
15:26:34     Marks txs as mined (170 ms)
15:26:34     Marks txs as pending after being mined (128 ms)
15:26:34     Only marks txs as pending if they are known (119 ms)
15:26:34     Returns all transactions in the pool (220 ms)
15:26:34     Returns all txHashes in the pool (174 ms)
15:26:34     Returns txs by their hash (191 ms)
15:26:34     Returns a large number of transactions by their hash (75738 ms)
15:26:34     Returns whether or not txs exist (234 ms)
15:26:34     Returns pending tx hashes sorted by priority (208 ms)
15:26:34     Returns archived txs and purges archived txs once the archived tx limit is reached (450 ms)
15:26:34     Evicts low priority txs to satisfy the pending tx size limit (528 ms)
15:26:34     respects the overflow factor configured (526 ms)
15:26:34     Evicts txs with nullifiers that are already included in the mined block (145 ms)
15:26:34     Evicts txs with an insufficient fee payer balance after a block is mined (175 ms)
15:26:34     Evicts txs with a max block number lower than or equal to the mined block (119 ms)
15:26:34     Evicts txs with invalid archive roots after a reorg (130 ms)
15:26:34     Evicts txs with invalid fee payer balances after a reorg (144 ms)
15:26:34     Does not evict low priority txs marked as non-evictable (218 ms)
15:26:34     Evicts low priority txs after block is mined (303 ms)
15:26:34
15:26:34 Test Suites: 1 passed, 1 total
15:26:34 Tests:       21 passed, 21 total
15:26:34 Snapshots:   0 total
15:26:34 Time:        83.878 s
15:26:34 Ran all test suites matching p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts.
15:26:34 Force exiting Jest: Have you considered using `--detectOpenHandles` to detect async operations that kept running after all tests finished?