Parent Log: http://ci.aztec-labs.com/2221392aab538d60
Command: 6e611b2acba6dc44:ISOLATE=1:NAME=p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts
Commit: https://github.com/AztecProtocol/aztec-packages/commit/9a37be7dc4208f0eeb44ec50194c00ca3f5c4cc6
Env: REF_NAME=gh-readonly-queue/next/pr-15072-1e338a3fb2e2077f1feaee8b86c42644ff8a5352 CURRENT_VERSION=0.87.6 CI_FULL=1
Date: Mon Jun 16 16:42:28 UTC 2025
System: ARCH=amd64 CPUS=128 MEM=493Gi HOSTNAME=pr-15072_amd64_x3-full
Resources: CPU_LIST=0-127 CPUS=2 MEM=8g TIMEOUT=600s
History: http://ci.aztec-labs.com/list/history_0a54840dde01048b_next
16:42:28 +++ id -u
16:42:28 +++ id -g
16:42:28 ++ docker run -d --name p2p_src_mem_pools_tx_pool_aztec_kv_tx_pool.test.ts --net=none --cpuset-cpus=0-127 --cpus=2 --memory=8g --user 1000:1000 -v/home/aztec-dev:/home/aztec-dev --mount type=tmpfs,target=/tmp,tmpfs-size=1g --workdir /home/aztec-dev/aztec-packages -e HOME -e VERBOSE -e GIT_CONFIG_GLOBAL=/home/aztec-dev/aztec-packages/build-images/src/home/.gitconfig -e FORCE_COLOR=true -e CPUS -e MEM aztecprotocol/build:3.0 /bin/bash -c 'timeout -v 600s bash -c '\''yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts'\'''
16:42:29 + cid=3a54fa313851e7231cc3f7ca400c2f5a969204d3c277e29bd95cd015677895d3
16:42:29 + set +x
16:42:33 [16:42:33.774] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:42:33 [16:42:33.788] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:42:34 [16:42:34.244] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:42:34 [16:42:34.274] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:42:34 [16:42:34.384] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:42:34 [16:42:34.386] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:42:34 [16:42:34.539] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:42:34 [16:42:34.541] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:42:34 [16:42:34.671] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:42:34 [16:42:34.678] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:42:34 [16:42:34.786] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:42:34 [16:42:34.789] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:42:34 [16:42:34.978] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:42:34 [16:42:34.980] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:42:35 [16:42:35.112] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:42:35 [16:42:35.114] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:42:35 [16:42:35.343] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:42:35 [16:42:35.346] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:49 [16:43:49.969] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:49 [16:43:49.973] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:50 [16:43:50.200] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:50 [16:43:50.203] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:50 [16:43:50.433] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:50 [16:43:50.435] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:50 [16:43:50.440] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:50 [16:43:50.442] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:50 [16:43:50.828] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:50 [16:43:50.830] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:50 [16:43:50.833] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:50 [16:43:50.834] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:50 [16:43:50.836] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
16:43:51 [16:43:51.373] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:51 [16:43:51.378] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:51 [16:43:51.382] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:51 [16:43:51.384] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:51 [16:43:51.385] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":1000}
16:43:51 [16:43:51.386] INFO: p2p:tx_pool Allowing tx pool size to grow above limit {"maxTxPoolSize":1000,"txPoolOverflowFactor":1.5}
16:43:52 [16:43:52.129] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:52 [16:43:52.132] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:52 [16:43:52.373] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:52 [16:43:52.376] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:52 [16:43:52.643] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:52 [16:43:52.645] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:52 [16:43:52.821] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:52 [16:43:52.831] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:52 [16:43:52.977] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:52 [16:43:52.979] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:53 [16:43:53.147] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:53 [16:43:53.150] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:53 [16:43:53.152] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:53 [16:43:53.153] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:53 [16:43:53.158] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
16:43:53 [16:43:53.403] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:53 [16:43:53.405] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:53 [16:43:53.410] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
16:43:53 [16:43:53.411] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
16:43:53 [16:43:53.412] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
16:43:53 PASS src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts (83.534 s)
16:43:53   KV TX pool
16:43:53     Adds txs to the pool as pending (492 ms)
16:43:53     Removes txs from the pool (139 ms)
16:43:53     Marks txs as mined (156 ms)
16:43:53     Marks txs as pending after being mined (131 ms)
16:43:53     Only marks txs as pending if they are known (115 ms)
16:43:53     Returns all transactions in the pool (191 ms)
16:43:53     Returns all txHashes in the pool (134 ms)
16:43:53     Returns txs by their hash (230 ms)
16:43:53     Returns a large number of transactions by their hash (74623 ms)
16:43:53     Returns whether or not txs exist (231 ms)
16:43:53     Returns pending tx hashes sorted by priority (232 ms)
16:43:53     Returns archived txs and purges archived txs once the archived tx limit is reached (394 ms)
16:43:53     Evicts low priority txs to satisfy the pending tx size limit (544 ms)
16:43:53     respects the overflow factor configured (756 ms)
16:43:53     Evicts txs with nullifiers that are already included in the mined block (244 ms)
16:43:53     Evicts txs with an insufficient fee payer balance after a block is mined (266 ms)
16:43:53     Evicts txs with a max block number lower than or equal to the mined block (179 ms)
16:43:53     Evicts txs with invalid archive roots after a reorg (157 ms)
16:43:53     Evicts txs with invalid fee payer balances after a reorg (169 ms)
16:43:53     Does not evict low priority txs marked as non-evictable (253 ms)
16:43:53     Evicts low priority txs after block is mined (374 ms)
16:43:53
16:43:53 Test Suites: 1 passed, 1 total
16:43:53 Tests:       21 passed, 21 total
16:43:53 Snapshots:   0 total
16:43:53 Time:        83.646 s
16:43:53 Ran all test suites matching p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts.
16:43:53 Force exiting Jest: Have you considered using `--detectOpenHandles` to detect async operations that kept running after all tests finished?
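
Note on the final warning: Jest force-exited because something kept an async handle open after the suite finished. A minimal local re-run sketch, assuming the p2p package exposes Jest through yarn and that the test is run from the yarn-project/p2p directory (both are assumptions, not taken from this log):

  cd yarn-project/p2p   # assumption: package directory containing the Jest setup for this test
  yarn jest src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts --detectOpenHandles

The --detectOpenHandles flag is the one Jest itself suggests in the log; it reports which handles (timers, sockets, open stores, etc.) were still active when the tests completed.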