Parent Log: http://ci.aztec-labs.com/0813959a3b08c9c9
Command: f682b45a820409ef:ISOLATE=1:NAME=p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts
Commit: https://github.com/AztecProtocol/aztec-packages/commit/75d792847d8434a0c504e7adf5c102a913065272
Env: REF_NAME=gh-readonly-queue/next/pr-15015-cf8be0f9e81e248048560619de041e90d9d6990a CURRENT_VERSION=0.87.6 CI_FULL=1
Date: Fri Jun 13 08:35:02 UTC 2025
System: ARCH=amd64 CPUS=128 MEM=493Gi HOSTNAME=pr-15015_amd64_x3-full
Resources: CPU_LIST=0-127 CPUS=2 MEM=8g TIMEOUT=600s
History: http://ci.aztec-labs.com/list/history_0a54840dde01048b_next

08:35:02 +++ id -u
08:35:02 +++ id -g
08:35:02 ++ docker run -d --name p2p_src_mem_pools_tx_pool_aztec_kv_tx_pool.test.ts --net=none --cpuset-cpus=0-127 --cpus=2 --memory=8g --user 1000:1000 -v/home/aztec-dev:/home/aztec-dev --mount type=tmpfs,target=/tmp,tmpfs-size=1g --workdir /home/aztec-dev/aztec-packages -e HOME -e VERBOSE -e GIT_CONFIG_GLOBAL=/home/aztec-dev/aztec-packages/build-images/src/home/.gitconfig -e FORCE_COLOR=true -e CPUS -e MEM aztecprotocol/build:3.0 /bin/bash -c 'timeout -v 600s bash -c '\''yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts'\'''
08:35:03 + cid=a39be074ba103dd16caca563bcb8ff32761980a6c9c54039af96b35fbbc75ed8
08:35:03 + set +x
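For readability, the same container invocation can be sketched as a multi-line shell command. Every flag and value below is copied from the trace above; only the line breaks and comments are editorial.

    # Identical to the traced invocation above, reflowed for readability.
    # --net=none runs the test without network access; --cpuset-cpus, --cpus and
    # --memory match the "Resources" header (2 CPUs, 8g RAM); /tmp is a 1g tmpfs.
    docker run -d \
      --name p2p_src_mem_pools_tx_pool_aztec_kv_tx_pool.test.ts \
      --net=none --cpuset-cpus=0-127 --cpus=2 --memory=8g \
      --user 1000:1000 \
      -v /home/aztec-dev:/home/aztec-dev \
      --mount type=tmpfs,target=/tmp,tmpfs-size=1g \
      --workdir /home/aztec-dev/aztec-packages \
      -e HOME -e VERBOSE \
      -e GIT_CONFIG_GLOBAL=/home/aztec-dev/aztec-packages/build-images/src/home/.gitconfig \
      -e FORCE_COLOR=true -e CPUS -e MEM \
      aztecprotocol/build:3.0 \
      /bin/bash -c "timeout -v 600s bash -c 'yarn-project/scripts/run_test.sh p2p/src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts'"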
08:35:06 [08:35:06.852] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:35:06 [08:35:06.864] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:35:07 [08:35:07.274] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:35:07 [08:35:07.276] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:35:07 [08:35:07.379] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:35:07 [08:35:07.421] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:35:07 [08:35:07.572] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:35:07 [08:35:07.574] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:35:07 [08:35:07.701] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:35:07 [08:35:07.704] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:35:07 [08:35:07.810] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:35:07 [08:35:07.812] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:35:08 [08:35:08.005] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:35:08 [08:35:08.006] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:35:08 [08:35:08.154] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:35:08 [08:35:08.155] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:35:08 [08:35:08.320] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:35:08 [08:35:08.322] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:12 [08:36:12.547] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:12 [08:36:12.550] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:12 [08:36:12.707] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:12 [08:36:12.709] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:12 [08:36:12.896] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:12 [08:36:12.898] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:12 [08:36:12.901] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:12 [08:36:12.902] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:13 [08:36:13.260] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:13 [08:36:13.263] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:13 [08:36:13.265] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:13 [08:36:13.267] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:13 [08:36:13.268] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
08:36:13 [08:36:13.729] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:13 [08:36:13.731] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:13 [08:36:13.733] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:13 [08:36:13.734] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:13 [08:36:13.735] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":1000}
08:36:13 [08:36:13.736] INFO: p2p:tx_pool Allowing tx pool size to grow above limit {"maxTxPoolSize":1000,"txPoolOverflowFactor":1.5}
08:36:14 [08:36:14.392] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:14 [08:36:14.393] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:14 [08:36:14.542] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:14 [08:36:14.543] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:14 [08:36:14.694] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:14 [08:36:14.696] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:14 [08:36:14.815] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:14 [08:36:14.817] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:14 [08:36:14.968] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:14 [08:36:14.970] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:15 [08:36:15.142] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:15 [08:36:15.144] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:15 [08:36:15.146] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:15 [08:36:15.148] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:15 [08:36:15.149] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
08:36:15 [08:36:15.409] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:15 [08:36:15.410] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:15 [08:36:15.412] INFO: kv-store:lmdb-v2:p2p Starting data store with maxReaders 16
08:36:15 [08:36:15.414] INFO: kv-store:lmdb-v2:archive Starting data store with maxReaders 16
08:36:15 [08:36:15.415] INFO: p2p:tx_pool Setting maximum tx mempool size {"maxTxPoolSize":15000}
08:36:15 PASS src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts (71.594 s)
08:36:15   KV TX pool
08:36:15     Adds txs to the pool as pending (430 ms)
08:36:15     Removes txs from the pool (105 ms)
08:36:15     Marks txs as mined (193 ms)
08:36:15     Marks txs as pending after being mined (129 ms)
08:36:15     Only marks txs as pending if they are known (109 ms)
08:36:15     Returns all transactions in the pool (193 ms)
08:36:15     Returns all txHashes in the pool (149 ms)
08:36:15     Returns txs by their hash (167 ms)
08:36:15     Returns a large number of transactions by their hash (64224 ms)
08:36:15     Returns whether or not txs exist (161 ms)
08:36:15     Returns pending tx hashes sorted by priority (188 ms)
08:36:15     Returns archived txs and purges archived txs once the archived tx limit is reached (365 ms)
08:36:15     Evicts low priority txs to satisfy the pending tx size limit (468 ms)
08:36:15     respects the overflow factor configured (662 ms)
08:36:15     Evicts txs with nullifiers that are already included in the mined block (150 ms)
08:36:15     Evicts txs with an insufficient fee payer balance after a block is mined (150 ms)
08:36:15     Evicts txs with a max block number lower than or equal to the mined block (123 ms)
08:36:15     Evicts txs with invalid archive roots after a reorg (152 ms)
08:36:15     Evicts txs with invalid fee payer balances after a reorg (170 ms)
08:36:15     Does not evict low priority txs marked as non-evictable (268 ms)
08:36:15     Evicts low priority txs after block is mined (303 ms)
08:36:15
08:36:15 Test Suites: 1 passed, 1 total
08:36:15 Tests: 21 passed, 21 total
08:36:15 Snapshots: 0 total
08:36:15 Time: 71.683 s
08:36:15 Ran all test suites matching /p2p\/src\/mem_pools\/tx_pool\/aztec_kv_tx_pool.test.ts/i.
08:36:15 Force exiting Jest: Have you considered using `--detectOpenHandles` to detect async operations that kept running after all tests finished?
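The force-exit message at the end is Jest's standard hint that some async operation (often an unclosed handle or timer) was still pending when the run finished. A minimal local re-run sketch, assuming a direct Jest invocation from the p2p package works in this workspace; the package path is an assumption, and the CI run above goes through yarn-project/scripts/run_test.sh instead.

    # Hypothetical local reproduction; the package path and direct `yarn jest`
    # invocation are assumptions about the workspace, not taken from the log.
    # --detectOpenHandles reports async operations left open after the tests,
    # --runInBand keeps everything in one worker so the report maps to this suite.
    cd yarn-project/p2p
    yarn jest src/mem_pools/tx_pool/aztec_kv_tx_pool.test.ts --detectOpenHandles --runInBand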