Puppeteer is the most popular library for browser automation in Node.js, with over 90,000 stars on GitHub. It provides a clean API for controlling Chrome and Chromium. For development and prototyping, running Puppeteer locally works fine.
In production, it breaks.
The Local Puppeteer Problem
Running Puppeteer locally means launching a real Chrome browser process on your machine or server. Chrome is not designed to be a server application. It was built for humans browsing the web on their personal computers. When you try to run it as infrastructure, you hit a wall.
The problems start small. A script that processes 10 pages works perfectly. At 50 pages, you notice occasional crashes. At 200 pages, your server runs out of memory. At 1,000 pages, you need to rethink your entire architecture.
Every team that builds browser automation goes through this progression. The question is not whether local Puppeteer will fail, but when.
Memory and Resource Issues
Each Chrome instance consumes 150-500MB of RAM depending on page complexity. A complex single-page application with heavy JavaScript can push this to 800MB or more.
When you run multiple instances concurrently, memory usage multiplies:
| Concurrent Browsers | RAM Usage | Typical Server |
| --- | --- | --- |
| 1 | 300MB | Any machine |
| 5 | 1.5GB | Manageable |
| 10 | 3GB | Tight on 4GB servers |
| 20 | 6GB | Needs 8GB+ server |
| 50 | 15GB | Needs 32GB server |
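The table's arithmetic can be sketched as a capacity check. The 300MB-per-browser default and the 25% OS headroom below are assumptions for illustration, not measured figures:

```javascript
// Rough capacity estimate for concurrent Chrome instances.
// perBrowserMB defaults to 300MB, matching the table above; real usage
// varies widely with page complexity.
function estimateRamMB(concurrentBrowsers, perBrowserMB = 300) {
  return concurrentBrowsers * perBrowserMB;
}

function fitsOnServer(concurrentBrowsers, serverRamGB, perBrowserMB = 300) {
  // Leave ~25% headroom for the OS, Node, and Chrome's child processes.
  const usableMB = serverRamGB * 1024 * 0.75;
  return estimateRamMB(concurrentBrowsers, perBrowserMB) <= usableMB;
}
```

By this estimate, 20 browsers need about 6GB, which fits an 8GB server but not a 4GB one, in line with the table.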
But raw memory is not the only issue. Chrome leaks memory over time. Long-running instances accumulate state (cached resources, DOM nodes, JavaScript heap objects) that the garbage collector cannot reclaim. After hours of operation, a Chrome instance that started at 200MB might be using 500MB.
The solution for local Puppeteer is aggressive lifecycle management: kill and restart browser instances regularly, use incognito contexts to isolate pages, and implement watchdog processes that kill runaway browsers. This is complex code that every team writes from scratch.
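One way to sketch that lifecycle management: recycle a browser instance after it has served a fixed number of pages. The launch and close functions are injected so the same logic works with `puppeteer.launch()`/`browser.close()` or with test stubs; the 50-page threshold is an arbitrary assumption, not a recommended value:

```javascript
// Restart a browser after it serves maxPagesPerBrowser pages, so leaked
// memory is reclaimed before it accumulates.
class BrowserRecycler {
  constructor(launchFn, closeFn, maxPagesPerBrowser = 50) {
    this.launchFn = launchFn;   // e.g. () => puppeteer.launch()
    this.closeFn = closeFn;     // e.g. (browser) => browser.close()
    this.maxPages = maxPagesPerBrowser;
    this.browser = null;
    this.pagesServed = 0;
  }

  // Returns a browser, restarting it once it has served maxPages pages.
  async acquire() {
    if (this.browser && this.pagesServed >= this.maxPages) {
      await this.closeFn(this.browser); // kill the aging instance
      this.browser = null;
    }
    if (!this.browser) {
      this.browser = await this.launchFn();
      this.pagesServed = 0;
    }
    this.pagesServed += 1;
    return this.browser;
  }
}
```

A production version also needs the watchdog mentioned above: a timer that force-kills instances that stop responding, which this sketch omits.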
With BrowseFleet, sessions are isolated and ephemeral by default. Each session starts fresh, and resources are fully reclaimed when the session closes. Memory leaks are not your problem.
CI/CD Headaches
Running Puppeteer in CI/CD pipelines is a common source of frustration. The typical issues:
Missing dependencies. Chrome requires system libraries (libxss, libnss3, libatk-bridge, libgtk, and many more) that are not present in minimal CI containers. Teams maintain custom Dockerfiles or shell scripts that install these dependencies, and they break when CI base images update.
Different behavior. Tests that pass locally fail in CI because of font rendering differences, screen resolution, timing issues, or missing GPU acceleration. Developers waste hours debugging tests that only fail in the pipeline.
Resource constraints. CI runners have limited RAM and CPU. Running Chrome alongside your application, test framework, and other tools often exceeds the runner's memory limit, causing OOM kills.
Headless mode quirks. Chrome's headless mode (the "new headless" and the "old headless") behaves differently from headed mode in subtle ways. Some CSS features render differently, some JavaScript APIs behave differently, and some websites detect headless mode and serve different content.
BrowseFleet eliminates these issues entirely. Your CI pipeline makes HTTP requests to create sessions. It does not need Chrome, its dependencies, or the memory to run it. Tests are more reliable because the browser environment is identical regardless of where the test runs.
```yaml
# Before: CI needs Chrome, system deps, lots of RAM
- name: Install Chrome
  run: |
    sudo apt-get update
    sudo apt-get install -y chromium-browser
    # ... 15 more system dependencies

# After: CI just needs network access
- name: Run Tests
  env:
    BROWSEFLEET_API_KEY: ${{ secrets.BF_KEY }}
  run: npm test
```

Concurrency Limits
The hardest problem with local Puppeteer is concurrency. Running multiple browser instances simultaneously requires:
Process management. Each Chrome instance is a separate process with multiple child processes (renderer, GPU, utility). Managing this process tree correctly (starting, monitoring, and killing instances) is non-trivial.
Port management. Each browser instance needs a unique debugging port. Allocating and recycling ports without conflicts requires careful bookkeeping.
Resource isolation. Without proper isolation, one browser instance can starve others of CPU or memory. A page that runs heavy JavaScript can slow down all concurrent browsers.
Error recovery. When one instance crashes (and they do crash), you need to clean up its resources, restart it, and retry the failed work without affecting other instances.
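The port bookkeeping above is a good example of the incidental code this forces you to write. A minimal sketch, assuming an arbitrary 9222-9321 range (Chrome's conventional debugging port is 9222; the range size is an assumption):

```javascript
// Hand out unique debugging ports from a fixed range and recycle them
// when a browser instance exits or crashes.
class PortAllocator {
  constructor(start = 9222, count = 100) {
    this.free = Array.from({ length: count }, (_, i) => start + i);
    this.inUse = new Set();
  }

  acquire() {
    const port = this.free.shift();
    if (port === undefined) throw new Error('no debugging ports left');
    this.inUse.add(port);
    return port;
  }

  release(port) {
    // Only recycle ports we actually handed out.
    if (this.inUse.delete(port)) this.free.push(port);
  }
}
```

Even this toy version has edge cases (a crashed browser that never releases its port leaks it), which is why crash recovery and port recycling have to be designed together.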
BrowseFleet handles all of this. Each session is an isolated browser instance managed by the BrowseFleet server. You create sessions, use them, and close them. The server handles process management, port allocation, resource isolation, and crash recovery.
The One-Line Migration
The good news: migrating from local Puppeteer to BrowseFleet is trivial. Puppeteer's connect method accepts a WebSocket URL, and BrowseFleet provides one.
```javascript
// Before: Local Puppeteer
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch({
  headless: 'new',
  args: ['--no-sandbox', '--disable-setuid-sandbox'],
});
```

```javascript
// After: BrowseFleet
import puppeteer from 'puppeteer-core';
import { BrowseFleet } from 'browsefleet';

const bf = new BrowseFleet({ apiKey: 'bf_...' });
const session = await bf.sessions.create({ stealth: 'full' });
const browser = await puppeteer.connect({
  browserWSEndpoint: session.websocketUrl,
});

// Everything below this line stays exactly the same
const page = await browser.newPage();
await page.goto('https://example.com');
const title = await page.title();
```

Note that we switch from puppeteer to puppeteer-core. The full puppeteer package bundles a Chromium binary (300MB+) that you no longer need. puppeteer-core is the library without the browser, which is exactly what you want when connecting to a remote browser.
The rest of your code does not change. Every Puppeteer API (page.goto, page.evaluate, page.screenshot, page.click, page.type) works identically. BrowseFleet provides a standard CDP WebSocket, so Puppeteer cannot tell the difference between a local browser and a cloud browser.
Performance Comparison
In our benchmarks, BrowseFleet sessions are faster than local Puppeteer for most workloads:
Session startup. BrowseFleet starts a new browser session in under 1 second. Local Puppeteer launch takes 2-5 seconds depending on the machine and Chrome version.
Page load. Network latency between BrowseFleet and the target website is typically lower than between your local machine and the target, because BrowseFleet servers are in data centers with fast network connections. For most pages, the difference is under 200ms.
Concurrent performance. This is where the difference is dramatic. Running 20 concurrent local Puppeteer instances on a 16GB server shows significant degradation. Page loads slow by 2-3x and crashes increase. BrowseFleet handles 20 concurrent sessions without degradation because each session is resource-managed.
Cost. Local Puppeteer is "free" but requires server infrastructure. A server capable of running 20 concurrent Chrome instances costs $100-200/month for cloud VMs. BrowseFleet's Developer plan at $99/month provides 20 concurrent sessions without the operational overhead of managing the server.
The performance advantage grows with scale. At 50+ concurrent sessions, managing local browsers becomes a full-time infrastructure problem. BrowseFleet lets you focus on your product instead.