110,000 Domains, One Misconfiguration
In 2024, Palo Alto's Unit 42 documented an extortion campaign that started not with a zero-day, not with a phishing kit, not with a stolen credential — but with an HTTP GET /.env against a list of 110,000 domains. The attackers' script downloaded every public .env file the request returned, parsed the AWS access keys inside, and used them to enumerate S3 buckets, exfiltrate data, and post ransom demands on the victims' own infrastructure. The technique required no exploit. The vulnerability was that a file containing production secrets was served by the web server alongside style.css.
This is the most pedestrian vulnerability class still in active mass exploitation. Every year, security newsletters profile a fresh campaign that relied on .env exposure. Every year, scanner vendors publish updated lists of the file paths attackers probe. And every year, more applications ship to production with .env sitting one directory above the document root — or sometimes inside it.
How .env Files Came to Hold Production Secrets
The .env file format is a convention, not a standard. It was popularized by the Twelve-Factor App methodology, which argued that configuration — database URLs, API keys, encryption secrets — should live in environment variables rather than committed code. Tools like dotenv for Node.js, python-dotenv, phpdotenv (which Laravel uses), and dozens of equivalents read a plain-text .env file at process startup and populate process.env or its equivalent. The pattern is convenient for local development: edit one file, restart the server, and your local DB connection string is updated.
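For concreteness, here is the shape of the pattern, sketched with python-dotenv (the key names and values are made up):

```python
# Contents of a typical .env file -- plain UTF-8 text, one KEY=value per line:
#
#   DB_PASSWORD=hunter2
#   AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
#   STRIPE_SECRET_KEY=sk_live_not_a_real_key

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads .env from the working directory into the process environment
db_password = os.environ["DB_PASSWORD"]  # application code sees plain env vars
```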
The trouble is that nothing about the file format prevents it from ending up in the wrong place. It's a regular UTF-8 file. It has no encryption, no access controls beyond filesystem permissions, no signing. If it's inside your deployment artifact, it's deployed. If your deployment artifact maps to your web server's document root, it's served. If your .gitignore was added after the first commit, it's in your repository's history forever.
The result: .env files commonly contain — in plain text — database connection strings, AWS access keys and secrets, third-party API keys (Stripe, SendGrid, OpenAI, Twilio), JWT signing secrets, encryption keys, SMTP credentials, and Redis passwords. A single leaked .env is often a complete credential set for an entire production stack.
How Attackers Find Them
The discovery loop is trivial, which is why it's automated at internet scale. The typical scanner:
- Pulls a large list of domains from public sources — Certificate Transparency logs, passive DNS, certificate aggregators.
- Sends an HTTPS request to a hardcoded list of common paths: /.env, /env, /.env.production, /.env.bak, /.env.old, /.env.local, /.env.development, /config/.env, /app/.env.
- Checks the response for the file's characteristic format: lines like DB_PASSWORD=, AWS_ACCESS_KEY_ID=, STRIPE_SECRET_KEY=.
- Parses the contents into structured credentials and tries each one against the corresponding service (a defensive version of this loop is sketched below).
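Pointed at assets you own, the same loop makes a useful self-check. A minimal sketch in Python; the path list and credential pattern are illustrative rather than exhaustive, and requests is the only third-party dependency:

```python
import re
import requests  # pip install requests

# The same convention-named paths a mass scanner probes
PATHS = ["/.env", "/env", "/.env.production", "/.env.bak", "/.env.old",
         "/.env.local", "/.env.development", "/config/.env", "/app/.env"]

# Credential-shaped lines: a known secret-ish key name followed by '='
CRED_PATTERN = re.compile(
    r"^(?:DB_PASSWORD|AWS_ACCESS_KEY_ID|AWS_SECRET_ACCESS_KEY|"
    r"STRIPE_SECRET_KEY|JWT_SECRET)\s*=",
    re.MULTILINE,
)

def probe(domain: str) -> list[str]:
    """Return the paths on `domain` that serve credential-shaped content."""
    hits = []
    for path in PATHS:
        try:
            resp = requests.get(f"https://{domain}{path}", timeout=5)
        except requests.RequestException:
            continue  # unreachable host or TLS error: nothing served
        if resp.status_code == 200 and CRED_PATTERN.search(resp.text):
            hits.append(path)
    return hits

if __name__ == "__main__":
    print(probe("yourdomain.example"))  # substitute a domain you own
```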
The attackers' full pipeline runs in parallel against tens of thousands of targets per hour. Cloud providers have logging for this — AWS will show you when an access key from a leaked .env first appears in CloudTrail being used from an IP that's not yours — but by the time you notice the log entry, the key has typically been used to enumerate your S3 buckets, dump customer data, and post an extortion demand.
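If you suspect a specific key has leaked, CloudTrail's event history can be filtered by access key ID from the AWS CLI. A sketch, using AWS's documented example key ID as the placeholder (event history covers roughly the last 90 days of management events):

```sh
# List recent management events recorded against one access key
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIAIOSFODNN7EXAMPLE \
  --max-results 50
```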
Variants of the path-list approach also probe for /wp-config.php (WordPress configuration with database credentials), /config.php.bak, /database.yml (Rails), /appsettings.json (ASP.NET), /credentials.json, and /.git/config (Git remote URLs containing tokens). The principle is the same — convention-named files at predictable paths, served by webservers that don't know not to serve them.
The Three Ways It Ships to Production
The exposure isn't usually a single mistake. It's one of three patterns:
Pattern 1: Document root misalignment. The application's working directory and the webserver's document root are the same directory. A Laravel app's public directory should be the document root, with .env one level above. But a misconfigured nginx root directive points at the project root instead, and now every file in the project is reachable. This is by far the most common cause and the easiest to miss because the application still works correctly — only the static-file fall-through path leaks.
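In nginx terms, the difference is one root directive. A sketch with hypothetical paths and hostname:

```nginx
# Misconfigured: the project root is the document root, so /.env
# falls through to the static-file handler and is served as plain text
server {
    listen 80;
    server_name app.example.com;
    root /var/www/myapp;            # WRONG: .env lives in this directory
}

# Correct for a Laravel-style layout: serve only public/, with .env one level above
server {
    listen 80;
    server_name app.example.com;
    root /var/www/myapp/public;     # only built assets and index.php are reachable
}
```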
Pattern 2: Backup file artifacts. The deployment is correct: .env is outside the document root. But a careless edit creates .env.bak inside the document root, or a sysadmin runs cp .env public/.env.old for safekeeping during a migration and never cleans up. Backup files often slip past .gitignore and IaC scrubbing because the patterns target .env exactly, not .env.*.
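A periodic sweep of the document root catches these strays. A sketch, assuming a docroot at /var/www/myapp/public (adjust the path and patterns to your layout):

```sh
# List env files and common backup/editor artifacts inside the document root
find /var/www/myapp/public -type f \
  \( -name '.env*' -o -name '*.bak' -o -name '*.old' \
     -o -name '*.orig' -o -name '*~' -o -name '*.swp' \) \
  -print
```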
Pattern 3: Build pipeline leakage. The CI/CD process copies .env into the build artifact 'temporarily' to make tests pass, with a comment promising to remove it before deploy. The comment is forgotten. The artifact ships to production with the file embedded. This pattern is common in container images — docker history often reveals .env files layer-by-layer even when the running container doesn't visibly contain one.
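The structural fix for container builds is to keep the file out of the build context entirely, so no COPY instruction can embed it. A minimal .dockerignore sketch:

```
# .dockerignore -- excluded from the build context, so `COPY . .` can't ship them
.env
.env.*
```

For images already built, docker history --no-trunc <image> shows the layer commands, which is where a stray COPY of the file tends to surface.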
Detection: What Actually Works
The naive check is to curl your own domain for /.env and see if it returns a 200. This catches the most obvious failures but misses the long tail of variant paths, as well as the case where the file comes back with a text/html Content-Type (so a naive MIME filter skips it) yet still contains your secrets. Effective detection has three layers:
- Path enumeration with content fingerprinting. Request every known-sensitive path against every web-facing asset, then check the response body for credential-shaped content. A .env file that returns 200 with HTML content (because a SPA framework intercepts the route) is a different finding from one returning 200 with DB_PASSWORD=hunter2 (see the sketch after this list).
- Continuous monitoring, not point-in-time scanning. The most common version of this vulnerability is created during a deploy and exists for hours or days before someone notices. A weekly scan misses it. A scan after every deploy — or continuous polling, which is what we recommend — catches it within minutes.
- Recursive backup-pattern probes. For every sensitive file you check, also check the .bak, .old, .orig, .tmp, ~, and .swp variants. Editor backup files and ops-team copies are how most of these slip past primary .env checks.
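The fingerprinting layer reduces to a small classification function. A sketch in Python, with illustrative patterns and the severity labels used above:

```python
import re

# Credential-shaped line: UPPER_SNAKE key, '=', then a non-empty value
CRED_PATTERN = re.compile(r"^[A-Z][A-Z0-9_]*\s*=\s*\S+", re.MULTILINE)
HTML_PATTERN = re.compile(r"<(?:!DOCTYPE|html)", re.IGNORECASE)

def classify(status: int, body: str) -> str:
    """Classify one response to a sensitive-path probe."""
    if status != 200:
        return "no finding"   # 403/404: the file isn't reachable
    if HTML_PATTERN.search(body):
        return "medium"       # a SPA catch-all answered; misconfig, not a leak
    if CRED_PATTERN.search(body):
        return "critical"     # credential-shaped plain text came back
    return "investigate"      # 200 with unrecognized content
```

Checking for HTML before checking for credential-shaped lines matters: a framework's index page can contain KEY=value-looking fragments without being a leak.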
You can run all three with whatever scanner you prefer. The important property is that this finding is critical-severity regardless of which scanner finds it — if .env is reachable, the rest of the security program isn't the bottleneck.
What FortWatch's Sensitive Files Scanner Does
Our sensitiveFilesScan processor runs against every web-facing asset on every continuous scan. It checks roughly 80 paths — the canonical .env variants, common config files for major frameworks (WordPress, Rails, Django, Laravel, ASP.NET), Git and SVN metadata directories, common backup-file extensions, and exposed admin paths. Each hit is fingerprinted by content type and body pattern before being classified.
A .env file returning 200 with a body that matches credential patterns is critical-severity, with the matched contents redacted in our UI but linked to the original response for forensics. A .env file returning 200 with HTML content (probably caught by a SPA catch-all route) is medium-severity — still a misconfiguration worth investigating but not an immediate credential leak. A 403 or 404 is no finding; the file isn't reachable.
This is one of the higher-precision scanners in our pipeline because the content fingerprint either matches or it doesn't. False positives are rare; missed detections almost always trace back to a path we haven't added to our list yet (the list is open-ended and we ship updates whenever someone in the industry documents a new pattern).
Remediation Is Two Steps
Once you find an exposed .env, the response sequence is non-negotiable:
- Rotate every credential in the file. Don't wait until you understand how it leaked. Assume the file has been downloaded. Rotate database passwords, AWS keys, JWT secrets, third-party API tokens, SMTP credentials — everything. If you have evidence of which credentials were used (CloudTrail, application logs), prioritize those, but rotate the rest anyway.
- Fix the deployment, then verify the file isn't reachable. Move .env outside the document root, or add an explicit deny in your webserver config: nginx's location ~ /\.env { deny all; return 404; } is the minimum (a broader rule is sketched below). Test by requesting the original path and confirming a 404 or 403.
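A broader version of that minimum rule, as a sketch (the negative lookahead needs nginx's standard PCRE regex support; test it against your own URL layout, including /.well-known, before relying on it):

```nginx
# Deny all dotfiles except /.well-known (needed for ACME), plus backup/editor suffixes
location ~ /\.(?!well-known/) { deny all; return 404; }
location ~ \.(bak|old|orig|tmp|swp)$ { deny all; return 404; }
location ~ ~$  { deny all; return 404; }
```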
Neither step stands alone, which is why the sequence is fixed. Rotating credentials before securing the file is only a temporary fix; the new credentials in the new .env are equally exposed if the deployment path hasn't been corrected. Fixing the deployment before rotating credentials leaves your existing keys in the wild.
What Do I Do With This?
- Curl your own domain right now. curl -I https://yourdomain.com/.env. A 200 response is a critical finding; a 403 or 404 is the right answer. Repeat for /.env.bak, /.env.production, /wp-config.php, /.git/config (a loop version is sketched after this list).
- Add a webserver-level deny rule for .env and its backup variants as a defense-in-depth measure. Even when the file shouldn't be in the document root, the deny rule catches the case where it ends up there anyway.
- Audit your build pipeline for any step that copies .env into the deploy artifact. If one exists, replace it with environment variables injected at runtime by your platform (Kubernetes secrets, ECS task definitions, Heroku config vars).
- Run continuous monitoring for sensitive-file exposure. The window between 'bad deploy' and 'attacker downloads' is often hours. A point-in-time audit doesn't catch it; FortWatch's sensitive-files scanner runs on every continuous scan and flags exposures within the scan interval.
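The loop version of the first item, with yourdomain.com standing in for your own host:

```sh
# Status-only probe of the checklist paths; 403/404 everywhere is the right answer
for p in /.env /.env.bak /.env.production /wp-config.php /.git/config; do
  printf '%-20s ' "$p"
  curl -s -o /dev/null -w '%{http_code}\n' "https://yourdomain.com$p"
done
```

A 200 here is not yet proof of a credential leak; apply the content fingerprint from the detection section before escalating.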
