The question gets asked on r/node every few months, on r/selfhosted every few weeks, and on Hacker News whenever a Datadog invoice goes viral. Some variant of: "What are you actually using for observability on small or side projects?"
The answers always go the same way. Someone confesses to Datadog and feels bad about it. Someone defends a Grafana + Prometheus stack they spent a weekend wiring up. Someone pitches their YC-funded unified-observability tool. Eventually a quieter voice admits they're using Sentry for errors and UptimeRobot for uptime and not much else. The quieter voice gets upvoted but rarely defended.
The quieter voice is right. Observability for an indie app on a single host isn't a stack you build; it's a small set of pillars, most of which you can skip. This is the case for skipping.
Observability is not one thing
The mistake the unified-observability vendors want you to make is treating observability as one category. It isn't. It's seven separate concerns, ranked here by how often a one-person team genuinely needs them:
- Errors — when your code blows up, what was the stack trace and what request triggered it?
- Uptime — is the site responding to the outside world right now?
- Host metrics — is the VPS hot, full, or running out of disk?
- Logs — what was the app doing in the seconds before something went wrong?
- Cron / dead-man-switch — did the backup job actually run last night?
- Traces — when a request fans out across services, which step was slow?
- Session replay — what did the user click before they hit the bug?
These are different problems. They have different data shapes. They have different storage requirements. The unified-observability pitch is that you save effort by handling them in one tool. That's true at the scale where you have enough of each problem to justify the joint cost. On a single host with one developer and a few hundred users a day, most of these pillars don't fire often enough to need a tool at all.
Three pillars solved by tools already on your box
Three of the seven pillars are handled by tools that have been on your Linux box for thirty years.
Logs. docker logs -f containername shows you the live tail of any container. journalctl -fu servicename does the same for systemd units. tail -F /var/log/app.log for anything writing to disk. For "find me every 500 in the last hour," grep -E "5[0-9]{2}" app.log | tail -100 is fine. This stops scaling when you have multiple hosts, multiple services, or log volume that breaks grep — none of which describes a single-host indie app.
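Collected in one place, with the container name, unit name, and log path as placeholders for your own:

```sh
# Live tails: pick whichever matches how the app runs
docker logs -f myapp            # container stdout/stderr
journalctl -fu myapp.service    # systemd unit
tail -F /var/log/app.log        # plain file on disk

# "Every 500 in the last hour": anchor the pattern to your log format,
# since a bare 5[0-9]{2} also matches timestamps and byte counts
grep -E ' 5[0-9]{2} ' app.log | tail -100
```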
Host metrics. htop for CPU and memory. df -h for disk. iostat 1 for I/O patterns. iftop if you genuinely care about per-connection bandwidth. These are enough to answer "is the box hot, full, or starved?" — which is the question an indie monitoring dashboard would be asking on your behalf, just with a graph in front of it. The graph is nice. It is not load-bearing.
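If you'd rather paste one block than open four interactive tools, the same questions answer in plain output (standard coreutils, plus the sysstat package for iostat):

```sh
# One-glance host health: hot, full, or starved?
uptime          # load averages vs. core count
free -h         # memory and swap pressure
df -h           # any filesystem near 100%?
iostat -x 1 5   # five one-second samples of per-device I/O
```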
Cron failures. Crontab supports MAILTO= at the top of the file. Cron emails a job's stdout and stderr to that address, and a failing job almost always prints an error, so failures land in your inbox with no agent and no dashboard. This catches the failure mode that observability dashboards are worst at: a job that fails at 3 a.m. emits no metric and no request, so there's nothing for a dashboard to fire on. It isn't a perfect dead-man switch (if cron itself never runs, no mail comes), but it's one line of configuration that's already on every Linux box. Use it.
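A minimal crontab showing the pattern. The address and job paths are placeholders; since cron mails output rather than exit codes, the trick for a job that can fail silently is to make failure print a line:

```crontab
MAILTO=you@example.com
# Cron mails any stdout/stderr from a job to MAILTO. Silence routine
# stdout so only failures generate mail; stderr still comes through.
15 3 * * * /usr/local/bin/backup.sh >/dev/null
# A job that can fail without printing anything: manufacture output on
# a non-zero exit so the failure still produces a mail.
30 3 * * * /usr/local/bin/sync.sh >/dev/null 2>&1 || echo "sync.sh failed, exit $?"
```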
Logs, metrics, and cron failures on one host are three commands and one line of config away from solved. The reason most observability writeups don't tell you this is that it doesn't sell tools. The reason indie writeups don't tell you this is that the people writing them have already passed the scale where this stops working, and they're remembering the world they're in now, not the world the reader is asking about.
There's a real inflection where these tools stop being enough. Until you hit it, the four-figure Datadog bill and the weekend-long Grafana setup are both solving the same problem you don't have yet.
The two pillars you can't grep
Two of the seven pillars genuinely can't be handled by SSH and grep. They need a dedicated tool from day one.
Errors. An unhandled exception usually lands in docker logs too — but the log line is a tombstone, not an autopsy. By definition you weren't watching when it fired, so what you actually want is the request that triggered it, the user and session state at that moment, the release version, and the 400 other occurrences grouped into one issue with a regression flag. Grep gives you none of that. And the errors that never touch your server logs at all — frontend JS exceptions, failed background jobs — don't even leave a tombstone. There's no SSH-and-grep substitute for this pillar.
Uptime. By definition, the box can't monitor itself. When the VPS hangs, an uptime check running on the same VPS hangs with it. The monitor has to live in a different failure domain from the thing it watches. For a single-host indie app, that means an external SaaS, not Uptime Kuma on the same box.
The reader's actual decision is among three configurations:
1. BetterStack alone. BetterStack bundles error tracking, uptime monitoring, logs, status pages, and on-call into one product. The free tier covers 100k exceptions/month, 10 uptime monitors at 30-second intervals, 3 GB of logs, a status page, and 5,000 session replays per month. The error tracking is Sentry-SDK compatible — point your existing Sentry SDK at BetterStack's endpoint and it works. Its pitch for the paid tiers is "Sentry-compatible at 1/6th the price." Simplest path: one SaaS, one signup, both can't-grep pillars covered, plus a few extras.
2. Bugsink + UptimeRobot. Two focused tools, one self-host option. Bugsink for errors — hosted free at 15k events/month, or self-hosted in a single container (sketched below, after this list). Their install docs are direct: "SQLite is used by default when running Bugsink in Docker and not specifying a DATABASE_URL." UptimeRobot for uptime — 50 monitors at 5-minute intervals on the free tier. Bugsink is also Sentry-SDK compatible, so the switching cost between any of these error trackers is near zero.
3. Sentry + UptimeRobot. Sentry goes deepest on errors — 5k errors plus 5M spans plus 50 replays per month on the free plan, with full traces and the broadest SDK ecosystem of the three. The catch on the free plan is that uptime is limited to one monitor, which is too tight for most indie apps; pair it with UptimeRobot for the uptime pillar. Best if you specifically want traces and replays, not just errors and uptime.
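For option 2's self-host path, a sketch of the single container. The image name matches Bugsink's published Docker image; the port, secret variable, and data path are assumptions to verify against their install docs:

```sh
# Hypothetical single-container Bugsink run. SQLite is the default
# when no DATABASE_URL is set, so the named volume is the whole
# persistence story. Verify flags against Bugsink's install docs.
docker run -d \
  --name bugsink \
  -p 8000:8000 \
  -e SECRET_KEY="$(openssl rand -hex 32)" \
  -v bugsink-data:/data \
  bugsink/bugsink
```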
A few notes on these configurations:
- Lock-in is low across this category. All three error-tracking tools — Sentry, BetterStack, Bugsink — accept the same Sentry SDK, so changing tools is changing a DSN, not rewriting code (see the sketch after these notes). More broadly, observability data isn't sticky the way data in a notes app or an ebook library is: your last month of error reports matters; your last three years almost never does. That makes "pick whichever one you like and switch later if you outgrow it" a defensible plan, not just a technical possibility — which is also why a sustainability hiccup in a monitoring tool carries milder consequences than the same hiccup in a tool where years of user data accumulate.
- The errors pillar has a self-host option (Bugsink); the uptime pillar doesn't. Not for any technical reason — Uptime Kuma is good software — but because of the failure-domain rule: a monitor on the same box can't tell you the box is down. Uptime has to live externally.
- The recurring r/node question "Anyone using sentry in self hosted mode? How much of a pain is it?" gets a real answer here. Sentry's self-host project is actively maintained, but the documented minimum specs are 4 CPU cores, 16 GB RAM, and 16 GB swap. That's a $40+/month VPS dedicated to your error tracker before you've started monitoring anything. Bugsink runs in one container on the same VPS as your app.
- For session replay specifically: Sentry (50/month free) and BetterStack (5,000/month free) both offer it; Bugsink doesn't. If replay matters, that narrows the choice.
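Concretely, "changing a DSN" can be a single environment variable. Most server-side Sentry-compatible SDKs read the DSN from the environment when the code doesn't hard-code one, so the backend swap happens in deploy config (the DSN value here is a placeholder):

```sh
# Same app, any of the three backends: only the DSN changes.
export SENTRY_DSN="https://examplePublicKey@errors.example.com/1"
node server.js
```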
All four tools have durable funding mechanisms as of May 12, 2026 — Sentry is public-company scale, BetterStack is a well-funded private company, UptimeRobot is mature commercial, and Bugsink is a small project with a clear hosted-tier revenue path (€16–158/month paid tiers). That matters more than the free tier itself. A free monitoring SaaS from an operator with no revenue mechanism has a tendency to disappear when the operator loses interest, and you find out the way you usually find out: by realizing the dashboard hasn't updated in three weeks.
Disclosure: the UptimeRobot and BetterStack links above are PI affiliate links — we earn a commission if you sign up for a paid plan; the price you pay doesn't change. The free tiers pay nothing; we recommend them because they're genuinely what most indie apps need. Bugsink and Sentry aren't affiliates.
When each remaining pillar earns a container
The day-one stack covers errors and uptime, plus grep and SSH for everything else. Each of the remaining pillars has a specific moment when it starts earning its place (so does the status page, which isn't a pillar but follows the same logic). None of them is "you'll need this eventually, set it up now."
Second host enters the picture → centralized logs. Once you have more than one box producing logs, ssh box1; docker logs followed by ssh box2; docker logs stops being acceptable. This is where a logs backend earns its container — Loki in monolithic mode, Vector with a sink to object storage, or a single-binary tool like VictoriaLogs. The threshold isn't about volume; it's about how many places you SSH into to debug one incident.
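As a data point on weight when that day comes: Loki's monolithic mode is a single container with a bundled default config. The image is real; treat this as a starting point, not production config:

```sh
# Loki in monolithic mode: one container, default bundled config,
# accepting pushes and queries on :3100.
docker run -d --name loki -p 3100:3100 grafana/loki:latest
# Ship logs to it with Promtail or Docker's Loki logging driver,
# both documented by Grafana.
```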
Three or more services with internal HTTP calls → traces. A request fans out across services. One step is slow. You don't know which. Tracing is the only tool that answers this; the others are workarounds. OTLP-native backends like OpenObserve, Quickwit, or Jaeger v2 run as single binaries. You can also pick up the trace pillar by adopting a unified tool like Traceway, which runs on SQLite in a single Alpine container and bundles errors, spans, logs, and replay together — a different graduation path than adding a dedicated tracer to an existing stack.
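On the app side, adopting traces is mostly configuration. OpenTelemetry defines standard environment variables that every SDK honors, so pointing a service at whichever backend you pick looks roughly like this (endpoint and service name are placeholders; the Node auto-instrumentation package is real):

```sh
# Standard OTel env vars, spec-defined and SDK-agnostic
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_SERVICE_NAME="checkout-api"
# Node: load auto-instrumentation without touching app code
# (requires: npm install @opentelemetry/auto-instrumentations-node)
node --require @opentelemetry/auto-instrumentations-node/register server.js
```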
Consumer-facing UI where reproducing a bug matters → session replay. This pillar earns its weight only when your support workflow involves "what did the user click before this broke?" If that's not how you debug, replay is extra storage you don't need. When it is, it's the moment a unified-observability tool starts genuinely paying back.
Sustained host-metrics need on a single VPS → a dedicated metrics tool. Beszel is the lightest option — a Go hub plus a tiny agent, MIT-licensed, SQLite-backed, designed for small-VPS use. Solo-maintained without a commercial mechanism, but switching agents is cheap if it ever comes to that.
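A hub sketch under the usual caveat: the image name is Beszel's published one; the port and data path are assumptions to check against its docs:

```sh
# Hypothetical Beszel hub run; pair it with the beszel-agent container
# (or binary) on each host you want metrics from.
docker run -d --name beszel -p 8090:8090 -v beszel-data:/beszel_data henrygd/beszel
```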
Team of two or more → status page becomes external-facing. A status page only your team can see is a debug tool, not a status page. The moment customers exist, you want something at status.yourapp.com so they don't email you to ask whether the API is down.
Multi-host with a real ops team is a different problem. At that point the question stops being "what's the minimum?" and starts being "what does our team need long-term?" — and the answer to that one isn't a stack for indies, it's an ops practice for teams.
When unified observability becomes worth it
The case against an observability stack on day one is also the case for one later. Unified observability is genuinely useful — it just earns its place at a specific inflection, not at the start.
That inflection looks roughly like: a second or third host, three or more services with internal traffic, a consumer-facing UI where session replay actually helps, and a team large enough that ad-hoc SSH stops being how anyone debugs. By the time you're there, the savings from one tool instead of four start to pay for themselves. By the time you're there, you also have enough scale that the question stops being whether anything's free and starts being which paid tier costs less.
Until you hit that inflection: BetterStack alone, or Bugsink + UptimeRobot, or Sentry + UptimeRobot. SSH and grep for the rest. The bill is small. The setup is short. What it covers is what you actually need.