Self-hosted S3 after MinIO: lightweight alternatives for 2026

By Seba Kubisz · 12 min read · Self-Hosted

On April 25, 2026, the minio/minio GitHub repository was archived. The Reddit and Hacker News reactions framed it as the death of MinIO. That framing is wrong in a useful way.

MinIO the company is not dead. It has $126M in funding, an enterprise product, and paying customers. What ended on April 25 was the public headstone on something that had already been buried over the previous eleven months: the practical use of MinIO as a free, self-hosted, S3-compatible object store. The archive is the ceremony. The substantive closing happened in May 2025, when the admin web UI was stripped out of the community edition.

For a self-hoster who needed an S3 endpoint for their backups, their photo app, or their self-hosted Mastodon — and there are a lot of those people — the question is no longer "what is MinIO doing." It is "what now."

This is a guide for that audience. It focuses on lightweight, single-node S3-compatible storage. It is not a petabyte-scale enterprise architecture comparison; the existing comparison guides cover that ground well, and the small-scale self-hosted use case is different enough to deserve its own pass.

Three deployment shapes share this need, and this article applies to all of them. The first is the homelab — a NAS or small server running on local hardware, often behind a residential connection. The second is the VPS self-hoster — a stack running on a virtualized box at Hetzner, OVH, Vultr, or similar, typically with a few hundred GB of attached storage. The third is the rented dedicated server — Hetzner Server Auction, OVH Kimsufi, Scaleway Dedibox, and the rest — which is increasingly popular for self-hosting precisely because it delivers multi-TB local disk and generous bandwidth at prices that VPS plans cannot match. The capability requirements are essentially identical across all three; the deployment context differs (network locality, disk economics, geographic distribution, egress pricing), but the right tool largely does not.

Why a self-hosted setup needs an S3 endpoint at all

The Reddit thread that prompted this article opened with a confused beginner question: "Maybe a dumb question, but where exactly would you use this in a typical homelab setup?"

It is not a dumb question. The answer is that the S3 API has quietly become the default storage interface for a generation of self-hosted tools. When you read that an app supports "object storage" or "S3-compatible storage," it usually means the app speaks the same HTTP API that AWS S3 introduced in 2006. Once a tool speaks that API, it does not particularly care whether the bytes land at AWS, at Backblaze B2, at Cloudflare R2, or at a binary running on the NAS in your basement or on the VPS you rent.
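
What that interchangeability looks like in practice: the short sketch below uses Python's boto3 client, and everything in it (endpoint URL, bucket name, credentials) is a placeholder. The only thing that changes when you point the same code at AWS, B2, R2, or your own hardware is the endpoint_url line.

```python
# Minimal sketch. Endpoint, bucket, and credentials are placeholders;
# the same code works against AWS, Backblaze B2, Cloudflare R2, or a
# binary on your own hardware, because only endpoint_url changes.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://192.168.1.50:9000",   # your self-hosted endpoint
    aws_access_key_id="EXAMPLE_KEY_ID",
    aws_secret_access_key="EXAMPLE_SECRET",
    region_name="garage",  # match whatever region your backend is configured with (Garage defaults to "garage")
)

s3.put_object(Bucket="backups", Key="hello.txt", Body=b"hello from the homelab")
print(s3.get_object(Bucket="backups", Key="hello.txt")["Body"].read())
```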

The list of self-hosted tools that expect an S3 endpoint is long and getting longer. Backup tools — restic, Kopia, Borgmatic, Duplicati — all use S3 as a first-class backend. Photo management — Immich, PhotoPrism, Ente — supports S3 for original storage. Nextcloud can be backed by S3 instead of disk. Mastodon stores media on S3. Self-hosted analytics tools (Plausible, PostHog) emit event data to S3. Kubernetes clusters use S3 for VolSync, for Velero cluster backups, for Loki log storage, for Mimir metrics. CI runners, build caches, container registries — all S3-shaped under the hood.

For most of the past five years, the way to get an S3 endpoint on your own infrastructure — whether that meant a homelab on local hardware or a stack on a rented VPS — was to run MinIO in a Docker container. It was the obvious choice. It is no longer. So this article exists.

How MinIO got here

The detail that matters most in MinIO's timeline is that the closing of the free self-hosted path took eleven months from inflection to archive, and the inflection happened nearly a year before most people noticed. Reading the commit history of the archived repository, the slow-motion arc is unmistakable.

April 2021. MinIO relicensed from Apache 2.0 to GNU AGPL v3. At the time, this was framed as cleanup — the company had been operating with a mixed-license model and wanted simplification. AGPL v3 also created legal leverage: companies that embedded MinIO in proprietary products without contributing back now had a copyleft obligation, which the company could relax by selling a commercial license. Binaries were still freely usable for self-hosted use. The community largely accepted the change.

2022 to 2024. License-enforcement actions began. The atmosphere shifted in ways that did not show up in any individual release.

May 2025. The inflection. The MinIO Console — the web UI used by basically every self-hosted MinIO operator to manage buckets, accounts, policies, lifecycle rules, configuration, and replication — was removed from the community edition and reduced to a read-only object browser. The release notes treated it as a routine deprecation. The community did not. A long Hacker News thread, a pinned GitHub discussion, and several Reddit threads framed the change as a bait-and-switch. By May 25, 2025, a fork called OpenMaxIO appeared, restoring the removed UI features. From this point onward, the community version of MinIO was meaningfully crippled for the self-hosted use case it had been adopted for.

October 15, 2025. The repository's last functional release: RELEASE.2025-10-15T17-29-55Z, a security and CVE patch. After this date, no further binary releases were tagged on the community track.

October 2025. Pre-built binary and Docker distributions were halted on the public track. Operators could still build from source, but the friction was meaningful — and a pgsty/minio fork appeared to package binaries from the last known-good source.

November 6, 2025. The last code commit to the archived repository: a small documentation cleanup ("Drop v3 metrics from community docs"). After this, every commit was a README edit.

December 3, 2025. The README was updated to declare "maintenance mode" — security fixes only, no new features.

February 12, 2026. The README was updated again to "clarify state of the project," language read by most observers as an effective end-of-life. (Several news write-ups from this period mistakenly treated this README change as the archive itself; the repository remained writable.)

April 25, 2026. The repository was formally archived. Read-only. No further commits accepted.

The interesting thing about this arc is the asymmetry between when the meaningful change happened and when most coverage noticed it. By the time the Reddit and Hacker News threads about the April 25 archive went up, the substantive product change was nearly a year old. Most operators who paid attention had already migrated. The April 25 event was, for them, formal confirmation of a decision they had already made.

This pattern is worth recognizing. The death of an open-core product is rarely a single dramatic event. It is a sequence of small, individually defensible decisions — a license clarification, a feature deprecation, a binary distribution change — each accompanied by a corporate blog post explaining why this one is no big deal. The commit history is the only honest narrator. The earliest leading indicator is usually a feature being removed from the free tier with the explanation that the feature was "really" enterprise all along.

This is the same pattern documented in "Lifetime subscriptions don't mean what you think they mean" and "Every app you buy has an expiration date" — the structural reality that long-term promises in software depend on whether the underlying business model still supports them. When the business model changes, the promises do too.

The lightweight self-hosted landscape

The self-hosted S3 alternatives that matter at small scale divide into a few clean categories. Below, each option is evaluated on the axes that actually predict whether it will be a sensible choice in two years: who maintains it, how the project is funded, what licensing risk it carries for your particular use case, and what its current security posture looks like. Stars and benchmark numbers, the metrics most other comparison articles lead with, are deliberately not the headline here.

Garage — the small-scale self-hosted default

License: AGPL v3 · Repo: garagehq.deuxfleurs.fr (GitHub mirror: 3,603 stars as of April 2026) · Latest release: v2.3.0, April 16, 2026

Garage is built by Deuxfleurs, a small French collective that runs a federated hosting cooperative. It is the option most often recommended in self-hosted communities post-MinIO, and the recommendation has held up in the months since the May 2025 inflection.

The design choices are aimed at exactly the small-scale self-hosted profile. Garage is a single Rust binary plus a config file. It does not require Docker, Kubernetes, or any external dependency. It runs comfortably on modest hardware — a Raspberry Pi 4 is sufficient for personal use, a small VPS handles single-node deployments without strain, and a Hetzner-auction-class dedicated server with multi-TB local disk is one of the natural homes for the project. The geo-distributed replication that is Garage's headline feature works as well across two physical sites or two rented boxes in different regions as across two folders on the same machine. The S3 API surface covers what most personal tools need: PUT, GET, multipart upload, presigned URLs, basic IAM. Setup is genuinely under thirty minutes for a single-node deployment.
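
As one concrete example of that API surface, here is a hedged sketch of generating a presigned upload URL against a Garage-style endpoint with boto3. The endpoint, credentials, bucket, and key are all placeholders; the point is that the code is identical to what you would write against AWS.

```python
# Sketch: presigned PUT URL against a self-hosted endpoint, so another
# device or service can upload one object without holding the account keys.
# Endpoint, credentials, bucket, and key are placeholders.
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.lan",
    aws_access_key_id="EXAMPLE_KEY_ID",
    aws_secret_access_key="EXAMPLE_SECRET",
    region_name="garage",  # Garage's default region name; adjust to match your backend
    config=Config(signature_version="s3v4"),
)

upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "photos", "Key": "2026/04/img_0001.jpg"},
    ExpiresIn=3600,  # valid for one hour
)
print(upload_url)  # hand this to the uploader; it PUTs the bytes directly to the URL
```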

The caveats are worth knowing before you commit. Garage uses full duplication rather than erasure coding for data redundancy, which means a 3-node replicated deployment uses 3× the underlying disk for each stored byte. For personal volumes this is fine; at TB scale it becomes meaningful, particularly on metered VPS storage (less so on a dedicated box where local disk is cheap). Garage does not implement S3 Object Lock, which matters if you use restic or Kopia and rely on bucket-level immutability for ransomware-resistance. And the AGPL v3 license is something to read carefully if you intend to embed Garage in a product you distribute — the copyleft obligations attach even to network use. For pure personal self-hosted operation, AGPL has no practical impact.

The project has 83 GitHub contributors and a steady, unhurried release cadence — v1.0 shipped April 2024, v2.0 in June 2025, v2.3.0 in April 2026 — which reads as deliberate engineering rather than a feature treadmill.

Versity S3 Gateway — when you already have a filesystem

License: Apache 2.0 · Repo: github.com/versity/versitygw (2,256 stars, 244 forks, 44 contributors as of April 2026) · Latest release: v1.4.1, April 22, 2026

Versity Software is a long-running storage-software company founded in 2011. The Versity S3 Gateway (versitygw) is their open-source project, started in 2023, that solves a specific problem: it puts an S3 API in front of any existing POSIX filesystem. If your bytes already live on a ZFS pool, an EXT4 disk, or a CephFS mount, versitygw turns that storage into an S3 endpoint without moving the data.

This is a different philosophy than Garage's. Garage owns its on-disk format and treats objects as a distinct type of storage. Versity is a translation layer; the bytes on disk are still files, browseable with ls. For operators who already have their files organized on a NAS, on a mounted block-storage volume, or on any other POSIX directory, and just need an S3 façade for an Immich or a restic that demands one, this is often the right answer.
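
A sketch of what that means in practice, with one assumption stated up front: it assumes versitygw's posix backend maps a bucket to a directory and an object key to a file path under it (check the project docs for your version), and the endpoint, credentials, and /srv/data root below are placeholders.

```python
# Sketch: write through the S3 facade, then observe the same bytes as an
# ordinary file on the backing filesystem. Assumes a bucket-as-directory,
# key-as-path layout for the posix backend; endpoint, credentials, and
# the /srv/data gateway root are placeholders.
import boto3
from pathlib import Path

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:7070",
    aws_access_key_id="EXAMPLE_KEY_ID",
    aws_secret_access_key="EXAMPLE_SECRET",
)

s3.put_object(Bucket="media", Key="photos/cat.jpg", Body=b"not-really-a-jpeg")

# The object is now a plain file, visible to ls, rsync, and ZFS snapshots.
print(Path("/srv/data/media/photos/cat.jpg").exists())
```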

The release cadence is roughly monthly — v1.2.0 in February 2026, v1.3.0 in March, v1.4.1 in late April — which is brisk for a 1.x stable line. Versity gets less attention than Garage in self-hosted forums but consistently shows up as a recommendation on Lobsters and from operators who want minimal abstraction. Apache 2.0 licensing makes it embeddable without copyleft obligations.

RustFS — the closest visual MinIO replacement, with caveats

License: Apache 2.0 · Repo: github.com/rustfs/rustfs (26,470 stars, 100+ contributors as of April 2026) · Latest release: v1.0.0-alpha.99, April 25, 2026

RustFS is the option that comes up first if your search terms are "MinIO replacement with web UI," because it is the project that most explicitly sets out to be that. It is written in Rust, ships under Apache 2.0, and includes a management web interface that visually resembles the pre-deprecation MinIO Console. For operators who built workflows around the MinIO UI and resent having to give it up, RustFS is the obvious target.

The first thing to weigh against that is the project's own version label. As of April 2026 — roughly two and a half years after development began — RustFS is still releasing under the 1.0.0-alpha.X tag, currently on alpha 99. Distributed mode is not yet officially released. The release cadence is aggressive: alpha 95 through alpha 99 shipped in six days. By its own versioning, the project has not declared production-readiness.

The security advisory record is consistent with that alpha label. Between December 2025 and April 2026, the project disclosed thirteen advisories on its GitHub Security Advisories page. The class of issues is the part worth knowing:

  • December 30, 2025: hardcoded gRPC token authentication bypass (medium)
  • January 7, 2026: path traversal vulnerability (high)
  • January 8, 2026: two IAM authorization issues enabling privilege escalation
  • February 24, 2026: stored XSS in the preview modal leading to administrative account takeover (critical)
  • February 24, 2026: missing post-policy validation enabling arbitrary object writes (high)
  • April 7, 2026: cross-bucket object exfiltration via multipart upload bypass (medium)
  • April 22 and 25, 2026: two further authorization bypass advisories

A note on interpretation. Raw advisory counts across projects are not directly comparable: high-profile projects attract more security research attention than smaller ones. Versity S3 Gateway has disclosed one advisory in its three-year history, and Garage and SeaweedFS have disclosed none, but those projects also receive less researcher scrutiny than RustFS does. What is genuinely informative is the combination of the project's own alpha label, the rapid pre-1.0 release cadence, and the fact that the disclosed issues include authorization bypasses and a critical administrative-takeover path — all of which the project, by keeping the alpha tag, implicitly acknowledges as normal for its current stage.

For a deployment where the data is recoverable from another source and the threat model is single-tenant trusted, RustFS is fine; the dashboard is genuinely useful and fast. For a deployment where the data is the canonical copy — and especially for any VPS-hosted instance with an internet-facing endpoint — the prudent read is to honor the project's own labeling and wait for a stable 1.0 with a slower disclosure rhythm before adopting.

SeaweedFS — the scale-up path

License: Apache 2.0 · Repo: github.com/seaweedfs/seaweedfs (31,746 stars as of April 2026) · Latest release: v4.21, April 19, 2026

SeaweedFS is twelve years old, has hundreds of contributors, and was adopted by Kubeflow Pipelines as the default object storage backend in the wake of MinIO's retreat. It is the option to choose if you suspect your deployment is going to outgrow a single binary within the next two years, or if you are already operating at multi-TB scale and want a system that has been hardened over a long period.

The tradeoff is operational complexity. SeaweedFS exposes more concepts (master server, volume servers, filer, S3 gateway, optionally a FUSE mount of the filer) than Garage or Versity. A "just works" single-node deployment is achievable but requires more configuration. The release cadence is weekly or near-weekly, which is healthy for a mature project but produces a lot of upgrade events.

For most self-hosters starting from scratch in 2026, SeaweedFS is overkill on day one but the right answer if the deployment grows. It is also the safest pick licensing-wise (Apache 2.0) for anyone considering embedding S3 storage in a product they distribute.

Ceph RGW, rclone serve s3, and the pgsty/minio fork — briefly

Ceph with its RADOS Gateway (RGW) is the production-grade enterprise answer to S3 self-hosting. It is also genuinely overkill for any small-scale self-hosted use case; the setup complexity, hardware requirements, and operational overhead are sized for environments where someone is paid full-time to run them. If you are reading this article, Ceph is probably not your answer.

rclone serve s3 turns rclone, the cross-cloud sync tool, into an S3 server backed by a local directory. It is excellent for development and testing — useful when you want to point a tool at "an S3 endpoint" and verify that it speaks S3 correctly without committing to anything heavier. It is not designed as a production storage backend; treat it as a fixture.
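
A hedged sketch of that fixture role: the smoke test below exercises the handful of calls most self-hosted tools depend on. The endpoint and credentials are placeholders; point it at whatever you started locally, whether that is rclone serve s3, Garage, or anything else claiming S3 compatibility.

```python
# Sketch: minimal S3 smoke test for a development endpoint.
# Endpoint and credentials are placeholders.
import uuid
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:8080",
    aws_access_key_id="EXAMPLE_KEY_ID",
    aws_secret_access_key="EXAMPLE_SECRET",
)

bucket = f"smoke-{uuid.uuid4().hex[:8]}"
s3.create_bucket(Bucket=bucket)
s3.put_object(Bucket=bucket, Key="probe.txt", Body=b"probe")
assert s3.get_object(Bucket=bucket, Key="probe.txt")["Body"].read() == b"probe"
s3.delete_object(Bucket=bucket, Key="probe.txt")
s3.delete_bucket(Bucket=bucket)
print("endpoint handles basic bucket, put, and get operations")
```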

The pgsty/minio community fork, created October 25, 2025 in response to the binary distribution halt, is the closest thing to "MinIO without the company." It packages the last known-good MinIO source, applies CVE patches, and continues to issue builds (last update April 17, 2026). It has 1,375 stars — modest, but it is the natural fallback for operators with deep MinIO operational muscle memory who do not want to migrate. The structural risk is the same risk that applies to any single-maintainer fork of an upstream that has stopped contributing: it lives or dies on one person's continued attention, and the upstream code base is no longer being improved.

A second fork, OpenMaxIO, appeared in May 2025 to restore the removed admin UI. Its repository has not seen a push since June 24, 2025; treat it as effectively dormant.

Picking one

The decision usually collapses to a small number of axes:

  • You want the closest thing to MinIO with a familiar web UI, and you can live with alpha-quality and an active CVE stream. → RustFS, with the security posture in mind.
  • You want a single binary, a config file, and a project that does not move faster than it can review itself. → Garage, accepting the AGPL v3 implications and the lack of S3 Object Lock.
  • Your bytes already live on a filesystem and you just need an S3 façade in front of them. → Versity S3 Gateway.
  • You expect to outgrow a single binary in the next 18 months, or you are already at multi-TB scale. → SeaweedFS from day one.
  • You operate enterprise infrastructure and you have a team. → Ceph RGW, and you are reading the wrong article.
  • You have years of MinIO operational experience and migrating to a new stack is more disruptive than continuing on a frozen code base. → the pgsty/minio fork, with awareness that the upstream is gone.

For most readers of this article, the headline pick is Garage. It is the option that best matches what MinIO was originally adopted for: a small, boring, S3-shaped binary that sits quietly in a corner of your stack and works — whether that corner is a NAS in your basement, a small VPS, or a Hetzner auction box in Falkenstein.

What to check before migrating

Migrations between S3-compatible backends are usually straightforward — point your tool at the new endpoint, replay the data, update the credentials — but a few capability differences are worth checking against your actual use case before you start.
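
For small buckets, the "replay the data" step can be as naive as the sketch below, which streams every object from the old endpoint to the new one. The endpoints, credentials, and bucket name are placeholders, and for anything at TB scale a purpose-built tool such as rclone will be considerably faster.

```python
# Naive migration sketch: copy every object from the old endpoint to the new.
# Endpoints, credentials, and bucket name are placeholders. Each object is
# read fully into memory, which is only acceptable for small buckets.
import boto3

old = boto3.client("s3", endpoint_url="http://old-endpoint:9000",
                   aws_access_key_id="OLD_KEY", aws_secret_access_key="OLD_SECRET")
new = boto3.client("s3", endpoint_url="http://new-endpoint:3900",
                   aws_access_key_id="NEW_KEY", aws_secret_access_key="NEW_SECRET")

bucket = "backups"
for page in old.get_paginator("list_objects_v2").paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        body = old.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
        new.put_object(Bucket=bucket, Key=obj["Key"], Body=body)
        print("copied", obj["Key"])
```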

Object Lock and immutability. If you use restic or Kopia and rely on S3 Object Lock for ransomware-resistance, Garage does not currently support this and Versity's support is partial. SeaweedFS and the MinIO fork do. Confirm before migrating a backup target.
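
One way to confirm this is sketched below, under the assumption that a rejected lock configuration means the backend does not support the feature; the endpoint and credentials are placeholders.

```python
# Sketch: probe for S3 Object Lock support before migrating an immutable
# backup target. Endpoint and credentials are placeholders.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", endpoint_url="http://localhost:9000",
                  aws_access_key_id="EXAMPLE_KEY_ID",
                  aws_secret_access_key="EXAMPLE_SECRET")

try:
    # Object Lock can only be enabled when the bucket is created.
    s3.create_bucket(Bucket="lock-probe", ObjectLockEnabledForBucket=True)
    print(s3.get_object_lock_configuration(Bucket="lock-probe"))
except ClientError as exc:
    # Backends without Object Lock usually reject one of these calls;
    # verify rather than assuming silent acceptance means support.
    print("Object Lock probe failed:", exc.response["Error"].get("Code"))
```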

Versioning. Bucket versioning is supported across all the alternatives, but the exact semantics differ at the edges. If your tool depends on versioning behavior (some Kubernetes operators do), test in a staging bucket first.
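
A staging-bucket check can be as small as the sketch below (endpoint, credentials, and bucket name are placeholders): write the same key twice and look at what the backend reports.

```python
# Sketch: verify versioning behaviour in a throwaway bucket.
# Endpoint, credentials, and bucket name are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000",
                  aws_access_key_id="EXAMPLE_KEY_ID",
                  aws_secret_access_key="EXAMPLE_SECRET")

bucket = "versioning-probe"
s3.create_bucket(Bucket=bucket)
s3.put_bucket_versioning(Bucket=bucket,
                         VersioningConfiguration={"Status": "Enabled"})
s3.put_object(Bucket=bucket, Key="doc.txt", Body=b"v1")
s3.put_object(Bucket=bucket, Key="doc.txt", Body=b"v2")

# Expect two entries with distinct VersionId values; delete markers and
# null version IDs are where backends tend to diverge.
for v in s3.list_object_versions(Bucket=bucket).get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])
```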

Multipart upload limits. The S3 multipart spec allows up to 10,000 parts and 5 TiB objects. Most alternatives implement these limits faithfully, but a few cap part size more aggressively. Check if you are storing large objects.
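
A quick way to check is a small multipart upload against a scratch bucket, sketched below with placeholder endpoint, credentials, and names; the S3 spec requires every part except the last to be at least 5 MiB.

```python
# Sketch: two-part multipart upload against a scratch bucket.
# Endpoint, credentials, bucket, and key are placeholders.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000",
                  aws_access_key_id="EXAMPLE_KEY_ID",
                  aws_secret_access_key="EXAMPLE_SECRET")

bucket, key = "multipart-probe", "big.bin"
s3.create_bucket(Bucket=bucket)
part_size = 5 * 1024 * 1024  # 5 MiB minimum for every part but the last

mpu = s3.create_multipart_upload(Bucket=bucket, Key=key)
parts = []
for number, chunk in enumerate([b"a" * part_size, b"b" * 1024], start=1):
    resp = s3.upload_part(Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
                          PartNumber=number, Body=chunk)
    parts.append({"PartNumber": number, "ETag": resp["ETag"]})

s3.complete_multipart_upload(Bucket=bucket, Key=key, UploadId=mpu["UploadId"],
                             MultipartUpload={"Parts": parts})
print(s3.head_object(Bucket=bucket, Key=key)["ContentLength"])  # expect 5 MiB + 1 KiB
```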

IAM and bucket policies. All four alternatives implement enough IAM to issue access keys and define basic per-bucket permissions. None implements the full AWS policy DSL. If your tool uses scoped IAM users with fine-grained policies, plan to translate the policies to the simpler model.

Public-bucket support. If you use S3 as a static asset CDN, confirm public-read bucket policies work and that pre-signed URL semantics match what your application generates.
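
The presigned half of that check is easy to script. The sketch below (endpoint, credentials, bucket, and key are placeholders) generates a presigned GET and fetches it with no credentials at all, which is exactly what a browser or a CDN in front of the bucket will do.

```python
# Sketch: confirm a presigned GET is fetchable without credentials.
# Endpoint, credentials, bucket, and key are placeholders.
import urllib.request
import boto3
from botocore.client import Config

s3 = boto3.client("s3", endpoint_url="https://s3.example.lan",
                  aws_access_key_id="EXAMPLE_KEY_ID",
                  aws_secret_access_key="EXAMPLE_SECRET",
                  config=Config(signature_version="s3v4"))

s3.put_object(Bucket="assets", Key="logo.png", Body=b"not-really-a-png")
url = s3.generate_presigned_url("get_object",
                                Params={"Bucket": "assets", "Key": "logo.png"},
                                ExpiresIn=600)

# No AWS credentials on this request; the signature lives in the URL.
with urllib.request.urlopen(url) as resp:
    assert resp.read() == b"not-really-a-png"
print("presigned GET works against this endpoint")
```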

A practical migration writeup that captures the texture of moving from MinIO to Garage in a personal-photo-storage context is this OPUM-LABS guide, shared in the r/selfhosted thread that prompted this article. It is specific to Ente Photos but the shape of the migration generalizes.

The takeaway

The end of MinIO as a free self-hosted option is not the death of self-hosted S3. The replacements exist, they are healthier, and several of them are better aligned with the small-scale self-hosted use case than MinIO ever was — a small binary, a config file, a single thing that does object storage without trying to also be an enterprise platform.

The lesson worth carrying forward is the one the timeline tells. The closing of MinIO's free self-hosted path took eleven months and was visible in the commit history nearly a year before most people noticed. The same pattern is currently underway in adjacent open-core categories. The signs to watch for are consistent: a feature being removed from the free tier with the explanation that it was "really" enterprise; a binary distribution channel being deprecated in favor of "build it yourself"; a release cadence that quietly slows; a README that subtly clarifies the project's "state."

When you choose a self-hosted tool, the question is not only "is this good today" but "is the funding model under which this exists going to support its current free-tier promise three years from now." For S3-compatible storage in 2026, the safest answers to that question are Garage and SeaweedFS — both built by entities whose survival does not depend on extracting commercial revenue from the same artifact you are running. That is a less exciting headline than benchmark numbers, and it is probably the more useful one.