I’m looking for experiences and opinions on Kubernetes storage.

I want to create a highly available homelab that spans 3 locations, where the pods have a preferred location but can move if necessary.

I’ve looked at Linstor, and at SeaweedFS or Garage with JuiceFS, but I’m not sure how well those options perform across the internet or how well they hold up in long-term operation. Is anyone else hosting k3s across the internet in their homelab?

Edit: fixed wording

  • Possibly linux@lemmy.zip · 3 days ago

    That isn’t how you would normally do it.

    You don’t want to try to span locations at the container/hypervisor level. The problem is that there is likely too much latency between the sites, which will screw with things. Instead, set up replicated data stores where it is necessary.

    What are you trying to accomplish with this?

    • InnerScientist@lemmy.world (OP) · 3 days ago

      The problem is that I want failover to work if a site goes offline. This happens quite a bit with residential ISPs where I live, and instead of waiting for the connection to be restored, my idea was that Kubernetes would see the failed node and replace it.

      Most data will be transferred locally (with node affinity), and only on failure would the pods spread out. The problem that remained was storage, which is why I’m here looking for options.
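
      A minimal sketch of that node-affinity idea as a Deployment fragment; the zone label value (site-a) and workload names are assumptions for illustration, not the actual setup:

      ```yaml
      # Deployment fragment: prefer the home site, but allow scheduling
      # anywhere if that site's nodes are unreachable. Assumes nodes are
      # labeled per location, e.g. topology.kubernetes.io/zone=site-a.
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example-app          # hypothetical workload name
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: example-app
        template:
          metadata:
            labels:
              app: example-app
          spec:
            affinity:
              nodeAffinity:
                # Soft preference: the scheduler tries site-a first but can
                # still place the pod at another site on failover.
                preferredDuringSchedulingIgnoredDuringExecution:
                  - weight: 100
                    preference:
                      matchExpressions:
                        - key: topology.kubernetes.io/zone
                          operator: In
                          values: ["site-a"]
            containers:
              - name: app
                image: nginx       # placeholder image
      ```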

      • Possibly linux@lemmy.zip · 2 days ago

        That isn’t going to work, unfortunately.

        You need very low latency (something like 10 ms, or preferably less).

  • ChaosMonkey@lemmy.dbzer0.com · 3 days ago

    Longhorn is pretty easy to use. Garage works well too. Ceph is harder to use but provides both block and object storage (S3).

    • InnerScientist@lemmy.world (OP) · 3 days ago

      Ceph (and Longhorn) want “10 Gbps network bandwidth between nodes”, while I’ll have around 1 Gbps between nodes, or even less.

      What’s your experience with Garage?

  • karlhungus@lemmy.ca · 3 days ago

    My gut says go multi-cluster (or not) at that point, but treat the remote as a service and have a local container act as a proxy.

    • InnerScientist@lemmy.world (OP) · 3 days ago

      I mean storage backends as in the provisioner; I will use local storage on the nodes, either with LVM or just storage on a filesystem.

      I already set up a cluster and tried Linstor. I’m searching for experiences with the options because I don’t want to test them all.

      I currently manage all the servers with a NixOS repository but am looking for better failover.

  • Getting6409@lemmy.dbzer0.com · 22 hours ago

    I’ve been using Backblaze B2 (via an s3fs-fuse container + bidirectional mount propagation to a host path) and a little bit of Google Drive (via rclone mount + the same mounting business) within Kubernetes. I only use this for TubeArchivist, which I consider to be disposable. No way I’m using these “devices” for anything I really care about.

    I haven’t tried gauging the performance of either of these, but I can say, anecdotally, that both are fine for TubeArchivist to write to in a reasonable amount of time (the bottleneck is yt-dlp ingesting from YouTube), and playback seems to be on par with local storage with the embedded TubeArchivist player and Jellyfin. I’ve had no issues with this, been using it about a year now, and overall I feel it’s a decent solution if you need a lot of cheap-ish storage that you’re okay with not trusting.
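
    The s3fs-fuse + bidirectional mount propagation setup above can be sketched roughly like this; the images, bucket name, endpoint region, and host path are assumptions, not the actual config:

    ```yaml
    # Pod fragment: an s3fs-fuse sidecar mounts a B2 bucket onto a shared
    # hostPath with Bidirectional propagation so the app container (and the
    # host) can see the FUSE mount. FUSE needs privileged access.
    apiVersion: v1
    kind: Pod
    metadata:
      name: tubearchivist-example       # hypothetical name
    spec:
      containers:
        - name: s3fs
          image: efrecon/s3fs:1.94      # assumed s3fs-fuse image
          securityContext:
            privileged: true            # required for /dev/fuse
          env:
            - name: AWS_S3_BUCKET
              value: my-bucket          # placeholder bucket
            - name: AWS_S3_URL
              value: https://s3.us-west-000.backblazeb2.com  # B2 S3 endpoint (region assumed)
          volumeMounts:
            - name: media
              mountPath: /opt/s3fs/bucket
              mountPropagation: Bidirectional    # push the FUSE mount back to the host
        - name: tubearchivist
          image: bbilly1/tubearchivist  # TubeArchivist image
          volumeMounts:
            - name: media
              mountPath: /youtube
              mountPropagation: HostToContainer  # receive the mount from the sidecar
      volumes:
        - name: media
          hostPath:
            path: /mnt/b2-media         # placeholder host path
            type: DirectoryOrCreate
    ```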