I know that Lemmy is open source and it can only get better from here on out, but I do wonder if any experts can weigh in on whether the foundation is well written. Or are we building on top of 4 years' worth of tech debt?

  • ShittyKopper [old]@lemmy.w.on-t.work · 1 year ago

    Microservices aren’t a silver bullet. There’s likely quite a lot that can be done before we need to split some parts out, and once that happens I expect federation would be the thing to split out, as it’s one of the more “active” parts of the app compared to logins and whatnot.
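
    To make that concrete, here’s a very rough sketch (in Go for brevity; Lemmy’s backend is actually Rust) of what a split-out federation sender could look like: the main app drops outbound activities onto a queue, and a separate worker pool delivers them so slow remote instances don’t tie up the rest of the app. Every name and type here is invented for illustration, not taken from Lemmy’s code.

    ```go
    package main

    import (
        "bytes"
        "log"
        "net/http"
        "time"
    )

    // Activity stands in for one outbound ActivityPub message.
    // The fields are invented for this sketch, not Lemmy's real types.
    type Activity struct {
        InboxURL string
        Body     []byte
    }

    // deliver POSTs a single activity to a remote instance's inbox,
    // retrying a few times so one slow peer doesn't block everything else.
    func deliver(client *http.Client, a Activity) {
        for attempt := 1; attempt <= 3; attempt++ {
            resp, err := client.Post(a.InboxURL, "application/activity+json", bytes.NewReader(a.Body))
            if err == nil && resp.StatusCode < 300 {
                resp.Body.Close()
                return
            }
            if resp != nil {
                resp.Body.Close()
            }
            time.Sleep(time.Duration(attempt) * time.Second)
        }
        log.Printf("giving up on %s", a.InboxURL)
    }

    func main() {
        // In a real deployment this queue would live outside the process
        // (Postgres, Redis, ...); a channel stands in for it here.
        queue := make(chan Activity, 1024)
        client := &http.Client{Timeout: 10 * time.Second}

        // Federation delivery scales on its own: change the worker count
        // (or run more copies of this process) without touching logins,
        // the API, or anything else.
        for i := 0; i < 4; i++ {
            go func() {
                for a := range queue {
                    deliver(client, a)
                }
            }()
        }

        // Demo: enqueue one fake activity, then give the workers a moment.
        queue <- Activity{InboxURL: "https://example.com/inbox", Body: []byte(`{}`)}
        time.Sleep(3 * time.Second)
    }
    ```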

    • BURN@lemmy.world · 1 year ago

      Definitely not a silver bullet, but it should stop the app from locking up when one thing gets overloaded. I’m sure they have their reasons for how it’s designed now, and I’m probably missing something that would explain it all.

      I’m still not familiar enough with how federation works to speak to how easy that would be. Unfortunately this has all happened just as I’ve started moving, and I haven’t gotten a chance to dive into the code like I’d want to.

      • Sir_Simon_Spamalot@lemmy.world · 1 year ago

        It’s also not the only solution for a high-availability system. Multiple monoliths behind a load balancer can be used as well.
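
        For illustration, here’s a bare-bones version of that idea: two identical copies of the monolith with a tiny round-robin proxy in front. In practice you’d reach for nginx, HAProxy, or a cloud load balancer instead; the ports and addresses below are placeholders.

        ```go
        package main

        import (
            "log"
            "net/http"
            "net/http/httputil"
            "net/url"
            "sync/atomic"
        )

        func mustParse(raw string) *url.URL {
            u, err := url.Parse(raw)
            if err != nil {
                log.Fatal(err)
            }
            return u
        }

        func main() {
            // Two identical copies of the monolith; addresses are placeholders.
            backends := []*url.URL{
                mustParse("http://127.0.0.1:8536"),
                mustParse("http://127.0.0.1:8537"),
            }
            proxies := make([]*httputil.ReverseProxy, len(backends))
            for i, b := range backends {
                proxies[i] = httputil.NewSingleHostReverseProxy(b)
            }

            // Round-robin: each incoming request is handed to the next backend.
            var next uint64
            balancer := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                i := atomic.AddUint64(&next, 1) % uint64(len(proxies))
                proxies[i].ServeHTTP(w, r)
            })

            log.Fatal(http.ListenAndServe(":8080", balancer))
        }
        ```

        Each copy stays a full monolith; the balancer only spreads requests across them (health checks and session handling are left out of this sketch).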

        Also, a lot of people are self-hosting. In that case, microservices won’t give them any scaling benefit.

        • boeman@lemmy.world · 1 year ago

          The problem with scaling monoliths is that you’re scaling everything, including the pieces with lower usage. The huge benefit you get from going to microservices is that you only scale the pieces that need it. This allows horizontal scaling to use less compute, and it also allows those compute resources to be spread out.

          A lot of the headaches can be removed by having an effective CI/CD strategy that is completely reusable with minimal effort.

          The last headache would be observability. There you’re stuck either living with the nightmare of firefighting problems across 100 services in possibly 10 locations, rolling your own platform from FOSS tools, or spending a whole lot of money on something like Honeycomb, Datadog, or New Relic.
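
          For the roll-your-own route, the usual FOSS starting point is something like Prometheus plus Grafana. As a rough idea of what the service side of that involves, here’s a hand-rolled /metrics endpoint in the Prometheus text exposition format (Go and the metric name are just for illustration, not anything from Lemmy):

          ```go
          package main

          import (
              "fmt"
              "log"
              "net/http"
              "sync/atomic"
          )

          var requestsTotal uint64

          func main() {
              // Normal application endpoint: count every request handled.
              http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                  atomic.AddUint64(&requestsTotal, 1)
                  fmt.Fprintln(w, "hello")
              })

              // Bare-bones /metrics endpoint in the Prometheus text format.
              // A self-hosted Prometheus scrapes this; Grafana graphs it.
              http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
                  fmt.Fprintln(w, "# TYPE http_requests_total counter")
                  fmt.Fprintf(w, "http_requests_total %d\n", atomic.LoadUint64(&requestsTotal))
              })

              log.Fatal(http.ListenAndServe(":9091", nil))
          }
          ```

          In real services you’d use a client library and add labels, histograms, and tracing, but the model is the same: each service exposes its own numbers and a central scraper pulls them in.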

          But I’m an SRE; I live my life for scalability and DevOps processes. I know I’m biased.

          • Sir_Simon_Spamalot@lemmy.world · 1 year ago

            I’m a software engineer myself, but not familiar with your field. How would your practices be applied to self-hosting? I’m assuming a bunch of people with home servers wouldn’t want to just run OpenShift.

            • boeman@lemmy.world · 1 year ago

              Personally, I wouldn’t touch OpenShift. But as someone who has a Kubernetes cluster hosted at my house on a mixture of RPis, a NAS, and VMs, I’m not one to say what anyone else would do :).

              But that can be overcome; it’s all about designing your application for multiple different kinds of installs. You don’t have to have all your services running fully separately. You can containerize each service and deploy to an orchestration engine such as Kubernetes or Docker Swarm, or you can run the multiple endpoints on a single machine with an install package that keeps them together. It’s all about architecting toward resiliency, not toward a pattern for no other reason.

              Also, Google has some very good books on SRE for free.