buster 3 days ago

After some work with kubernetes, I must really say, helm is a complexity hell. I'm sure it has many features, but most aren't needed and increase the complexity nonetheless.

Also, please fix the "default" helm chart template: it's a nightmare of options and values no beginner understands. Make it basic and simple.

Nowadays I would very much prefer to just use terraform for kubernetes deployments, especially if you use terraform anyway!

  • verdverm 3 days ago

    Helm is my example of where DevOps lost its way. The insanity of multiple tiers of templating on top of a language scoped by invisible characters... it blows my mind that so many of us just deal with it.

    Nowadays I'm using CUE in front of TF & k8s, in part because I have workloads that need a bit of both and share config. I emit tf.json and YAML as needed from a single source of truth.

    • pjmlp 3 days ago

      The problem with Kubernetes, Docker and anything CNCF related is what happens when everyone and their dog tries to make a business out of an OS capability with venture capital.

    • mkroman 3 days ago

      shudders.. `| nindent 12`..
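
      For the uninitiated, that's from templates like this one, which is close to what the default chart scaffold emits (names are illustrative):

          containers:
            - name: {{ .Chart.Name }}
              resources:
                {{- toYaml .Values.resources | nindent 12 }}

      The 12 is the number of spaces of indentation, counted by hand against the surrounding YAML.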

      I've been trying to apply CUE to my work, but the tooling just isn't there for much of what I need yet. It also seems really short-sighted that it is implemented in Go, which is notoriously bad for embedding.

      • verdverm 3 days ago

        > seems really short-sighted that it is implemented in Go

        CUE was a fork of the Go compiler (Marcel was on the Go team at the time and wanted to reuse much of the infra within the codebase)

        Also, so much of the k8s ecosystem is in Go that it was a natural choice.

        • mkroman 3 days ago

          > CUE was a fork of the Go compiler (Marcel was on the Go team at the time and wanted to reuse much of the infra within the codebase)

          Ah, that makes sense, I guess. I also get the feeling that the language itself is still under very active development, so until 1.0 is released I don't think it matters too much what it's implemented in.

          > Also, so much of the k8s ecosystem is in Go that it was a natural choice.

          That might turn out to be a costly decision, imho. I wanted to use CUE to manage a repository of schema definitions, and from these I wanted to generate other formats, such as JSON schemas, with constraints hopefully taken from the high-level CUE.

          I figured I'd try and hack something together, but it was a complete non-starter since I don't work within the Go ecosystem.

          Projects like the cue language live or die by an active community with related tooling, so the decision still really boggles my mind.

          I'll stay optimistic and hope that once it reaches 1.0, someone will write an implementation that is easily embedded for my use-cases. I won't hold my breath though, since the scope is getting quite big.

          • lifty 3 days ago

            Why don't you work with the Go ecosystem? You don't use K8s, terraform, etc? What ecosystem do you prefer?

          • verdverm 3 days ago

            what language would you have chosen?

            > I wanted to use CUE to manage a repository of schema definitions, and from these I wanted to generate other formats, such as JSON schemas, with constraints hopefully taken from the high-level CUE.

            Have you tried a Makefile to run cue? There should be no need to write code to do this
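
            A minimal sketch, assuming the schemas live under ./schemas (cue export emits JSON by default; the recipe line must start with a tab):

                schemas.json: schemas/*.cue
                        cue export ./schemas > schemas.json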

      • jonasdegendt 3 days ago

        We evaluated CUE, Jsonnet and CDK8s when we wanted to move on from Helm, and ended up using CDK8s. It's proven to be a good pick so far; it's written in TypeScript.

      • hvenev 3 days ago

        Back when my job involved using Kubernetes and Helm, the solution I found was to use `| toJson` instead: it generates one line that happens to be valid YAML as well.
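
        i.e. roughly, with .Values.resources standing in for whatever map you're injecting:

            resources: {{ .Values.resources | toJson }}

        Since JSON is a subset of YAML, the rendered one-liner parses fine and there's no indentation to get wrong.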

      • lillecarl 3 days ago

        Both Jsonnet and CUE are implemented in Go, which happens to be the language Helm is written in. While I agree that it reduces "general embeddability", it's ripe fruit for Helm to integrate either or both of these as alternatives to YAML templating.

      • gopaz 3 days ago

        Holos[1] is an interesting project I’ve been looking at trying out.

        1. https://holos.run/

        • verdverm 3 days ago

          I've looked at Holos recently

          1. it seems like development has largely ceased since Sept

          2. it looks to only handle helm, not terraform; I'm looking for something to unify both and deal with dependencies between charts (another thing helm is terrible at)

      • lucyjojo 3 days ago

        cue and argocd here. it is pretty neat.

        the tf is still in hcl form for now.

    • candiddevmike 3 days ago

      RIP Ksonnet, we hardly knew what we were missing

      • verdverm 3 days ago

        jsonnet is the main DX issue therein

  • nullwarp 3 days ago

    I don't think I've ever seen a Helm template that didn't invoke nightmares. Probably the biggest reason I moved away from Kubernetes in the first place.

    • bigstrat2003 3 days ago

      We have several Helm charts we've written at my job and they are very pleasant to use. They are just normal k8s templates with a couple of values parameterized, and they work great. The ones people put out for public consumption are very complex, but it isn't like Helm charts have to be that complex.

      • phyrog 3 days ago

        In my book the main problem with Helm charts is that every customization option needs to be implemented by the chart author up front. There is no way for a chart consumer to change anything the author did not allow to be changed. That leads to the overly complex and config-heavy charts people publish - just to make sure everything is customizable for consumers.

        I'd love something that works more like Kustomize but with the other benefits of Helm charts (packaging, distribution via OCI, more straightforward value interpolation than overlays and patches, ...). So far none have ticked all my boxes.

        • ranger207 2 days ago

          Yeah, too many times the Helm chart is barely less complex than writing all the manifests yourself, because all the manifest options are still in the chart.

        • glotzerhotze 3 days ago

          FluxCD brings a really nice helm-controller that lets you change manifests via a postRenderers stub while still allowing you to use regular helm tooling against the cluster.

          https://fluxcd.io/flux/components/helm/helmreleases/#post-re...

          • phyrog 3 days ago

            Yeah, but then it is yet another layer of configuration slapped on top of the previous layer of configuration. That can't be the best solution, can it? Same thing for piping helm template through Kustomize.

            • maherbeg 2 days ago

              Yeah, this setup is both nice and insane. If you don't need much extra customization it's great. But I have a setup where I needed both postBuild and postRenderers plus actual kustomization layering, and it was awful trying to figure out the order of execution to get the right final output.

              In hindsight it would have been much faster to write the resources myself.

            • nwmcsween 20 hours ago

              Use helm to generate the manifests with a Makefile, use Kustomize to change said manifests for prod, staging, etc.
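
              A sketch of that pipeline (release name, chart and overlay paths are illustrative; recipe lines must start with tabs):

                  base/all.yaml: values.yaml
                          helm template my-release ./chart -f values.yaml > base/all.yaml

                  prod: base/all.yaml
                          kubectl apply -k overlays/prod

              where overlays/prod/kustomization.yaml references ../../base and holds the prod patches.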

        • lillecarl 3 days ago

          Kustomize can render Helm charts. It's "very basic" as in Kustomize will call the Helm binary to render the template, ingest it and apply patches.

          I wrote a tool called "easykubenix" that works in a similar way: render the chart in a derivation, convert the YAML to JSON, import the JSON into the Nix module structure, and now you're free to override, remove or add anything you want :)

          It's still very CLI-deploy-centric, using kluctl as the deployment engine, but there's nothing preventing you from dumping the generated JSON (or YAML) manifests into a GitOps loop.

          It doesn't make the public charts you consume any less horrible, but at least you don't have to care as much about them.

      • honkycat 3 days ago

        Yes, this is the key. Helm charts should basically be manifests with some light customization.

        Helm is not good enough to develop abstractions with. So go the opposite way: keep it stupid simple.

        Pairing helm with Kustomize can help a lot as well. You do most of the templating in the helm chart but you have an escape hatch if you need more patches.

      • cogman10 3 days ago

        That's generally what I try to push for in my company.

        A single-purpose chart for your project is generally a lot easier to grok and consume than a chart that tries to cover everything that could be done.

        I think the likes of "kustomize" is probably a saner route to go down, but our entire infrastructure is already helm, so it's hard to switch that all out.

        • Hamuko 3 days ago

          I've personally boiled down Helm vs. Kustomize to the following:

          Does your Kubernetes configuration need to be installed by a stranger? Use Helm.

          Does your Kubernetes configuration need to be installed by you and your organization alone? Use Kustomize.

          It makes sense for Grafana to provide a Helm chart for Grafana Alloy that the employees of Random Corp can install on their servers. It doesn't make sense for my employer to make a Helm chart out of our SaaS application just so that we can have different prod/staging settings.

          • uf00lme 2 days ago

            This has been my argument for years now.

            I think it is because most engineers learn to use Kubernetes by spinning up a cluster and then deploying a couple of helm charts. It makes it feel like that’s the natural way without understanding the pain and complexity of having to create and maintain those charts.

            Then there are centralised 'platform' teams which use helm to try and enforce their own templating onto everything, even small, simple microservices. Functionally it works and can scale, so the centralised team can justify their existence, but as a pattern it costs everyone a little bit of sanity.

        • bigstrat2003 3 days ago

          I'm ashamed to say it but I cannot for the life of me understand how kustomize works. I could not ever figure out how to do things outside the "hello world" tutorials they walk you through. I'm not a stupid person (citation needed lol), but trying to understand the kustomize docs made me feel incredibly stupid. That's why we didn't go with that instead of Helm.

          • globular-toast 3 days ago

            Helm requires you to write a template and you need to know (or guess) up front which values you want to be configurable. Then you set sane defaults for those values. If you find a user needs to change something else you have to edit the chart to add it.

            With Kustomize, on the other hand, you just write the default as perfectly normal K8s manifests in YAML. You don't have to know or care what your users are going to do with it.

            Then you write a `kustomization.yaml` that references those manifests somehow (could be in the same folder or you can use a URL). Kustomize simply concatenates everything together as its default behaviour. Run `kubectl kustomize` in the directory with `kustomization.yaml` to see the output. You can run `kubectl apply -k` to apply to your cluster (and `kubectl delete -k` to delete it all).

            From there you just add what you need to `kustomization.yaml`. You can do a few basics easily like setting the namespace for it all, adding labels to everything and changing the image ref. Keep running `kubectl kustomize` to see how it's changing things. You can use configmap and secret generators to easily generate these with hashed names and it will make sure all references match the generated name. Then you have the all powerful YAML or JSON editing commands which allow you to selectively edit the manifests if you need to. Start small and add things when you need them. Keep running `kubectl kustomize` at every step until you get it.
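
            A minimal `kustomization.yaml` along those lines (file names are examples):

                apiVersion: kustomize.config.k8s.io/v1beta1
                kind: Kustomization
                namespace: my-app
                resources:
                  - deployment.yaml
                  - service.yaml
                images:
                  - name: my-app
                    newTag: "1.2.3"
                configMapGenerator:
                  - name: app-config
                    files:
                      - config.properties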

      • brainzap 3 days ago

        This. Our helm charts are flat and for years only passed in the image as a variable.

  • lxe 3 days ago

    Infrastructure as code should, from the beginning, have been done through a strictly typed language with solid dependency and packaging contracts.

    I know that there are solutions like CDK and SST that attempt this, but because the underlying mechanisms are not native to those solutions, it's simply not enough, and the resulting interfaces are still way too brittle and complex.

    • contrahax 2 days ago

      Try pulumi!

      • lxe 2 days ago

        Yup, that's what SST wraps (or at least it did when I was fiddling with it). And even Pulumi is still at the behest of the cloud providers... it still has to mirror the complexity of the providers to a considerable degree. Devexp is leaps and bounds better with Pulumi than CDK though.

    • JohnMakin 3 days ago

      I mean terraform provides this but using it doesn't give a whole lot of value, at least IME. I enforce types but often an upstream provider implementation will break that convention. It's rarely the fault of the IAC itself and usually the fault of the upstream service when things get annoying.

  • jadbox 3 days ago

    I don't think I want to use kubernetes (or anything that uses it) again. Nightmare of broken glass. Back in the day Docker Compose gave me 95% of what I wanted and the complexity was basically one file with few surprises.

    • pphysch 3 days ago

      If you can confidently get it done with docker-compose, you shouldn't even think about using k8s IMO. Completely different scales.

      K8s isn't for running containers, it's for implementing complex distributed systems: tenancy/isolation and dynamic scaling and no-downtime service models.

      • ansgri 3 days ago

        One of the problems seems to be that most moderately complex companies where any one system would be fine with Compose would want to unify their operations, thus going to a complex distributed system with k8s. And then either your unified IT/DevOps team is responsible for supporting all systems on k8s, or all individual dev teams have to be competent with k8s. Worst case, both.

      • williamdclt 3 days ago

        no-downtime is table stakes in 2025. I can't look anyone in the eyes and tell them that our product is going to go down for a bit every time we deploy (it'd also be atrocious friction for frequent deployment).

        • yomismoaqui 2 days ago

          You can do no-downtime deploy of a web service with:

          - Kamal

          - Docker compose with Caddy (lb_try_duration to hold requests while the HTTP container restarts)

          - Systemd using socket activation (same as Docker compose, it holds HTTP connections while the HTTP service restarts)

          So you don't have to buy the whole pig and butcher it to eat bacon.
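
          For the Caddy option, the relevant config is only a few lines (a sketch; site address and upstream are placeholders):

              example.com {
                  reverse_proxy app:8080 {
                      lb_try_duration 30s
                      lb_try_interval 250ms
                  }
              }

          Caddy keeps retrying the upstream for lb_try_duration, so requests arriving mid-restart wait instead of failing.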

          • jcgl 2 days ago

            > - Systemd using socket activation (same as Docker compose, it holds HTTP connections while the HTTP service restarts)

            Nit: it holds the TCP connections while the HTTP service restarts. Any HTTP-level stuff would need to be restarted by the client. But that’s true of every “zero downtime” system I’m aware of.

        • pphysch 2 days ago

          Being successful enough that any amount of downtime is an existential risk is a great problem to have. 99.99% don't have that problem; even huge successful businesses can survive unplanned downtimes (see: recent major outages).

          It's far from table stakes and you can absolutely overengineer your product into the ground by chasing it.

          "0 downtime" system << antifragile systems with low MttR.

          Something can always break even if your system is "perfect". Utilities, local disasters, cloud dependencies.

    • lxe 3 days ago

      Docker Compose still gets you 95% of what you need. I wish Docker Swarm had survived.

      • Alir3z4 3 days ago

        What happened to it?

        I'm still using it with not a single issue (except when it messes up the iptables rules).

        I still confidently upgrade docker across all the nodes, workers and managers, and it just works. Not a single time has it caused an issue.

        • Cyphus 3 days ago

          Docker the company bet big on Swarm being the de facto container orchestration platform for businesses. It just got completely overshadowed by k8s. Swarm continues to exist and be actively developed, but it’s doomed to fade into obscurity.

        • lxe 3 days ago

          For some reason I assumed it was unsupported. That doesn't seem to be the case.

          • Cyphus 3 days ago

            The original iteration of Docker Swarm, now known as Classic, is deprecated. Maybe you were thinking of that?

            • lxe 3 days ago

              As I read more about it, yes, that is indeed the case.

      • mkroman 3 days ago

        > I wish Docker Swarm survived.

        I heard good things about Nomad (albeit from before Hashicorp changed their licenses): https://developer.hashicorp.com/nomad

        I got the impression it was like a smaller, more opinionated k8s. Like a mix between Docker Swarm and k8s.

        It's rare that I see it mentioned though, so I'm not sure how big the community is.

        • lovehashbrowns 3 days ago

          I’d wager that like half the teams (at least) using kubernetes today should be using Nomad instead. Like the team I’m on now where I’m literally the only one familiar with Kubernetes and everyone else only has familiarity with more classic EC2-based patterns. Getting someone to even know what Helm does is its own uphill battle. Nomad is a lot more simple. That’s what I like about it a lot.

        • rzerowan 3 days ago

          For better or for worse it's an orchestrator (for containers/scripts/jars/bare metal), full stop.

          Everything else is composable from the rest of the HashiCorp stack: Consul (service mesh and discovery), Vault (secrets), allowing you to use as much or as little as you need, and it's truly able to scale to a large deployment as needed.

          In the plus column, picking up its config/admin is intuitive in a way that helm/k8s never really manages.

          Philosophy-wise you can put it in the Unix way of doing things: it does one thing well and gets out of your way, and you add to it as you need/want. Whereas k8s/helm etc. have one way or the highway, leaving you fighting the deployment half the time.

          • hylaride 3 days ago

            Mitchell Hashimoto was a genius when it came to opinionated design, and that was Hashicorp's biggest strength when it was part of their culture.

            It's a shame Nomad couldn't overcome the K8s hype-wagon, but either way IBM is destroying everything good about Hashicorp's products and I would proceed with extreme caution deploying any of their stuff net-new right now...

      • KronisLV 2 days ago

        > I wish Docker Swarm survived.

        Using it in prod and also for my personal homelab needs - works pretty well!

        At the scale you see over here (load typically served on single digit instances and pretty much never needing autoscaling), you really don't need Kubernetes unless you have operational benefits from it. The whole country having less than 2 million people also helps quite a bit.

  • e12e 3 days ago

    I only wish terraform was more recognized by upstream projects, like postgres, tailscale, ingress operators.

    A one-time adoption from kubectl yaml or helm to terraform is doable - but syncing upstream updates is a chore.

    If terraform (or another rich format) was popular as source of truth - then perhaps helm and kubectl yaml could be built from a terraform definition, with benefits like variable documentation, validation etc.

  • vbezhenar 3 days ago

    I've embraced kustomize and I like it. It's simple enough and powerful enough for my needs. A bit verbose to type out all the manifests, but I can live with it.

    • natebc 3 days ago

      This is what I've done too. Just enough features easily available to handle everything i've ever needed in the simple deployments I use. Secrets, A/B configuration, even "dynamic reload" of a Deployment for Configmap changes.

      Gets the job done.

    • pyrale 3 days ago

      I'm using sed on my yaml files. Currently considering kustomize instead, but I wouldn't touch Helm with a 10 foot pole.

  • Hamuko 3 days ago

    Incidentally, Terraform is the only way I want to use Helm at all. Although the Terraform provider for Helm is quite cumbersome to use when you need to set values.
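
    By cumbersome I mean something like this (a sketch; the chart and values are illustrative):

        resource "helm_release" "grafana" {
          name       = "grafana"
          repository = "https://grafana.github.io/helm-charts"
          chart      = "grafana"

          # every nested value becomes one of these blocks...
          set {
            name  = "ingress.enabled"
            value = "true"
          }

          # ...or you give up and ship a whole values file through as a string
          values = [file("${path.module}/values.yaml")]
        }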

  • ctm92 3 days ago

    Kustomize with ArgoCD is my go to

  • timiel 3 days ago

    Do you have any resources regarding using tf to handle deployments?

    I’d love to dig a bit.

    • buster 3 days ago
      • Traubenfuchs 3 days ago

        …but how do you install helm charts via terraform?

        Is there a helm provider?

        If not, what would be the right way to install messy stuff like nginx ingress, cert-manager, etc.?

        • buster 3 days ago

          There is a helm provider. Why would you need it? Can't you just use the kubernetes provider?

          People probably don't realize that helm mostly is templating for the YAMLs kubernetes wants (plus a lot of other stuff that increases complexity).

          • vbezhenar 3 days ago

            There are many applications which are distributed as helm charts. Those charts install multiple deployments, service accounts and whatnot. They barely document all these things.

            So if you want to avoid helm, you gotta do a whole lot of reverse-engineering. You gotta render a chart, explore all the manifests, explore all the configuration options, find out if they're needed or not.

            An alternative is to just use helm, invoking it and forgetting about it. You can't blame people for going the easy way, I guess...

            • darkwater 3 days ago

              Yep, this 100%. Every time there is a technology which has become the "de facto" standard, and there are people proposing "simpler alternatives", this is the kind of practical detail that makes a GIANT difference and that's usually never mentioned.

              Network effect is a thing; Helm is the de facto "package manager" for Kubernetes program distribution. But this time there are generally no alternative instructions like

                tar xzf package.tar.gz; ./configure; make; adduser -u foo; chown -R foo /opt/foo

            • buster 2 days ago

              I think we are having different contexts here. I am mostly talking about self-written services and how writing, maintaining and deploying helm charts for them is a nightmare.

              Regarding dependencies: using some SaaS Kubernetes (Google GKE) for example, you'll typically use terraform for SQL and other services anyway (at least we use Google CloudSQL and not some self-hosted postgres in k8s).

              I find it interesting that cert-manager points to kubectl for new users and not helm: https://cert-manager.io/docs/installation/

              But, for sure, there may be reasons to use helm, as you said. I'm sure it is overused, though.

          • uf00lme 2 days ago

            It might feel natural to try and use terraform to deploy kubernetes resources since you've likely configured the cluster with it, but the helm/kubernetes/kubectl providers are limited by terraform's way of working. So whilst the providers try to marry the two, deploying anything complex generally ends up feeling like a hack, and you lose a lot of the benefits of using terraform in the first place.

            In my experience, it's best to bootstrap ArgoCD/flux, RBAC and the cloud permissions those services need in Terraform, and then do everything else via Kustomize and GitOps. This keeps everything sane and relatively easy to debug on the fly, using the right tool for the job.

          • shagmin 2 days ago

            I know at least a couple of times just the templating side saved me: it was convenient to run a helm command with --dry-run to get the yaml, grab and modify the relevant pieces, and apply those manually, when I didn't want the whole package, or wanted snippets of a package, or modified yaml that their helm chart didn't support out of the box, etc.

    • Aeolun 3 days ago

      The kubernetes provider mostly just works exactly as you expect

  • dev_l1x_be 3 days ago

    Could you explain this a bit? Is helm an optional part of the k8s stack?

    • buster 3 days ago

      Yes, you really don't need to use helm if you have terraform. Just use https://registry.terraform.io/providers/hashicorp/kubernetes... .

      If you used helm + terraform before, you'll have no problem understanding the terraform kubernetes provider (as opposed to the helm provider).
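
      For illustration, a Deployment through the kubernetes provider is just HCL mirroring the YAML structure (a sketch; names and image are placeholders):

          resource "kubernetes_deployment" "app" {
            metadata {
              name = "my-app"
            }
            spec {
              replicas = 2
              selector {
                match_labels = { app = "my-app" }
              }
              template {
                metadata {
                  labels = { app = "my-app" }
                }
                spec {
                  container {
                    name  = "my-app"
                    image = "registry.example.com/my-app:1.0.0"
                  }
                }
              }
            }
          }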

      • e12e 3 days ago

        It does make it challenging to track operators, as upstreams usually only provide/document helm installation.

        If you write your own tf definition of operator x v1, it can be tricky to upgrade to v2 - as you need to figure out what changes are needed in your tf config to go from v1 to v2.

    • pests 3 days ago

      Helm is not official or blessed or anything, just another third-party tool people install after installing k8s.

    • mx_03 3 days ago

      The way I understand it, helm is the npm of k8s.

      You can install, update, and remove an app in your k8s cluster using helm.

      And you release a new version of your app to a helm repository.

      • holysoles 3 days ago

        The thing I would add to this is that in most cases, you need to manually provide config values to the install.

        This sounds okay in principle, but I far too often end up needing to look through the template files (what helm deploys) to understand what a config option actually does since documentation is hit or miss.

    • JamesSwift 3 days ago

      Helm is sort of like docker (or maybe docker compose) for k8s, in that a helm chart is a prepackaged k8s "application" that you can ship to your cluster. It got very popular very quickly because of the ease of use, and I think that was premature, which affects its day-to-day usability.

    • globular-toast 3 days ago

      It's a client-side preprocessor essentially. The K8s cluster knows nothing about Helm as it just receives perfectly normal YAMLs generated by Helm on the client.

      • c45y 3 days ago

        I really appreciate the k3s default with HelmChart type and operator installed. Makes working with charts simpler in my view

        • globular-toast 3 days ago

          Yes, I use flux which has a similar HelmChart/HelmRelease resource. One of the things that took me a while to "get" with K8s is operators are just clients running on the cluster.

zdw 3 days ago

Helm is truly a fractal of design pain. Even the description as a "package manager" is a verifiable lie - it's a config management tool at best.

Any tool that encourages templating on top of YAML, in a way that prevents the use of tools like yamllint on them, is a bad tool. Ansible learned this lesson much earlier and changed syntax of playbooks so that their YAML passes lint.

Additionally, K8s core developers don't like it and keep inventing things like Kustomize and similar that have better designs.

  • torginus 3 days ago

    Imho, anyone who thought putting 'templating language' and 'significant whitespace' together is a good idea deserves to be in the Hague

    • Cyphus 3 days ago

      Seriously. I’ve lost at least 100 hours of my life debugging whitespace in templated yaml. I shudder to think about the total engineering time wasted since yaml’s invention.

      • zdc1 3 days ago

        You blame YAML but I blame helm. I can build a dict in Python and dump it as YAML. I've painlessly templated many k8s resources like this. Why can't we build helm charts in a DSL or more sensible syntax and then dump k8s manifests as YAML? Using Go templating to build YAML is idiocy and the root of the issue here.

        There's lots of advice on StackOverflow against building your own JSON strings instead of using a library. But helm wants us to build our own YAML with Go templating. Make it make sense.
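
        e.g. a sketch of that approach (PyYAML assumed; names are illustrative):

            import yaml  # PyYAML

            def deployment(name, image, replicas=1):
                # Build the manifest as plain data; the library handles YAML serialization.
                return {
                    "apiVersion": "apps/v1",
                    "kind": "Deployment",
                    "metadata": {"name": name},
                    "spec": {
                        "replicas": replicas,
                        "selector": {"matchLabels": {"app": name}},
                        "template": {
                            "metadata": {"labels": {"app": name}},
                            "spec": {"containers": [{"name": name, "image": image}]},
                        },
                    },
                }

            print(yaml.safe_dump(deployment("web", "nginx:1.27"), sort_keys=False))

        No quoting bugs, no indentation bugs, by construction.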

        • Cyphus 10 hours ago

          I 100% agree. It’s not so much the yaml as it is the templating. I originally wanted to say “since the invention of yaml/jinja” in the parent comment because that’s what I’ve gotten most of my gray hairs from (saltstack templating). Go templates are not jinja but fundamentally the same thing - they have no syntax awareness and effectively are just string formatters.

          I took out the part about templating because I thought it made my comment too wordy, but ended up oversimplifying.

        • nucleardog 2 days ago

          This is more or less the approach Apple's "pkl" takes.

          You define your data in the "pkl language", then it outputs it as yaml, json, xml, apple property list, or other formats.

          You feed in something like:

              apiVersion = "apps/v1"
              kind = "Deployment"
              metadata {
               name = "my-deployment"
               labels {
                ["app.kubernetes.io/name"] = "my-deployment"
                ["app.kubernetes.io/instance"] = "prod"
               }
              }
              spec {
               replicas = 3
               template {
                containers {
                 new {
                  name = "nginx"
                 }
                 new {
                  name = "backend"
                 }
                }
               }
              }
          
          And then you `pkl eval myfile.pkl -f yaml` and get back:

              apiVersion: apps/v1
              kind: Deployment
              metadata:
                name: my-deployment
                labels:
                  app.kubernetes.io/name: my-deployment
                  app.kubernetes.io/instance: prod
              spec:
                replicas: 3
                template:
                  containers:
                  - name: nginx
                  - name: backend
          
          The language supports templating (structurally, not textually), reuse/inheritance, typed properties with validation, and a bunch of other fun stuff.

          They also have built in package management, and have a generated package that provides resources for simplifying/validating most kubernetes objects and generating manifests.

          There's even a relatively easy path to converting existing YAML/JSON into pkl. Or the option to read an external YAML file and include it/pull values from it/etc (as data, not as text) within your pkl so you don't need to rebuild everything from the ground up day 1.

          Aaaaand there's bindings for a bunch of languages so you can read pkl directly as the config for your app if you want rather than doing a round trip through YAML.

          Aaaaand there's a full LSP available. Or a vscode extension. Or a neovim extension. Or an intellij extension.

          The documentation leaves a bit to be desired, and the user base seems to be fairly small so examples are not the easiest to come by... but as far as I've used it so far it's a pretty huge improvement over helm.

      • torginus 3 days ago

        Yaml wouldn't be so bad if they made the templates and editors indent-aware.

        Which is a thing with some Python IDEs, but it's maddening to work on anything that can't do this.

        • emmelaich 3 days ago

              autocmd FileType yaml setlocal et ts=2 ai sw=2 nu sts=0
          
          I'm sure Emacs and others have something similar.

  • lucyjojo 3 days ago

    we use cue straight to k8s resources. it made life way better.

    but we don't have tons of infra so no idea how it would run for big thousands-of-employees corps.

honkycat 3 days ago

Helm sucks.

Helm, and a lot of devops tooling, is fundamentally broken.

The core problem is that it is a templating language and not a fully functional programming language, or at least a DSL.

This leads us to the mess we are in today. Here is a fun experiment: Go open 10 helm charts, and compare the differences between them. You will find they have the same copy-paste bullshit everywhere.

Helm simply does not provide powerful enough tools to develop proper abstractions. This leads to massive sprawl when defining our infrastructure. This leads to the DevOps nightmare we have all found ourselves in.

I have developed complex systems in Pulumi and other CDKs: 99% of the text just GOES AWAY and everything is way more legible.

You are not going to create a robust solution with a weak templating language. You are just going to create more and more sprawl.

Maybe the answer is a CDK that outputs helm charts.

  • cryptonector 2 days ago

    Ok, thought experiment: why not use the k8s JSON interfaces and use jq to generate/template your deployments/services/statefulsets/argo images/etc.?

    You say you want a functional DSL? Well, jq is a functional DSL!
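
    To make the thought experiment concrete, a sketch (name and image passed in as args):

        jq -n --arg name web --arg image nginx:1.27 '{
          apiVersion: "apps/v1",
          kind: "Deployment",
          metadata: {name: $name},
          spec: {
            selector: {matchLabels: {app: $name}},
            template: {
              metadata: {labels: {app: $name}},
              spec: {containers: [{name: $name, image: $image}]}
            }
          }
        }' | kubectl apply -f -

    kubectl happily accepts JSON, so YAML never enters the picture.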

jarym 3 days ago

So many people complaining about Helm but I'll share my 2 experiences. At my last 2 companies we shipped Helm charts for administrators to easily deploy our stuff.

It worked fine and was simple enough which is what the goal was. But then people came along wanting all sorts of customisations to make the chart configurable to work in their environments. The charts ended up getting pretty unwieldy.

Helm is a product that serves users who like customization to the nth-degree. But everyone else hates it.

Personally, I would prefer it if the 'power users' just got used to forking and maintaining their own charts with all the tweaks they want. The reason they don't do that of course is that it's harder to keep up with updates - maybe that's the problem that needs solving.

  • btown 3 days ago

    I recently learned about Helmfile's support for deep declarative patching of rendered charts, without requiring full forks with value-template-wiring. It's been a gamechanger!

    https://helmfile.readthedocs.io/en/latest/advanced-features/...

    In your context, it might help certain clients. It does require that the upstream commit to not changing its architecture, but if the upstream is primarily bumping versions and adding backwards-compatible features, and if you document all the patches you're recommending in the wild, it might be an effective tool.

ojhughes 3 days ago

Helm shines when you’re consuming vendor charts (nginx-ingress, cert-manager, Prometheus stack). It’s basically a package manager for k8s. Add a repo, pin a version, set values, and upgrade/rollback as one unit. For third-party infra, the chart’s values.yaml provides a fairly clean and often well documented interface
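
The consumer workflow really is about four commands (repo and versions here are illustrative):

    helm repo add jetstack https://charts.jetstack.io
    helm install cert-manager jetstack/cert-manager \
      --namespace cert-manager --create-namespace \
      --version v1.16.1 -f my-values.yaml
    helm upgrade cert-manager jetstack/cert-manager --version v1.16.2 -f my-values.yaml
    helm rollback cert-manager 1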

  • rirze 3 days ago

    I used to be on a team that hosted internal enterprise services and this was the main reason we used helm. Someone wrote charts for these self-hosted applications.

    (Not all of them were written in a sane manner, but that's just how it goes)

  • preisschild 3 days ago

    Yeah, I agree. Creating and maintaining helm charts sucks, but using them (if they are properly made and expose everything you want to edit in the values.yaml) is a great experience with gitops tools such as FluxCD or helmfile.

sprior 3 days ago

I have several Docker hosts in my home lab as well as a k3s cluster and I'd really like to use k3s as much as possible. But when I want to figure out how to deploy basically any new package they say here are the Docker instructions, but if you want to use Kubernetes we have a Helm chart. So I invariably end up starting with the Docker instructions and writing my own Deployment/StatefulSet, Service, and Ingress yaml files by hand.

  • frogperson 3 days ago

    I've found it easier, in most cases, to run 'helm template ...' on an existing chart, and then use the output as my starting point.

  • mkesper 2 days ago

    That's probably easier than figuring out using a complicated Helm chart.

jasonvorhe 3 days ago

Helm is the number 1 reason I'm looking to leave behind my DevOps/SRE job. Basically every job or project I accept involves working with helm in some capacity and I'm just tired of working with mostly garbage helm charts, especially big meta-charts or having to fork a chart to add a config parameter value override somewhere. Debugging broken chart installs or incomplete upgrades is also nothing but pain. Most helm charts remind me of working with ansible-galaxy roles around ~2015.

  • preisschild 3 days ago

    Been using bjw-s' common library chart (& its app-template companion) [1] for my homelab and it improved my experience with helm by a lot, since you only have to edit the values.yaml without doing weird text templating. Hope he gets more funding for maintenance so it can be used for more "production" systems.

    [1]: https://github.com/bjw-s-labs/helm-charts/tree/main

    See here for more examples on how people are using this chart:

    https://kubesearch.dev/#app-template

    • jasonvorhe 2 days ago

      Really appreciate this, I will look into it!

  • 12_throw_away 2 days ago

    > Helm is the number 1 reason I'm looking to leave behind my DevOps/SRE job.

    A few years ago, the startup I worked at folded - just as the new CTO's mandate to move everything to K8s with Helm was coming into effect. Having to scramble for a new job sucked of course, but in retrospect, I honestly have good feelings associated with the whole debacle: A) I learned a lot about Helm, B) I no longer needed to work with Helm, and C) I'm now quite sure that I don't want to be part of any engineering org that makes the decision to use it.

    This is not exactly a criticism of these technologies, but simply me discovering that I'm simply utterly incompatible with it. Whether it's a failing with the Cloud Native Stack, or a personal failing of mine, it doesn't matter - everyone's better off when I stay far away from it.

vxvrs 3 days ago

As someone who started out with Helm and has not used any of its alternatives, I had no idea how hated it is. Maybe it's just because of how I use it, but once I got the hang of the chart templates I don't feel like I'm running into any hurdles while using it.

smetj 3 days ago

Came here to feel the temperature of the comments, and unsurprisingly, most folks seem to have plenty of gripes with Helm.

A Helm chart is often a poorly documented abstraction layer that makes it impossible to relate the managed application's original documentation back to the Helm chart's "interface". The number of times I had to grep through the templates to figure out how to access a specific setting ...

  • cryptonector 2 days ago

    I don't understand why having to "grep through the templates" is so bad. Oh, I get it, you just want to know what knobs are available for tweaking, and in a well-designed chart those will all be segregated in values files, with overrides specified on the command-line as needed. And so that's what documentation is for, and if a chart does not surface certain knobs from the product, well, yeah, you'll have to modify the chart if you want it to.

    What is the essence of the complaint here? That chart authors do poor jobs? That YAML sucks (it does! it so so does!)? Just that charting provides an abstraction you'd rather not have? (If so, why not just... not use Helm?) Something else?

    • smetj 2 days ago

      > What is the essence of the complaint here?

      As said, that I often cannot relate the managed application's documentation to the Helm chart's interface?

      Reason for it can vary ... poor Helm chart documentation, poor Helm chart design, Helm chart not in sync with application releases, ... The consequence is that I often need to grep through its templates and logic to figure out how to poke the chart's interface to achieve what I want. I don't think that's reasonable to say that's part of the end-user experience.

      PS: I have no gripes with YAML

      • cryptonector a day ago

        None of that seems like a reasonable complaint about Helm itself, unless Helm makes it easy to write poor charts and never document them. When faced with shitty code with shitty docs, maybe the thing to do is to send a PR and make it better. At least a ticket.

        • Too 21 hours ago

          When 99% of all charts have this problem, maybe the problem isn’t with the individual charts, rather a symptom of a fundamentally dissonant abstraction model. The core problem is that it’s overkill. The solution to that is not to add or fix, the solution is to remove.

          Why would I need a chart for a single-container app? Making this simple is what Kubernetes is designed for. No, I don't want your ServiceAccounts or PVs, because I need to grant and understand the permissions and select the size and SKU of the underlying disk anyway.

          Deploying an app in your own infrastructure has too many knobs that need to be turned so you need to expose all of them. Just spend a few minutes extra to write your own deployment manifest. While it’s a few more lines of code vs ”helm install”, you will not regret it and you’ll get a much better understanding of what’s actually running.

          Now there are of course exceptions to this, like Prometheus or Ingress operators where more complex charts are warranted. What I’m talking about is those charts that just wrap what can be translated from docker-compose to k8s in two minutes.

          • cryptonector 6 hours ago

            But no one is forcing you to use charts. If you have a trivial single-container app, then don't use Helm. I'm still not seeing the complaint.

  • Too 2 days ago

    This. Almost every chart tries to be helpful and hide the upstream configuration of the application. Inevitably, you will sooner or later need to change a config. Now it's not enough to read the documentation of the application; you also need to map that parameter onto whatever values the helm chart translated it to. I wouldn't even call it an abstraction, since it's only read in a single location; it's just a dumb and pointless translation. Total nonsense.

greenwallnorway 3 days ago

Can I hear from those of you who have had a good IAC experience? What tools worked well?

  • badLiveware 3 days ago

    ArgoCD + Helm

    But really any kind of reconciler, e.g. flux or argo with helm, works very well. Helm is only used as a templating tool, i.e. helm template is the only thing allowed. I've run production systems like this for years without major issues.

    I don't really understand how people have so much trouble with Helm. Granted, yaml whitespace + go templating is sometimes awful, but it is the least bad tool out there that I have tried, and once you learn the arcane ways of {{- it's mostly a non-issue.
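
    For anyone who hasn't been initiated: the dash chomps the preceding whitespace and newline, so control tags don't leave blank lines in the rendered YAML. A toy example:

        env:
          {{- range .Values.env }}
          - name: {{ .name }}
            value: {{ .value | quote }}
          {{- end }}

    Without the dashes, the range and end lines would each render as a stray blank line.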

    I would recommend writing your own charts for the most part and using external charts when they are simple, or well proven. Most applications you want to run arent that complicated, they are mostly a collection of environment variables, config files, and arguments.

    If I could wish for a replacement of helm, it would be helm template with the chart implemented in a typed language, e.g. TypeScript, instead of go template but backwards compatible with go template.

  • trenchpilgrim 3 days ago

    I wrote Go and Python programs that constructed the manifests using the native Kubernetes types and piped them into kubectl apply. Had to write my own libraries for doing migrations too. But after that bootstrapping it worked great.

    • anttiharju 3 days ago

      Reminds me of cdk8s, if one is looking for a framework (if it can be called that).

      cdk8s.io

  • vbezhenar 3 days ago

    Kubernetes API uses JSON. JSON is JavaScript Object Notation. So naturally the best approach to work with JSON is to write JavaScript or TypeScript code. You can just output JSON and consume it with kubectl. You can read data from whatever format you want, process it and output JSON. You can write your little functions to reduce boilerplate. There are many options that are obvious once you just embrace JavaScript.

    Of course most other programming languages will work just as well, it's just JavaScript being the most natural fit for JSON.

    • HumanOstrich 2 days ago

      > Kubernetes API uses JSON. JSON is JavaScript Object Notation. So naturally the best approach to work with JSON is to write JavaScript or TypeScript code.

      I don't really like this superficial reasoning. You can specify, generate, parse, and validate JSON in many common languages with similar levels of effort.

      Saying you should use JavaScript to work with JSON because it has JavaScript in the acronym is about as relevant as comparing Java to JavaScript because both have Java in the name.

    • trenchpilgrim 3 days ago

      There are some features of Kubernetes that are only available in the Go client like Informers. So Go is a much more natural fit (you can move between JSON and Go structs with one function call + error check)

  • fjsdkfjwjd 2 days ago

    I like pulumi (iff typescript) and cdk8s.

    terraform with helm/kubernetes/kubectl providers is hit or miss. But I love it for simple things. For hairy things I will want full TypeScript with Pulumi.

  • preisschild 3 days ago

    I'm quite happy with FluxCD+Helm. Helm also supports creating library charts (basically component libraries) that can improve the experience of creating and maintaining helm charts by a lot.

  • tribaal 3 days ago

    Probably an unpopular opinion, but for a couple of jobs now I've written "just python" to generate k8s manifests, and it works really, really well.

    There’s packages. You can write functions. You can write tests trivially (the output is basically a giant map that you just write out as yaml)…

    I’m applying this to other areas too with great success, for example our snowflake IaC is “just python” that generates SQL. It’s great.

  • Too 2 days ago

    Terraform. It’s declarative, type safe and just expressive enough to create basic conditionals, loops and reusable modules. Providers exists for all clouds and k8s.

    Now it’s not perfect either. It does have some issues with slow querying of the current state during planning, even when it has the tfstate as a cache, which is another source of errors.

  • mattcanhack 3 days ago

    Like the others, I'm using a programming language, except it is JavaScript because we're a Node.js company. It actually works well enough.

solatic 3 days ago

Most people in this thread, it seems, just want a simple way to manage Kubernetes manifests, something that keeps track of different settings for different environments and what's in common for each environment in order to generate the final manifests for an environment. If so, Helm is over-engineered for your use-case. Stick with Kustomize or jsonnet.

Helm's contribution (as horrible as text templating on YAML is) is, yes, to be a package manager. Part of a Helm chart includes jobs ("hooks") that can be run at different stages (pre-install, pre-upgrade, etc.) as well as a job to run when someone runs "helm test", and a way to rollback changes ("helm rollback"), which is more powerful than just rolling back a Deployment, because it will rollback changes to CRDs, give you hooks/jobs that can run pre- and post-rollback, etc.
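
Hooks are just annotated manifests, e.g. a migration Job that runs before each upgrade (a sketch; names and image are illustrative):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: db-migrate
      annotations:
        "helm.sh/hook": pre-upgrade
        "helm.sh/hook-weight": "0"
        "helm.sh/hook-delete-policy": hook-succeeded
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: migrate
              image: registry.example.com/app-migrations:1.0.0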

Helm charts are meant to be written by someone with the relevant skills sitting next to the developers, so that it can be handed off to another team to deploy into production. If that's not your organization or process, or if your developers are giving your ops teams Docker images instead of Helm charts, you're probably over-engineering by adopting it.

  • aduwah 2 days ago

    CRDs in helm are such a freaking nightmare! You want a clean install because you are in a hole? No worries, let's remove all the CRDs and delete/create everything else relying on them. Separating the two (CRDs and other objects) is a solution, but then you have a bastardized thing to maintain that doesn't track upstream.

    Also I cannot count how many times I had to double/triple run charts because CRDs were in a circular dependency. In a perfect world this wouldn't be an issue, but if you want to be a user of an upstream chart, this is a pain.

  • hylaride 2 days ago

    The core problem, I think, is that K8s is overly complicated for 95% of deployments out there, but it's become the default standard.

    People then start creating tooling to mask some of the complexity, but then said tooling grows to support the full K8s feature set and then we're back to square one.

    Because the rush to K8s was so fast (and arguably before it was ready) the tooling often became necessary.

    > Helm charts are meant to be written by someone with the relevant skills sitting next to the developers.

    That makes sense for large organizations, but it still gets complicated depending on how your service plugs into a greater mesh of services.

    I currently treat helm the same way I treat Cloudformation on AWS (another horrid thing to deal with). If some third party has it so that I can easily take the template and launch it, then great. I don't want to go any further under the hood than that.

annexrichmond 3 days ago

Helm is the necessary evil that follows from Kubernetes choosing YAML.

  • vbezhenar 3 days ago

    Helm works at the text level. This approach could have worked with YAML, JSON, XML or any other text format. You can template C++ code with Helm if you really want. It's just golang templates underneath.

    And that makes it wrong. YAML is a structured format, and proper templating should work with JSON-like data structures, not with text. Kustomize is a better example.

    • cryptonector 2 days ago

      If you're going to template JSON, I recommend jq for that.

vibe_assassin 2 days ago

I am a fan of helm. The templating language can be pretty ugly sometimes, but your helm charts can be as simple as you want, and the basic functionality along with dependencies work fine. Some of my helm charts are basically just straight K8s manifests with minimal templating. I like helm because it lets me encapsulate a deployment in a single package, if I want to add some insane templating logic, that's on me.

mt42or 3 days ago

Amazing how people are complaining while proposing shit solutions. Seems like nobody is doing infra seriously there.

  • koalalorenzo 3 days ago

    Probably they have a different experience! I love using helm, but I feel I got used to go templates and subcharts done right. I use it at work a lot and at home in my homelab with no issues at all: I guess it's the usual tabs vs. spaces.

    The alternatives to helm are not that interesting to me: I still have nightmares from when I had to use jsonnet and kustomize just for istio, with upgrade hell.

    So I am sticking with helm, as it feels way more straightforward when you need to change just a few things from an upstream open source project: way fewer lines to maintain and change!

    • fsniper 2 days ago

      Most of the time the discussion derails as everyone is focusing on different aspects of the experience.

      When you look into all the complaints one by one, they are exceptionally accurate.

        * yaml has its quirks - check
        * text templating can't be validated for spec conformity - check
        * helm has lots of complexity - check
        * helm has dependency problems - check
        * helm charts can have too many moving parts with edge cases, causing deep dives into the chart - check
      
      and many others. However, the proposed solutions fall short of providing the value helm brings.

      Helm is not just a templating engine to produce kubernetes manifests. It's an application deployment and distribution ecosystem. Emphasis on the "ecosystem".

        * It brings dependency management.
        * It provides kubernetes configuration management.
        * It provides abstraction over configuration to define applications instead of configuration.
        * It provides an application packaging solution.
        * It provides an application package management solution.
        * There is community support with a huge library of packages.
        * It's relatively easy to create or understand charts at a varied experience level. A more robust and strictly typed templating system would remove at least half of this spectrum.
        * The learning curve is flat.
      
      When you put all of these into consideration, it's relatively easy to understand why it's this prominent in the kubernetes ecosystem.

webcoon 3 days ago

And it STILL uses text-based Go templates instead of a proper language based on structured input and output? This was always my main pain point with Helm, and that of many others I talked to. This major upgrade was years in the making and they couldn't add support for a single one of the many available options like CUE, Jsonnet, or KCL? What an utter waste.

JohnMakin 3 days ago

> CLI Flags renamed

> Some common CLI flags are renamed:

> --atomic → --rollback-on-failure
> --force → --force-replace

> Update any automation that uses these renamed CLI flags.

I wish software providers like this would realize how fucking obnoxious this is. Why not support both? Seriously, leave the old, create a new one. Why put this burden on your users?

It doesn't sound like a big deal but in practice it's often a massive pain in the ass.

  • fsniper 2 days ago

    Helm developers are not known to be developer/user friendly at all.

sureglymop 3 days ago

I really don't like helm. I think we have arrived at abstraction over abstraction over abstraction.

The last project I was involved with used kustomize for different environments, flux to deploy, and helm to consume a chart which took in a list of configmaps using "valuesFrom". Not only does kustomize template and merge YAML together, but so does the valuesFrom mechanism, except at "runtime" in the cluster.

There's just not a single chance to get any coherent checking/linting or anything before deployment. I mean how could a language server even understand how all this spaghetti yaml merges together? And note that I was working on this as a developer in a very restricted environment/cluster.

YAML is too permissive already; people really start programming in it. The thing is, kubernetes resources are already an abstraction. That's kind of the nice thing about it: you can create arbitrary resources and kubernetes is the management platform for them. But I think it becomes hairy already when we create resources that manage other resources.

And also, sure some infrastructure may be "cattle" but at some point in the equation there is state and complexity that has to be managed by someone who understands it. Kubernetes manifests are great for that, I think using a package manager to deploy resources is taking it too far. Inevitably helm charts and the schema of values change and then attention is needed anyway. It makes the bar for entry into the kubernetes ecosystem lower but is that actually a good thing for the people who then fall into it without the experience to solve the problems they inevitably encounter?

Sorry for the rant but given my second paragraph I hope there is some understanding for my frustrations. Having all that said, I am glad they try to improve what has established itself now and still welcome these improvements.

  • sgarland 3 days ago

    > I think we have arrived at abstraction over abstraction over abstraction.

    > The thing is, kubernetes resources are already an abstraction.

    Your first comment was more accurate - they’re heavily nested abstractions.

    A container represents a namespace with a limited set of capabilities, resources, and a predefined root.

    A Pod represents one or more containers, and pulls the aforementioned limitations up to that level.

    A ReplicaSet represents a given generation of a set number of Pods.

    A Deployment represents a desired number of Pods, and pulls the ReplicaSet abstraction up to its level to manage the stated end state (and also manages their lifecycle).

    I think most infra-adjacent people I’ve worked with who use K8s could accurately describe these abstractions to the level of a Pod, but few could describe what a container actually is.

    > It makes the bar for entry into the kubernetes ecosystem lower but is that actually a good thing for the people who then fall into it without the experience to solve the problems they inevitably encounter?

    It is not a good thing, no. There is an entire generation of infra folk who have absolutely no clue how computers actually work, and if given an empty bare metal server connected to a LAN with running servers, would be unable to get Linux up and running on the empty server.

    I am not against K8s, nor am I against the cloud - I am against people using abstractions without understanding the underlying fundamentals.

    The counter to this argument is always something along the lines of, “we build on abstractions to move faster, and build more powerful applications - you don’t need to understand electron flow to use EC2.” And yes, of course there’s a limit; it’s probably somewhere around understanding different CPU cache levels to be well-rounded. However, IME at the lower levels, the assumption that you don’t need to understand something to use it doesn’t hold true. For example, if you don’t understand PN junctions, you’re probably going to struggle to effectively use transistors. Sure, you could know that to turn a silicon BJT transistor on, you need to establish approximately 0.7 VDC between its base and emitter, but you wouldn’t understand why it’s much slower to turn off than to turn on, or why thermal runaway happens, etc.

    • sureglymop 2 days ago

      > The thing is, kubernetes resources are already an abstraction.

      What I meant by that is that Kubernetes resources are generic: "objects" in the cluster representing arbitrary things. And this makes sense, because it's okay not to know what cgroups and namespaces are in order to deploy a container/Pod resource. What I'm trying to say is that this kind of arbitrary abstraction is what k8s brought to the table, yet people keep trying to abstract again on top of it, which makes no sense. "Resource" is already generic, as the envelope below shows.
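      To illustrate: every resource, built-in or custom, is the same generic envelope (a sketch, names invented), and that is exactly the genericity being re-abstracted on top of:

      ```yaml
      apiVersion: example.com/v1   # any group/version, including your own CRD's
      kind: Widget                 # any kind the API server has been taught
      metadata:
        name: my-widget
        labels: {team: platform}
      spec: {}                     # arbitrary; the schema belongs to the resource
      ```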

beefnugs 3 days ago

Nightmares (if anything went wrong, I had to blow the helm stuff away and start over) on top of nightmares (when I was trying Kubernetes, it was tons of namespaces called beta, and you never knew what to update to, when you had to update, or what was incompatible), on top of the realization that no one should be using Kubernetes unless they have over 50 servers running many hundreds of services. Otherwise it's just a million times simpler to use docker compose.

  • mch82 3 days ago

    Can you recommend any articles about minimum scale necessary to make Kubernetes worth it?

    • wvh 3 days ago

      If you count 3 control plane nodes and at least one or two extra servers' worth of space for pods to go when a node goes down, I'd say don't bother for anything less than 6-7 servers' worth of infrastructure. Once you're over 10 servers, you can start using node affinity and labels to get some logical grouping based on hardware type and/or tenants (see the sketch below). At that point it's just one big computer, and the abstraction really starts to pay off compared to manually dealing with servers and installation scripts.

      I'd say the abstraction is not worth it when you have only a steady 2-3 servers' worth of infrastructure. Don't do it at "Hello, world!" scale; you win nothing.
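      The grouping itself is cheap once you're at that size: label the nodes, then constrain the Pod template. A sketch, with invented labels:

      ```yaml
      # Pod template fragment, after e.g.: kubectl label node worker-7 hardware=gpu tenant=acme
      spec:
        nodeSelector:
          hardware: gpu            # hard requirement: only gpu-labeled nodes
        affinity:
          nodeAffinity:
            preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 1          # soft preference: co-locate this tenant
                preference:
                  matchExpressions:
                    - {key: tenant, operator: In, values: [acme]}
      ```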

      (I work for a company that helps other companies set up and secure larger projects into environments like Kubernetes.)

    • vbezhenar 3 days ago

      I would always use Kubernetes if you have 4 or more GB of RAM on your server. It's just better than docker compose in every imaginable way. The only issue with Kubernetes is that it wants around 2 GB of RAM for itself.

    • wavesquid 3 days ago

      The answer today is more than one node (instance/kernel running)

nullify88 3 days ago

Running my home lab, I've grown sick of constant Renovate PRs against the helm charts in use. I recall one "minor" CoreDNS update not long ago that messed with the exposed ports in the Service and broke installs for a lot of folks. If I need to run some software now, I `helm template` the resources and commit those to git. I'm so tired of some random "Extended helm chart to customise labels / annotations in $some resource" change notes. Traefik and Cilium are the only helm charts I use; the rest I `helm template` into my gitops repo, customize, and forget.

At my day job, we've debugged various Helm issues caused by the sprig library it uses internally. We fear updating Argo CD and Helm for what surprises are in store for us, and we're starting to adopt the rendered-manifests pattern for greater visibility to catch such changes.

therealfiona 2 days ago

Still doesn't handle CRDs. To date, Helm installs whatever sits in a chart's crds/ directory on first install but never upgrades or deletes it, so that part stays manual.

CRDs are one of the worst things to manage in a K8s cluster.

rhaps0dy 3 days ago

To solve all the problems with Helm, it seems easy enough to use Python dataclasses that serialize to YAML (or the equivalent in your favourite language).

Then, to convert an existing Helm chart to this, you can just use AI plus tests that check that the two render to the same output.
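A minimal sketch of that idea (requires PyYAML; all class and field names invented):

```python
from dataclasses import dataclass, field

import yaml  # PyYAML


@dataclass
class Container:
    name: str
    image: str
    ports: list[int] = field(default_factory=list)


@dataclass
class Deployment:
    name: str
    replicas: int
    containers: list[Container]

    def to_manifest(self) -> dict:
        # Render to the plain apps/v1 Deployment shape; a type checker
        # validates the inputs long before `kubectl apply` would.
        return {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": self.name},
            "spec": {
                "replicas": self.replicas,
                "selector": {"matchLabels": {"app": self.name}},
                "template": {
                    "metadata": {"labels": {"app": self.name}},
                    "spec": {
                        "containers": [
                            {
                                "name": c.name,
                                "image": c.image,
                                "ports": [{"containerPort": p} for p in c.ports],
                            }
                            for c in self.containers
                        ]
                    },
                },
            },
        }


web = Deployment("web", replicas=3, containers=[Container("web", "nginx:1.27", [80])])
print(yaml.safe_dump(web.to_manifest(), sort_keys=False))
```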

honkycat 3 days ago

What is Charts v3? Please tell me it is Lua support.

  • hobofan 3 days ago

    I think what Charts v3 will be is still an open question. According to the currently accepted HIPs[0], HIP-0020 lays the general groundwork for a new generation of the chart format, and most HIPs after that contain parts that are planned to make it into Charts v3 (e.g. resource creation sequencing via HIP-0025).

    [0]: https://github.com/helm/community/tree/main/hips

woile 3 days ago

Now that y'all are here: has anyone tried timoni as an alternative to helm? It's in my list of tools to try.

https://github.com/stefanprodan/timoni

  • freakybytes 3 days ago

    Yes, I currently have 2 timoni modules in production, deployed with ArgoCD, and it's great! It has a bit of a learning curve, and it takes some getting used to that there is no "overwriting" of values, but it saves so much time on template iteration. The language server support for CUE could be better, though.

CraigJPerry 3 days ago

Imagine thousands of helm charts. Your only abstraction tools are an umbrella chart or a library chart; there isn't much more in helm.
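(For anyone who hasn't hit this: an umbrella chart is just a Chart.yaml that pulls other charts in as dependencies and overrides their values. A sketch, with names and versions invented:)

```yaml
apiVersion: v2
name: platform-umbrella
version: 0.1.0
dependencies:
  - name: web
    version: 1.4.2
    repository: https://charts.example.com
  - name: worker
    version: 2.0.0
    repository: https://charts.example.com
    condition: worker.enabled   # toggled from the umbrella's values.yaml
```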

I liked KRO's model a lot, but stringly typed text templating at the scale of thousands of services doesn't work; it's not fun when you need to make a change. I kind of like jsonnet plus the Google CLI whose name I forget right now, and the abstraction the Grafana folks did too, but ultimately I decided to roll my own thing and leaned heavily into type safety for this. It's ideal. With any luck I can open source it. There are a few similar ideas floating around now; Scala Yaga is one.

  • jodersky 3 days ago

    I'm curious what the google cli is that you're referring to. Could it be kubecfg (https://github.com/kubecfg/kubecfg)?

    I've used it in the past (for a quite small deployment, I must say) and have been very happy with it. Specifically, the diff mode is very powerful for seeing what changes you'll apply compared to what's currently deployed.

    • CraigJPerry 2 days ago

      Yeah, that's the one, and the Grafana one is Tanka.

bandrami 3 days ago

So this is neither helm the Emacs completion framework nor helm the wavetable synthesizer?

  • markalby 3 days ago

    I was also confused by the title. Off topic, but Vital is the newer wavetable synth by the maker of the Helm synth, Matt Tytel. Helm is a really good FOSS subtractive synth, but not a wavetable one.

    • bandrami 2 days ago

      Ah right. I haven't used it in years (since Vital came out, come to think of it). That and Rui's padthv1 were really fun for ambient stuff back in the day.

kachapopopow 3 days ago

Obligatory complaint about Bitnami rug-pulling and effectively ruining a very nice ecosystem.

lugoues 3 days ago

Ugh, can we all just agree to stop using helm?

  • verdverm 3 days ago

    Would be nice, but we would also have to reimplement all of the charts we use, which is a big ask/lift.

    DevOps has more friction for tooling changes because of the large blast radius

  • pphysch 3 days ago

    What do you prefer?

    • NeckBeardPrince 3 days ago

      Just straight raw manifest files.

      • pyth0 3 days ago

        How do you have anything dynamic? How do you handle any differences at all between your infrastructure and what the authors built it for?

        • prescriptivist 3 days ago

          I get the feeling that most people commenting here have only surface-level experience with deploying k8s applications. I don't care for helm myself, but it's less bad than a lot of other approaches, like hand-rolling manifests with tools like envsubst and sed.

          Kustomize also seems like hell when a deployment reaches a certain level of complexity.

        • NeckBeardPrince 3 days ago

          Sorry, raw manifests and kustomize and a soupçon of regret.