I have a variation of this as well. Boring is good. Docker for deployment means I can deploy to any Docker-capable platform, which is typically some set of VMs with a load balancer in front. Our setup runs in two separate and very different cloud environments; one of our customers insisted on this. Same stuff, provisioned differently.
We don't use Terraform, on purpose: it's overkill to automate something that comes up less than once a year. Doing it manually and documenting it is good enough. And at these kinds of intervals, the automation tends to break down anyway due to changes. You get funny issues like "the last time I ran this script was two LTS versions of Ubuntu ago".
We do have quite a bit of build and deploy automation via github actions. That pays itself back because we use it every day.
Otherwise, the choice is between managed and self-hosted databases, search clusters, etc. Managed is more expensive, but it massively frees up my time, which is more valuable to me. So in one of our data centers we use managed services, and in the other we self-host, because the cloud platform in question is just not great (obsolete, unmaintained versions of all the stuff I care about).
You might conclude based on the above that I know nothing about devops and have never worked in teams where it is done properly. The opposite is true. I've been there and seen it all. The reason I do this is that I don't have the budget to do it properly, so I'm literally trading off my own time here. It's a calculated choice. So no Terraform, because it doesn't make sense at our scale. No Kubernetes, because the idle cluster cost with nothing in it is larger than the total cost of my current setup. And I have a monolith, so Kubernetes is just a really convoluted way to run a single service.
I was once a consultant at a mid-sized company. One of the developers openly admitted to pushing for new technology so that he could include it on his resume for his next job application...
I believe this is not an isolated incident, but rather part of a larger trend.
It's not a trend...I've seen it for decades. For example, in my heavy network consulting days you could tell when someone was studying to get their CCIE, because they'd implement something that's on the test, is technically correct, but almost no one would do in practice. Like "why in the world are they using IS-IS instead of OSPF like the rest of the world??". 95% of the time, it turned out the person who implemented it (who as often as not had moved on to their next gig) was getting ready for the IRP lab exam when they did it.
In my circles it is called Resume Driven Development.
My favorite case was when someone moved a single component from the ground into a Lambda... which called back down to a ground web service anyway.
But hey, they were able to add AWS Lambda to their resume, right?
I came on board after the guy left, got props from the business by 'lowering our cloud spend' when I ripped the lambda back out.
Now you can add cloud resiliency to YOUR resume.
Win-win.
I listed it as cloud optimization, but at least I have receipts for that (and I've done more notable work along those lines as well).
"No Kubernetes because the idle cluster cost with nothing in it is larger than the total cost of my current setup" has been the case for quite a while. I wonder whether fixing this is anywhere on the k8s priority list. Even K3s consumes a lot more than, say, Docker Swarm for more or less the same job.
I judge boring by how many external dependencies a website has, and this is not boring. Boring is a server with Postgres (or SQLite). Maybe two servers, in which case you'd need a load balancer (another dependency).
> I write code and run the dev servers (Django runserver & webpack dev server) by using PyCharm. Yea, I know, it’s boring. After all, it’s not Visual Studio Code or Atom or whatever cool IDEs.
He seems a little fixated on calling everything he does "boring", even when uh... using an IDE?
> He seems a little fixated on calling everything he does "boring", even when uh... using an IDE?
I think he means PyCharm is not as "sexy" as VSCode but he uses it nevertheless.
Honestly, I can't see anything sexy (or powerful) about VSCode, and I have been using it for years now (due to lack of support in my otherwise preferred IDEs).
>He seems a little fixated on calling everything he does "boring"
Isn't that the whole point of the post? To give an account of the "boring" tech he uses (a notion older than him and his post, and one that has been covered a few times on HN over the years)?
And he's right in describing what he does as boring, when compared to what is seen as "standard practice"/cool/sexy/cutting edge/modern stacks nowadays.
That's one or two RDBMS servers and a regular old-school web application framework (Django). Those are very "boring" (that is, older, tried-and-true) tech bets in 2024.
>even when uh... using an IDE?
IDEs have not been "exciting new tech" since the era of Symbolics and early Smalltalk IDEs.
What he says is that he doesn't use some "latest cool" IDE/editor, but a standard IntelliJ offering, like millions of enterprise programmers.
Calling Docker overkill for a “one-man sized” startup is an odd choice of words. Kubernetes, sure, maybe. But deploying via containers, where orchestration can be a simple bash file, beats having to git pull, pip install, and whatnot on a handful of machines.
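To make that concrete, here is a minimal sketch of what "orchestration as a simple bash file" can look like. The image name, container name, and port are made-up placeholders, not from the comment:

```shell
#!/usr/bin/env bash
# Minimal container deploy: pull the new image, replace the running
# container. Image, name, and port below are illustrative placeholders.
set -euo pipefail

IMAGE="${IMAGE:-registry.example.com/myapp:latest}"
NAME="${NAME:-myapp}"

deploy() {
  docker pull "$IMAGE"
  # Remove the old container if it exists, then start the new one.
  docker rm -f "$NAME" 2>/dev/null || true
  docker run -d --name "$NAME" -p 8000:8000 --restart unless-stopped "$IMAGE"
}

# deploy   # uncomment (or source this file and call deploy) to run
```

This trades an orchestrator's rollback and health-checking features for a dozen lines you fully understand.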
Docker represents a core dependency on someone else's business decisions, technical choices, service uptime and extremely slow build tools.
For some set of reasonable values, these trade-offs are not always worth the benefits.
As one example, we run bare Linux across a collection of 4x on-prem machines for an extremely small business and it works great for us.
To me, containers are as core as version control.
Choose the flavour you prefer, but I do not believe there is a valid reason to avoid either for anything that you need to work on for more than a week.
Docker can be overkill.
I host many Rails apps on a single server, with one or two Ruby/Rails versions installed and a single Postgres. They all share essentially the same architecture.
I was thinking the same :). Docker definitely simplifies my workflow.
Previous discussions:
* https://news.ycombinator.com/item?id=20985875 (2010 points | Sept 16, 2019 | 451 comments)
* https://news.ycombinator.com/item?id=29093264 (214 points | Nov 3, 2021 | 56 comments)
Nice to see another one man company avoiding k8s.
I've been planning zero-downtime upgrades for my Elixir app (https://bernard.app), which makes heavy use of live views and long-running background processes. Zero downtime is not something that's super easy to do with bare podman, and I didn't want to reimplement half of k8s in bash.
Long story short, after one week of research I changed my mind and decided that writing a custom, half-baked, crappy solution with Caddy, podman, a bash script, and effort is still a couple of orders of magnitude easier than buying into the k8s circus.
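For what it's worth, a "Caddy + podman + bash" approach can be sketched as a blue/green port swap roughly like this. All names, ports, the `/health` endpoint, and the Caddyfile templating are my assumptions, not the parent's actual script:

```shell
#!/usr/bin/env bash
# Blue/green swap sketch: start the new release on the idle port, wait
# until it is healthy, repoint Caddy, then retire the old container.
# Ports, names, and the health endpoint are illustrative assumptions.
set -euo pipefail

APP_IMAGE="${APP_IMAGE:-localhost/myapp:latest}"

live_port() {           # which port is currently serving traffic
  cat current.port 2>/dev/null || echo 8001
}

next_port() {           # the other one
  [ "$(live_port)" = "8001" ] && echo 8002 || echo 8001
}

wait_healthy() {        # poll the app before switching traffic to it
  local port="$1" tries=30
  while [ "$tries" -gt 0 ]; do
    curl -fsS "http://127.0.0.1:${port}/health" >/dev/null 2>&1 && return 0
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

deploy() {
  local old new
  old="$(live_port)"
  new="$(next_port)"
  podman run -d --name "app-${new}" -p "${new}:8000" "$APP_IMAGE"
  wait_healthy "$new"
  # caddy reload is graceful: in-flight requests on the old config finish
  sed "s/127\\.0\\.0\\.1:[0-9]*/127.0.0.1:${new}/" Caddyfile.tmpl > Caddyfile
  caddy reload --config Caddyfile
  podman rm -f "app-${old}" 2>/dev/null || true
  echo "$new" > current.port
}

# deploy   # run once per release
```

A real version would also want a rollback path when the health check times out, but the whole thing stays understandable end to end.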
In fact, I like the k8s philosophy; it does make sense to someone who has managed services for almost two decades. K8s is wide because the problem is complex. What makes no sense whatsoever is the bullshit that lives around and outside it: Helm, YAML, templating systems that feel as flexible as COBOL, Argo, Flux, certificate rotation, etcd, k3s, k0s, RKE2, and I could go on for half an hour.
Yeah, bash script, podman and systemd it is for this one man business. I just wish I could throw Ansible into the flaming sun.
> I just wish I could throw Ansible into the flaming sun.
What's wrong with Ansible? I've successfully operated a few services at scale (~20 VMs) with it and it's always been the most dependable part of my setup. I basically use it as a "better bash" for configuring machines.
I do ignore their recommended project structure that results in an explosion of YAML files for roles, vars, etc and just define a set of simple playbooks, one for each component of the infrastructure. Everything is hooked together with a few python scripts that generate a dynamic inventory and hook up other vars that are passed to the playbooks. Works great and does what you expect.
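As a hypothetical illustration of that "one self-contained playbook per component, no role explosion" style (hosts, paths, and service names are made up; the modules are standard Ansible builtins):

```yaml
# web.yml - everything the web tier needs, in one file
- hosts: web
  become: true
  tasks:
    - name: Install system packages
      ansible.builtin.apt:
        name: [nginx, python3-venv]
        state: present

    - name: Render the app's systemd unit
      ansible.builtin.template:
        src: myapp.service.j2
        dest: /etc/systemd/system/myapp.service
      notify: restart myapp

  handlers:
    - name: restart myapp
      ansible.builtin.systemd:
        name: myapp
        state: restarted
        daemon_reload: true
```

Run against a generated inventory, e.g. `ansible-playbook -i inventory.py web.yml`, which matches the "python scripts that generate a dynamic inventory" approach above.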
> What's wrong with Ansible?
The abomination built upon YAML and Jinja because they wanted a declarative system (programming is scary!), and later tried to make it Turing-complete.
Jinja filters for variables mean you need to learn a third language: not only do you need to know YAML and Python, you now need to learn all the idiosyncrasies of Jinja's syntax and filters. Add to that the terrible documentation, which explains everything by example but has no single reference page that lists, for example, all supported filters and functions.
I was expecting more boring technologies. I have worked at companies with 100+ employees that had a similar setup.
Great. You could save even more by using Ruby on Rails and forgetting JS (React etc.). For deploys you could use Cloud66 and forget about Ansible.
You missed some crucial piece of information:
> “Boring” here means I just use the tech stack that I’m familiar with so I can quickly launch the project & focus more on the business end.
For you, RoR and Cloud66 is probably boring. For him or me it would probably take some time to learn new things...
> The web frontend is primarily built with React + Redux + Webpack + ES.
No over-engineering here at all
Webpack: instead of writing code to do what you want, you install and configure a series of plugins, and then maybe chain more plugin-plugins to fix shortcomings of the original plugins.
Also, enjoy updating to a new version every few years that is completely non-backwards compatible yet never fully finished because every plugin is a separately versioned project. The never-ending update treadmill keeps you sharp!
I hope some day we can switch from Webpack to Rspack...
Wondering if anyone has an idea on their (listennotes.com) business model?
- Is it primarily ads-driven vs API usage?
- How did it evolve over time? (I'm guessing the enterprise API was not there at the beginning?)
This post goes over how it came to be: https://www.listennotes.com/blog/how-i-accidentally-built-a-...
Worth noting: The link you shared also includes updates on the tech/business stack.
Thinking is the real work that engineers do.
This has been posted many times before. It seems the stack evolves, so a lot could have happened since 2019, when the last update was made.
Meta: This seems to need a (2019) tag in the title.
No idea if it ages well (I'm not a web dev) but my guess would be "no".
What would be an example of the inverse?
Of all cutting edge technologies?
I think the opposite is actually a stratified repo with snapshots of the "new hotness" back through the history of the project. Each never _quite_ replaced by the next.