When I first heard the maxim that an intelligent person should be able to hold two opposing thoughts at the same time, I was naive to think it meant weighing them for pros and cons. Over time I realized that it means balancing contradictory actions, and the main purpose of experience is knowing when to apply each.
Concretely related to the topic, I've often found myself inlining short pieces of one-time code that made functions more explicit, while at other times I'll spend days just breaking up thousand line functions into simpler blocks just to be able to follow what's going on. In both cases I was creating inconsistencies that younger developers nitpick -- I know I did.
My goal in most cases now is to optimize code for the limits of the human mind (my own in low-effort mode) and like to be able to treat rules as guidelines. The trouble is how can you scale this to millions of developers, and what are those limits of the human mind when more and more AI-generated code will be used?
I had exactly this discussion today, in an architecture meeting about an infrastructure extension. As our newest team member noted, we planned to follow the reference architecture of a system in some places, and chose not to follow the reference architecture in other places.
And this led to a really good discussion pulling the reference architecture of this system apart and understanding what it optimizes for (resilience and fault tolerance), what it sacrifices (cost, number of systems to maintain) and what we need. And yes, following the reference architecture in one place and breaking it in another place makes sense.
And I think that understanding the different options, as well as the optimization goals setting them apart, allows you to make a more informed decision and allows you to make a stronger argument why this is a good decision. In fact, understanding the optimization criteria someone cares about allows you to avoid losing them in topics they neither understand nor care about.
For example, our CEO will not understand the technical details why the reference architecture is resilient, or why other choices are less resilient. And he would be annoyed about his time being wasted if you tried. But he is currently very aware of customer impacts due to outages. And like this, we can offer a very good argument to invest money in one place for resilience, and why we can save money in other places without risking a customer impact.
We sometimes follow rules, and in other situations, we might not.
Yes, and it is the engineering experience/skill to know when to follow the "rules" of the reference architecture, and when you're better off breaking them, that makes someone a senior engineer/manager/architect, whatever your company calls it.
Your newest team member sounds like someone worth holding onto.
> My goal in most cases now is to optimize code for the limits of the human mind (my own in low-effort mode) and like to be able to treat rules as guidelines. The trouble is how can you scale this to millions of developers, and what are those limits of the human mind when more and more AI-generated code will be used?
I think the truth is that we just CAN'T scale that way with the current programming languages/models/paradigms. I can't PROVE that hypothesis, but it's not hard to find examples of big software projects with lots of protocols, conventions, failsafes, QA teams, etc, etc that are either still hugely difficult to contribute to (Linux kernel, web browsers, etc) or still have plenty of bugs (macOS is produced by the richest company on Earth and a few years ago the CALCULATOR app had a bug that made it give the wrong answers...).
I feel like our programming tools are pretty good for programming in the small, but I suspect we're still waiting for a breakthrough for being able to actually make complex software reliably. (And, no, I don't just mean yet another "framework" or another language that's just C with a fancier type system or novel memory management)
Just my navel gazing for the morning.
I think the only way this gets better is with software development tools that make it impossible to create invalid states.
In the physical world, when we build something complex like a car engine, a microprocessor, or a bookcase, the laws of physics guide us and help prevent invalid states. Not all of them -- an upside-down bookcase still works -- but a lot of them.
Of course, part of the problem is that when we build the software equivalent of an upside-down bookcase, we 'patch' it by adding trim and shims to make it look better and more structurally sound, instead of tossing it and making another one the right way.
But mostly, we write software in a way that allows for a ton of incorrect states. As a trivial example, expressing a person's age as an 'int', allowing for negative numbers. As a more complicated example, allowing for setting a coupon's redemption date when it has not yet been clipped.
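A minimal sketch of what that could look like in C++ (the Age and Coupon types here are made up for illustration, and this version still relies on runtime checks rather than making the bad states truly unrepresentable at the type level):

    #include <chrono>
    #include <optional>
    #include <stdexcept>

    // Hypothetical example: a validated Age instead of a bare int.
    class Age {
    public:
        explicit Age(int years) : years_(years) {
            if (years < 0 || years > 150)
                throw std::invalid_argument("invalid age");
        }
        int years() const { return years_; }
    private:
        int years_;
    };

    // Hypothetical coupon: a redemption date simply cannot be set
    // unless the coupon has already been clipped.
    class Coupon {
    public:
        enum class State { Issued, Clipped, Redeemed };

        void clip() {
            if (state_ != State::Issued)
                throw std::logic_error("already clipped or redeemed");
            state_ = State::Clipped;
        }
        void redeem(std::chrono::system_clock::time_point when) {
            if (state_ != State::Clipped)
                throw std::logic_error("must clip before redeeming");
            redeemed_at_ = when;
            state_ = State::Redeemed;
        }
    private:
        State state_ = State::Issued;
        std::optional<std::chrono::system_clock::time_point> redeemed_at_;
    };

A stricter version would give each coupon state its own type, so that redeem() only exists on a "clipped coupon" type at all; but even the runtime-checked version removes a whole class of invalid states from the rest of the program.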
John Backus's Turing Award lecture meditated on this idea, and concluded that the best way to do this at scale is to simply minimize the creation of states in the first place, and be careful and thoughtful about where and how we create the states that can't be avoided.
I would argue that that's actually a better guide to how we manage complexity in the physical world. Mechanical engineers generally like to minimize the number of moving parts in a system. When they can't avoid moving parts, they tend to fixate on them, and put a lot of effort into creating linkages and failsafes to try to prevent them from interacting in catastrophic ways.
The software engineering way would be to create extra moving parts just because complicated things make us feel smart, and deal with potential adverse interactions among them by posting signs that say "Careful, now!" without clearly explaining what the reader is supposed to be careful of. 50 years later, people who try to stick to the (very sound!) principles that Backus proposed are still regularly dismissed as being hipsters and pedants.
I'd say that the extra moving parts are there in most cases not because someone wanted to "feel smart" (not that it doesn't happen), but to make the pre-existing moving parts do something that they weren't originally supposed to do, because nobody understands how those pre-existing parts work well enough to re-engineer them properly on the schedule that they are given. We are an industry that builds bridges out of matchsticks, duct tape, and glue, and many of our processes are basically about how to make the result of that "good enough".
To determine what states should be possible is the act of writing software.
I don't think we will ever get the breakthrough you are looking for. Things like design patterns and abstractions are our attempt at this. Eventually you need to trust that whoever wrote the other code you have to deal with is sane. This assumption is false (and it might be you who is insane, thinking they could/would make it work the way you think it does).
We will never get rid of the need for QA. Automated tests are great, I believe in them (note that I didn't say unit tests or integration tests). Formal proofs appear great (I have never figured out how to prove my code), but as Knuth said, "Beware of bugs in the above code; I have only proved it correct, not tried it". There are many ways code can meet the spec and yet be wrong, because in the real world you rarely understand the problem well enough to write a correct spec in the first place. QA should understand the problem well enough to say "this isn't what I expected to happen."
I suppose that depends on the language and the elegance of your programming paradigm. This is where primitive simplicity becomes important, because when your foundation is composed of very few things that are not dependent upon each other you can scale almost indefinitely in every direction.
Imagine you are limited to only a few ingredients in programming: statements, expressions, functions, objects, arrays, and operators that are not overloaded. That list does not contain classes, inheritance, declarative helpers, or a bunch of other things. With a list of ingredients so small, no internal structure or paradigm is imposed on you, so you are free to make any design decisions that you want. Those creative decisions about the organization of things are how you dictate the scale of it all.
Most people, though, cannot operate like that. They claim to want the freedom of infinite scale, but they just need a little help. The more help supplied by the language, framework, whatever, the less freedom you have to make your own decisions. Eventually there is so much help that all you do as a programmer is contend with that helpful goodness, without any chance to scale things in any direction.
> protocols, conventions, failsafes, QA teams, etc, etc that are either still hugely difficult to contribute to (Linux kernel, web browsers, etc)
To be fair here, I don't think it's reasonable to expect that once you have "software development skills" it automatically gives you the ability to fix any code out there. The Linux Kernel and web browsers are not hard to contribute to because of conventions, they're hard because most of that code requires a lot of outside knowledge of things like hardware or HTML spec, etc.
The actual submitting part isn't the easiest, but it's well documented if you go looking, I'm pretty sure most people could handle it if they really had a fix they wanted to submit.
There are multiple reasons that contributing to various projects may be difficult. But, I was replying to a specific comment about writing code in a way that is easy to understand, and the comment author's acknowledgement that this idea/practice is hard to scale to a large number of developers (presumably because everyone's skills are different and because we each have different ideas about what is "clear", etc).
So, my comment was specifically about code. Yes, developing a kernel driver requires knowledge of the hardware and its quirks. But, if we're just talking about the code, why shouldn't a competent C developer be able to read the code for an existing hardware driver and come away understanding the hardware?
And what about the parts that are NOT related to fiddly hardware? For example, look at all of the recent drama with the Linux filesystem maintainer(s) and interfacing with Rust code. Forget the actual human drama aspect, but just think about the technical code aspect: The Rust devs can't even figure out what the C code's semantics are, and the lead filesystem guy made some embarrassing outbursts saying that he wasn't going to help them by explaining what the actual interface contracts are. It's probably because he doesn't even know what his own section of the kernel does in the kind of detail that they're asking for... That last part is my own speculation, but these Rust guys are also competent at working with C code and they can't figure out what assumptions are baked into the C APIs.
Web browser code has less to do with nitty gritty hardware. Yet, even a very competent C++ dev is going to have a ton of trouble figuring out the Chromium code base. It's just too hard to keep trying to use our current tools for these giant, complex, software projects. No amount of convention or linting or writing your classes and functions to be "easy to understand" is going to really matter in the big picture. Naming variables is hard and important to do well, but at the scale of these projects, individual variable names simply don't matter. It's hard to even figure out what code is being executed in a given context/operation.
> Yet, even a very competent C++ dev is going to have a ton of trouble figuring out the Chromium code base.
I don't think this is true, or at least it wasn't circa 2018 when I was writing C++ professionally and semi-competently. I sometimes had to read, understand and change parts of the Chromium code base since I was working on a component which integrated CEF. Over time I began to think of Chromium as a good reference for how to maintain a well-organized C++ code base. It's remarkably plain and understandable, greppable even. Eventually I was able to contribute a patch or two back to CEF.
The hardest thing by far with respect to making those contributions wasn't understanding the C++, it was understanding how to work the build system for development tasks.
Also agree that the example code base is not the best example to use.
The Chromium code base is a joy to read and I would routinely spend hours just reading it to understand deeper topics relating to the JS runtime.
Compared to my company's much smaller code base, which would take hours just to understand the simplest things because it was written so terribly.
That's true, and fair point for the example not being the best one. It was several years ago that I was poking at the Chromium code base to investigate something. I don't honestly remember much about the code itself, but I do remember struggling with the build system like you said. And that's probably why I just remember the whole endeavor as being difficult. Though, the build system being so complicated is not totally irrelevant to my point... Understanding how to actually build and use the code has some overlap with the idea of understanding the code or project as a whole.
I guess I just don't really get your point then; it's not like the Linux Kernel or Chromium or Firefox are giant buggy messes that don't work at all. They certainly have bugs, but by and large they work very well with minimal issues for most people. I also think their codebases are pretty approachable; IMO a competent C or C++ developer can definitely read the code from either one with a little effort - it's not the easiest thing, but it's definitely not impossible; most people just don't ever try.
My point was that making meaningful contributions such as bug fixes requires understanding how the code is _supposed_ to function vs. how it actually functions; that's the hard part. In the majority of cases that's simply not something the code can tell you: there's no replacement for comparing the code to a datasheet or reading the HTML spec to understand how the rendering engine is supposed to work, and those things take time to learn. For the simpler parts, people do actively contribute without tons of previous experience (or because they already have experience with a relevant library, etc.).
> My point was that making meaningful contributions such as bug fixes requires understanding how the code is _supposed_ to function vs. how it actually functions; that's the hard part. In the majority of cases that's simply not something the code can tell you [...]
That's kind of my point, though. I'm trying to zoom out and "think outside the box" for a minute. It's hard to compose smaller pieces into larger systems if the smaller pieces have behavior that's not very well defined. And our programming languages and tools don't always make it easy for the author of a piece of code to always understand that they introduced some unintended behavior.
To your first point: I'm not shitting on Chromium or Firefox or any other software projects, but they're honestly ALL "buggy messes" in a sense. I'm a middling software dev and the software I write for my day job is definitely more buggy, overall, than these projects. So, I'm not saying that other developers are stupid (quite the opposite!). But, the fact that there are plenty of bugs at any given point in any of these projects is saying something important, IMO. If I use our current programming tools to write a Base64 encode/decode library, I can do a pretty good job and there's a good chance that it'll have zero bugs in a fairly short amount of time. But, using the same tools, there's absolutely no hope that I (we, you, whoever) could write a web browser that doesn't have any bugs. That's actually a problem! We've come to accept it because that's all we've got today, but my point is that this isn't actually an ideal place to settle.
I don't know what the answer is, but I think a lot of people don't even seem to realize there's a problem. My claim is that there is a problem and that our current paradigms and tools simply don't scale well. I'm not creative enough to be the one who has the eureka moment that will bring us to the next stage of our evolution, but I suspect that it's what we'll need to actually be able to achieve complex software that actually works as we intend it to.
> I feel like our programming tools are pretty good for programming in the small, but I suspect we're still waiting for a breakthrough for being able to actually make complex software reliably. (And, no, I don't just mean yet another "framework" or another language that's just C with a fancier type system or novel memory management)
Readability is optimization for humans: for yourself, for other people's posterity, and for code comprehension in the reader's mind. We need a new way to visualize/comprehend code that doesn't depend on heavy reading and on the reader's personal capability for syntax parsing/comprehension.
This is something we will likely never be able to get right with our current man machine interfaces; keyboard, mouse/touch, video and audio.
Just a thought. As always I reserve the right to be wrong.
Reading is more than enough. What’s often lacking is the why. I can understand the code and what it’s doing, but I may not understand the problem (and sub-problems) it’s solving. When you can find explanations for that (links to PR discussions, archives of mail threads, and forum posts), it’s great. But some don’t bother, or it’s buried somewhere in chat logs.
The Calculator app on the latest macOS (Sequoia) has a bug today - if you write FF_16 AND FF_16 in the programmer mode and press =, it'll display the correct result - FF_16, but the history view displays 0_16 AND FF_16 for some reason.
> macOS is produced by the richest company on Earth and a few years ago the CALCULATOR app had a bug that made it give the wrong answers...
This is stated as if surprising, presumably because we think of a calculator app as a simple thing, but it probably shouldn't be that surprising--surely the calculator app isn't used that often, and so doesn't get much in-the-field testing. Maybe you've occasionally used the calculator in Spotlight, but have you ever opened the app? I don't think I have in 20 years.
I think this is backwards. A calculator app should be a simple thing. There's nothing undefined or novel about a calculator app. You can buy a much more capable physical calculator from Texas Instruments for less than $100 and I'm pretty sure the CPU in one of those is just an ant with some pen and paper.
You and I only think it's complex because we've become accustomed to everything being complex when it comes to writing software. That's my point. The mathematical operations are not hard (even the "fancy" ones like the trig functions). Formatting a number to be displayed is also not hard (again, those $100 calculators do it just fine). So, why is it so hard to write the world's 100,000th calculator app that the world's highest paid developers can't get it 100% perfect? There's something super wrong with our situation that it's even possible to have a race condition between the graphical effects and the actual math code that causes the calculator to display the wrong results.
If we weren't forced to build a skyscraper with Lego bricks, we might stand a better chance.
> That's my point. The mathematical operations are not hard (even the "fancy" ones like the trig functions). Formatting a number to be displayed is also not hard (again, those $100 calculators do it just fine).
Right, and that's my point: if all you want is a rock-solid computational platform, then you can use, for example, `bc`. (That's what I do.) I assume that Apple assumes that their users want something fancier than that, and it's there, with the fanciness of a shiny user interface on a less-exercised code path, that the bugs will inevitably come.
'bc' was first released literally half a century ago. If that is still the state of the art, I think it is absolutely fair to sound the alarm that something is VERY wrong with our modern software development practices. We shouldn't have to choose between "modern GUI" and "works".
(For what it's worth, Qalculate destroys both bc and the Mac calculator app in both command line and GUI categories, so making working software isn't entirely a lost art.)
Constantly, to keep the results of a calculation on screen. It's fallacious to assume that your own usage patterns are common. Hell, with as much evidence as you (none), I would venture that more people use the Calculator app than know that you can type calculations in Spotlight at all.
> It's fallacious to assume that your own usage patterns are common. Hell, with as much evidence as you (none) ….
I don't assume that my usage pattern is common. (My usage pattern is to drop to `bc`.) I assume that Calculator usage isn't common, but, recognizing that that is an assumption and that the only way to get evidence is to ask, I asked:
> Maybe you've occasionally used the calculator in Spotlight, but have you ever opened the app?
And you answered, so now together we have double the evidence that I alone did before. :-)
We've been there, done that. CRUD apps on mainframes and minis had incredibly powerful and productive languages and frameworks (Quick, Quiz, QTP: you're remembered and missed.) Problem is, they were TUI (terminal UI), isolated, and extremely focused; i.e. limited. They functioned, but would be like straight-jackets to modern users.
(Speaking of... has anyone done a 80x24 TUI client for HN? That would be interesting to play with.)
> has anyone done a 80x24 TUI client for HN
lynx still exists
Works a treat :)
I often bang on about “software is a new form of literacy”. And this I feel is a classic example - software is a form of literacy that not only can be executed by a CPU but also, at the same time, is a way to transmit concepts from one human's head to another (just like writing).
And so asking “will AI generated code help” is like asking “will AI generated blog spam help”?
No - companies with GitHub copilot are basically asking how do I self-spam my codebase
It’s great to get from zero to something in some new JS framework, but for your core competency it’s like outsourcing your thinking - always comes a cropper
(Book still being written)
> is a way to transmit concepts from one human's head to another (just like writing)
That's almost its primary purpose in my opinion... the CPU does not care about Ruby vs Python vs Rust, it's just executing some binary code instructions. The code is so that other people can change and extend what the system is doing over time and share that with others.
I get your point, but often the binary code instructions between those are vastly different.
The fact that we work with the high level languages rather than the binary code, despite all their inefficiencies, speaks to the human aspect being pretty important in the equation.
This entire conversation is about tradeoffs, but I would note that some of my favorite engineers that I've had the pleasure of knowing are: 1) very fast and 2) know exactly what the binary code of the thing they are trying to do looks like
There's a (3) where they'll quickly confirm their hypothesis using godbolt (or similar) if in doubt or they want to actually think in binary.
Fortunately for the programming community, many of us are able to create useful or interesting things without that kind of depth
I think a lot of the traditional teachings of "rhetoric" can apply to coding very naturally—there are often practically unlimited ways to communicate the same semantics precisely, but how you lay the code out and frame it can make the human struggle of reading it straightforward to overcome (or near-impossible, if you look at obfuscation).
Computational thinking is more important than software per se.
Computational thinking is mathematical thinking.
What makes an apprentice successful is learning the rules of thumb and following them.
What makes a journeyman successful is sticking to the rules of thumb, unless directed by a master.
What makes a master successful is knowing why the rules of thumb exist, what their limits are, when to not follow them, and being able to make up new rules.
There’s also the effect that a certain code structure that’s clearer for a senior dev might be less clear for a junior dev and vice versa.
Or rather, senior devs have learned to care more for having clear code rather than (over-)applying principles like DRY, separation of concerns etc., while juniors haven't (yet)...
I know it's overused, but I do find myself saying YAGNI to my junior devs more and more often, as I find they go off on a quest for the perfect abstraction and spend days yak shaving as a result.
Yes! I work with many folks objectively way younger and smarter than me. The two bad habits I try to break them of are abstractions and what ifs.
They spend so much time chasing perfection that it negatively affects their output. Multiple times a day I find myself saying 'is that a realistic problem for our use case?'
I don't blame them, it's admirable. But I feel like we need to teach YAGNI. Anymore I feel like a saboteur, polluting our codebase with suboptimal solutions.
It's weird because my own career was different. I was a code spammer who learned to wrangle it into something more thoughtful. But I'm dealing with overly thoughtful folks I'm trying to get to spam more code out, so to speak.
I’ve had the opposite experience before. As a young developer, there were a number of times where I advocated for doing something “the right way” instead of “the good enough way”, was overruled by seniors, and then later I had to fix a bug by doing it “the right way” like I’d wanted to in the first place.
Doing it the right way from the start would have saved so much time.
This thread is a great illustration of the reality that there are no hard rules, judgement matters, and we don't always get things right.
I'm pretty long-in-the-tooth and feel like I've gone through 3 stages in my career:
1. Junior dev where everything was new, and did "the simplest thing that could possibly work" because I wasn't capable of anything else (I was barely capable of the simple thing).
2. Mid-experience, where I'd learned the basics and thought I knew everything. This is probably where I wrote my worst code: over-abstracted, using every cool language/library feature I knew, justified on the basis of "yeah, but it's reusable and will solve lots of stuff in future even though I don't know what it is yet".
3. Older and hopefully a bit wiser. A visceral rejection of speculative reuse as a justification for solving anything beyond the current problem. Much more focus on really understanding the underlying problem that actually needs solved: less interest in the latest and greatest technology to do that with, and a much larger appreciation of "boring technology" (aka stuff that's proven and reliable).
The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time. There are judgements all the way through that: sometimes deciding to invest in more foundational code, but by default sticking to YAGNI. Most of all is seeing my value not as wielding techno armageddon, but solving problems for users and customers.
I still have a deep fascination with exploring and understanding new tech developments and techniques. I just have a much higher bar to adopting them for production use.
We all go through that cycle. I think the key is to get yourself through that "complex = good" phase as quickly as possible so you do the least damage and don't end up in charge of projects while you're in it. Get your "Second System" (as Brooks[1] put it) out of the way as quick as you can, and move on to the more focused, wise phase.
Don't let yourself fester in phase 2 and become (as Joel put it) an Architecture Astronaut[2].
1: https://en.wikipedia.org/wiki/Second-system_effect
2: https://www.joelonsoftware.com/2001/04/21/dont-let-architect...
Heh, I've read [2] before but another reading just now had this passage stand out:
> Another common thing Architecture Astronauts like to do is invent some new architecture and claim it solves something. Java, XML, Soap, XmlRpc, Hailstorm, .NET, Jini, oh lord I can’t keep up. And that’s just in the last 12 months!
> I’m not saying there’s anything wrong with these architectures… by no means. They are quite good architectures. What bugs me is the stupendous amount of millennial hype that surrounds them. Remember the Microsoft Dot Net white paper?
Nearly word-for-word the same thing could be said about JS frameworks less than 10 years ago.
Both React and Vue are older than 10 years old at this point. Both are older than jQuery was when they were released, and both have a better backward compatibility story. The only two real competitors not that far behind. It's about time for this crappy frontend meme to die.
Even SOAP didn't really live that long before it started getting abandoned en masse for REST.
As someone who was there in the "last 12 months" Joel mentions, what happened in enterprise is like a different planet altogether. Some of this technology had a completely different level of complexity that to this day I am not able to grasp, and the hype was totally unwarranted, unlike actual useful tech like React and Vue (or, out of that list, Java and .NET).
> Some of this technology has a completely different level of complexity that to this day I am not able to grasp
Enterprise JavaBeans mentioned?
That's another great example!
> The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time.
I think this takes a kind of humility you can't teach. At least it did for me. To learn this lesson I had to experience in reality what it's actually like to work on software where I'd piled up a bunch of clever ideas and "general solutions". After doing this enough times I realized that there are very few general solutions to real problems, and likely I'm not smart enough to game them out ahead of time, so better to focus on things I can actually control.
> Most of all is seeing my value not as wielding techno armageddon, but solving problems for users and customers
Also later in my career, I now know: change begets change.
That big piece of new code that “fixes everything” will have bugs that will only be discovered by users, and stability is achieved over time through small, targeted fixes.
> The focus on really understanding the problem tends to create more stable abstractions which do get reused. But that's emergent, not speculative ahead-of-time.
Thank you for putting so eloquently my own fumbling thoughts. Perfect explanation.
Here is an unwanted senior tip, in many consulting projects without the “the good enough way” first, there isn't anything left for doing “the right way” later on.
Why inflict that thinking on environments that aren’t consulting projects if you don’t have to? That kind of thinking is a big contributor to the lack of trust in consultants to do good work that is in the client’s best interests rather than the consultants’. We don’t need employers to start seeing actual employees in the same way too.
The important bit is figuring out if those times where "the right way" would have helped outweigh the time saved by defaulting to "good enough".
There are always exceptions, but there's typically an order-of-magnitude difference between globally doing "the right thing" vs "good enough" and going back to fix the few cases where "good enough" wasn't actually good enough.
Only long experience can help you figure this out. All projects should have at least 20% of their developers who have been there for more than 10 years, so they have the background context to figure out what you will really need. You then need at least 30% of your developers to be intended as long-term employees, even though they have less than 10 years. In turn that means never more than 50% of your project should be short-term contractors. There is nothing wrong with short-term contractors - they can often write code faster than the long-term employees (who end up spending a lot more time in meetings) - but their lack of context means that they can't make those decisions correctly and so need to ask (in turn slowing down the long-term employees even more).
If you are on a true green field project - your organization has never done this before good luck. Do the best you can but beware that you will regret a lot. Even if you have those long term employees you will do things you regret - just not as much.
I don’t like working in teams where some people have been there for much longer than everyone else.
It’s very difficult to get opportunities for growth. Most of the challenging work is given to the seniors, because it needs to be done as fast as possible, and it’s faster in the short term for them to do it than it would be for you to do it with their help.
It’s very difficult for anyone else to build credibility with stakeholders. The stakeholders always want a second opinion from the veterans, and don’t trust you to have already sought that opinion before proceeding, if you thought it was necessary to do so (no matter how many times you demonstrate that you do this). Even if the senior agrees with you, the stakeholder’s perception isn’t that you are competent, it’s that you were able to come to the right conclusion only because the senior has helped you.
> then later I had to fix a bug
How much later? Is it possible that by delivering sooner your team was able to gain insight and/or provide value sooner? That matters!
In many cases, we didn’t deliver sooner than we could have, because my solution had roughly equivalent implementation costs to the solution that was chosen instead. In some cases the bug was discovered before we’d even delivered the feature to the customers at all.
Ah, but that’s assuming the ‘right way’ path went perfectly and didn’t over-engineer anything. In reality, the ‘right way’ path being advocated for, statistically will also waste a lot of time, and over-engineering waste can and does grow exponentially, while under-engineering frequently only wastes linear and/or small amounts of time, until the problem is better understood.
Having witnessed first-hand over-engineering waste millions of dollars and years of time, on more than one occasion, by people advocating for the ‘right way’, I think tallying the time wasted upgrading an under-engineered solution is highly error prone, and that we need to assume that some percentage of time we’ll need to redo things the right way, and that it’s not actually a waste of time, but a cost that needs to be paid in search of whether the “right way” solution is actually called for, since it’s often not. The waste might be the lesser waste compared to something much worse, and it’s not generally possible to do the exact right amount of engineering from the start.
Someone here on HN clued me into the counter-acronym to DRY, which is WET: write everything twice (or thrice) so the 2nd or 3rd time will be “right”. The first time isn’t waste, it’s necessary learning. This was also famously advocated by Fred Brooks: “Plan to Throw One Away” https://course.ccs.neu.edu/cs5500f14/Notes/Prototyping1/plan...
> In reality, the ‘right way’ path being advocated for, statistically will also waste a lot of time, and over-engineering waste can and does grow exponentially, while under-engineering frequently only wastes linear and/or small amounts of time, until the problem is better understood.
The “right way” examples I’m thinking of weren’t over-engineering some abstraction that probably wasn’t needed. Picture replacing a long procedural implementation, filled with many separate deprecated methods, with a newer method that already existed and already had test coverage proving it met all of the requirements, rather than cramming another few lines into the middle of the old implementation that had no tests. After all, +5 -2 without any test coverage is obviously better than +1 -200 with full test coverage, because 3 is much smaller than 199.
You make a strong case, and you were probably right. It’s always hard to know in a discussion where we don’t have the time and space to share all the details. There’s a pretty big difference between implementing a right way from scratch and using an existing right way that already has test coverage, so that’s an important detail, thank you for the context.
Were there reasons the senior devs objected that you haven’t shared? I have to assume the senior devs had a specific reason or two in each case that wasn’t obviously wrong or idiotic, because it’s quite common for juniors to feel strongly about something in the code without always being able to see the larger team context, or sometimes to discount or disbelieve the objections. I was there too and have similar stories to you, and nowadays sometimes I manage junior devs who think I’m causing them to waste time.
I’m just saying in general it’s healthy to assume and expect imperfect use of time no matter what, and to assume, even when you feel strongly, that the level of abstraction you’re using probably isn’t right. By the Brooks adage, the way your story went down is how some people plan for it to work up front, and if you’d expected to do it twice, then it wouldn’t seem as wasteful, right?
Everything in moderation, even moderation.
This isn't meant to be taken too literally or objectively, but I view YAGNI as almost a meta principle with respect to the other popular ones. It's like an admission that you won't always get them right, so in the words of Bukowski, "don't try".
Agreed. I’ve been trying to dial in a rule of thumb:
If you aren’t using the abstraction on 3 cases when you build it, it’s too early.
Even two turns into a higher bar than I expected.
Your documentation will tell when you need an abstraction. Where there is something relevant to document, there is a relevant abstraction. If its not worth documenting, it is not worth abstracting. Of course, the hard part is determining what is actually relevant to document.
The good news is that programmers generally hate writing documentation and will avoid it to the greatest extent possible, so if one is able to overcome that friction to start writing documentation, it is probably worthwhile.
Thus we can sum the rule of thumb up to: If you have already started writing documentation for something, you are ready for an abstraction in your code.
It's more case by case for me. A magic number should get a named constant on its first use. That's an abstraction.
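For instance (made-up names, just to illustrate):

    // The bare literal says nothing; the name carries the meaning.
    constexpr int    kMaxLoginAttempts = 3;        // instead of: if (attempts > 3) ...
    constexpr double kEarthGravityMps2 = 9.80665;  // instead of: v += 9.80665 * dt;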
C++ programmers decided against NULL, and for well over a decade, recommended using a plain 0. It was only recently that they came up with a new name: nullptr. Sigh.
That had to do with the way NULL was defined, and the implications of that. The implication carried over from C was that NULL would always be a null pointer as opposed to 0, but in practice the standard defined it simply as 0 - because C-style (void*)0 wasn't compatible with all pointer types anymore - so stuff like:
    void foo(void*);
    void foo(int);
    foo(NULL);
would resolve to foo(int), which is very much contrary to expectations for a null pointer; and worse yet, the wrong call happens silently. With foo(0) that behavior is clearer, so that was the justification to prefer it.
On the other hand, if you accept the fact that NULL is really just an alias for 0 and not specifically a null pointer, then it has no semantic meaning as a named constant (you're literally just spelling the numeric value with words instead of digits!), and then it's about as useful as #define ONE 1
And at the same time, that was the only definition of NULL that was backwards compatible with C, so they couldn't just redefine it. It had to be a new thing like nullptr.
It is very unfortunate that nullptr didn't ship in C++98, but then again that was hardly the biggest wart in the language at the time...
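To make the overload behaviour above concrete, a minimal sketch (assuming NULL is defined as plain 0, the common case being discussed; other definitions turn the NULL call into a warning or an ambiguity error):

    #include <iostream>

    void foo(void*) { std::cout << "foo(void*)\n"; }
    void foo(int)   { std::cout << "foo(int)\n"; }

    int main() {
        foo(0);        // picks foo(int): 0 is an int first, a null pointer second
        // foo(NULL);  // with NULL defined as 0 this also picks foo(int)
        foo(nullptr);  // C++11: nullptr has its own type (std::nullptr_t),
                       // so this unambiguously picks foo(void*)
    }

The nullptr call is the one that finally does what NULL always looked like it should do.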
When you thought you'd made "smart" solutions, and many years later you have to go in and fix bugs in them, is usually when you learn this.
There is a human side to this which I am going through right now. The first full framework I made is proving to be developer-unfriendly in the long run: I put more emphasis on performance than readability (performance was the KPI we were trying to improve at the time). Now I am working with people who are new to the codebase, and I observed they were hesitant to criticize it in front of me. I had to actively start saying "let's remove <framework name>, it's outdated and bad". Eventually I found it liberating; it also helped me detach my self-worth from my work, something I struggle with day to day.
My 'principle' for DRY is: twice is fine, thrice is worth an abstraction (if you think it has a small to moderate chance of happening again). I used to apply it no matter what, so I guess it's progress...
I really dislike how this principle ends up being used in practice.
A good abstraction that makes actual sense is perfectly good even when it's used only once.
On the other hand, the idea of deduplicating code by creating an indirection is often not worth it for long-term maintenance, and is precisely the kind of thing that will cause maintenance headaches and anti-patterns.
For example: don't mix file system or low-level database access with your business code, just create a proper abstraction. But deduplicating very small fragments of code at the same abstraction level can have detrimental effects in the long run.
I think the main problem with these abstractions is that they are merely indirections in most cases, limiting their usefulness to only a handful of use cases (sometimes to things that are never going to be needed).
To quote Dijkstra: "The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise."
I can't remember where I picked it up from, but nowadays I try to be mindful of when things are "accidentally" repeated and when they are "necessarily" repeated. Abstractions that encapsulate the latter tend to be a good idea regardless of how many times you've repeated a piece of code in practice.
Exactly, but distinguishing the two requires an excellent understanding of the problem space, and can’t at all be figured out in the solution space (i.e., by only looking at the code). But less experienced people only look at the code. In theory, a thousand repetitions would be fine if each one encodes an independent bit of information in the problem space.
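A made-up C++ sketch of the difference:

    #include <string>

    // Accidental repetition: these happen to look identical today, but they
    // answer unrelated questions in the problem space and may diverge tomorrow.
    // Folding them into one validateId() helper would couple unrelated rules.
    bool isValidOrderId(const std::string& id)    { return id.size() == 8; }
    bool isValidCustomerId(const std::string& id) { return id.size() == 8; }

    // Necessary repetition: every call site must apply the same rule by
    // definition. If the rate changes, all uses must change together, so one
    // abstraction is the right home for it.
    double withSalesTax(double net) { return net * 1.19; } // assumed 19% rate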
The overarching criterion really is how it affects locality of behaviour: repeating myself and adding an indirection are both bad, the trick is to pick the one that will affect locality of behaviour the least.
https://loup-vaillant.fr/articles/source-of-readability#avoi...
WET, write everything twice
"Better a little copying than a little dependency" - Russ Cox
Do you use a copy paste detector to find third copy?
twice is fine... except some senior devs apply it to the entire file (today I found the second entire file/class copied and pasted over to another place... the newer copy is not used either)
As someone who recently had to go over a large chunk of code written by myself some 10-15 years ago I strongly agree with this sentiment. Despite being a mature programmer already at that time, I found a lot of magic and gotchas that were supposed to be, and felt at the time, super clever, but now, without a context, or prior version to compare, they are simply overcomplicated.
I find that it’s typically the other way around as things like DRY, SOLID and most things “clean code” are hopeless anti-patterns peddled by people like Uncle Bob who haven’t actually worked in software development since Fortran was the most popular language. Not that a lot of these things are bad as a principle. They come with a lot of “okish” ideas, but if you follow them religiously you’re going to write really bad code.
I think the only principle in programming I think can be followed at all times is YAGNI (you aren’t going to need it). I think every programming course, book, whatever should start by telling you to never, ever, abstract things before you absolutely can’t avoid it. This includes DRY. It’s a billion times better to have similar code in multiple locations that are isolated in their purpose, so that down the line, two-hundred developers later you’re not sitting with code where you’ll need to “go to definition” fifteen times before you get to the code you actually need to find.
Of course the flip-side is that, sometimes, it’s ok to abstract or reuse code. But if you don’t have to, you should never ever do either. Which is exactly the opposite of what junior developers do, because juniors are taught all these “hopeless” OOP practices and they are taught to mindlessly follow them by the book. Then 10 years later (or like 50 years in the case of Uncle Bob) they realise that functional programming is just easier to maintain and more fun to work with because everything you need to know is happening right next to each other and not in some obscure service class deep in some ridiculous inheritance tree.
The problem with repeating code in multiple places is that when you find a bug in said code, it won't actually be fixed in all the places where it needs to be fixed. For larger projects especially, it is usually a worthwhile tradeoff versus having to peel off some extra abstraction layers when reading the code.
The problems usually start when people take this as an opportunity to go nuts on generalizing the abstraction right away - that is, instead of refactoring the common piece of code into a simple function, it becomes a generic class hierarchy to cover all conceivable future cases (but, somehow, rarely the actual future use case, should one arise in practice).
Most of this is just cargo cult thinking. OOP is a valid tool on the belt, and it is genuinely good at modelling certain things - but one needs to understand why it is useful there to know when to reach for it and when to leave it alone. That is rarely taught well (if at all), though, and even if it is, it can be hard to grok without hands-on experience.
We agree, but we’ve come to different conclusions. Probably based on our experiences. Which is why I wanted to convey that I think you should do these things in moderation. I almost never do classes, and much rarer inheritance, as an example. That doesn’t mean I wouldn’t make a “base class” containing things like “owned by, updated by, some time stamp” or whatever you would want added to every data object in some traditional system and then inherit that. I would, I might even make multiple “base classes” if it made sense.
What I won’t do, however, is abstract code until I have to. More than that, as soon as that shared code stops being shared, I’ll stop doing DRY. Not because DRY is necessarily bad, but because of the way people write software, which all too often leads to a dog which will tell you dogs can’t fly if you call fly() on it. Yes, I know that is ridiculous, but I’ve never seen a “clean” system that didn’t eventually end up like that. People like Uncle Bob will tell you that is because people misunderstood the principles, and they’d be correct. Maybe the principles are simply bad if so many people misunderstand them though?
good devs*, not all senior devs have learned that, sadly. As a junior dev I've worked under the rule of senior devs who were over-applying arbitrary principles, and that wasn't fun. Some absolute nerds have a hard time understanding where their narrow expertise is meant to fit, and they usually don't get better with age.
I bumped into that issue, and it caused a lot of friction between me and 3 young developers I had to manage.
Ideas on how to overcome that?
Teaching.
I had this problem with an overzealous junior developer and the solution was showing some different perspectives. For example John Ousterhout's A Philosophy of Software Design.
I tried this but they just come back with retorts like "OK boomer" which tends to make the situation even worse.
How do you respond to that?
The sibling comment says "fire them". That sounds glib, but it's the correct solution here.
From what you've described, you have a coworker who is not open to learning and considering alternative solutions. They are not able to defend their approach, and are instead dismissive (and using an ageist joke to do it). This is toxic to a collaborative work environment.
I give some leeway to assholes who can justify their reasoning. Assholes who just want their way because it's their way aren't worth it and won't make your product better.
Or, perhaps better, just let that hang for a moment - long enough to become uncomfortable - and then say "Try again."
As others have said, if they can't or won't get that that's unacceptable behavior, fire them. (jerf is more patient than I am...)
This is a discriminatory statement and it should be taken seriously.
Fire them.
To be honest, at the point where they are being insulting I also agree firing them is a very viable alternative.
However, to answer the question more generally, I've had some success first acknowledging that I agree the situation is suboptimal, and giving some of the reasons. These reasons vary; we were strapped for time, we simply didn't know better yet, we had this and that specific problem to deal with, sometimes it's just straight up "yeah I inherited that code and would never have done that", honestly.
I then indicate my willingness to spend some time fixing the issues, but make it clear that there isn't going to be a Big Bang rewriting session, but that we're going to do it incrementally, with the system working the whole time, and they need to conceive of it that way. (Unless it is the rare situation where a rewrite is actually needed.) This tends to limit the blast radius of any specific suggestion.
Also, as a senior engineer, I do not 100% prioritize "fixing every single problem in exactly the way I'd do it". I will selectively let certain types of bad code through so that the engineer can have experience of it. I may not let true architecture astronautics through, but as long as it is not entirely unreasonable I will let through a bit more architecture than perhaps I would have used myself. I think it's a common fallacy of code review to think that the purpose of code review is to get the code to be exactly as "I" would have written it, but that's not really it.
Many people, when they see this degree of flexibility, and that you are not riding to the defense of every coding decision made in the past, and are willing to take reasonable risks to upgrade things, will calm down and start working with you. (This is also one of the subtle reasons automated tests are super super important; it is far better for them to start their refactoring and have the automated tests explain the difficulties of the local landscape to them than a developer just blathering.)
There will be a set that do not. Ultimately, that's a time to admit the hire was a mistake and rectify it appropriately. I don't believe in the 10x developer, but not for the usual egalitarian reasons... for me the problem is I firmly, firmly believe in the existence of the net-negative developer, and when you have those the entire 10x question disappears. Net negative is not a permanent stamp, the developer has the opportunity to work their way out of it, and arguably, we all start there both as a new developer and whenever we start a new job/position, so let me soothe the egalitarian impulse by saying this is a description of someone at a point in time, not a permanent label to be applied to anyone. Nevertheless, someone who insists on massive changes, who deploys morale-sapping insults to get their way, whose ego is tied up in some specific stack that you're not using and basically insists either that we drop everything and rewrite now "or else", who one way or another refuses to leave "net negative" status... well, it's time to take them up on the "or else". I've exaggerated here to paint the picture clearly in prose, but, then again, of the hundreds of developers I've interacted with to some degree at some point, there's a couple that match every phrase I gave, so it's not like they don't exist at all either.
You mean they literally say "ok boomer"? If so they are not mature enough for the job. That phrase is equivalent to "fuck off" with some ageism slapped on top and is totally unacceptable for a workplace.
That's exactly what I try to do. I think it's an unpopular opinion though, because there are no strict rules that can be applied, unlike with pure ideologies. You have to go by feel and make continuous adjustments, and there's no way to know if you did the right thing or not, because not only do different human minds have different limits, but different challenges don't tax every human mind to the same proportional extent.
I get the impression that programmers don't like ambiguity in general, let alone in things they have to confront in real life.
> there are no strict rules that can be applied
The rules are there for a reason. The tricky part is making sure you’re applying them for that reason.
I don't know what your comment has to do with my comment.
My intro to programming was that I wanted to be a game developer in the 90s. Carmack and the others at Id were my literal heroes.
Back then, a lot of code optimization was magic to me. I still just barely understand the famous inverse square root optimization in the Quake III Arena source code. But I wanted to be able to do what those guys were doing. I wanted to learn assembly and to be able to drop down to assembly and to know where and when that would help and why.
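For reference, the trick looks roughly like this (adapted from the widely circulated Quake III Arena snippet; the original used a raw pointer cast and a 32-bit long, replaced here with memcpy to keep the bit reinterpretation well-defined on modern compilers):

    #include <cstdint>
    #include <cstring>

    float Q_rsqrt(float number) {
        float x2 = number * 0.5F;
        float y  = number;

        std::uint32_t i;
        std::memcpy(&i, &y, sizeof(i));   // reinterpret the float's bits as an integer
        i = 0x5f3759df - (i >> 1);        // the "magic" constant: a cheap first guess
        std::memcpy(&y, &i, sizeof(y));

        y = y * (1.5F - (x2 * y * y));    // one Newton-Raphson step refines the guess
        return y;                         // approximately 1 / sqrt(number)
    }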
And I wasn't alone. This is because these optimizations are not obvious. There is a "mystique" to them. Which makes it cool. So virtually ALL young, aspiring game programmers wanted to learn how to do this crazy stuff.
What did the old timers tell us?
Stop. Don't. Learn how to write clean, readable, maintainable code FIRST and then learn how to profile your application in order to discover the major bottlenecks and then you can optimize appropriately in order of greatest impact descending.
If writing the easiest code to maintain and understand also meant writing the most performant code, then the concept of code optimization wouldn't even exist. The two are mutually exclusive, except in specific cases where they're not, and then it's not even worth discussing because there is no conflict.
Carmack seems to acknowledge this in his email. He realizes that inlining functions needs to be done with careful judgment, and the rationale is both performance and bug mitigation. But that if inlining were adopted as a matter of course, a policy of "always inline first", the results would quickly be an unmaintainable, impossible to comprehend mess that would swing so far in the other direction that bugs become more prominent because you can't touch anything in isolation.
And that's the bane of software development: touch one thing and end up breaking a dozen other things that you didn't even think about because of interdependence.
So we've come up with design patterns and "best practices" that allow us to isolate our moving parts, but that has its own set of trade-offs which is what Carmack is discussing.
Being a 26 year veteran in the industry now (not making games btw), I think this is the type of topic that you need to be very experienced to be able to appreciate, let alone to be able to make the judgment calls to know when inlining is the better option and why.
That doesn't seem like holding two opposing thoughts. Why is balancing contradictory actions to optimize an outcome different to weighing pros and cons?
What I meant to say was that when people encounter contradictory statements like "always inline one-time functions" and "break down functions into easy-to-understand blocks", they try to only pick one single rule, even if they consider the pros and cons of each rule.
After a while they consider both rules as useful, and will move to a more granular case-by-case analysis. Some people get stuck at rule-based thinking though, and they'll even accuse you of being inconsistent if you try to do case-by-case analysis.
You are probably reaching for Hegel’s concept of dialectical reconciliation
Not sure, didn't Hegel say that there should be a synthesis step at some point? My view is that there should never be a synthesis when using these principles as tools, as both conflicting principles need to always maintain opposites.
So, more like Heraclitus's union of opposites maybe if you really want to label it?
the synthesis would be the outcome maybe? writing code that doesn't follow either rule strictly:
> Concretely related to the topic, I've often found myself inlining short pieces of one-time code that made functions more explicit, while at other times I'll spend days just breaking up thousand line functions into simpler blocks just to be able to follow what's going on. In both cases I was creating inconsistencies that younger developers nitpick -- I know I did.
On a positive note, most AI-gen code will follow a style that is very "average" of everything it's seen. It will have its own preferred way of laying out the code that happens to look like how most people using that language (and sharing the code online publicly), use it.
> other times I'll spend days just breaking up thousand line functions into simpler blocks just to be able to follow what's going on
Absolutely, I'll break up a long block of code into several functions, even if there is nowhere else they will be called, just to make things easier to understand (and potentially easier to test). If a function or procedure does not fit on one screen, I will almost always break it up.
Obviously "one screen" is an approximation, not all screens/windows are the same size, but in practice for me this is about 20-30 lines.
My go-to heuristic for how to break up code: whiteboard your solution (or draw it up in Lucidchart) as if explaining it to another dev. If your methods don't match the whiteboard, refactor.
To a certain sort of person, conversation is a game of arriving at these antithesis statements:
* Inlining code is the best form of breaking up code.
* Love is evil.
* Rightwing populism is a return to leftwing politics.
* etc.
The purpose is to induce aporia (puzzlement), and hence make it possible to evaluate apparent contradictions. However, a lot of people resent feeling uncertain, and so, people who speak this way are often disliked.

To make an advance in a field, you must simultaneously believe in what’s currently known as well as distrust that the paradigm is all true.
This gives you the right mindset to focus on advancing the field in a significant way.
Believing in the paradigm too much will lead to only incremental results, and not believing enough will not provide enough footholds for you to work on a problem productively.
> My goal in most cases now is to optimize code for the limits of the human mind (my own in low-effort mode)
I think you would appreciate the philosophy of the Grug Brained Developer: https://grugbrain.dev
> I was creating inconsistencies that younger developers nitpick
Obligatory: “A foolish consistency is the hobgoblin of little minds"
Continued because I'd never read the full passage: "... adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall. Speak what you think now in hard words, and to-morrow speak what to-morrow thinks in hard words again, though it contradict every thing you said to-day. — 'Ah, so you shall be sure to be misunderstood.' — Is it so bad, then, to be misunderstood? Pythagoras was misunderstood, and Socrates, and Jesus, and Luther, and Copernicus, and Galileo, and Newton, and every pure and wise spirit that ever took flesh. To be great is to be misunderstood.” ― Ralph Waldo Emerson, Self-Reliance: An Excerpt from Collected Essays, First Series
> limits of the human mind when more and more AI-generated code will be used
We already have a technology which scales infinitely with the human mind: abstraction and composition of those abstractions into other abstractions.
Until now, we’ve focused on getting AI to produce correct code. Now that this is beginning to be successful, I think a necessary next step for it to be useful is to ensure it produces well-abstracted and clean code (such that it scales infinitely)
That’s undoubtedly a Zelda Fitzgerald quote (her husband plagiarized her shamelessly).
As a consequence of the Rule of Three, you are allowed to have rules that have one exception without having to rethink the law. All X are Y except for Z.
I sometimes call this the Rule of Two. Because it deserves more eyeballs than just being a subtext of another rule.
Wait, isn't that just Doublethink from 1984? Holding two opposing thoughts is a sign that your mental model of the world is wrong and that it needs to be fixed. Where have you heard that maxim?
No you've got it completely backwards. Reality has multiple facets (different statements, all of which can be true) and a mental model that insists on a singular judgement is reductionist, missing the forest for the trees. Light is a wave and a particle. People are capable of good and bad. The modern world is both amazing and unsustainable. etc.
Holding multiple truths is a sign that you understand the problem. Insisting on a singular judgement is a sign that you're just parroting catchy phrases as a short cut to thinking; the real world is rarely so cut and dry.
It's not referring to cognitive dissonance.
[flagged]
So this maxim can both be used for good and for bad. Extra points for this maxim.
A metamaxim?
Stretch goal: hold three
Nah. That's what the Monk is for.
A person of culture, I see.
Electric Monks were made for a reason.
Surprisingly pertinent to the current discussion.
apposite to the opposite
Comment was deleted :(
A safe work contribution plan for the year: Hold 1+ (stretch 3) opposing thoughts at a time.
"hold one opposing thought" could be a zen koan
Indoctrination is the exact opposite.
Sure, but the maxim can be used to inject this 'exact opposite', in perfect accordance with the maxim!
Maybe "indoctrination" was a poor choice of word here. The problem with this maxim is that it welcomes moral relativism.
This can be bad on the assumption that whoever is exposed to the maxim is not a proponent of "virtue ethics" (I use this as a catch-all term for various religious ethics doctrines, the underlying idea is that moral truths are given to people by a divine authority rather than discovered by studying human behavior, needs and happiness). In this situation, the maxim is an invitation to embrace ideas that aren't contradictory to one's own, but that live "outside the system", to put them on equal footing.
To make this more concrete, let's take the subject of child brides. Some religions have no problem with marrying girls of any age to men of any age. Now, the maxim suggests that no matter what your moral framework looks like, you should accept that under some circumstances it's OK to have child marriages. But this isn't a contradiction. There's no ethical theory not based on divine revelation that would accept such a thing. And that's why, by and large, Western society came to treat child marriages as a crime.
Contradictions are only possible when two parties agree on the premises that led to contradicting conclusion, and, in principle, should be possible to be resolved by figuring out which party had a faulty process that derived a contradicting opinion. Resolving such contradictions is a productive way forward. But, the kind of "disagreement" between religious ethics and "derived" ethics is where the premises are different. So, there can be no way forward in an argument between the two, because the only way the two can agree is if one completely abandons their premises.
Essentially, you can think about it as if two teams wanted to compete in some sport. If both are playing soccer, then there's a meaning to winning / losing, keeping the score, being good or bad at the game. But, if one team plays soccer while another team is playing chess... it just doesn't make sense to pit them against each other.
> maxim suggests that no matter what your moral framework looks like, you should accept that under some circumstances it's OK to have child marriages
You seem to have either misread the maxim, or misunderstood it.
The maxim is not that an intelligent person -must- hold two contradictory thoughts in their head at once - rather, that they should be able to. Being "able to" do something, does not mean one does it in all cases.
To say that the maxim suggests that someone "should" accept that something that is bad, is sometimes good, is a plain misreading of the text. All it's saying is that people -can- do this, if they so choose.
In this context, it doesn't matter if they "must" or "should be able to". No, I didn't misunderstand the maxim. No, I didn't mean that it has to happen in all cases. You are reading something into what I wrote that I didn't.
The maxim is not used by religious people to its intended effect. Please read again, if you didn't see it the first time. The maxim is used as a challenge that can be rephrased as: "if you are as intelligent as you claim, then you should be able to accept both what you believe to be true and whatever nonsense I want you to believe to be true."
> The maxim is not used by religious people to its intended effect.
Your comment literally says "the maxim suggests".
If that wasn't what you were saying, then your comment is misphrased.
If that -was- what you were saying, then I reiterate that, no, the maxim does not suggest that. You (or whatever hypothetical person you're referring to) are the one suggesting it, not the maxim.
It doesn't matter how you rephrase it - "should be able to" is not the same as "must". "Able-bodied people should be able to jump off the top of a building." That's a perfectly valid and true statement - jumping off of things is within the physical capabilities of the able-bodied. But that statement, however true, does not suggest that one must jump off the top of a building to prove that one is able-bodied.
> No, I didn't mean that it has to happen in all cases.
If it doesn't have to happen in all cases, then an intelligent person can simply say "no, even though I am -able to- accept contradictory ideas, in this case I still reject child marriage in all contexts". Clearly you would agree that this is perfectly compatible with the maxim. So, in what way is the maxim being harmful here?
In reality, your comment has almost nothing to do with the maxim itself, and is mostly just about people using religion and rhetoric to manipulate others. Such people would use whatever tool they have available - with or without the existence of the maxim.
Comment was deleted :(
Doublethink!
As a tool, it's a wedge to break indoctrination and overcome bias. It leads to more pragmatic and less ideological thinking. The subject is compelled to contrast opposing views and consider the merits of each.
Any use by ideological groups twists the purpose of the phrase on its head. The quote encourages thinking and consideration. You'd have to turn off your brain for this to have the opposite effect.
> Any use by ideological groups twists the purpose of the phrase on its head. The quote encourages thinking and consideration. You'd have to turn off your brain for this to have the opposite effect.
Well, it would not be too surprising that it can be used to, for example, make people think that they can trust science and also believe in some almighty, unexplainable by science divine entity.
Thoughts like this miss the purpose and significance of the maxim being discussed. Science doesn't disprove an "almighty, unexplainable divine entity" any more than an "almighty, unexplainable divine entity" could also provide science as a means to understand the nature of things.
Careful you don't fall into the trap of indoctrination. :)
You can trust science, but science doesn't cover all of reality.
My imaginary friend does, buy my magic book.
The US has a statutory rapist and someone who believes in active weather manipulation seated in Congress. It's easy to get the masses to turn off their brains.
His overall solution highlighted in the intro is that he's moved on from inlining and now does pure functional programming. Inlining is only relevant for him during IO or state changes which he does as minimally as possible and segregates this from his core logic.
Pure functional programming is the bigger insight here, and most programmers will just never understand why there's a benefit there. In fact most programmers don't even completely understand what FP is. To most people FP is just a bunch of functional patterns like map, reduce, filter, etc. They never grasp the true nature of "purity" in functional programming.
You see this lack of insight in this thread. Most responders literally ignore the fact that Carmack called his email completely outdated and that he mostly does pure FP now.
Here's the link where he discusses functional programming style:
https://web.archive.org/web/20170116040923/http://gamasutra....
He does not say that his email is completely outdated - he just says that calling pure functions is exempt from the inlining rule.
He's not off writing pure FP now. His approach is still deeply pragmatic. In the link above he discusses degrees of function purity. "Pure FP" has a whole different connotation - where whole programs are written in that constrained style.
The original article literally starts with this:
> In the years since I wrote this, I have gotten much more bullish about pure functional programming, even in C/C++ where reasonable: (link)

> The real enemy addressed by inlining is unexpected dependency and mutation of state, which functional programming solves more directly and completely. However, if you are going to make a lot of state changes, having them all happen inline does have advantages; you should be made constantly aware of the full horror of what you are doing.
He explicitly says that functional programming solves the same issue as inlining but more directly and completely.
Maybe this is just a subtle semantic thing, but what I took from this is that he is factoring out pure functions wherever possible, within a procedural context. In my mind, that’s not the same thing as “functional programming”.
Thank you for this. I appreciate that this (classic) article lays bare the essence of FP without the usual pomp and "use Lisp/Scheme/Haskell already" rhetoric. My takeaway is that FP is mostly about using functions w/o side effects (pure), which can be achieved in any programming language provided you're diligent about it.
This is a bit naive though. It depends on what you want to do and whether the language you are using offers the required primitives and other things like persistent functional data structures. Without those, you will find yourself hard-pressed to make FP happen. It is of course usually possible in most languages (except those where the primitives are already mutating and therefore infectiously prevent you from writing pure functions), but it might not be idiomatic at all, or it might not be feasible to roll your own replacements for all the mutating basics. For example, imagine having to copy a data structure all over the place again and again, because its methods mutate its internal state. That would be inefficient, much more inefficient than a well-written corresponding functional data structure, and it would be ugly code.
Are you going to write that extra data structure, when your task is actually something else? With management breathing down your neck, asking when something will be done? Or with less well-versed coworkers complaining about you adding a lot of code that might need to be maintained by them, while they don't understand FP? Do you even have the knowledge to implement that data structure in the first place, or will you need to study a couple of papers and carefully translate their code, if any, into your language and then verify the expected performance in meaningful benchmarks?
Lots of problems can arise there in practice.
No, functional programming is about writing your program as if your code were a math equation.

In math, people never use procedures. They write definitions in terms of formulas and expressions.

If you can get everything in your program to fit into a single expression, then you are doing functional programming.

The lack of side effects, lack of mutation and high modularity are the beneficial outcomes of FP; they are not the core of what you're doing. The core of what you're doing is defining your program as a formula/equation/expression rather than a list of procedures or steps. Of course, why you would write your program this way is because of those beneficial outcomes.
Conversely, if you write your code in a way that deliberately avoids mutation, IO and other side effects, then your program becomes isomorphic to a mathematical function. So it goes both ways.

Another thing you will note (and most people don't get this) is that for loops don't exist in FP. The fundamental unit of "looping" in FP is always recursion, just as it would be in a mathematical definition.
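For instance, a throwaway sketch of the same computation written both ways (illustrative names, Python just for familiarity):

    from functools import reduce

    # Imperative: a for loop mutating an accumulator.
    def sum_squares_loop(xs):
        total = 0
        for x in xs:
            total += x * x
        return total

    # Functional: the same thing as an expression, via recursion...
    def sum_squares_rec(xs):
        return 0 if not xs else xs[0] * xs[0] + sum_squares_rec(xs[1:])

    # ...or via a fold, which is how most FP code would actually spell it.
    def sum_squares_fold(xs):
        return reduce(lambda acc, x: acc + x * x, xs, 0)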
As someone who likes math (math major, applied math grad) and who picked up functional programming relatively early in my career, I don't find that this model (FP is just math) improves my understanding or makes it easier to see why I would want to program like this.
Talking about state and error handling is helpful because it helps explain why to use the tool, not how the tool was forged (or originally conceived)
This doesn’t help you understand why you should do functional programming.
It just helps you understand the nature of what functional programming actually is. Too many people think it’s just immutability, anonymous functions, map, reduce and filter.
Understanding why you should do functional programming is orthogonal to understanding what it is.
Even if I tell you functional programming is more modular and referentially transparent and lacks state. None of these things truly register until you have done both imperative programming and Haskell programming for a non trivial amount of time.
Also, error handling is orthogonal to functional programming. Yes, I know it’s clever how Haskell does it, but it’s independent of functional programming… and even so, explaining Maybe monads or any other error monad just makes things less understandable.
My usual statement on monads is "like any abstraction, it makes sense when you need it"
If you write a lot of Go code, you might think "this error management (!= nil, anybody?) is a drag, there has to be a better way!" The truth is: Go largely looks like how C++ is written at Google, EXCEPT in Google C++ you get to use macros like RETURN_IF_ERROR, which handle the ubiquitous StatusOr class.

Is this StatusOr a monad? I'm failing to recall exactly what the monad operations look like, but I suspect it already is one, and if it's not, it would be trivial to make it one.
Do you need to understand monads to see why they're useful here? I don't think so! And so even if you don't know how to build the microwave, you know how to use it.
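To sketch the idea (this is a hypothetical Result type in Python, not Google's actual StatusOr API):

    from dataclasses import dataclass
    from typing import Callable, Generic, TypeVar, Union

    T = TypeVar("T")

    @dataclass
    class Ok(Generic[T]):
        value: T

    @dataclass
    class Err:
        message: str

    Result = Union[Ok[T], Err]

    def bind(r: Result, f: Callable) -> Result:
        # The monadic part: apply f only on success and short-circuit on the first
        # error, which is what RETURN_IF_ERROR-style macros do for you syntactically.
        return f(r.value) if isinstance(r, Ok) else r

    def parse_int(s: str) -> Result:
        return Ok(int(s)) if s.isdigit() else Err(f"not a number: {s!r}")

    def reciprocal(n: int) -> Result:
        return Ok(1 / n) if n != 0 else Err("division by zero")

    print(bind(parse_int("4"), reciprocal))   # Ok(value=0.25)
    print(bind(parse_int("x"), reciprocal))   # Err(message="not a number: 'x'")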
For-loops do exist, they just need to not have side effects, which in practice means the likes of map/filter/reduce (ideally promoted to a first class language feature like sequence comprehensions).
You could argue that those are still desugared to recursion, but I think at that point it's kinda moot - the construct is still readily recognizable as a loop, and it's most likely also implemented under the hood as an imperative loop with encapsulated local state; not that it matters so long as semantics stay the same.
In general, so long as mutation can be encapsulated in modules that only expose pure functional interfaces, I think it should still count as FP for practical purposes.
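A toy example of what I mean -- imperative on the inside, pure from the caller's point of view (names mine):

    def running_max(xs):
        # Internally imperative: a plain loop and local mutation...
        best = float("-inf")
        out = []
        for x in xs:
            best = max(best, x)
            out.append(best)
        # ...but externally pure: the same input always yields the same output,
        # and no state outside the function is read or modified.
        return tuple(out)

    assert running_max([3, 1, 4, 1, 5]) == (3, 3, 4, 4, 5)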
>For-loops do exist, they just need to not have side effects,
No they don't. For loops, and loops in general, are procedural actions. You are jumping from directive to directive, command to command. Loops are NOT functional at all; they are an artifact of the computational machine, jumping between instructions.

Functional programs, like functions in mathematics, do not contain for loops.
> which in practice means the likes of map/filter/reduce (ideally promoted to a first class language feature like sequence comprehensions).

> You could argue that those are still desugared to recursion, but I think at that point it's kinda moot - the construct is still readily recognizable as a loop,
All programs are desugared into assembly instructions. Assembly instructions are procedural by nature... they are not functional, so your point is moot, as everything is desugared into loop-based jumps.

map/filter/reduce are not loops. They are fundamentally different. It doesn't matter if something is "recognizable" as a loop; it is NOT a loop. There is an isomorphism between imperative and functional programming, so the distinction of loops vs. no loops refers to the surface-level differences between the two EVEN when the underlying things are the same.
>In general, so long as mutation can be encapsulated in modules that only expose pure functional interfaces, I think it should still count as FP for practical purposes.
It actually can't... for loops rely on mutation to work.
a for loop looks like this:

    <OUTER SCOPE>
    for i in range(10):
        <INNER SCOPE>

By nature the for loop needs to influence the outer scope, otherwise your for loop is utterly useless. So how would you influence the outer scope from the inner scope?

    <VARIABLE FROM OUTER SCOPE>
    for i in range(10):
        <MUTATE VARIABLE FROM OUTER SCOPE WITHIN INNER SCOPE>

That's the only way, man. This is the fundamental nature of for loops. They are imperative constructs. Sure, it can look very similar to map or reduce or even filter, but they are not the same.
Python:
    def range_mul(n, mul):
        for i in range(n):
            yield i*mul

    x = list(range_mul(10, 20))
range_mul is a pure function, yet it is implemented with a for loop. Once you have first-class continuations or the equivalent, the differences between imperative and pure blur (cf. Haskell do-notation: is it imperative?).

In any case I think you are missing int_19h's point, that it doesn't matter if a function is implemented using imperative constructs; if you can't tell from the outside, it is still pure. And an FP compiler will convert pure code to imperative anyway.
range_mul is not pure. Don’t believe me? Ask ChatGPT:
“””
No, a Python function that contains the yield keyword is not a pure function.
A pure function is defined as a function where the output is solely determined by its input values, with no observable side effects. This means it should:
• Always return the same result when given the same input.
• Have no side effects (like modifying global state, reading from or writing to files, etc.).
Functions containing yield are generators, which maintain internal state between successive calls, making their behavior dependent on the sequence of calls (because they return values incrementally and remember the point where they left off). This internal state makes them impure because they don’t always return the same result when called with the same input—they return successive values upon subsequent calls.
In summary, a function with yield is not pure due to its stateful behavior and non-deterministic output across multiple calls.
“””
> In any case I think you are missing int_19h's point, that it doesn't matter if a function is implemented using imperative constructs; if you can't tell from the outside, it is still pure. And an FP compiler will convert pure code to imperative anyway.
You’re making a side point here that’s irrelevant to the main point. Sure, you can do this; these are techniques for segregating your impure code away from your pure code. But is this the topic of conversation? No.

Additionally, your code actually is like a virus. Not only is the example wrong, but the impurity can infect other functions that use it, such that those functions lose determinism and purity as well.
Which brings me to my main point. Circling back to my parent comment. Most people don’t understand fp and don’t get carmacks insight… and you are an example of such a person for now.
Well, technically a call to range_mul() is still pure and it matches the definition: it has no side effects, and it returns a new instance of the same object (the generator); the generator itself is impure of course, so I concede the point. But that's a limitation of Python generators being one-shot; with proper delimited continuations you can snapshot the state at any point and replay it at leisure, not differently than a lazy stream.
Regarding the rest, I was referring to this comment from int_19h re encapsulation:
"In general, so long as mutation can be encapsulated in modules that only expose pure functional interfaces, I think it should still count as FP for practical purposes."
> Haskell do-notation, is it imperative?
No, it’s not. It only works in the context of a monad, and it achieves what looks like imperative code by utilizing closures.

I think it’s quite obvious that imperative code with mutations but a deterministic end result can be hidden behind a function and treated as pure. I don’t think anyone misses this point, so it’s likely redundant to bring it up. Either way it’s not the point of the conversation.
I dunno, I imagine that there's no appreciable difference between:
    (loop :for i :below 10 :collect i)

and:

    (let loop ((i 0))
      (if (= i 10)
          nil
          (cons i (loop (1+ i)))))
First off I can’t read that. Please use more common pseudo code for your example if you want me to understand.
Second it looks like there’s a function called for here that does something similar to reduce? That’s a functional thing.
The topic at hand is for loops as in loops that are procedural statements which is what most people recognize as loops. For, while, do while and those things you see in popular languages. Not some obscure construct in lisp.
If you're talking about the isomorphism between functional and imperative programming, I already covered that.
Largely orthogonal to your comment:
One interesting thing I learned/realized when reading about the MSR Dafny project is that for loops mean you need to provide guarantees about invariants.
How do I know there's no index out of bounds? How do I know how large the resulting array is?
When you have to write post conditions for each loop, it makes higher order functions (map, reduce, filter) much more appealing. The proof was already done in the function that will invoke yours!
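A rough Python illustration of that point (names are mine, and Python obviously won't verify anything for you the way Dafny does):

    # With a manual index loop, you (or a verifier) must establish that i stays in
    # bounds and that out ends up the same length as xs.
    def double_all_loop(xs):
        out = [0] * len(xs)
        for i in range(len(xs)):
            out[i] = xs[i] * 2
        return out

    # With map, those invariants come with map's contract; there is nothing left to
    # re-prove at the call site.
    def double_all_map(xs):
        return list(map(lambda x: x * 2, xs))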
He literally said he’s bullish on pure fp. Which means he is off writing pure fp. His own article about it never explicitly or implicitly implies a “pragmatic approach”.
I never said he said his email was completely outdated. He for sure implies it’s outdated and updates us on his views of inlining which I also mentioned.
> I never said he said his email was completely outdated.
From your prior message:
> Carmack called his email completely outdated
Ok my bad but I did mention what he’s doing with inlining. So I contradicted myself in the original message which you didn’t identify.
He still does inlining.
Comment was deleted :(
> he's moved on from inlining and now does pure functional programming
Neither of those are true. He does more FP ”where reasonable“, and that decreases the need for inlining. He does not do pure FP, and he still inlines.
"pure FP" does not mean only writing in a functional style. Purity refers to referential transparency, ie., functions do not depend on or modify some global state.
I know what purity is. It is a core principle of functional programming. So ”functional“ already implies purity, and ”pure functional“ implies exclusively functional (e.g. Haskell).
Purity is not a requirement for functional languages. See Ocaml, Erlang, Racket, Scala, Clojure, Common Lisp, etc.
Actually even further: They also don't modify/mutate any arguments. If they did, then that could raise problems with concurrency.
He literally says he’s more bullish on pure fp. Read it. And I also wrote about where he still inlines.
Exactly. More bullish means he uses and advocates for functional more than before. It by no means implies having ”moved on“ from inlining.
He has moved on in general, and his intro comments on how he still uses inlining for special cases.
I think more people grasp functional programming all the time, or at least the most salient detail: referential transparency. It’s easy to show the benefits in the small, without getting heavy on academics: pure functions are easier to test, understand, and change with confidence. All three of these reinforce each other, but they’re each independently beneficial as well.
There are tons of benefits to get from learning this lesson in a more intentional way—I know that I changed my entire outlook on programming after some time working in Clojure!—but I’ve seen other devs take the same lessons in multi-paradigm contexts as well.
Not just this. Modularity is the main insight as well. The reason OOP doesn’t work is that methods can’t be broken down. Your atom in OOP is literally a collection of methods tied to mutating state. You cannot break that collection down further.
In pure FP, you can break your function down into the smallest computational unit possible. This is what prevents technical debt of the architectural kind, as you can rework your code by simply recomposing your modular logic.
Over the last few decades there has been quite the rug-pull in "functional programming"!
1) programming which is based on "functions" (procedures) as values, including anonymous lambdas (hence map/fold/etc paradigms), which is only really possible in languages that intentionally support it
2) programming where every procedure (except some boundary code for IO/etc) truly is a well-defined mathematical function, which is possible in almost any programming language
1) describes any language you would call a "functional programming language," whereas 2) involves well-understood concepts around mutability and determinism that a minority (correctly) describe as "pure functional programming."
So I think it's a bit judgmental to say "lack of insight" when it's more about shifting terminology. A very high-reliability C program might be "purely functional" (inside of an IO/memory boundary) and built by engineers with the precise insight you're discussing, but in most contexts it would be odd to say "purely functional," especially if the code eschews C mechanics around function pointers. In most imperative contexts it is clearer to describe purely functional ideas in terms of imperative programming (which are equally clear, if less philosophically interesting).
> To most people FP is just a bunch of functional patterns like map, reduce, filter, etc.
For me, these were the gateway drugs to FP, because they weren't available in the languages I was used to, namely C++ and Java. I encountered map and filter in Python in the 1990s, immediately realized a ton of Java and C++ code I wrote would be simpler with them, and dove into Lisp when I found out that's where Python got them. They have nothing to do with pure functional programming, of course; they're just nice idioms that came from functional languages. That led to a long slippery slope of ideas that upgraded my non-FP programming at every step, long before I got into anything that could be described as pure FP.
I don't know if it helps to draw a strict line between "pure" and "impure" FP. I mostly code in Scala, which is an imperative, side-effecting language. Scala gives you exactly the same power as Java to read and mutate global state. However, by design, Scala provides extremely good support for functional idioms, and you can use an effects system (such as Cats Effect or ZIO) to write in a pure FP style. But is it "pure FP" if you can read and mutate global state, and if you have to rely on libraries that are written in Java? Maybe, maybe not, but I don't think trying to answer that question yields much insight.
You do want to draw a strict line. When I say most programmers don’t get it… I’m talking about you. You don’t get why carmack is bullish about pure fp. You think it’s just map, reduce and filter and you don’t get why those functions are irrelevant.
To you, you’re just fulfilling an OCD need to simplify your code and make it prettier. There is deeper insight here that you missed, and even when you read what Carmack wrote about pure FP I doubt you’ll internalize the point.
Lisp is not pure. That’s why you don’t have the insight. For the true insight you need to learn about Haskell in a non trivial way. Not just youtube videos, but books that teach you the language from first principles. You need to understand the IO monad. And why it makes Haskell pure and how it forces you to organize your code in a completely different way. This is not an easy thing to understand.
The IO monad, when it appears, infects your code with the IO type and makes it extremely annoying to get rid of. I had a friend who learned Haskell and hated it because of the IO monad. He stopped learning Haskell too early and he never "got it".
If you reach this point you have to keep learning about Haskell until you understand why things are the way they are with Haskell.
Just remember this: the annoyance of the IO monad is designed like that so that you write your logic in a way that doesn’t allow the monad to pollute most of your code.
Cats Effect manages effects using an IO monad. Most of the projects I currently work with use it.
This is why I say it's not useful to try to draw a strict line. I'm not going to argue with you on whether using an IO monad in an impure language is "pure FP" or not, but some Scala devs would. That argument in itself is not nearly as illuminating as knowing all the tools and concepts.
Some grasp it but see its trade-off contract, which is demanding.
With practice it just becomes another paradigm of programming. The trade off is really a skill issue from this perspective.
The larger issue is performance which is a legitimate reason for not using fp in many cases. But additionally in many cases there is no performance trade off.
That most won't get it is due to the fact that most are kind of "industrial programmers", who only learn and use mainstream OOP languages and as such never actually use a mainly-FP language much. Maybe on HN the ratio is better than in the market as a whole, though.
where does the condescension come from? the loftiness/lording over people who "just don't get it", that clearly comes across in your communication?
this is a really non-productive comment. you have an opportunity to teach and share knowledge but instead you hoard and condescend, and rant about your implied superiority.
if so many programmers don't understand -- what's more productive: this comment, or helping "most programmers" to get it, and understand?
It’s communication to programmers who do get it.
I love to teach and explain but this is one of those things that can’t be conveyed. You have to do it yourself.
If you want an explanation though, use Google. But I don’t think explanations actually help you grok what’s really happening. You really have to come to catharsis yourself.
Learn Haskell. Learn it to the point where you completely understand the purpose of the IO monad and why it exists and helps a program organize better. Then you will understand.
When you do get it. You’ll be among a small few who have obtained something that’s almost like forbidden knowledge. No one will “get” you.
I've never seen pure FP...
I would recommend you take a look at Haskell or Elm.
John Carmack did and talked about it: https://m.youtube.com/watch?v=1PhArSujR_A
I might add Carmack's take on functional programming https://web.archive.org/web/20120501221535/http://gamasutra....
Surely if you’ve seen any non-trivial amount of code, you have seen pure FP applied piecemeal even if not broadly. A single referentially transparent function is pure FP, even if it’s ultimately called by the most grotesque stateful madness.
At least for me, it's hard to get that insight. I tried to read articles and watch videos, and none of them made me say: "oh! now I get it".
You need to learn Haskell in a non trivial way. Code in it. And then you need to completely internalize why the IO monad exists and why it encourages the programmer to code in ways that stay away from using it.
I had a friend (who’s in general a good programmer) learn Haskell and then get completely annoyed by the IO monad so that he quit learning Haskell. So yeah, it’s not easy to “get it” You only get it with reading and practice. Really you just need to completely internalize and grasp the purpose of the IO monad in Haskell.
Comment was deleted :(
> That was a cold-sweat moment for me: after all of my harping about latency and responsiveness, I almost shipped a title with a completely unnecessary frame of latency.
In this era of 3-5 frame latency being the norm (at least on e.g. the Nintendo Switch), I really appreciate a game developer having anxiety over a single frame.
You're over-crediting Carmack and under-crediting current game devs. 3-5 frames might be current end-to-end latency, but that's not what Carmack is talking about. He's just talking about the game loop latency. Even at ~4 frames of end-to-end latency, he'd be talking about an easily avoided 20% regression. That's still huge.
To be fair, back in 2014 that was one frame at 60Hz or slower for some titles. At 80-120Hz, 3-5 frames is a comparatively similar amount of time.
I don't think high frame rates are common outside of PC gaming yet.
Wikipedia indicates the Switch maxes out at 1080p60, and the newest Zelda only at 900p30 even when docked
I play Fortnite and Call of Duty at 120Hz VRR on Xbox Series X.
I believe both the PS5 and whatever nonsense string of Xs, numbers, and descriptors MS named this gen's console can do 144Hz output. I don't know how many games take advantage of that or whether that refresh rate is common on TVs.
60 FPS isn't even promised on PS5 Pro. Most graphically demanding titles still aim for 30 FPS on consoles, with any game able to support 60 FPS consistently worth noting.
What they said is true. There are some games with 120 FPS modes on PS5 and Series X, maybe even series S. That doesn't mean every game (or even most) are like that, just that the hardware supports it. At the end of the day you can't stop developers targeting whatever framerate they want.
yeah but when people talk about input lag for consoles it's generally still in the 60Hz sense, rare for games to be 120Hz
smash brothers ultimate for example runs at 60fps and has 5-6 frames of input lag
Why would you even bother running a game at 120Hz if the user's response to what's being drawn is effectively 24-30 FPS?
You’re still getting more information, which allows you to be more accurate with your inputs e.g. tracking a moving target.
You've seen games running at 120Hz and at 60Hz. The difference is obvious, isn't it? The difference between 24Hz and 60Hz is certainly obvious: that's the visual difference between movies and TV sitcoms.
I can type about 90 words per minute on QWERTY, which is about 8 keystrokes per second. That means that the average interval between keystrokes is about 120 milliseconds, already significantly less than my 200-millisecond reaction time, and many keystrokes are closer together than that—but I rarely make typographical errors. Fast typists can hit 150 words per minute. Performing musicians consistently nail note timing to within about 40 milliseconds. So it turns out that people do routinely time their physical movements a lot more precisely than their reaction time. Their jitter is much lower than their latency, a phenomenon you are surely familiar with in other contexts, such as netcode for games.
If someone's latency is 200 milliseconds but its jitter (measured as standard deviation) is 10 milliseconds, then by reducing the frame latency from a worst-case 16.7 milliseconds (or 33.3 milliseconds in your 30Hz example) to a worst-case 8.3 milliseconds, and an average-case 8.3 milliseconds to an average-case 4.2 milliseconds, you're knocking a whole 0.42 standard deviations off their latency. If they're playing against someone else with the same latency, that 0.42σ advantage is very significant! I think they'll win almost 61% of the time, but I'm not sure of my statistics†.
See also https://danluu.com/input-lag/#appendix-why-measure-latency:
> Latency matters! For very simple tasks, people can perceive latencies down to 2 ms or less. Moreover, increasing latency is not only noticeable to users, it causes users to execute simple tasks less accurately. If you want a visual demonstration of what latency looks like and you don’t have a super-fast old computer lying around, check out this MSR demo on touchscreen latency.
> The most commonly cited document on response time is the nielsen group[sic] article on response times, which claims that latncies[sic] below 100ms feel equivalent and perceived[sic] as instantaneous. One easy way to see that this is false is to go into your terminal and try sleep 0; echo "pong" vs. sleep 0.1; echo "test" (or for that matter, try playing an old game that doesn't have latency compensation, like quake 1, with 100 ms ping, or even 30 ms ping, or try typing in a terminal with 30 ms ping). For more info on this and other latency fallacies, see this document on common misconceptions about latency.
(The original contains several links substantiating those claims.)
https://danluu.com/keyboard-latency/#appendix-counter-argume... has a longer explanation.
______
† First I tried sum(rnorm(100000) < rnorm(100000) + 0.42)/1000, which comes to about 61.7 (%). But it's not a consistent 0.42σ of latency being added; it's a random latency of up to 0.83σ, so I tried sum(rnorm(100000) < rnorm(100000) + runif(100000, max=0.83))/1000, which gave the same result. But that's not taking into account that actually both players have latency, so if we model random latency of up to a frame for the 60Hz player with sum(rnorm(100000) + runif(100000, max=1.67) > rnorm(100000) + runif(100000, max=0.83))/1000, we get more like a 60.8% chance that the 120fps player will out-twitch them. I'm sure someone who actually knows statistics can tell me the correct way to model this to get the right answer in closed form, but I'm not sure I could tell the correct closed-form formula from an incorrect one, so I resorted to brute force.
> You've seen games running at 120Hz and at 60Hz. The difference is obvious, isn't it?
Honestly, I have not. I'm not much of a gamer, even though I used to be a game developer.
Certainly the difference between 30Hz and 60Hz is noticeable.
Maybe this is just because I'm old school but if it were me, I would absolutely prioritize low latency over high frame rate. When you played an early console game, the controls felt like they were concretely wired to the character on screen in a way that most games I play today lack. There's a really annoying spongey-ness to how games feel that I attribute largely to latency.
I don't really give a shit about fancy graphics and animation (I prefer 2D games). But I want the controls to feel solid and snappy.
I also make electronic music and it's the same thing there. Making music on a computer is wonderful and powerful in many ways, but it doesn't have the same immediacy as pushing a button on a hardware synth (well, on most hardware synths).
Oh! I assumed that because you were a famous game developer you would hang out with gamers who would proudly show off their 120Hz monitor setups.
I agree that low latency is more important than high frame rate, and I agree about the snappiness. But low jitter is even more important for that than low latency, and a sufficiently low frame rate imposes a minimum of jitter.
Music is even less tolerant of latency, and PCM measures its jitter tolerance in single-digit microseconds.
Haha, alas the reality of my celebrity is not as much as you might hope. :)
A lot of sluggishness in modern games isn't even from input latency but from deliberate animations that the character (and even UI) is made to follow.
I've heard that a good reaction time is around 200 ms; some experiments seem to confirm this figure [1]. At 60Hz, a frame is displayed every 17 ms.

So it would take a 12-frame animation and a trained gamer for a couple of frames to make a difference (e.g. pushing the right button before the animation ends and the opponent's action takes effect).
[1] https://humanbenchmark.com/tests/reactiontime/statistics
Reaction time is completely different to the input latency Carmack is worrying about in his scenario. Imagine if you thought I'm going to move my arm, and 200ms later your arm actually moved. Apply the same to a first-person shooter --- imagine you nudge your mouse slightly, and 200ms later you get some movement on screen. That is ___hugely___ noticeable.
https://www.researchgate.net/publication/266655520_In_the_bl...
this is for a stylus, but people can detect input latency as low as 1ms (possibly lower)
with VR, they use the term "motion to photon latency", and if it's over ~20ms, people start getting dizzy. at 200ms, nobody is going to be keeping their lunch down
google noticed people making fewer searches if they delayed the result by 100ms
edit: if you want an easy demo, open up vim/nano over ssh, and type something. then try it locally
I'm not sure this is the right way to look at it. I can't find stats right now, but I recall reading top players making frame-perfect moves in games like Smash Bros. Melee and Rocket League.
The mistake with focusing on reaction time is that humans can anticipate actions and can perform complex sequences of actions pretty quickly (we have two hands and 10 fingers). So someone playing one of those "test your reaction time" games might only score around 300ms. But someone playing a musical instrument can still play a 64th note at 120BPM.
Imagine playing a drum that took between 0 and 5 extra frames at 60FPS between striking the head and it producing a sound. Most people would notice that kind of delay, even if they can't "react" that quickly.
In games, frame delay translates to having to hold down a key (or wait before pressing the next one) for longer than is strictly necessary in order to produce an effect. Since fighting games are all about key sequences, the difference between needing to hold a key for 0 frames and 5 frames is massive when you consider key combinations might be sequences of up to 5 key presses. 5 frames of delay x five sequential key presses x 8ms a frame = 200ms, vs 1 frame x 5 seq. key presses x 8ms = 40ms.

There's a massive difference between adding an extra 0.2s to a complex move and adding 0.04s.
Another example is music (and, relatedly, rhythm games). With memorized music you have maximal anticipation of actions. The regular rhythm only amplifies that anticipation. Musicians can be very consistent at timing (especially the rhythm section), and very little latency or jitter can throw that off.
It's something you can get used to. A concert pianist can have 2-3 notes chasing each other down his/her arm. Myelinated nerve fibres are fast (the physiology is really interesting), but still have limits. Latency is more of an issue for organists. Firstly, some instruments can have a delay of up to half a second for some ranks (a rank is a set of pipes - there can be one or more ranks per stop). Secondly, in any church of appreciable size, there will be a significant delay between when you press a key and when you hear the congregation singing the note. In fact standard advice is to ignore the congregation, otherwise you can end up slowing down as a reaction to the latency.
So for the second problem, you just ignore input and play "open loop". For the first problem, you may have to play notes on the slow rank slightly early, although this is only practical if you separate them off on a different keyboard. Otherwise, you can only use that rank for slow music, and make use of the note increasing in volume and changing in tone as that rank comes in.
Frame-perfect moves are exceedingly common at the top level of most games. Just watch any video about the latest speedruns.
The thing with latency is that it needs to be consistent. If your latency is anywhere between 3 and 5 frames you blew it, because you can't guarantee the same experience on every button press. If you always have 3 frames of latency, with modern screens, analog controls, and game design aware of those limitations, that's much better. Look at modern games like Celeste, which introduced coyote time to account for all the latency of our modern hardware.
> In this era of 3-5 frame latency being the norm (at least on e.g. the Nintendo Switch)
Which titles is this true for? Have you or anyone else measured?
Almost every title. This is common knowledge.
> Inlining functions also has the benefit of not making it possible to call the function from other places.
I’ve really gone to town with this in Python.
    def parse_news_email(…):
        def parse_link(…):
            …
        def parse_subject(…):
            …
        …
If you are careful, you can rely on the outer function’s variables being available inside the inner functions as well. Something like a logger or a db connection can be passed in once and then used without having to pass it as an argument all the time:

    # sad
    def f1(x, db, logger): …
    def f2(x, db, logger): …
    def f3(x, db, logger): …

    def g(xs, db, logger):
        for x0 in xs:
            x1 = f1(x0, db, logger)
            x2 = f2(x1, db, logger)
            x3 = f3(x2, db, logger)
            yikes x3

    # happy
    def g(xs, db, logger):
        def f1(x): …
        def f2(x): …
        def f3(x): …
        for x in xs:
            yield f3(f2(f1(x)))
Carmack commented his inline functions as if they were actual functions. Making actual functions enforces this :)

Classes and “constants” can also quite happily live inside a function but those are a bit more jarring to see, and classes usually need to be visible so they can be referred to by the type annotations.
That's not an improvement, as it screws up the code flow. The point of inline blocks is that you can read the code the same way as it is executed. No surprises that code might be called twice, or that a function call could be missed or reordered. Adding real functions causes exactly the indirection that one wanted to avoid in the first place. If the block has no name you know that it will only be executed right where it is written.
Yeah that’s a valid point. I tend to have in mind that as soon as I pull any of the inner functions out to the publicly visible module level I can say goodbye to ever trying to stop people reusing the code when I don’t really want them to.
For example, if your function has an implicit, undocumented contract such as assuming the DB is only a few milliseconds away, but they then reuse the code for logging to DBs over the internet, then they find it’s slow and speed it up with caching. Now your DB writing code has to suffer their cache logic bugs when it didn’t have to.
Not sure I believe the benefit of this approach outweighs the added difficulty wrt testing, but I certainly agree that Python needs a yikes keyword :-)
What is the benefit of such a yikes? Or do you consider it a yikes language as a whole?
Personally I like that functions can be inside functions, as a trade-off between inlining and functional separation in C++.
The scope reduction makes it easier to track bugs while it has the benefits of separation of concern.
> What is the benefit of such a yikes? Or do you consider it a yikes language as a whole?
None, it was just a simple joke based on the typo in the post I replied to. I like Python, and have in fact been happily using it as my main language for over 20 years.
Ahhh, now I (top level author) get it :)
> Inlining functions also has the benefit of not making it possible to call the function from other places.
Congrats, you've got an untestable unit.
Congratulations, you are writing tests for things that would not need tests if they weren't put behind an under-defined interface. Meanwhile sprint goals are not met and overall product quality is embarrassing, but you have 100% MC/DC coverage of your addNumbersOrThrowIfAbove(a, b, c).
Which is usually a positive. Testing tiny subunits usually just makes refactoring and adding new features hard while not improving test quality.
Testing is a tool that sometimes makes your life easier. IME, many (not all) tiny subunits do actually have better tests when examined at that level. You just want to avoid tests which will need to be updated for unrelated changes, and try to avoid writing code which propagates that sort of minutia throughout the codebase:
> while not improving test quality
The big wins from fine-grained testing are
1. Knowing _where_ your program is broken
2. Testing "rare" edge cases
Elaborating on (2), your code probably works well enough on some sort of input or you wouldn't ship it. Tests allow you to cheaply test all four Turkish "i"s and some unicode combining marks, test empty inputs, test what happens when a clock runs backward ever or forward too slowly/quickly, .... You'll hit some of those cases eventually in prod, where pressures are high and debugging/triaging is slow, and integration tests won't usually save you. I'm also a huge fan of testing timing-based logic with pure functions operating on the state being passed in (so it's tested, better than an integration test would accomplish, and you never have to wait for anything godawful like an actual futex or sleep or whatever).
> makes refactoring and adding new features hard
What you're describing is a world where accomplishing a single task (refactoring, adding a new feature) has ripple effects through the rest of the system, or else the tests are examining proxy metrics rather than invariants the tiny subunits should actually adhere to. Testing being hard is a symptom of that design, and squashing the symptom (avoiding tests on tiny subunits) won't fix any of the other problems it causes.
If you're stuck in some codebase with that property and without the ability to change it, by all means, don't test every little setup_redis_for_db_payment_handling_special_case_hulu method. Do, however, test things with sensible, time-invariant names -- data structures, algorithms, anything that if you squint a bit looks kind of like parsing or serialization, .... If you have a finicky loop with a bunch of backoff-related state, pull the backoff into its own code unit and test how it behaves with clocks that run backward or other edge cases. The loop itself (or any other confluence of many disparate coding concepts) probably doesn't need to be unit tested for the reasons you mention, but you usually can and should pull out some of the components into testable units.
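As a concrete (hypothetical) sketch of pulling that backoff state into a pure, clock-free unit:

    import random

    def backoff_delay(attempt, base=0.5, cap=30.0, rand=random.random):
        # Exponential backoff with full jitter, capped; pure once rand() is injected.
        return min(cap, base * (2 ** attempt)) * rand()

    # Tests never sleep and never touch a real clock: inject a deterministic rand.
    assert backoff_delay(3, rand=lambda: 1.0) == 4.0
    assert backoff_delay(10, rand=lambda: 1.0) == 30.0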
The problem is there is rarely a clear interface for your subunit. As such you will want to refactor that interface in ways that break tests in the future. If you are writing another string class you can probably come up with a good interface and then write good tests that won't make refactoring hard - but strings should be a solved problem for most of us (unless you are writing a new programming language), and instead we are working on problems that are not as clear, that only our competitors work on, so we can't even learn from others.
Not according to John Carmack. He stated in the intro that he switched to pure functional programming, which is basically stating that all his logic is in the form of unit-testable pure functions.
Nothing about pure functional programming requires unit testing all of your functions. You can decide to unit test larger or smaller units of code, just as you can in any other paradigm.
In pure functional programming a pure function is unit testable by definition of what a pure function is. I never said it requires functions to be tested. Just that it requires functions to be testable.
Other paradigms do not do this. As soon as a module touches IO or state it becomes entangled with that and is NOT unit testable.
Is it still testable? Possibly. But not as a unit.
How do you unit test a local function that is a closure in pure functional code?
Of course you can't unit test things with restricted scope.
f(x) = x + 2 + 4
How do you unit test x + 2 or (+ 4) even if the operation is pure? You can't. Because it's not callable. It's the same thing with the closure.
The only things that are testable are things on unrestricted scope. AKA global scope. Think about what happens if you have a "closure" on global scope.
If you really want to test it then your "unit tests" which typically live on global scope, need to be moved to local scope. That's just the rules of scope.
There is one special case here. If the parent function returns the local function as a value. But even in this case the parent and local function have to be treated as a unit. The unit test will involve first calling the parent, then calling the local. The parent and child function form a "unit" thanks to shared state and the parent is essentially "moving" the local function into global scope.
Generally best practice is to use combinators if you want to maximize the granularity in which you can modularize your logic. I would even argue that closures straddle the line between pure and impure, so I actually avoid closures whenever possible.
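For instance, here is a rough Python sketch of the combinator style applied to the f(x) = x + 2 + 4 example above (toy code, names mine): each small piece lives at module scope and is independently testable, and the composed function is just wiring.

    from functools import reduce

    def add2(x: int) -> int:
        return x + 2

    def add4(x: int) -> int:
        return x + 4

    def compose(*fns):
        # Right-to-left composition: compose(f, g)(x) == f(g(x)).
        return lambda x: reduce(lambda acc, fn: fn(acc), reversed(fns), x)

    f = compose(add4, add2)   # f(x) == x + 2 + 4, built from testable parts
    assert add2(1) == 3
    assert add4(1) == 5
    assert f(1) == 7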
I found this [] article by Carmack. While reading it, I understood there is a large set of gray shades to the pureness of "pure functional" code. He calls being functional a useful abstraction; a function() is never purely functional.
[] https://web.archive.org/web/20120501221535/http://gamasutra....
When people say pure functional programming they never mean the entire program is like this.
Because if it were your program would have no changing state and no output.
What they mean is that your code is purely functional as much as possible, and that there is high segregation between functional and non-functional code, in the sense that state and IO are segregated as much as possible into very small, very general functionality.
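As a toy illustration of that shape (Python, names mine, not anyone's real code): a pure core that is trivially unit testable, wrapped by a thin impure shell that does all the IO.

    # Pure core: no clock, no network, no printing - easy to unit test.
    def apply_discount(total_cents: int, loyalty_years: int) -> int:
        rate = min(0.20, 0.02 * loyalty_years)
        return round(total_cents * (1 - rate))

    # Impure shell: state and IO squeezed into this thin layer.
    def checkout(order_id: str, fetch_order, charge_card) -> None:
        order = fetch_order(order_id)                        # IO in
        amount = apply_discount(order["total"], order["loyalty_years"])
        charge_card(order["card_token"], amount)             # IO out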
> pure fp
No he didn’t.
Like most things being talked about here, so much depends on the specifics.
I think developers should generally try and aim for, at every scale, the outputs of a system to be pure functions of the inputs (whether by reducing the scope of the system or expanding the set of things considered inputs). Beyond that there are so many decisions at the margin that are going to be based on personal inclination.
Ideally, you've just moved the unit boundary to where it logically should be instead of many small implementation details that should not be exposed.
The unit here is the email, not the email's link or subjects. Those are implementation details.
What do you use unit tests for, other than verifying implementation details?
Perhaps we have a difference in definition. To me, a unit test for a function such as "parse_news_email" would explore variations in parameters and states. Because of combinatorial explosion, that often means at least some white-box testing. I'm not going to generate random subjects and senders, and received-froms, I'm going to target based on internal details. Are we doing smart things with the message ID hostname? Then what happens if two messages come in with the same message ID but from different relays? The objective is that the unit test wrings out the implementation details, and the caller's unit test doesn't need to exhaustively test them.
This white-box testing may require directly poking at or mocking internal functions, or at least abusing how they're called. For example, parsing the news item might entail pulling up and modifying conversation thread cache entries or state. For some of the tests you may need hand-crafted cache state, it's not feasible to create unique states for each parameter combination you're testing, and testing a combination will pollute the state for the following combinations. Or maybe the function depends upon an external resource you can't beat to death with a million identical requests. So the least-bad, simplest solution may be to freeze or back out part of the normal state update in the unit test. Which would usually involve directly invoking the internal routines.
Can this lead to fragile, false-positive to the point of useless tests? You betcha. That's where entertaining two contrary viewpoints is needed :) Use experience and good judgement about pros and cons in the particular situation.
Unit tests are for documenting the API contract for the user. You are going to target based on what you are willing to forevermore commit to for those who will use what you have created. Indeed, what happens when two messages come in with the same message ID is something the user needs to be aware of and how it functions needs to remain stable no matter what you do behind the scenes in the future, so you would absolutely want to document that behaviour. How it is implemented is irrelevant, though. The only thing that matters is that, from the public API perspective, it is handled appropriately.
There is a time and place for other types of tests, of course. You are right that unit tests are not the be all and end all. A good testing framework will allow you to mark for future developers which tests are "set in stone" and which are deemed throwaway.
We're in general agreement about the purpose of unit tests. I disagree on a couple of points.
Tests do not document the API. No test is complete, and for that reason alone can't completely document anything. For example, a good API might specify that "the sender must be non-null, and must be valid per RFC blah." There's no way to test that inclusively, to check all possible inputs. You can't use the test cases to deduce "we must meet RFC blah." You might suspect it, but you'd be risking undefined behavior if you stray from input that doesn't exactly match the test cases. And before anyone objects "the API docs can be incomplete too," well, that's true. But the point is that a written API has vastly more descriptive power than a set of test cases. (The same applies to "self-documenting code". Bah humbug.) There's also the objection "but you can't guarantee cases you don't test!" Also true. That's reality. _You can never test all your intended behavior._ You pick your test cases to do the best you can, and change your cases as problems pop up.
The other thing I would shy away from is including throwaway tests in the framework. Throwaways are a thing, developers use them all the time, but in the framework they become unwanted stepchildren--poorly (incompletely?) designed, slapped together, limited in scope, confusing for another developer (including time-traveling self) to wade through and decide whether this is a real failure or just a bogus test. They're tech debt. Less frequently used tests are another matter. For example, release-engineering tests that only get run against release candidates. But these should be just as much real, set in stone, as any other deliverable.
Which I guess is a third viewpoint nuance difference. I treat tests as being part of the package just as much as any other deliverable. They morph and shift as APIs change, or dependencies mutate, or bugs are found. They aren't something that can be put to the side and left to vegetate.
> There's no way to test that inclusively, to check all possible inputs.
Which means the RFC claim is false and should not be asserted in the first place. The API may incidentally accept valid RFC input, but there is no way to know that it does for sure for all inputs. You might suspect it conforms to the RFC, but to claim that it does with certainty is incorrect. Only what is documented in the tests is known to be true.
Everything else is undefined behaviour. Even if you do happen to conform to an RFC in one version, without testing to verify that continues to hold true, it probably won’t.
This is exactly why unit tests are the expected documentation by users. It prevents you, the author, from making spurious claims. If you try, the computer will catch you in your lies.
> The other thing I would shy away from is including throwaway tests in the framework.
What does that mean? I suspect you are thinking of something completely different, as this doesn't quite make sense with respect to what I said. It probably makes sense in another context, and if I have inferred that context correctly, I'd agree... But, again, unrelated to our discussion.
OK, one more round. An API spec is a contract, not a guarantee of correctness. You, as the client, are free to pass me any data that fits the spec. If my parsing library does the wrong thing, then I've got a bug and need to fix it. My tests are also defective and need to be adjusted.
If you passed 3.974737373 to cos(x), and got back 200.0, would you be mollified if the developers told you "that value clearly isn't in the unit test cases, so you're in undefined behavior"? Of course not. The spec might be "x is a single-float by value, 0.0 <= x < 2.0 * PI, result is the cosine of X as a single-float." That's a contract, an intent--an API.
The same for a mail parser. If my library croaks with a valid (per RFC) address then I've got a problem. If I try to provide some long, custom, set of cases I will or won't support, then my customer developers are going to be rightfully annoyed. What are they supposed to do when they get a valid but unsupported address? Note we're not talking about carving out broad exceptions reasonable in context ("RFC 5322 except we don't support raw IP addresses foo@[1.2.3.4]", "we treat all usernames as case-insensitive"). And we're not talking about "Our spec (intent) is foo, but we've only tested blah blah blah."
Early in my career I would get pretty frustrated by users who were not concerned with arranging their data and procedures the right way, clueless about what they really were doing. OK, so I still get frustrated by stupid :) But it's gradually seeped into my head that what matters is the user's intentions. Specs are an imperfect simplification of those very complex things, APIs are imperfect simplifications of the specs, and our beautiful code and distributed clusters and redundant networks are extremely limited and imperfect implementations of the APIs. Some especially harmful potential flaws get extra attention during arch, implementation, and testing. When things get too far out we fix them.
> What do you use unit tests for, other than verifying implementation details?
1. Determining when the observable behavior of the program changes.
2. Codifying only the specific behaviors that are known to be relied on by callers.
3. Preventing regressions after bugs are fixed.
Failing tests are alarm bells, when do you want them to grab your attention?
Excellent points, violently agree, my question was poorly worded. The purpose of units tests is to verify the contracted API is actually being provided by the implementation details. A clearer question might have been "what are unit tests for if not to exercise the implementation details, verifying they adhere to the API?" Unit tests validate implementation details, integration tests validate APIs.
To me, a good unit test beats the stuffing out of the unit. It's as much a part of the unit as the public functions, so should take full advantage of internal details (keeping test fragility in mind); of course that implies the unit test needs ongoing maintenance just as much as the public functions. If you're passing a small set of inputs and checking the outputs, well that's a smoke test, not a unit test.
To answer your last question, I want the alarm bells to ring whenever the implementation details don't hold up. That's whether the function code changed, a code or state dependency changed, or the testing process itself changed. If at all feasible all the unit tests run every time the complete suite is run, in full meat-grinder mode. "Complete suite" is hand-wavy; e.g. it might be the suite for a major library, but not the end-to-end application.
All that doesn't mean that you have to consider artificial boundaries that you yourself have introduced for convenience when deciding on the proper boundaries for what constitutes a "unit". Not every instance of code reuse makes for a good unit to test.
> What do you use unit tests for, other than verifying implementation details?
You don't need to verify the return of `parse_subject()` directly, since it will be part of the return of `parse_email()`. Verify it there.
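A tiny self-contained illustration of that idea (trivial stand-in implementations, not anyone's real parser): the helper gets exercised through the public unit's tests.

    def parse_subject(raw: str) -> str:
        # Implementation detail: strip the "Subject: " header name.
        return raw.removeprefix("Subject: ").strip()

    def parse_email(raw: str) -> dict:
        subject_line = next(l for l in raw.splitlines() if l.startswith("Subject: "))
        return {"subject": parse_subject(subject_line)}

    # Test the public unit; parse_subject() is covered implicitly.
    assert parse_email("From: a@example.com\nSubject: Hello\n")["subject"] == "Hello"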
This is a major insight. Defining a local function isn't a big deal; you can always just copy and paste it out to global scope.
Any time you merge state with function you can no longer move the function. This is the same problem as OOP. Closures can't be modular the same way methods in objects can't be modular.
The smallest testable unit of a module is the combinator. John Carmack literally mentioned he does pure functional programming now, which basically everyone in this entire thread is completely ignoring.
Yup, and I should have called this out as a downside. Thank you for raising it.
On visibility, one of the patterns I’ve always liked in Java is using package level visibility to limit functions to that code’s package and that packages tests, where they are in the same package (but possibly defined elsewhere.)
(This doesn’t help though with the reduction in argument verbosity, of course.)
The latter pattern is very popular in Python web scraping and data parsing niches as the code is quite verbose and specific and I'm very happy with this approach. Easy to read and debug and the maintenance is naturally organized.
Funny enough, the equivalent of your Python example is how Haskell 'fakes' all functions with more than one argument (at least by default).
Imperative blocks of code in Haskell (do-notation) also work like this.
That’s gonna be quite expensive, don’t do this in hot loops. You’re re-defining and re-creating the function object each time the outer function is called.
Good point. I measured it for 10^6 loops:
(1) 40ms for inline code;
(2) 150ms for an inner function with one expression;
(3) 200ms for a slightly more complex inner function; and
(4) 4000ms+ for an inner function and an inner class.
    def f1(n: int) -> int:
        return n * 2

    def f2(n: int) -> int:
        def g():
            return n * 2
        return g()

    def f3(n: int) -> int:
        def g():
            for _ in range(0):
                try:
                    pass
                except Exception as exc:
                    if isinstance(exc, 1):
                        pass
                    else:
                        while True:
                            pass
                    raise Exception()
            return n * 2
        return g()

    def f4(n: int) -> int:
        class X:
            def __init__(self, a, b, c):
                pass
            def _(self) -> float:
                return 1.23
        def g():
            for _ in range(0):
                try:
                    pass
                except Exception as exc:
                    if isinstance(exc, 1):
                        pass
                    else:
                        while True:
                            pass
                    raise Exception()
            return n * 2
        return g()
It might be a benefit in some cases, but I do feel that f1/f2/f3 are the prime candidates for actual unit testing
It's possible to nest subprograms within subprograms in Ada. I take advantage of this ability to break a large operation into one or more smaller simpler "core" operations, and then in the main body of the procedure write some setup code followed by calls to the core operation(s).
Where is the part, where this is "careful"? This is just how scopes work. I don't see what is special about the inner functions using things in the scope of the outer functions.
Excessive use of external bindings in a closure can make it hard to reason about lifetimes in cases where that matters (e.g. when you find out that a huge object graph is alive solely because some callback somewhere is a lambda that closed over one of the objects in said graph).
So inlining is the private of functions without a object. Pop it all to stack, add arguments, set functionpointer to instructionstart of inline code, challenge accepted, lets go to..
Remember to `nonlocal xs, db, logger` inside those inner functions. I'm not sure if this is needed for variables that are only read, but I wouldn't ever leave it out.
> I'm not sure if this is needed for variables that are only read
It’s not needed. In fact, you should leave it out for read-only variables. That’s standard practice - if you use `nonlocal` people reading the code will expect to see writes to the variables.
> That’s standard practice - if you use `nonlocal` people reading the code will expect to see writes to the variables.
Since when? I was under the impression Python virtually doesn't have lexical scoping at all and that's why `nonlocal` exists. I mean hell, in CPython you can literally access and modify the local variables of your caller (and everything else up the call stack too). I never associated `nonlocal` at all with specifically writes. Just access in general.
> I was under the impression Python virtually doesn't have lexical scoping at all and that's why `nonlocal` exists
Python has had lexical scoping since version 2.2. PEP 227 [0] "describes the addition of statically nested scoping (lexical scoping)" - allowing access to (but not assignment to) names in outer scopes.
`nonlocal` was introduced later, in Python 3.0 [1], specifically to allow assignment to names in outer scopes.
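A quick self-contained demonstration of the distinction (Python 3):

    def outer() -> tuple:
        reads = 10
        writes = 0

        def read_only() -> int:
            return reads + 1      # reading an outer name needs no declaration

        def rebind() -> None:
            nonlocal writes       # required only because we assign to it
            writes = 5

        rebind()
        return read_only(), writes

    assert outer() == (11, 5)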
You can do this in C++, too, but the syntax is a little uglier.
Not that bad?
    #include <cstdio>

    int main() {
        int a = -1;
        [&] {
            a = 42;
            printf("I'm an uncallable inline block");
        }();
        printf(" ");
        [&] {
            printf("of code\n");
        }();
        [&] {
            printf("Passing state: %d\n", a);
        }();
        return 0;
    }
At this point, why wouldn't you just use a nested block?
It’s not horrible, a little bit verbose though.
Here are some information theoretic arguments why inlining code is often beneficial:
https://benoitessiambre.com/entropy.html
In short, it reduces scope of logic.
The more logic you have broken out to wider scopes, the more things will try to reuse it before it is designed and hardened for broader use cases. When this logic later needs to be updated or refactored, more things will be tied to it and the effects will be more unpredictable and chaotic.
Prematurely breaking out code is not unlike using a lot of global variables instead of variables with tighter scopes. It's more difficult to track the effects of change.
There's more to it. Read the link above for the spicy details.
This is why I think it's a mistake that many popular languages, including standard c/c++, do not support nested function definitions. This for me is the happy medium where code can be broken into clear chunks, but cannot be called outside of the intended scope. A good compiler can also detect if the nested function is only called once and inline it.
C++ has lambdas and local classes. Local classes have some annoying arbitrary limitations, but they are otherwise useful.
After spending a lot of time writing idiomatic React components in es6, I've found my love of locally declared lambdas to really grow. If I give the lambdas really good names, I find that the main body of my component is very, very readable, even more so than if I'd used a more traditional style liberally sprinkled with comments.
> If I give the lambdas really good names
That's a really funny way to say it.
Giving your lambdas names defeats part of their purpose though.
They have two distinct purposes: anonymous functions, and closures. Those often go together, but there are many scenarios where you only care about the latter, and don't actually need the former. Named lambdas (i.e. lambdas assigned to local consts) covers this case if the language doesn't have dedicated syntax for it.
maybe one purpose. but it fulfills another purpose- self-documenting code, and a really simple non-nested main body to your function.
In Java, a local function reference (defined inside a method and never used outside of this method) is possible. Note that this function is not really tied to an object, which is why I don't call it a method, and I don't use the expression "method reference"; it is just tied to the function that contains it, which may be a method - or not.
Code can always be called outside of that scope just by returning function pointers or closures. The point is not to restrict calling that code, but to restrict the ability to refer to that piece of code by name.
As mentioned by others, C++ has lambdas. Even if you don't use lambdas, people used to achieve the same effect by using plenty of private functions inside classes, even though the class might have zero variables and simply holds functions. In even older C code, people are used to making one separate .c file for each public function and then define plenty of static functions within each file.
Of course all this needs to be weighed against maintainability and readability of the code. If the code base is not mainly about something very performance critical and this kind of thing shows to be a bottleneck, then changing things away from more readable towards performance optimized implementation would require a very good justification. I doubt, that this kind of optimization is justified in most cases. For that reason I find the wording "prematurely breaking out code" to be misleading. In most cases one should probably prioritize readability and maintainability and if breaking code out helps those, then it cannot be premature. It could only be premature from a performance limited perspective, which might have not much to do with the use case/purpose of the code.
It is nice, if a performance optimization manages to keep the same degree of readability and maintainability. Those concerns covered, sure we should go ahead and make the performance optimization.
What I'm advocating here is only coincidentally a performance optimization. Readability and maintainability (and improved abstraction) are the primary concern and benefit of (sometimes) keeping things inline or more specifically of reducing entropy.
Here is a followup post for those interested: https://benoitessiambre.com/integration.html
Related:
John Carmack on Inlined Code - https://news.ycombinator.com/item?id=39008678 - Jan 2024 (2 comments)
John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=33679163 - Nov 2022 (1 comment)
John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=25263488 - Dec 2020 (169 comments)
John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=18959636 - Jan 2019 (105 comments)
John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=14333115 - May 2017 (2 comments)
John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=12120752 - July 2016 (199 comments)
John Carmack on Inlined Code - https://news.ycombinator.com/item?id=8374345 - Sept 2014 (260 comments)
There is a longer version of this thought-provoking post, also including Carmack's thoughts in 02012, at https://cbarrete.com/carmack.html. But maybe that version has not also had threads about it.
It doesn't seem to have, since https://news.ycombinator.com/from?site=cbarrete.com is empty.
Should we change the top link to that URL?
I do think it's a better page, but I wouldn't change the link if I were in charge. On the other hand, I think everyone is grateful that you're in charge of HN and not me. Especially not me. So I think you should use your judgment.
Always read older stuff from Carmack remembering the context. He made a name for himself getting 3D games to run on slow hardware. The standard advice of write for clarity first, make sure algorithms have reasonable runtimes, and look at profiler data if it's slow is all you need 99% of the time.
I find the inlined style can actually improve clarity.
A lot of code written toward the "uncle bob" style where you maximize the number of functions has fantastic local clarity, you can see exactly what the code you are looking at is doing; but atrocious global clarity, where it's nearly impossible to figure out what the system does on a larger scale.
Inlining can help with that, local clarity deteriorates a bit, but global clarity typically improves by reducing the number of indirections. The code does indeed also tend to get faster, as it's much easier to identify and remove redundant code when you have it all in front of you. ... but this also improves the clarity of the code!
You can of course go too far, in either direction, but my sense is that we're often leaning much too far toward short isolated functions now than is optimal.
One thing that's nice about functions is that they force the associated block of code to be named, and for state that is specific to the function to be clearly separate from external state (closures aside). It would be good to be able to retain those advantages even in linear code that nevertheless has clear boundaries between different parts of it that would be nice to enforce or at least highlight, but without losing the readability of sequential execution.
To some extent you can have that in languages that let you create a named lambda with explicit captures and immediately invoke it, e.g. in C++:
    int g;

    void doThisAndThat(int a, int b, int c) {
        doThis: auto x = [&a, &b] {
            ...
        }();
        doThat: [&c, &x] {   // g has static storage, so it is accessible without capturing it
            ...
        }();
    }

The syntax makes it kind of an eyesore though. Would be nice to have something specifically designed for this purpose.

> atrocious global clarity
much like microservices.
And before that, 2D games (side-scrolling platformers were not a thing on PC hardware until Carmack did it, iirc). I think his main thing is balancing clarity - what happens when and in what order - with maintainability.
Compare this with enterprise software, which is orders of magnitude more complex than video games in terms of business logic (the complexity in video games is in performance optimization), but whose developers tend to add many layers of abstraction and indirection, so the core business process is obfuscated, or there's a billion non-functional side activities also being applied (logging, analytics, etc), again obfuscating the core functionality.
It's fun to go back to more elementary programming things, in e.g. Advent of Code challenges or indeed, game development.
"compare this with enterprise software, which is orders of magnitude more complex than video games in terms of business logic" Maybe this was true 20 years ago, but I do not think this is true today. Game code of some games is almost as complex as enterprise software or even more complex in some cases (think of grand strategy games like Civilization or Paradox games). The difference is that it still needs to be performant, so the evolutionary force just kills programmers and companies creating unperformant abstractions. In my opinion game programming is just harder than enterprise programming if we speak about complex games. (I have done both). The only thing which is easier in game programming is that it is a bit easier to see clearly in terms of 'business requirements', and also it is more meritocratic (you can start a game company anywhere on the globe, no need to be at business centers.) And of course game programming is more fun, so programmers do the harder job even for less money.
For people who think game programming is less complex than enterprise software, I suggest looking at the CharacterMovementComponent class in Unreal Engine, which is the logic for the movement of characters (people) in a networked game environment... Multiple thousands of lines of code in just the header is not uncommon in Unreal. And this is mostly not complex because of optimization. This is very complex and messy logic. Of course we can argue that networking and physics could be done in a simple naive way, which would be unacceptable in terms of latency and throughput, so all in all the complexity is because of optimization after all. But it is not the 'fun' elegant kind of optimization; it is close to messy enterprise software in some sense, in my opinion.
I have heard modern game development compared to OS development in terms of complexity and I think that comparison is quite apt; especially when the game involves intricate graphics and complicated networking involving multiple time scales as you say.
>Compare this with enterprise software, which is orders of magnitude more complex than video games in terms of business logic
I don't buy it in games like GTA, Cyberpunk or The Witcher 3.
In both design space and programming complexity, you're right.
> And before that, 2D games (side-scrolling platformers were not a thing on PC hardware until Carmack did it, iirc). I think his main thing is balancing clarity - what happens when and in what order - with maintainability.
Smooth side-scrollers did exist on the PC before Keen (An early one would be the PC port of Defender). Moon Patrol even had jumping in the early '80s.
Furthermore other contemporaries of Carmack were making full-fledged side-scrolling platformers in ways different from how Keen did it (there were many platformers released in 1990). They all involved various limitations on level design (as did what Keen used), but I don't believe any of them allowed both X and Y scrolling like the Keen games did.
I agree with this in general, but his essay on functional programming in C++ (linked at the top of the page) is phenomenal and is fantastic general advice when working in any non-functional language.
Link at the top of the page broken, found an archived version at https://web.archive.org/web/20120501221535/http://gamasutra....
Interesting: this is a 2014 post from Jonathan Blow reproducing a 2014 comment by John Carmack reproducing a 2007 e-mail by the same Carmack reproducing a 2006 conversation (I assume also via e-mail) he had with a Henry Spencer reproducing something else the same Spencer read a while ago and was trying to remember (possibly inaccurately?).
I wonder what is the actual original source (from Saab, maybe?), and if this indeed holds true?
Is this kind of like 300 was a movie about a Frank Miller novel about a Greek legend about the actual Battle of Thermopylae?
I have a coworker who LOVES to make these one- or two-line single-use functions, and it absolutely drives me nuts.
Just from a sheer readability perspective being able to read a routine from top to bottom and understand what everything is doing is invaluable.
I have thought about it many times, I wish there was an IDE where you could expand function calls inline.
It’s called “self-documenting code”, and the way you self-document code is by taking all your comments and making them into functions, named after your would-be comment.
I’m not a fan either.
Everything must be done to taste. I think code can be made "self-documenting" without going overboard and doing silly things.
This can be done in a good way and in bad ways. With most code you will be calling builtin procedures/functions. You also don't look under the hood for those usually. But for the code of your coworker it seems to irritate you. This could mean many things. Just to name a few: (1) The names are not giving a good idea what those functions do. (2) The level of abstraction is not the same inside the calling function, so that you feel the need to check the implementation detail of those small functions. (3) You don't trust the implementation of those smaller functions. (4) The separated out functions could be not worth separating out and being given names, because what the code in them does is clear enough without them being separated out. (n) or some other reason.
The issue does not have to be that those things are split out into separate small functions. The issue might be something else.
Sometimes it's easier to define some vocabulary and then use it. Like defining push and pop on a stack vs stack[++ix] = blah and blah = stack[ix--].
And it avoids needing to think about it being prefix or postfix after you've worked it out that one time.
But at other times it's insufferable, when the abstraction is leaky and unintuitive.
I find that when initially exploring a problem space, it's useful to consider functions as “verbs” to help me think through the solution, and that feels useful in helping me figure out a solution to my problem—I've isolated some_operation() into its own function, and it's easy to see at a glance whether or not some_operation() does the specific thing its name claims to do (and if so, how well).
But then after things have solidified somewhat, it's good practice to go back through your code and determine whether those “verbs” ended up being used more than once. Quite often, something that I thought would be repeated enough to justify being its own function, is actually only invoked in one specific place—so I go back and inline these functions as needed.
The less my code looks like a byzantine tangle of function invocations, and the more my code reads like a straightforward list of statements to execute in order, the better it makes me feel, because I know that I'm not unnecessarily hiding complexity, and I can get a better, more concrete feel for what my program's execution looks like.
I feel like this style is also encouraged in Go and / or the clean/onion architecture / DDD, to a point, where the core business logic can and should be a string of "do this, then do that, then do that" code. In my own experience I've only had a few opportunities to do so (most of my work is front-end which is a different thing entirely), the one was application initialisation (Create the logger, then connect to the database, then if needed initialize / migrate it, then if needed load test data. Then create the core domain services that uses the database connection. Then create the HTTP handlers that interface with the domain services. Then start the HTTP server. Then listen for an end process command and shut down gracefully), the other was pure business logic (read the database, transform, write to file, but "database" and "file" were abstract concepts that could be swapped out easily). You don't really get that in front-end programming though, it's all event driven etc.
> the one was application initialisation
...and then you want to parallelize as much as possible to allow for fast boot times which helps the development process immensely.
One of the things I've learned is that optimizing for developer quality of life is one of the best approaches when it comes to correctness and performance. Then, the developers would be able to run multiple iterations of the real thing.
"Typically I am there to rail against the people that talk about using threads and an RTOS for such things, when a simple polled loop that looks like a primitive video game is much more clear and effective. "
Yess, I finally feel vindicated. I've been having this argument with embedded people since forever. I was of the opinion that if million line big boy PC apps can make do with just one thread, having fifteen threads and synchronizing between them using mutexes and condition variables on a microcontroller with 64kb RAM is just bonkers.
For some reason, the statement that a while(true) loop + ISRs + DMA can do everything an RTOS like FreeRTOS can do, can rile up embedded folks to no end.
> I have gotten much more bullish about pure functional programming, even in C/C++ where reasonable: (link)
The link is no longer valid, I believe this is the article in question:
https://www.gamedeveloper.com/programming/in-depth-functiona...
Probably the more important link. He's basically saying his old email is outdated and he does pure FP now.
This is over a decade old at this stage, it would be interesting to know how his thoughts have evolved since.
This largely concurs with clean architecture[1], especially considering his foreword containing hindsight.
Clean architecture can be summarized thusly:
1. Bubble up mutation and I/O code.
2. Push business logic down.
This is how it's stated in [1]:
> The concentric circles represent different areas of software. In general, the further in you go, the higher level the software becomes. The outer circles are mechanisms. The inner circles are policies.
Inlining as a practice is in service of #1, while factoring logic into pure functions addresses #2, noted in the foreword:
> The real enemy addressed by inlining is unexpected dependency and mutation of state, which functional programming solves more directly and completely. However, if you are going to make a lot of state changes, having them all happen inline does have advantages; you should be made constantly aware of the full horror of what you are doing. When it gets to be too much to take, figure out how to factor blocks out into pure functions (and don't let them slide back into impurity!).
1: https://blog.cleancoder.com/uncle-bob/2012/08/13/the-clean-a...
I think when developing something from scratch, it's actually not a terrible strategy to do this and pick out boundaries when they become clearer. Creating interfaces that make sense is an art, not a science.
> The function that is least likely to cause a problem is one that doesn’t exist, which is the benefit of inlining it.
I think that summarizes the case pro inlining.
For some reason this quote by Carmack stands out for me:
> "it is often better to go ahead and do an operation, then choose to inhibit or ignore some or all of the results, than try to conditionally perform the operation."
I'm not the audience for this topic, I do javascript from a designer-dev perspective. But I get in the weeds sometimes, maxing out my abilities and bogged down by conditional logic. I like his quote it feels liberating... "just send it all for processing and cherry-pick the results". Lightbulb moment.
my CPU does that all of the time, it is speculative execution :-)
I wish languages had the following:
let x = block {
…
return 5
} // x == 5
And the way to mark copypaste, e.g. common foo {
asdf(qwerty(i+j));
printf(“%p”, write));
bar();
}
…(repeats verbatim 20 times)…
…
common foo {
asdf(qwerty(i+k));
printf(“%d”, (int)write); // cast to int
bar();
}
…
And then you could `mycc diff-common foo` and see:
<file>:<line>: common
<file>:<line>: common
…
<file>:<line>:
@@…@@
-asdf(qwerty(i+j));
+asdf(qwerty(i+k));
@@…@@
-printf(“%p”, write));
+printf(“%d”, (int)write); // cast to int
With this you could track named common blocks (allowing the use of surrounding context like i, j, k) without them being functions and subject to the functional entanglement $subj discusses. Most common code gets found out and divergences stand out. IDE support for immediate highlighting, snippeting and auto-common-ing similar code would be very nice. Multi-patching common parts while easily reviewing the results would also be great, because the bugs from calling a common function arise from the fact that you modify it and it suddenly works differently for some context. Well, you can mark a common block as fragile and then ignore it while patching:
common foo {
// @const: modified and fragile!
…
}
You still see the differences, but it doesn't get included in a multi-patch dialog. Not expecting this to appear anywhere though; features like that are never considered. Maybe someone interested can feature it in their circles? (without my name associated)
In C++ it's an idiom to use immediately invoked lambdas:
    auto x = []{
        /*...*/
        return 5;
    }();
There is/was an attempt to introduce more of a first-class language construct for such immediate "block expressions": https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p28...
I'm not convinced that automatic checking of copy-paste errors of such blocks make much sense though. At least I think the false positive rate would be way too high.
GCC has supported statement expressions for ages: https://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html
They're also used extensively in the Linux kernel, mostly to implement macros: https://github.com/search?q=repo%3Atorvalds%2Flinux%20%22(%7...
IIFE exist, but are cumbersome to type/read in most languages. C++ is probably the winner by syntax and semantics here.
> false positive rate would be way too high
The key idea is not to have identical blocks, but to have a way to overview changes in similar code, similar by origin and design. It’s a snippet diff tool, not a typo autocorrector. There are no false positives, because if “common foo” has zero diff in all cases, it probably should be “foo(…)”.
Now if only c++ could guarantee copy elision from lambda returns...
There is a lot of guaranteed copy elision since C++17, what exactly do you mean?
It depends if using C++17 and later versions.
You can do this in...
    # perl
    my $x = do {
        …
        5
    };

    ;; Rebol/Red
    x: do [
        …
        5
    ]
Regarding compound statements returning values: There are a number of languages which have that, including Rust. Ironically, it made me wish for a reversed form of the construct, i.e. something like
{ ...; expr } --> x;
// x is a new variable initialized to expr
I feel like this would help readability when the compound statement is very large.

Can you help me understand why this would be beneficial, other than avoiding using the word "function"?
I guess you’re asking about the block part — it’s a minor syntactic convenience and not the main point of the comment. It avoids the word function/lambda or def-block and related syntactic inconvenience like parentheses around and at the end and interference with ASI (when applicable).
You're looking for blocks-as-expressions, e.g. the following is valid Rust:
    let x = {
        whatever;
        5
    }; // assigns 5 to x
In my opinion, there is value in functions that have only one caller: it's called functional decomposition. The right granularity of functional decomposition can make the logic easier to understand.
To prevent unintended uses of a helper function in C, you can make it static. Then at least nothing from outside of that translation unit can call it.
> The fly-by-wire flight software for the Saab Gripen (a lightweight fighter) went a step further...
I would love to hear some war stories about the development of flight software. A lot of it is surely classified, but I'm fascinated by how those systems are put together.
I think the major problem with this is scope. Now a variable declared at the top of your function is in scope for the entire function.
Limiting scope is one of the best tools we have to prevent bugs. It's one reason why we don't just use globals for everything.
You can artificially create scope. I often write code like:
    Foo f = null;
    {
        ... stuff with variables
        f = barbaz;
    }
Is the point that any var declared in between the braces automatically goes out of scope, to minimize potential duplication of var names and unintended behavior ?
The worst I've seen is old school C programmers who insisted on reusing loop variables in other loops. Even worse, those loop variables were declared inside the loop declaration, which old C standards allowed to be visible outside of it.
So they would have stuff like this
for(int i=0; i<10; i++) { ... }
for (;i<20;i++) { ... }
Later versions of C++ disallowed this, which led to some interesting compile failures, which led to insistence by the old stubborn programmers that new compilers simply not be used.

Now you have to make `f` nullable and you run the risk of not initialising it and getting a null pointer.
You can't do it in C, but in functional style languages you can do this:
    let f = {
        let bar = ...;
        let baz = ...;
        let barbaz = ...;
        barbaz
    };
Which is a lot nicer. But if you ask me it's just a function by another name except it still doesn't limit scope quite as precisely as a function.

In C++ you can do:
    auto f = [&] {
        ...
        return barbaz;
    }();

with the side benefit that you can also make the use of state explicit inside of [] instead of using wildcard capture.

Given that it's neither reused nor parametrized, I'm not sure why you see this kind of pattern as a "function by another name", though. Semantically it's more of a namespace if anything.
That literally is a function. I guess the important difference is you can easily confirm by inspection that it is only called once?
If that's an important property maybe it would be worth supporting an annotation on normal functions to enforce that. I guess you could easily write a linter for that.
Syntactically it is, but semantically it's really more of an isolated block IMO because not only it's called only once, but that call happens immediately (so no back-and-forth control flow unlike regular functions), and the lambda is not passed anywhere as a value either.
GCC and clang (and maybe others) have 'statement expressions': https://godbolt.org/z/sqYnbh4Ej
Link to the Wayback Machine cache/mirror, in case you're also experiencing a "Bad Gateway/Connection refused" error
https://web.archive.org/web/20241009062005/http://number-non...
> No bug has ever been found in the “released for flight” versions of that code.
I thought that at least his crash was a result of bad constants in flight software: https://www.youtube.com/watch?v=SWZLmVqNaQc
The first comment appears to agree with me.
I’m not even pretending I understood Carmack’s email/mailing list post but if more intelligent/experienced programmers than me care to help me out, what exactly is meant by this he wrote in 2007:
_If a function is called from multiple places, see if it is possible to arrange for the work to be done in a single place, perhaps with flags, and inline that._
Thanks,
This is a heavily simplified version of what I suspect he's trying to portray; the key point is that this wouldn't be useful for utility functions like string manipulation, but more for business logic being used across similar functions:
    def processOrder():
        # Some common processing logic
        print("Processing the order...")

    def placeOnlineOrder():
        processOrder()
        print("Sending confirmation email...")

    def placeInStoreOrder():
        processOrder()
        print("Printing receipt...")

    # Calls from different locations
    placeOnlineOrder()
    placeInStoreOrder()

Could become:

    def processOrder(order_type):
        # Common processing logic
        print("Processing the order...")
        if order_type == "online":
            print("Sending confirmation email...")
        elif order_type == "in_store":
            print("Printing receipt...")

    # Unified calls with different flags
    processOrder("online")
    processOrder("in_store")
That... looks decidedly worse. Now you have fewer functions that need to be concerned with multiple unrelated things for no reason.
Come to think of it, execute-and-inhibit style as described here is exactly what's going on when in continuous deployment you run your same pipeline many times a day with small changes, and gate new development behind feature flags. We're familiar with the confidence derived from frequently repeating the whole job.
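A small sketch of that "do it always, then inhibit or ignore" shape (Python, hypothetical flag and numbers), as opposed to conditionally skipping the work:

    NEW_PRICING_ENABLED = False  # hypothetical feature flag

    def quote_price(order_total: float) -> float:
        old_price = order_total * 1.00
        new_price = order_total * 0.95   # always computed, exercised on every run

        # The branch only selects a result; it never skips a code path.
        return new_price if NEW_PRICING_ENABLED else old_price

The new path gets executed (and breakage gets noticed) on every run, even while its result is still being ignored in production.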
I have been super picky about what JC says since he moved the id engine from plain and simple C99 to C++.
Can someone explain what inlined means here? It was my assumption that the compiler will automatically inline functions and you don't need to do it explicitly. Unless it means something else in this context
It means not using explicit functions, just writing the same code as little inline blocks inside the main function because it allows you to see all the things that would be hidden if all the code wasn't immediately visible.
To the other point though, the quality of compiler inlining heuristics is a bit of a white lie. The compiler doesn't make optimal choices, but very few people care enough to notice the difference. V8 used a strategy of considering the entire source code function length (including comments) in inlining decisions for many years, despite the obvious drawbacks.
Well there's also compiler directives other than `inline`, like msvc's `__inline` and `__forceinline` (which probably also have an equivalent in gcc or clang), so personally I don't think you need to make the tradeoff between readability and reusability while avoiding function calls. Not to mention C++ constevals and C-style macros, though consteval didn't exist in 2007
__forceinline is purely a suggestion to the compiler, not a requirement. Carmack's point isn't about optimizing the costs of function calls though. It's about the benefits to code quality by having everything locally visible to the developer.
It's an interesting view because I find neatly compartmentalized functions easier to read and less error prone, though he does point out that copying chunks of code such as vector operations can lead to bugs when you forget to change some variable. I guess it depends on the function. Something like
    float c = dotProduct(a, b);
is readable enough and doesn't warrant inlining, I think. There's nothing about `dotProduct` that I would expect to have any side effects, especially if its prototype looks like:
    float dotProduct(Vector const& a, Vector const& b);
That's a pure function, which he says should be the goal. It's impure functions that he's talking about.
Hey, aren't you that guy who the FBI is investigating for crypto related fraud??!
(2014)
First discussed here back then: https://news.ycombinator.com/item?id=8374345
Thanks! Macroexpanded:
John Carmack on Inlined Code - https://news.ycombinator.com/item?id=39008678 - Jan 2024 (2 comments)
John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=33679163 - Nov 2022 (1 comment)
John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=25263488 - Dec 2020 (169 comments)
John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=18959636 - Jan 2019 (105 comments)
John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=14333115 - May 2017 (2 comments)
John Carmack on Inlined Code (2014) - https://news.ycombinator.com/item?id=12120752 - July 2016 (199 comments)
John Carmack on Inlined Code - https://news.ycombinator.com/item?id=8374345 - Sept 2014 (260 comments)
2007
How does a program work when it disallows "backward branches"? Same thing with "subroutine calls": how do you structure a program without them?
You can do a lot with a program that looks like:
    while (1) {
        if (condition1)
            ...
        if (condition2)
            ...
        // etc
    }
Subroutine calls can be eliminated by inlining everything, using macros to make the code more manageable. Loops can be simulated using macros that expand to multiple copies of the code, one for each step.

One advantage is that the program will never get into an unbounded loop because the program counter will always advance towards the end of the main loop.
Well, you have one backward branch at the end of the program, and you inline your subroutines. I'm pretty sure you've written shaders for ancient GPUs that had similar limitations? And anything you can do in hardware you can do without subroutine calls, and in hardware the loop starts again on every clock cycle.
It allows one backward branch. Think of it like hand-rolling your OS scheduler for processes/threads. You also have to track your "program counter" yourself. As a silly example:
    typedef enum state {EVEN, ODD} state_t;

    state_t task1 = EVEN;
    state_t task2 = EVEN;

    while (1) {
        switch (task1) {
        case EVEN:
            // do even things
            task1 = ODD;
            break;
        case ODD:
            // do odd things
            task1 = EVEN;
            break;
        default:
            fprintf(stderr, "WTF?\n");
            exit(1);
        }
        switch (task2) {
        case EVEN:
            // do even things
            task2 = ODD;
            break;
        case ODD:
            // do odd things
            task2 = EVEN;
            break;
        default:
            fprintf(stderr, "WTF?\n");
            exit(1);
        }
    }
For every "process" you've unrolled like this, you have to place it into its own switch/case or call out to a function which has similar logic (when subroutines aren't disallowed). If the process is short enough you let it execute all the way through, bigger processes would need to be broken apart like above to avoid consuming an entire cycle's time (especially important in real-time systems).My browser says "The connection to number-none.com is not secure". Guess it is only a matter of time until HTTPS becomes mandatory.
> do always, then inhibit or ignore strategy
can anyone expound on this? I'm not sure what he's exactly referring to here
There is actually a major problem with long functions - they take a long time to compile, due to superlinear complexity in computation time as a function of function length. In other words, breaking up a large function into smaller functions can greatly reduce compile times.
That honestly feels like a minor problem, and not something to optimize for. Also an aggressively inlining compiler will experience exactly the same problem. AFAIK at least clang always inlines a static (as in internal linkage) function if it's used only once in the translation unit, no matter how large it is.
Visual studio doesn't do that inlining. And it is a significant problem, I have had to refactor my code into multiple functions because of it.
It might be a significant problem, but not in the code, but the compiler. Fair enough, you are working around a compiler issue.
If you consider any superlinear complexity a 'compiler issue' I guess.
It absolutely is, if it makes compile times unreasonable for reasonable code. Compilers have to make trade-offs like this all the time, they can't use overly excessive optimizations.
I dunno. O(n^2) is for sure a bug. But O(nlogn) I think is reasonable.
O(nlogn) is probably reasonable. Why break up a long function then if you are experiencing O(nlogn) scaling of compile time on function size?
Because it can still result in compile times I find excessive. For example breaking up a function that takes 5 seconds to compile into a bunch of functions that take 1 to 2 seconds in total.
If you are willing to make code worse to micro optimize compile times (not even sure this is true) then you should not use any modern language with complex type checking (rust, swift, C#, etc).
Writing a medium to large program in C++, you really need to fight long compile times or they can get out of hand. That affects the way you write code quite a lot, or it should at least. I've heard Rust and Swift also suffer from long compile times.
Agreed.
But For C++ template combinatorics are going to dominate any slow down due to function length.
How much of this is specific to control loops that execute at 60hz?
None.
> The real enemy addressed by inlining is unexpected dependency and mutation of state, which functional programming solves more directly and completely. However, if you are going to make a lot of state changes, having them all happen inline does have advantages; you should be made constantly aware of the full horror of what you are doing. When it gets to be too much to take, figure out how to factor blocks out into pure functions (and don't let them slide back into impurity!).
Some years ago at job foo I wrote a Ruby library that was doing some stuff. Time was of the essence, I was a one-man team, and the trickiness of it required a clear understanding of the details, so I wrote a single ~1000 LOC file comprising the entirety of the namespace module of that library, with but a couple or three functions.
Then a new hire joined my one-man team. I said: apologies for this unholy mess, it's overdue for a refactoring, with a bunch of proper classes with small methods and split in a few files accordingly. They said: not at all, the code was exceptionally clear; I could sit and understand every bit of it down to the grittier critical details in under an hour, and having seen it written this way it is obvious to me that these details of interactions would not have been abstracted away, but obscured away.
I have worked with many developers and I have seen them follow two distinct paths when encountering complex code.
There's one camp that wants to use abstractions and names, and there's another (in my experience, smaller) camp which prefers to have as few abstractions as possible, and "every gritty detail visible".
I think both strategies have advantages and disadvantages. The group that likes abstractions can "ignore parts of the code" quickly, which potentially makes them "search" faster. If there's a bug that needs fixing, or a new feature that needs to be added, they will reach the part of the code that will need modifications faster.
The detail-oriented people can take a bit longer to identify the code that needs modification, but they also tend to be able to make those modifications faster. They also tend to be great "spelunkers". They seem to have a "bigger cache", so to speak. But it is not infinite. They will eventually not be able to hold all the complexity in their heads, just like the first group. It will just take a bit longer.
I am firmly on the first group and that is how I write my code. I have been fortunate enough to encounter enough people from the other group to know not to diss their code immediately, and to appreciate it for its merits. When working in a team with both kinds of personalities one has to make compromises ("please remove all of these 1-line functions, Jonathan will hate them", and "could you split this 3k lines function into 2 or 3 smaller ones, for easier review?").
Some might consider me part of the "second group", but I'm perfectly fine with abstractions and I create them all the time.
I do however have a problem with indirections that don't really abstract anything and only exist for aesthetical reasons.
Not every function/method is an "abstraction". Having too many one-line methods is as bad as pretending that functions with 2k/3k lines are appropriate in all cases.
Your final ten words of the comment are a perfectly concise explanation of the problem; thank you! And it drives home something I often forget about why code units should do Only One Thing.
Thing is, a lot of developers see long code and think "this is a Bad Thing" because of dogma, but in practice, a lot of developers never actually wrote anything nontrivial like that.
> Minimize control flow complexity and “area under ifs”, favoring consistent execution paths and times over “optimally” avoiding unnecessary work.
If your control loop must always run under 16ms, you better make sure the worst case is under 16ms rather than trying to optimise the best or mid case. Avoid ifs that skip processing; that's good for a demo but doesn't help in reaching prod quality goals. Sometimes it doesn't bring the benefits you think, sometimes it hides poorly optimised paths, sometimes it creates subtle bugs. Of course, always use your own discernment...
That would be very different in a typical cloud app, where the goal is to keep CPU, memory and network usage as low as possible, without much concern for a constant response time on each REST endpoint.
All the code that is not on the hot path can follow whatever rules you like, and is typically designed according to something like SOLID, to make understanding and maintenance as simple as possible (and suitable for any average coder).
All the code whose performance, memory cost, etc. is critical should be adjusted to fit its required envelope even if that violates all the other tenets. This often results in a combination of opposing approaches - whatever works.
Finally, one just profiles the code and fixes the most expensive paths. That is something any average programmer can do nowadays. What they can't do, and what Carmack has been doing for decades, is predict such places and fix them proactively at the architectural level, and find tricky solutions that average joe-the-programmer has never heard of.
Mostly, all of it. People who are not writing that kind of loop probably should not do any of this. Optimize for code clarity, which may involve either inlining or extracting depending on the situation.
One benefit that I can think of for inlined code is the ability to "step" through each time step/tick/whatever and debug the state at each step of the way.
And one drawback I can think of is that when there are more than something like ten variables, finding a particular variable's value in an IDE debugger gets pretty difficult. It would be at this point that I would use "watches", at least in JetBrains IDEs.
But then, yeah, you can also just log each step in a custom way, verifying that the key values are correct, which is what I am doing as we speak.
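For what it's worth, the "log each step" approach can look as simple as this (the tick structure and numbers are made up purely for illustration):

    #include <cstdio>

    struct TickState { float position = 0.f, velocity = 0.f, error = 0.f; };

    void tick(TickState& s) {
        // step 1: integrate position
        s.position += s.velocity * 0.016f;
        std::printf("after integrate: pos=%.3f vel=%.3f\n", s.position, s.velocity);

        // step 2: compute error against a fixed target
        s.error = 1.0f - s.position;
        std::printf("after error:     err=%.3f\n", s.error);

        // step 3: adjust velocity from the error
        s.velocity += 0.5f * s.error;
        std::printf("after adjust:    vel=%.3f\n", s.velocity);
    }

    int main() {
        TickState s;
        for (int i = 0; i < 3; ++i) tick(s);
    }

Because the steps are inlined in one tick function, the interesting values are all in scope and trivial to dump after each step.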
The clean code people are losing their collective minds reading that. lol
Why's that? Uncle Bob seems pretty clear that most of your code should be free of side effects, and that necessary state mutation should be isolated to one place. Carmack is saying the same thing.
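For anyone skimming, the overlap between the two positions is roughly this shape (a minimal sketch with invented types, not a quote of either author): compute purely, and confine the actual mutation of shared state to one spot.

    #include <vector>

    struct World { std::vector<float> health; };

    // Pure: takes the old world, returns the next one, touches nothing else.
    World step(const World& w, float damage) {
        World next = w;
        for (float& h : next.health) h -= damage;
        return next;
    }

    void run_frame(World& world) {
        // The only mutation of shared state in the whole frame happens here.
        world = step(world, /*damage=*/1.0f);
    }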
He's definitely not talking about 5 line functions and DRY.
Oh good, a FP post. I love watching people argue over nothing.
Here’s the actual rule: do what works and ships. Don’t posture. Don’t lament. Don’t idealize. Just solve the fucking problem with the tool and method that fits, and move on.
And do not try to use this comment threat to understand FP. Too many cooks, and most of them are condescending douchebags. Go look at Wikipedia or talk with an AI about it. Don’t ask this place, it’s all just lectures and nitpicks.
Ironically, this comment adds nothing to the post and is needlessly belligerent and condescending.
“Comment threat” is a nice one.
This isn't actually an FP post.
(2014)
Ten years ago - a long time in coding.
It's at least twenty new web frameworks, but maybe not so long in low-level stuff. You can probably rely on C99 being available now more than you could in 2014.
This is the real religious war among programmers -- it's a genuinely consequential question: someone who favors abstraction and modularity is going to absolutely hate working in a codebase with pervasively inlined code.
It's clear that Carmack's article is addressing a particular sort of C++ codebase that might be familiar to game developers, but isn't familiar to a lot of us here who work on web applications and backend distributed systems. His "functions" aren't really what we think of as functions: they're clearly mutating huge amounts of global state. They sound more like highly undisciplined methods on large namespaces. You can see that from the following quotes:
> There might be a FullUpdate() function that calls PartialUpdateA(), and PartialUpdateB(), but in some particular case you may realize (or think) that you only need to do PartialUpdateB(), and you are being efficient by avoiding the other work. Lots and lots of bugs stem from this. Most bugs are a result of the execution state not being exactly what you think it is.
> if a function only references a piece or two of global state, it is probably wise to consider passing it in as a variable.
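The second quote is easy to illustrate; a toy version (the global and the function names are hypothetical, not from Carmack's code) might look like:

    struct Camera { float x = 0.f; };

    float g_frame_time = 0.016f;  // the kind of global the quote is talking about

    // Before: the dependency on g_frame_time is invisible at the call site.
    void move_camera_implicit(Camera& cam, float speed) {
        cam.x += speed * g_frame_time;
    }

    // After: the same piece of state is passed in, so it shows up in the
    // signature and the function becomes trivial to test in isolation.
    void move_camera(Camera& cam, float speed, float frame_time) {
        cam.x += speed * frame_time;
    }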
In the world of many people here, i.e. away from Carmack's C++ game dev codebases of the 2000s with huge amounts of global mutable state, the standard common sense still applies: we invented structured programming with functions for profoundly important reasons: modularity and abstraction. Those reasons haven't gone away; use functions.
- In a large codebase you do not need or want to read the full tree of implementation in one go. Use functions: they have return types; you know what they do. A substantial piece of implementation should be written as a sequence of calls to subfunctions with very carefully chosen names that serve as documentation in themselves.
- Make your functions as pure as possible subject to performance considerations etc.
- This brings a huge advantage to helper functions over inlining: it is now easy to see which variables in the top-level function are being mutated.
- The implementation is much harder to understand in a single function with 10 mutable variables than in two functions with 5 mutable variables each. I think ultimately that's just a fact of combinatorics, not something we can hold opinions about. (See the sketch after this list.)
- But sure, if the 10 mutable variables cannot be decomposed into two independent modules then don't create spurious functions.
- A separate function is testable; a block inside a function is not. It wasn't really clear that the sort of test suites that many of us here work with were part of Carmack's codebases at all!
- It is absolutely fine to use a function if it improves modularity / readability, even if it is only called once.
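To show what I mean by the mutable-variable point above, here's a contrived sketch (all names invented): the top-level function stays readable because the helpers take inputs and return values instead of sharing a pile of mutable locals.

    #include <string>
    #include <vector>

    struct Order { double price; int quantity; };

    double total_revenue(const std::vector<Order>& orders) {
        double sum = 0.0;
        for (const Order& o : orders) sum += o.price * o.quantity;
        return sum;
    }

    double average_order_value(const std::vector<Order>& orders) {
        return orders.empty() ? 0.0 : total_revenue(orders) / orders.size();
    }

    std::string build_report(const std::vector<Order>& orders) {
        // Reads like documentation: each line names what it computes, and the
        // return types tell you what flows where. Nothing is mutated here
        // except the two consts being initialized.
        const double revenue = total_revenue(orders);
        const double average = average_order_value(orders);
        return "revenue=" + std::to_string(revenue) +
               " average=" + std::to_string(average);
    }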
Who read this in John Carmack's voice?