hckrnws
This is one of the reasons I like Clojure. There are very useful dialects with broad overlap between:
Browser / JavaScript environments -> ClojureScript
General Purpose (JVM) -> Clojure
Fast Scripting -> Babashka (although I've used ClojureScript for this in the past)
C/C++ Interop (LLVM-based) -> Jank (new, but progressing rapidly and already useful)
I can largely write the same expressive code in each environment, playing to the platform strengths as needed. I can combine these languages inside the same project, and have libraries that expose unified APIs across implementations. I can generally print and read EDN across implementations, provided I register the right tag handlers for custom types (this is one area jank still has to catch up). Reader conditionals allow implementation-specific code as needed.
I'm really excited about Jank giving me a good alternative to JNI/JNA/Panama when I need my Clojure to touch OS parts the JVM hasn't wrapped.
One thing I'll note is we tend to use languages from different levels in different settings (front end, back end, systems) and we spend an awful lot of time writing glue code to get them to talk to each other.
A major advantage of the proposed approach is automated FFI and serialization/deserialization between languages in the same language set. RustScript would be able to accept a struct or enum from Rust or RustGC, and vice-versa. You could have a channel with different languages on either end.
You can also see that we _want_ something like this, e.g. we bolt TypeScript on top of JavaScript, and types onto Python. If JavaScript (or python) were designed so they could be more easily compiled (notably, no monkey patching) then they would support level 2 as well.
I have been thinking of level 2 or 1 languages that support the higher levels. This is a really good framing. (The problem with going the other way is that the implementation decisions in the interpreter often constrain how the compiler can work, e.g. CPython is dominant because of all the libraries that make use of the CPython FFI, and similarly for NodeJS. It is easier to interpret a constrained language than to compile a dynamic language designed with an interpreter in mind.)
Back when I did some high-perf Python, I'd define my data as C structs and bump-allocate those structs in a list using cffi.
It is not unlike defining your data model for SQL so that you can have sane data access.
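The commenter mentions cffi; a minimal sketch of the same idea using only the stdlib `ctypes` module (the struct name and fields here are illustrative, not from the original):

```python
import ctypes

# A fixed-layout record, analogous to what cffi's cdef would declare.
class Point(ctypes.Structure):
    _fields_ = [("x", ctypes.c_double), ("y", ctypes.c_double)]

# "Bump allocation": reserve one contiguous block for N structs up front,
# then fill slots in order, instead of heap-allocating a Python object
# per record.
N = 1000
arena = (Point * N)()

for i in range(N):
    arena[i].x = float(i)
    arena[i].y = float(i) * 2.0

# Contiguous, cache-friendly access with a fixed schema.
print(arena[10].x, arena[10].y)
```

The payoff is the same as the SQL analogy above: the data layout is declared once, up front, and access stays predictable.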
This, 100%. It's madness that languages are effectively siloed from each other.
I think Peter Naur's description of levels of computation is a better one for considering an actual layering of levels of abstraction:
> Each level is associated with a certain set of operations and with a programming language that allows us to write or otherwise express programs that call these operations into action. In any particular use of the computer, programs from all levels are executed simultaneously. In fact, the levels support each other. In order to execute one operation of a given level, several operations at the next lower level will normally have to execute. Each of these operations will in their turn call several operations at the still lower level into execution.
The old term "problem-oriented languages" seems to still be quite useful. Programming languages are always focused on allowing the programmer to solve a set of problems and their features hide irrelevant details.
These language sets seem like a helpful grouping of features that suit particular problem domains but I don't think it works as a taxonomy of levels of abstraction.
Especially with LLMs to assist we don't gain much anymore from making everything one syntax, one language, etc. Projects like Dotnet Blazor/ASP.NET or Python Streamlit/Dash IMO are forced and are more trouble than they are worth. The OP suggestion, where everything is Rust, has the same problem; it's too forced.
We should embrace the domain-specific niceties; there is room for lots of languages, and they can iterate more quickly, try new things, and specialize syntax to the domain.
> One language could combine the 2nd and 3rd level though. A language that can be interpreted during development for fast iteration cycle, but compiled for better performance for deployment. There isn’t such a language popular today though.
I'm not sure if Dart counts as "popular", but it otherwise fits this bill. It has a JIT and can startup pretty quickly and interpret on the fly. You can also hot reload code changes while a program is running. And it can ahead-of-time compile to efficient machine code when you're ready to ship.
This is a better taxonomy of what a language is rather than the dated concept of “High-level” vs. “Low-level”.
Where would Haskell go?
Erlang and Elixir?
Perhaps together with Agda (compiles to Haskell, has FFI to it, is higher-level), some non-pure ML, and maybe Rust or ATS?
Haskell is listed.
I’m a strong supporter of adding an automatic GC to Rust. Although it seems difficult to justify as RustGC code wouldn’t be trivial to convert to traditional Rust. But going in the opposite direction should be trivial.
> Now let’s address level 4. Big players sit at this level, perhaps the most popular languages by headcount of their programmers. The problem with a lack of static typing is that it’s hard to work on such code in groups and at scale. Every successful business started with those languages eventually rewrites their codebase to use one of the “lower level” languages because big codebases written by many people are hard to maintain and modify without the support of a static type-checker. They are still great languages for solo, small projects, especially if the code can be easily automatically tested.
This is totally made-up nonsense. I've worked in Python for over a decade, and at multiple successful companies that have been running quarter-million-plus-line Python codebases for 8+ years.
Proponents of static typing like to sound alarms that it's impossible to scale dynamic codebases, when they lack the experience in those languages to know how people solve scaling problems in them.
I'm not hating on static languages, but I think they involve more tradeoffs than proponents of static typing admit. Time spent compiling is pretty costly, and a lot of codebases go to great lengths to somewhat bypass the type system with dependency injection, which results in much more confusing codebases than dynamic types ever did.
Meanwhile, many of the world's largest and longest-maintained codebases are written in C, which is only half-assedly type-checked at any point, and is much harder to maintain than dynamic languages. The idea that projects reach some point of unwieldiness where every one of them gets rewritten is just not correct.
I might have gone a bit easier on this if the author hadn't said "Every successful business..."--the word "every" really is just way too far.
EDIT: I'll also note that just because a language isn't statically typed, doesn't mean it gains no benefit from type checking. JavaScript and Python are not created equal here: JavaScript will happily let you add NaN and undefined, only to cause an error in a completely unrelated-seeming area of the codebase, whereas Python generally will type check you and catch errors pretty close to where the bug is.
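The distinction the commenter draws can be shown in a few lines: Python refuses mixed-type arithmetic at the call site, so the traceback points at the actual bug, whereas JavaScript would produce `NaN` and fail somewhere unrelated later. (The `total` function is an illustrative example, not from the original.)

```python
# Summing prices where a None has snuck in, e.g. a missing field.
def total(prices):
    return sum(prices)

try:
    total([9.99, None, 4.50])
except TypeError as e:
    # Python raises immediately, at the bad addition itself.
    print("caught:", e)
```

In JavaScript, `9.99 + undefined` silently yields `NaN`, which can propagate through many layers before anything visibly breaks.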
TLDR; Let's rewrite everything in 3 languages: Rust, RustGC, and RustScript!
Ugh.
That's missing out on the nice idea of four (or five) language "levels"
> 4: Interpreted, dynamically typed: JavaScript, Python, PHP
> 3: Interpreted, statically typed: Hack, Flow, TypeScript, mypy
> 2: Compiled with automatic memory management (statically typed): Go, Java (Kotlin), C#, Haskell, Objective-C, Swift
> 1: Compiled with manual memory management (statically typed): Rust, C, C++
> There is a 0th level, assembly, but it’s not a practical choice for most programmers today.
and it's also missing out on the generic hypothesis that a language is needed between levels 2 and 3, which is interpreted for fast turn-around times but also compilable for fast run time.