I don't think it's the "least amount of magic". A lesser amount of magic would be what rust has: let s = include_bytes!("./file.txt");
As a bonus, if you typo the "include_bytes!" macro as, say, "includ_bytes", you get a compiler error.
With the go one, if you typo it as "//gp:emebd hello.txt", then you get no compilation error, just an empty string.
If someone compiles your program with an old version of the go compiler, they get no error, just a totally broken program.
Meaningful comments are some of the worst magic you can add to a language IMO.
It's far simpler to reason about compiler-checked preprocessor instructions or macros, both of which are valid alternatives the go team could have added just as easily.
The go authors could do what they originally did with generics: only let the compiler/stdlib have macros.
Also, go users have _already_ started parsing comments for magic strings, so it's not like setting an example of using comments is stopping users from doing this crap too.
A macro language which is defined in a clear and consistent way definitely seems less magic than a macro language which has no real definition, other than examples of comments.
Actually, it's not. The compiler using the comments for non-commenting purposes makes me puke. It's a dirty workaround and degradation of the language.
If you want a new compiler feature, add a new statement.
I do agree, I feel like they’ve used this hack to sneak changes into the language which technically don’t break their Go 1 guarantee. I hope they tidy that up in Go 2 and add proper macro commands with a syntax the compiler checks.
Otherwise these magic comments will grow into their own unaudited meta language with no formal spec, just a few usage examples.
This and struct tags (another stringy metalanguage) are the bits I’m not keen on in Go.
> I feel like they’ve used this hack to sneak changes into the language which technically don’t break their Go 1 guarantee.
The go 1 guarantee only says that existing go programs will continue to compile correctly on new releases. It doesn't say that new programs will compile correctly on old releases.
I think Julia makes clear how nice a good macro system can be. From simple things like @btime to the domain-specific languages added by libraries like JuMP, macros just make a language way better. The only hard part is that once you have a good macro system, you will get lots of new users trying to use it who shouldn't.
>Go is still my favorite language when it's the right tool for the job.
Forgive my ignorance, when would you recommend go? As in what problem domains have you seen it excel at?
I have thought about trying it in the past and even tinkered with a project for a bit but didn't quite find it useful enough to be worth the tooling cost (at the time). Would like an excuse to check it out again.
It's easy to learn and great for getting something useful deployed pretty quickly. In both undergrad and grad school, I often chose Go, and my projects -- using the same algorithms as classmates -- were finished sooner, ran faster, had fewer bugs, and used fewer resources.
Go is pretty good for almost anything server-side. I wrote a web server in Go that a lot of businesses use in production. It's a very practical "get things done" language and is generally associated with developer satisfaction/happiness. Almost anywhere you use C/C++, I would try to use Go.
It's probably easier to enumerate what Go is not good at: native GUIs, C interop, some kinds of data science (though I used it as much as Python in college), and some kinds of very low-level programming (though it can be good for embedded systems, if you can stomach large binaries).
Go is very opinionated, and if you try to shoehorn it into some existing paradigm, you will probably not enjoy the experience. If you follow the happy path, you'll have a much better time; for me at least, I decided that the things I liked about OOP were actually more elegantly addressed with Go, notably decomposing inheritance into polymorphism (interfaces and callbacks), composition, and automatic delegation ("struct embedding").
You don't. Elaborate abstraction is often a mistake.
Which aspects do you like? Personally I have grown to dislike the pointless abstraction and obfuscation introduced by huge inheritance hierarchies, so Go is a breath of fresh air in that regard in just doing away with it all. I haven't missed classes at all.
Go's structs with associated methods and property/method promotion aren't that different from classic OOP. Too much is made of the differences, I think. You can recreate most basic OOP code in Go .... if you really must.
I do think it’s very different; types are not classes and interfaces are not like interfaces in java, obj-c for example, being defined at point of use.
If you try to recreate inheritance in Go it will be frustrating and not very useful.
The shape of the code ends up very different and embedding is not used even half as much as inheritance (which is the default position in most other oop languages). That has profound impacts.
Go supports a slimmed-down version of OOP compared with Java/C# via property and method "promotion". There are no static properties/methods and no abstract classes but you have interfaces with Ruby-esque duck typing. I've omitted generics as that's due in Go 2.0. I find listing methods beneath their corresponding struct definition to be quite adequate when OOP is the best fit .... which isn't that often. I just finished implementing most of the functionality in Matt Zandstra's "PHP Objects, Patterns, Practice", 5th edition, in Go and didn't miss much.
Golang is OOP, regardless of what some people may claim. It's just missing inheritance, which golang's composition mimics closely enough.
Composition is not a substitute for inheritance and is used an order of magnitude less in Go code than inheritance in other languages, which touches everything.
The important point is that it is used far less - inheritance is impossible to get away from in most OOP programs, but you can't say the same for embedding in Go programs; it is used, but not as often.
Go can do encapsulation and polymorphism. Go doesn't do inheritance, of course. Channels can also help simulate message passing and actor-style code. What's missing?
IMO, If you compare Go to an OOP language that runs in a VM there are some things that will fall short but the industry has worked around it. It's not cool to have a beefy Smalltalk or Java VM, with the dynamic runtimes that let you auto-export objects as remote endpoints... you have small binaries exporting HTTP or RPC services running in containers and interfacing with service discovery and fabric systems. The "object orientation" is now at an abstraction sitting above the running binary.
>It's not cool to have a beefy Smalltalk or Java VM, with the dynamic runtimes that let you auto-export objects as remote endpoints... you have small binaries exporting HTTP or RPC services running in containers and interfacing with service discovery and fabric systems. The "object orientation" is now at an abstraction sitting above the running binary.
I'm not sure why an application running in the JVM in a container is any different. Can you elaborate?
Running a JVM application in a container is not all that different to running a Go application in a container. But the design of the JVM runtime and its tooling is vastly different than Go. The JVM is more complex but it can do more things. Go is simpler and can get you to a working product quicker.
If someone asks me to build something and they tell me it will run in docker/kubernetes, have the standard containerized CI pipelines, and communicate with other services over some standard protocol like HTTP; doing it on the JVM is doable and probably easier than ever but I would just think about all the cool JVM features I wouldn't get to use (hot code deploy, runtime modularity tools, all the neat bytecode hacks, even spring which everyone but me seems to hate).
If I don't get to use those features then I would rather grab a language like Go, since it doesn't front load so many runtime complexities. (and if this was 10-15 years ago maybe I'd say "Go doesn't front load so many complexities AND performance penalties")
Do you know any good GUI library in Go (as you mention native GUIs as a weakness)?
Besides that, what were your problems with the C interop? In addition to the lack of up-to-date documentation for more complicated examples, I had huge problems using the C interop layer with more complicated C++ libraries (mostly because the C interop has its problems with C++).
I have been quite happy with the Gtk bindings for Go (https://github.com/gotk3/gotk3). As Gtk has a C API, it is a good match for Go. Actually, Go makes Gtk nicer than calling it from C in my eyes, as the GObjects wrap into Go types and you can just call methods on them, vs. the C syntax of passing the GObject as the first function argument and having the object type in the function name.
If you install the Gtk libraries on the Mac, Go programs written with gotk3 just compile and run there too.
C++ doesn't have a stable ABI, so C++ interop is usually done by wrapping a C++ library with a C interface first. This is true for most compiled languages afaik.
GUI libraries often rely on specific language features so they are hard to wrap in other languages. That's why good GUI libraries don't exist in ANY language other than the ones they were written in.
I think GUI libraries which offer C bindings are easy to wrap in other languages; that is why I like Gtk a lot. In my eyes, it works especially nicely with Go. Tk should be easy to wrap for Go too, but usually one would prefer Gtk, when there is a choice :).
I've had great success with https://fyne.io/; I used it to build my home-grown MFA app (written in Go, works on iOS, Android, and desktop). Granted, it takes a bit of C binding to work on iOS.
"Go is pretty good for almost anything server-side."
Absolutely! REST API servers, data processing jobs, messaging servers, etc etc. I use Go now for virtually everything server side. It is fast, reliable, great tooling, promotes unit testing, and now that I have hit that "comfortable with the language" spot, I have never been more productive.
Node.js is absurdly complex in typical deployment scenarios leading to serious difficulties when you're trying to create production apps. Let's just look at the common case of transpilation from TypeScript:
- Did your TypeScript properly import all of the @types packages required for it to compile properly? They need to be installed alongside the source packages and match the version numbers. They are often maintained separately and may not match the underlying implementation of the code.
- How do you run this in production? Do you check the compiled TypeScript into the git repo? Do you run with ts-node? Do you do different things in production, development, and test?
- Do all of your native dependencies support the node versions you're using? How do you install the correct version of node? If you ignore these issues by using Docker, how are you managing your images and containers?
The list of problems requiring solutions goes on and on. And of course those solutions exist, but this is a summary of why node projects are not "easy to learn". Don't get me wrong, Node can still be the right tool for the job, in particular if you're sharing a lot of code between an app and something that will run in the browser.
The advantage of nodejs over Go for web apps is the live-reload/recompile toolchain. Golang has some really nice APIs for back-end servers, and single-binary, no-runtime deployment is really great. But golang lacks that live-reload feedback loop.
I do a lot of golang but whenever I want to do something web browser facing, I cave in for nodejs. Especially in the exploratory phase when building front / back end together.
With Go's compile speed, I can mostly get into the same style of development as with dynamic languages. I think generics will bring this even closer, because of the plethora of libraries that will be released when they land.
That's my point - not in the stdlib if you have to `npm add`. It's not just the libs you import that are affected. Npm is arguably the size it is because the libraries it contains are not written against a stdlib. Certainly not front-end JS libs. That has a quality effect on the whole ecosystem.
I know it's not magic, but it is so nice when I build my Go utility on my Mac with a Linux target, copy it and its config to a Linux server, and it runs. After dealing with setting up Java, Node, and Python envs for so many years, Go is a breath of fresh air on the deployment side.
I've been doing native applications something like forever and all those apps are exactly the same way. Just copy and enjoy. I just do not understand fun of "modern" apps where the amount of tooling and configuration will drive one straight to insanity.
I like different languages and Rust most particularly, but when it's about getting things done, in a reliable way and within a short timeframe, I couldn't find yet a better trade off than Go.
Not GP, but I choose Go for webservices as well as devops tooling. I use it whenever possible. I've been very happy with the decision to use Go over the past 4 years at my company.
Arguably, that's more because of the way you can pretty much completely rely on io.Reader and io.Writer being used pervasively throughout the ecosystem (3rd party libraries and all) rather than any specific "Go" special sauce, but, it is what it is.
The special sauce is you get work in from the network, split it into small pieces for a bunch of worker goroutines, and then make some of that work do downstream calls in easy-to-write, CS 101-style network code; the goroutines/channels exert back pressure exactly as you specify, and the scheduler spreads the work across all the cores without you having to lock anything. And they have libraries for a bunch of modern things. Pretty good libraries too, written by people that care about error checking.
How do you apply back pressure? As far as I can tell, Go will spawn a goroutine per request (well, that's how the standard http lib and grpc seem to do it).
I would like to compute a load factor and refuse new requests. Or do some basic rate limiting. But the coroutine is spawned before I have a say...
In general, rate limiting of connections shouldn't take place in the server itself, it should be in some upstream load manager. If your system is too small to bother with that, it's almost certainly too small to worry about blowing out the goroutines. This is not a statement about Go, it's true in general.
Even if you consult something per request, goroutines are fast enough to look at something and then decide not to do the work that, again, by the time that's a problem, the solution is to throttle upstream anyhow.
I see; so it seems there is a curve of load-handling behavior per language/lib/framework.
As the load increases, you enter the territory of queueing (adding latency) up to a point, then reject/drop requests. If rejection is slower than what the OS/hardware can sustain, the software crashes.
For Go it appears that the curve is an almost straight vertical line to oblivion.
But of course, if you plan to scale and use dedicated software to pace requests, there is no point in adding this complexity to the Go libraries.
Maybe what I lament here is that this tradeoff made by Go is not more advertised, such that when the day comes that the dreaded out-of-memory killer shows up, you are ready to fight back.
I’ve considered Go for writing command-line apps - mostly for conversion between domain-specific formats and standard ones (e.g. proprietary image format => png).
I think when considered in isolation, Go would be perfect for this. However, I haven’t used it because my existing code in this area is in C++ and I’m not sure how well the two languages interoperate.
Go interoperates pretty well with C - so you would have to create a thin C layer on top of your C++ code. This is one problem I constantly have with C++, it is pretty much its own world.
I have wrapped a large C++ library for Go; it was a pretty straightforward process. I created a file with C-callable functions that get passed pointers to the C++ objects and invoke the corresponding methods on those objects. On the Go side, the pointer gets wrapped in a struct type for each C++ class, and in the methods on that type I call the C functions. Works great.
How well does it work the other way round - calling Go from C/C++? IIRC when I first looked into doing that, it wasn’t really possible - only threads that were started by Go could run Go code. In particular, `main` had to be in Go and couldn’t be in C++. Does that restriction still apply?
I have created shared libraries with Go which can be called from C, no problems. Those shared libraries can start their own goroutines just fine and can call back into the C application. Usually it is a good idea to call back into the C application on the same thread that the C-called Go function was running on.
I like it because it creates a small binary executable that is easy to deploy. Other stuff like Java or Python have big VMs or interpreters that are error prone and take up room. Also if you want a small container a go binary is small.
I love Go for its native capabilities to do things with servers etc like ssh, networking. I wrote some Go code for a small project to connect to external servers using SSH and it was a lot of fun and a breeze.
Agreed. I'm kind of regretting using Node for my current project. Every time I look at package-lock.json, I die a little. I really like the way Go projects tend to have very few dependencies.
Yeah. To clarify, I'm not suggesting it's either good or bad -- just that it's a big deal. To some, it's controversial.
I personally think Go's proposed implementation of parametric polymorphism is a great feat of modern software engineering, and a testament to the open source process.
I'm looking forward to them, and definitely agree it's a great feat of software engineering ... though I'm not sure about "testament to the open source process". Go is open source, but most of the core developers are Google employees, and definitely the folks that worked on generics (Ian and Robert, mainly) are full time at Google.
I don't actually think that's a bad thing: I love it that Google pays people to work on Go full time, and don't mind the language being spear-headed by a tight team with a vision (who happen to work at a commercial entity). I'm impressed with their direction, and ability to say no.
Yeah, let's be real -- just as with Go Modules and the "try" proposal, there is no "open source process" when it comes to major language changes; the decision is always ultimately made by the core team, not the community. And I agree with you -- this arrangement is fine! I strongly prefer it that way. The Go devs have an excellent track record of getting it right, even when it went against the prevailing wisdom.
Personally, I'm opposed to adding generics, but if we have to...I trust the Go devs to do it better than just about anyone else.
Thanks. I could take them or leave them. I've used generics a lot in C++, but as long as I can continue writing Go1 programs as I have been for the last 10 years, I'll be happy.
Oh, come now, calling C++ templates generics belittles their power. :p Honestly, the one thing I miss in any language is the complete power of C++ templates, but I definitely understand others' reluctance to include something like them in their language. Something that allows me to write simple generic functionality a la Java generics and not have to write the same code multiple times is good enough for me in go.
Luckily I do have some ability to use C++ at my current job, so I can get my fix again there :)
One of the features of Go is that everyone writes code the same way, which means anyone can easily read anyone else's code; there's only one paradigm; idioms and conventions are strong across the community. This property exists precisely because Go isn't very expressive, but the generics proposal will make Go more expressive and in all likelihood the ecosystem will become more diverse accordingly. Whether this is a positive or negative depends on how much you value expressiveness versus standardization.
> As announced in the Go 1.15 release notes, Go 1.16 drops support for x87 mode compilation (GO386=387). Support for non-SSE2 processors is now available using soft float mode (GO386=softfloat). Users running on non-SSE2 processors should replace GO386=387 with GO386=softfloat.
This change makes perfect sense to me (you'd have to be running a 20+ year old CPU to still need x87 float support, and plenty of distros have already dropped it), but I'm curious as to whether it was partially motivated by Go's decision to implement their own optimizing compiler and surrounding ecosystem. Modeling x87 is a real pain, one without obvious benefits when outside of a mature surrounding framework (like LLVM or GCC).
Can someone with knowledge of Go's compiler internals opine on this? It'd be interesting to hear which bits are sources of historical pain/relative ease.
1) While 387 support isn’t a huge maintenance burden, it does take time away from performance and feature work and represents a fair amount of latent complexity.
2) 387 support has been a regular source of bugs (#36400, #27516, #22429, #17357, #13923, #12970, #4798, just to name a few).
3) 387 bugs often go undetected for a long time because we don’t have builders that support only 387 (so unsupported instructions can slip in unnoticed).
4) Raising the minimum requirement to SSE2 would allow us to also assume many other useful architectural features, such as proper memory fences and 128 bit registers, which would simplify the compiler and runtime and allow for much more efficient implementations of core functions like memmove on 386.
5) We’re exploring switching to a register-based calling convention in Go 1.16, which promises significant performance improvements, but retaining 387 support will definitely complicate this and slow our progress.
Point 5 didn't make it into Go 1.16 but will be focused on again for Go 1.17.
It was dropped because it had bugs, no one ever really supported it, and it was impossible to find hardware to run it on continuously. There are many reasons it was dropped.
I would love to see Go have first-class support for watch/hot-reload while developing.
It's not first class, but I have been happy using reflex [1] for non-http and gin [2] for http. Combined with Go's fast compiles, they work as well as a hot reload.
To do this well (for servers, which is the most common case) you need to keep the port open (and delay requests while recompilation is in progress) or clients can see transient errors. So it's probably not a language level concern, but a protocol level one (it's very solvable for HTTP for example).
I built https://github.com/superhuman/lrt which tries to solve this problem in a go-like way (no configuration required, minimal log noise, and reliability/simplicity as the primary design goals) for Superhuman.
This should be pretty straightforward without special support. Do the embedding and create a function that returns the fs.FS with your files in it; put that in a file guarded by a build tag, e.g. // +build !dev. Then put the same function, returning an fs.FS that points at the filesystem, in another file with the build tag // +build dev. Now you can do go build -tags dev to get the filesystem version, or build normally to get the "release" version where the files are embedded.
Modd watches file changes and rebuilds, while Devd enables livereload, letting me make changes in my text editor and then see the rendered changes in the browser, side-by-side, in near real-time.
This is for go web development but I'm pretty sure these two tools are language-agnostic.
Obviously there are lots of candidates here. I've recently switched from entr or fswatch to watchexec [0], because it can restart a server, so something like this gets me where I need to be:
I have a small project that uses Go's built-in webserver libs to display a status page. I've been storing the HTML/CSS for that in Go strings, which breaks color formatting in my editor and generally feels ugly. This embedding approach looks awesome, I can't wait to give it a try!
Shameless self plug, but I've had that itch and decided to scratch it with a tool[0] that would be universal and not limited to Go - albeit written in it ;)
The go:embed directive and embed.FS interface are great, as expected. If you're developing an AGPL or similarly licensed application, it's incredibly easy to distribute source code with the application.
I'm back to programming in Swift after half a year of doing Go only, and I'm more and more sceptical of the benefits of adding generic programming to the language.
I didn't take the time to really read the spec for Go generics, but I really hope the graybeards didn't compromise by letting developers shoot themselves in the foot with overly complex designs.
Nowadays I would recommend every developer go through a 3-month coding session with Go, solving real problems, to really appreciate how much one can do with just basic tools and an elegant design (with a few diving sessions into the stdlib codebase, which is a pure gem).
This is by far the most important aspect of the language.
> I'm back to programming in Swift after half a year of doing Go only, and I'm more and more sceptical of the benefits of adding generic programming to the language.
That's fine, you won't have to use them, or even use code that depends on generics.
And I won't have to remember dozens of silly slice tricks each time I have to implement complex data pipelines with different struct types.
If "you won't have to use it" is something people accepted then wouldn't everyone stick with kitchensink languages like C++ instead of coming up with new "clean" ones like Python, Java, C#, etc? :-P
> If "you won't have to use it" is something people accepted then wouldn't everyone stick with kitchensink languages like C++ instead of coming up with new "clean" ones like Python, Java, C#, etc? :-P
No because C++ still requires some amount of manual memory management, unlike the rest of languages you listed.
I use generics on a daily basis! Even a basic cache requires generic support. I don't want to litter my code with casts and ifs when there exists a decent solution to do all of the manual undertaking for you. That is what compilers were invented for, rather than just generating plain code. You can obviously ignore them if you don't need them, but I can give you a gazillion cases where not having generics and doing `interface{}`-based casting makes the situation messy and repetitive in code rather than simpler.
> I'm back to programming in Swift after half a year of doing Go only, and I'm more and more sceptical of the benefits of adding generic programming to the language.
I'm not sure how the generics implementation in Swift has any bearing on how generics will work in Go.
Especially when the implementation is completely different and has nothing in common with Swift. Go took the Ada route with incomplete type instantiation (with the bonus of type inference).
No, it's just people complaining about generics for the sake of it at that point.
Yeah, it's kind of interesting how two extreme camps have evolved out of the "controversial" decision not to include generics in Go 1.0:
* Camp 1: generics will always be horrible, Go is right not to include it, it's too complex, Go is meant to be simple, etc.
* Camp 2: I will never use Go because it doesn't have generics, serious languages have generics, etc.
When in reality, both camps are pretty much wrong. Yes, generics are useful in a lot of places. But they also add no value in a lot of places. It's really on you to pick the tool you want to use. If you really want generics, pick C++ or Java instead. If you couldn't care less, Go is an option. It's much more fluid than this binarization of the topic has made it out to be.
Is anyone using Go’s support for compiling to WebAssembly? I know it was added experimentally a few versions ago - is it still being worked on? Is it production-ready? There’s no mention of it in these release notes.
Go makes pretty bad WebAssembly. It only embeds in Javascript, and it's at least 2MB for Hello World. You can't really use it in server-side engines like wasmer/wasmtime.
TinyGo is far better, but only supports a subset of Go's stdlib.
I'm playing with TinyGo, and using gRPC communication with the outer engine as a poor substitute for WASM interface types.
It’s still supported and developed, but it’s not an area where most activity is currently focused. It is very stable, even more so than when it was first added, since bugs are being fixed and regression tests added. You can see what issues are open and closed at https://github.com/golang/go/issues?q=is%3Aissue+label%3AArc.... The Wasm specification is also progressing over time, sometimes unblocking opportunities in the Go implementation.
I’ve been using Go in the frontend in many (most?) of my personal projects since the early days of GopherJS, and by now I’ve been compiling most of that frontend code to Wasm. I’m very happy with the experience and results, and my top wishlist item would be improvements to binary size for faster first-time loads.
While I believe it is probably production-ready, due to how Go is implemented with implicit coroutines/yielding, WASM generated from Go is larger and has many more instructions than that from lower-level languages that map more directly to machine code. For most, this tradeoff is likely negligible.
I'm pretty disappointed by the io/fs documentation. Not a single example, and the overview doesn't explain in what situations you'd even want to use the package.
"The go get -insecure flag is deprecated and will be removed in a future version. [...] To permit the use of insecure schemes, use the GOINSECURE environment variable instead."
I'm not a fan of using pervasive environment variables. It feels like cluttering my own, well, environment. I very much prefer using explicit command-line options, or even configuration files. Does anybody share the same opinion?
Huh? They're removing a flag and recommending use of an environment variable. The environment variable can easily cover all cases in which fetching dependencies can occur, whereas the flag would have to be glued on to a bunch of `go mod` subcommands. There's no need to export this to your environment; you can just use it as a one-shot, similar to the flag:
Yeah, that's exactly what I'm opposed to. I know the in-line environment variable syntax, and I actually use it, but somehow I feel uneasy whenever I need to do that. Possibly related to my own aesthetic preference, I don't know. I just like "bolted-on" flags more.
I'm curious -- what have you needed from Unicode 13 that wasn't in the previous version of Go? (Presumably new codepoints still work fine, as long as you're just passing characters through, and not relying on the categorization functions in the unicode package.)
I do care about categorization though. Here’s an identifier parsing library of mine (https://github.com/smasher164/xid) where I can’t test that codepoints are in some category without knowing the version.
If unicode was an external library, it would be my responsibility to update this identifier parsing library.
Nice! I suspect you're in a minority of developers who need this level of control, though. I don't think I've ever used "unicode" directly, though I reach for "unicode/utf8" often enough.
Very nice release. I work with both D and Go, and it's been a pleasure. Server stuff? I pick Go. Everything else? D, and sometimes Rust, just to spice things up a little and keep up with current trends; it's always a good thing to be curious and try new things.
This seems, at least on the face of it, like a bad idea. Binaries being immutable is one of the more concrete things we know about a system. It's why hashing a binary is effective in the first place.
I read this feature as beneficial for folks who do not need to scale frontends and want to embed resources in the binary to that end. This was already possible before but it looks like they just gave us an official channel for doing so.
Self modifying executables are both a bad idea and generally flagged by virus scanners. So embedding the db in the executable I would say is a pretty big no no.
I don't exactly know how production-ready it is, but there's https://pkg.go.dev/modernc.org/sqlite, which is an automated transliteration of SQLite's C code to Go, so that it can be used without cgo.
It's the calm before the storm.
But I love that. :) 10 years in, Go is still my favorite language when it's the right tool for the job.
So actually, this is an exciting release IMO. Highlights:
- darwin/arm64 support (M1 chips)
- Go module quality-of-life improvements
- Built-in support for embedding files
- File system interface! (io/fs)
- Linker optimizations!
- io/ioutil deprecated (functions moved to io and os)
- Lots of juicy crypto/* package improvements. Filippo and team are crushing it!