Proposals for Go 1.15 (golang.org)
159 points by dgellow on Jan 29, 2020 | 128 comments


I really admire the Go team for sticking to their guns and taking their time before making language changes. Let's avoid another C++.


C++ is 35 years old. The problem is that it has 35 years of backwards compatibility. Go will not avoid this problem by taking time to make changes, it can only be resolved by being prepared to break backwards compatibility.


It's quite an amazing feat that C++ has maintained backwards compatibility.

Watching the ~11 years of Python 2 to 3 adoption has been somewhat painful, and many libraries have had to offer support for both across much of that time. (And I believe at time of writing Google isn't even 100% ported to 3 yet - please correct this if I'm wrong).

I couldn't be happier that Python 3 is finally king though, and major projects like Django are able to go entirely to 3 now.

Breaking changes in languages can be important and good, but they really can affect adoption of the new version by existing users, which is tough as often major users hold some sway in the ecosystem.

JavaScript has the no-breaking-changes problem even worse, because you have to support (within reason) all major interpreters existing in the wild, and cannot easily change deployed code, so "transpiling" has become the norm.

Microsoft's support lifetime for IE11 also means this isn't changing soon. There are lots of old browsers in the wild (even though the situation has improved dramatically over the last few years).


It is much easier to make breaking changes in a compiled, statically typed language (as long as it doesn't use a preprocessor), because upgrades can be automated. Some Python 3 changes were not fixable by 2to3, which meant all source needed to be checked manually, line by line, vastly increasing the cost of upgrading. Worse, Python 2 wasn't forward compatible with Python 3 in many cases, which required libraries like six to get a transition period through the back door. The whole process was managed horrifically.


C++ has not maintained backwards compatibility by doing automatic upgrades. I don't think that would even count.

Backwards compatibility for a language means that old source code still works without having to update it, even if you can do so automatically.

Also worth pointing out that C++ has broken backwards compatibility in a very small number of cases, e.g. the `auto` keyword has changed meaning. Nobody ever used it before so it wasn't a problem in practice.


I wasn't giving C++ as an example of a language that maintains backwards compatibility through breaking changes using source upgrades, I was actually thinking of Go when I said that. Obviously C++ does have a preprocessor, which means upgrades cannot be done automatically in general because e.g. how do you know which macros in an #ifdef are "supposed" to be defined and which are "supposed" to be undefined etc? Instead they have to maintain backwards compatibility in the spec.


clang-tidy has support for doing code upgrades, but doing that in an ISO-standard language with several implementations, including compiler-specific language extensions, is an almost impossible task anyway.


C++ has introduced breaking changes, though: https://stackoverflow.com/questions/6399615/what-breaking-ch...

They've just been careful to try to minimize the impact.


JavaScript has made breaking changes though. Strict mode was introduced for ES5, and includes breaking changes, and now all engines in use implement it (it was in at least IE9). The difference between that and the python situation is that JavaScript allowed both versions to coexist in the same codebase.


Strict mode was opt in though, and code would still evaluate without it.

There's also a lot of new syntax though, which does have high costs in cases like generators, and shipping the regenerator runtime because you need your code to run everywhere.

To my knowledge they have extended syntax and added modes, but not actually made any breaking changes.

A mode was a way to avoid making breaking changes.

What I missed in my comment is that it's not just about the engines in use, but also that all old spec-conforming JavaScript should still run, in all new environments.

That is backwards compatibility.


> parseInt("012")

Modern browsers will say 12, old browsers (IE 8 and down) will say 10.


Great example, I guess I forgot that a few changes were made that otherwise did surprising things.

They also changed to a (more) deterministic ordering of object keys and I wonder how much software behaved differently / broke as a consequence...

My "favourite" JS quirk is some of the IE versions where it would break if you called console.log without the console open, as it was undefined.

So many quirks, I think I may have slightly overstated the compatibility element.




I suppose if you happened to previously have code with the string 'use strict'; at the top, and did non-strict things below, it would break. In a very small scale, that does break backwards compatibility.


I'd love to find out if anyone was affected in reality. It seems improbable, as random string statements with no assignment are a pretty niche way to comment code, or surely it would have been a mistake.

But there's so much poorly written JS on the web that there is a chance!


> Watching the ~11 years of Python 2 to 3 adoption has been somewhat painful

this portrayal of the python 2 to 3 migration does not represent the majority of the community! and yet we keep hearing it over and over because "large" codebases were not migrated on time.

this migration was a software engineering problem. i hope by now, people have learned to write dumb code and avoid clever tricks whenever they can. performance and clever tricks get tied to languages, OSes, & hardware versions. python is no exception!

not saying don't do them. just saying know what you are getting into.

again, the issue is not entirely due to breaking changes. but python made it too easy. c++ for example would have given people such a hard time that they wouldn't have even bothered. python's was too permissive.

i have seen the python 2.7 codebase that shipped with the original appengine. it always felt like traveling to a different world whenever the debugger gets into their code...


Problems of Python 2 to 3 migration were mostly not about clever tricks. They were largely about making Unicode strings and byte strings incompatible (as they should have been from get go). Much of Python 2 code mixed them up, and that was a source of actual bugs. Hence the need to fix manually.


It's hard to overstate how big of a breaking change that was. Python is basically scriptable C. In C, strings aren't really a thing, but they are first class in Python (since everything is effectively a dict). And chars are your typical string-like data structure. So in py2, it made (some) sense at the time to let str and byte[] types intermingle. I don't agree with that choice, but it wasn't unreasonable (much like the null pointer).

This led to all manner of playing fast and loose with str as byte[] usage. I've seen inline asm and even machine code in python.

Now it's the new millennium and oh look, ascii-char won't cut it as your language implementation of strings.


Backward compatibility (although there's a spectrum here -- not every breaking change breaks a lot of codebases) is virtually always more important than any feature you can think of. Breaking it might cost a lot of work, and you have to be fairly certain that the cost is more than offset by the benefits from the change, and over a relatively short time period. Given that the cost of breaking changes is easy to perceive, and it's been years if not decades since we've seen programming language features with substantial, tangible benefits, breaking compatibility is almost always the wrong decision.


The important thing when breaking backwards compatibility is that it can be:

- automatically done

- isolated within a module

As long as these are true, the issues caused by breaking compatibility are minimal.

It's really the Python model where it's all-or-nothing and you de-facto need to remain compatible with two releases as everyone transitions that's the problem.
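Go modules actually give the language a hook for module-scoped changes: each go.mod already records the language version the module is written against. The directive below exists today; using it to scope a breaking semantics change to opted-in modules is the hypothetical part.

```go
module example.com/legacy

// The go directive pins the language version for this module only;
// other modules in the same build can declare a newer one.
go 1.13
```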


> - isolated within a module

This only exists in fairy land. In practice, mixing code from different versions is always the root of subtle bugs and ABI/API mess. Faced with a big chunk of legacy code, I'd rather wrap it in a separate process and communicate with the remaining part through RPCs if possible.


It doesn't have to be like this. Clojure is over 10 years old and has more or less achieved its vision of never having a breaking change in the core language. This comes at a cost (many rejected proposals, very very slow acceptance of them) but it's still proof that you can avoid breaking changes if you are sufficiently cautious. (It probably also helps that Clojure has a full-fledged macro system that lets users write libraries that would have had to be compiler plugins or language proposals in other languages.)


That sounds like an example of it being exactly like this (since neither Go nor C++ have had any breaking changes either AFAIA), or am I not understanding something?


Sorry, I didn't make the point I wanted to, which is that GP (on my reading) seems to think that breaking changes are necessary and unavoidable for the long-term health for a PL. I'm saying Clojure seems to be perfectly fine even without any breaking changes in over 10 years, and that it seems like it'll continue like this indefinitely.


Its market adoption also seems to be stagnant, so.



Also, practices and paradigms were different. Go leverages some of that historical experience in how to evolve a language.


What does "backwards compatibility" even mean? That you can combine new code with old code (a.k.a. ABI compatibility)? That you can compile old code with a new compiler (modular compiler, old frontend + new backend, should work)? That you can mix old code and new code in the same file (why would you want to do that, presumably new code is also better, otherwise what's the point of changes in the first place)?


In the context of C++ it means that you don't have to make changes to your code when a new language version comes out. Because old broken features are preserved for eternity even if severely dysfunctional. Most of the hard work of learning C++ is learning all the "gotchas" that will sneak in and cause undefined behaviour or worse.

That's why people have a love/hate relationship with it. The language is full of BS but the path of least resistance is to learn the ins and outs of the BS rather than to do a full rewrite of your huge codebase in a sane language. Learning the ins and outs of C++ is a one time hit, after that you can use it quite happily. So new projects get started in C++ and the cycle repeats.


I completely agree. Although it's refreshing to have an official team leader mention generics as a priority rather than the passive-aggressive stance that used to be the norm a few years back (aka "show us why you need it, and give us a perfect design if you think that's possible").

I think the "we're working on it, but it's going to take a lot of time before we get something good enough" current position is the perfect middleground.


AFAIK the position expressed in the Go FAQ (https://golang.org/doc/faq#generics) has always been the official position on generics: "Generics are convenient but they come at a cost in complexity in the type system and run-time. We haven't yet found a design that gives value proportionate to the complexity, although we continue to think about it." There's nothing passive-aggressive about that - most proponents of generics only see the advantages (i.e. more expressiveness), but fail to see the downsides.


I think the "we're not doing it because it adds complexity in the type system and runtime" was something most developers have a very hard time understanding, since generic programming exists in practically every single PL created since C++.

My understanding not just from the FAQ but also from reading conversation in the forums, was that they thought the feature itself had such a high potential of being misused as a crutch for bad designs that they thought not having generics was actually a feature.

And to be honest I almost agree 100% with that perception. The problem is that it leads to archaic patterns and copy-pasting that make a few lines of code here and there look just gross (although perfectly understandable).


I think the "we're not doing it because it adds complexity in the type system and runtime" was something most developers have a very hard time understanding, since generic programming exists in practically every single PL created since C++.

Plus, history has taught us that retrofitting generics to existing languages with their ecosystems can be difficult and painful (see Java and to some extent C++).


Java's generics are certainly awkward, and lots of code would look better if generics had been real from v1.0. I'm looking forward to Java's specialized generic impls (e.g. ArrayList<int>) that seem heavily inspired by the route C++ takes of compile-time class-instantiation.

But out of curiosity what's difficult/painful about C++'s "retrofitted" generics (templates)? There's a lot to not like about templates, but as far as I know they've remained relatively unchanged and have always been just as difficult and painful (and powerful) as they are now. Are you referring to the decision to make SFINAE a de-facto way to constrain parameters?


Concepts I guess, so much pain would have been saved if something like that existed in C++98.


> Plus, history has taught us that retrofitting generics to existing languages with their ecosystems can be difficult and painful (see Java and to some extent C++).

I believe that C# also had retrofitted generics, but the reputation there doesn't seem nearly as bad. But I believe the generic containers there were intentionally not compatible with the original non-generic version.


The major difference is that the CLR is aware of generics, rather than their being a compiler trick as they are with the JVM. Don Syme (father of F#, who also worked on the implementation of generics in the CLR) did a talk explaining the history and how they almost shipped with erased generics like the JVM. On the other hand, it's been claimed that erased generics have made things easier for implementing other languages on the platform, so there are tradeoffs I guess.

Having used both I would take the CLR implementation any day.


Same here, but I still have high hopes for Valhalla.


Not at all, Microsoft just decided not to delay the .NET 1.0 release instead of shipping with them fully implemented.

https://mattwarren.org/2018/03/02/How-generics-were-added-to...


It is even older than C++: CLU and ML had it first, during the mid-70s; Ada and Modula-3 got it in the early 80s, and the C++ STL work was actually initially done in Ada.


You can obviously abuse C++ templates, because they are more (or less, depending on how you look at it) than simple generics. However, in what way can parametric polymorphism be misused or be a crutch for bad designs? I can't think of an example that wouldn't be horribly contrived.


Well, parametric polymorphism in the case of Go would probably just be a performance optimisation over a function taking an interface as a parameter, right?

I usually see monsters when you combine generics with objects and inheritance, but you're right that since Go also doesn't provide those...

I think when talking about generics in Go we also mean structs accepting generic components, not just functions.


Sure, and I still don't really see a problem with it. Yes, it makes the compiler more complex, but I can't think of any ways it could be abused more than the existing language features. And it would make a lot of programs written in Go simpler.


Because Go is heavily used in data plumbing code, I found that many functions can actually work fine at the byte-array level, and that it leads to a better design.

It would be very tempting with proper generic support to try to have every function work with the topmost types, just because we assume it provides more type safety. I could imagine a struct representing a "User" byte array, or other atrocities.


It's a mystery, especially to anyone who has ever seen JSON handling code in Go, that they would think generics are a crutch for bad design.


While that was the official position, and the position itself was not passive-aggressive, there has been passive-aggressiveness coming from the team over it. One of the reasons many people wanted generics was so that they could support map/filter/reduce in a type-safe manner. This is the response we got from Rob Pike: https://github.com/robpike/filter

For the longest time the answer about generics was "There are no plans for generics. I said we're going to leave the language; we're done"

And the people who have been advocating for generics understand that they have downsides. It's a trade-off between different forms and placements of complexity (the lack of generics creates its own complexity, in duplicated code and workarounds).


If you look at the questionnaire[1] they ask for new changes, it's clear they still want a justification, and a design, for new proposals. And that proposals which "state a problem rather than a solution" will be rejected.

It's just that they've been convinced that generics are justified. And they have concrete proposed solutions under evaluation.

[1] https://github.com/golang/proposal/blob/master/go2-language-...


> It's just that they've been convinced that generics are justified.

I wonder if anybody’s actually changed their mind on this, or whether the opinion of the Go team has shifted simply because it’s made up of different people now.


As far as I have followed, I think it is more that while a fair amount of work had gone into prior designs for generics in Go, the general feeling of the core Go team was that the prior designs did not mesh well with the rest of the language, whereas the more recent design seems like it could work.

From this blog post[1] from last year from the core Go team:

We’ve been thinking about generics since work on Go began, and we wrote and rejected our first concrete design in 2010. We wrote and rejected three more designs by the end of 2013. Four abandoned experiments, but not failed experiments. We learned from them, like we learned from check and try. Each time, we learned that the path to Go 2 is not in that exact direction, and we noticed other directions that might be interesting to explore. But by 2013 we had decided that we needed to focus on other concerns, so we put the entire topic aside for a few years.

Last year we started exploring and experimenting again, and we presented a new design, based on the idea of a contract, at Gophercon last summer. We’ve continued to experiment and simplify, and we’ve been working with programming language theory experts to understand the design better.

Overall, I am hopeful that we’re headed in a good direction.

[1] https://blog.golang.org/experiment


There was no change of mind. They always had generics in their minds. See the Go FAQ: https://golang.org/doc/faq#generics

"Generics may well be added at some point. We don't feel an urgency for them [...] we continue to think about it. [...] The topic remains open."

These sentences have been in the FAQ since Go 1.0. Maybe people should start to believe them.

They have gathered experience with the current language, their generics design drafts have improved over the years, and now they feel like they have something that might fit.


They have been iterating on the contracts proposal here:

https://go-review.googlesource.com/c/go/+/187317


There's been a feature I've wanted in C++ (concepts) that is finally coming after 14 years. I don't think expediency is the problem with C++ language evolution.


Exactly. We're still waiting for string split/join.


Yeah, kind of blows my mind how many advanced features they add, but basic string handling stuff that you always use (split, join, startsWith, etc.) is never added.


Well, it is a matter of who is willing to submit a paper and champion the idea.

I think the problem is that those features are an uphill battle with the crowd that wants performance above anything else, given the implicit allocations that they might need.


For the record, std::string::starts_with and ends_with are in C++20. Split and join aren't, though.

(I think join is not too tough to write, but I don't really know how you'd write split in a generic enough way for everyone to be satisfied.)


Join is relatively trivial using std::stringstream. I'm optimistic that split will be less contentious with C++20 ranges, since you can just range-based-for loop over the split tokens in place and/or collect them into a container using range mechanisms: best of both worlds.


absl/strings is really great in the interim: https://abseil.io/docs/cpp/guides/strings


I'm really glad they don't have C++ style generics, and I hope whatever way they do it if/when they do, it keeps that Go minimalism.

I'd like to see the interface system become more powerful. You can already do some very un-gothonic things with map[string]interface{}. I'd like to be able to compose functions based on an interface, to implicitly implement other types, rather than spelling out each function (maybe you can already, I'm still only a few months into Go).

I liked the concept of "try" but I agree with the peanut gallery, the syntax needs work. But many gophers (it seems) agree there should be a shorthand for "I got an error, bubble it up to the caller" aka "if err != nil {return nil, err}"

Or maybe not. In some ways, the verbosity of err != nil everywhere makes me look for ways to solve problems with less error returning.


I don't miss generics either, not in the least. The interface system, Go's particular approach to it, takes me a long way ...

I won't ask for any new feature for today or tomorrow. But for the day after tomorrow, on a longer research horizon, I wonder whether compiled and distributed Go applications can be made to interact along the lines of interfaces within a single application today. Maybe it's my struggle with Kubernetes (as a newbie) that makes me want this: could there be secure language mechanisms supporting clustering, scheduling, etc. in a distributed environment, with an external agent armed only with the prior package interface definitions of a deployed Go application?

Please don't sacrifice performance, and fast compilation times. That module system is looking good!


Having string(int) result in the string representation of the int value, rather than the string for the code point the int represents, would come as a surprise and is orthogonal to the other conversion functions.

Please don't do it.

That direction of conversion, if done right, requires respecting locale, which string(int) is fully unaware of.


The change is to warn people against using `string(int)`, not to change its behavior. The goal is to remove the conversion, not make it replace `Itoa`.


I wouldn't mind one major thing from C++: A standard.


If you're talking about Go, they have an official specification (https://golang.org/ref/spec), which is probably what you mean when you write "standard", and the two official implementations (the Go compiler and the GCC-Go compiler) are based on this spec. Which is more than other up-and-coming languages (which I won't name right now) have to offer ;)


ISO/ANSI would be my preference. That's still not the best model of language governance, but at least it's not down to one company. Also a decent sign of stability.


So you like to pay for specs?


No, and I don't like paying for membership and voting rights, either. Never mind the pervasive corporate sponsorship of most members on working groups, regulatory capture etc.

But I think it's still better than ad hoc "RTFC" standards or BFDLs.


Better than not having a standard at all...


I’m not sure what you mean: https://isocpp.org/std/the-standard


I think the idea is having either ECMA or ISO Go.


I see, I misread the comment I was responding to. For some reason I got the opposite: "C++ is missing a standard", instead of "Go is missing a standard".

Makes sense then.


Several years ago, I really didn't like the way, or the pace at which, C++ introduces new features. But now that way and pace have become normal, and I've changed my mind somewhat. I think it is time I reconsidered C++ for some projects. They would be somewhat restricted if written in Go, for lack of some key features.


Which features and why?


Specifically, generics and immutable values. The lack of generics leads to much repetitive code. I don't like using exported variables as final values, so I need to use functions to return the final values, which is verbose and not elegant. The lack of read-only values also causes some unnecessary performance losses.

Go being too network/tools oriented is also a reason. Porting C/C++ gfx libraries is not an easy job, especially when multiple C/C++ gfx libraries are involved. And the performance is not as high as C/C++ even when the porting is done.


> I don't like using exported variables as final values, so I need to use functions to return the final values, which is verbose and not elegant. Lacking of read-only values also causes some unnecessary performance losses.

This sounds like you're fighting against the language. I think a lot of developers do this. I think one should either embrace the language or find another (you did mention moving to C++).


I do this to adapt to it, to make the code more safe and serious. I embrace other features in Go, but not this one. This one makes Go feel somewhat like a non-serious language.


In what way does it make Go "non-serious"? And why do you care? I care about the suitability of a language for the problem I'm trying to solve, not its "seriousness". What concrete difference does this make to programs that you're writing?


Variables mean modifiable. Using variables as final values is bad. This is so simple that no explanation is needed.


I've mentioned this before on HN [1]. I strongly hope 1.15 will finally address the math/big.addMulVVW issues that really slow down RSA computations, directly affecting PGP and TLS.

[1] https://news.ycombinator.com/item?id=21968913

Edit: I screwed up some initialisms.


Is RSA mandatory for your use case? Both PGP and TLS support ECC, right?


You are correct that the specs support ECC. Go's implementation is not complete (see https://godoc.org/golang.org/x/crypto/openpgp/packet). More importantly you can't guarantee clients will be able to use ECC. See this issue (from my initial link): https://github.com/golang/go/issues/20058


Too bad the try proposal didn't make it. It was a good improvement on the great but too verbose error management system.


I think the common argument against try is that it was a macro in a language without macros. That’s pretty confusing behavior and you can’t expect anyone to understand that without reading the specification.


You mean a builtin, like panic().


Panic is a special form (a “keyword”); it is implemented by the compiler and not implementable within the language itself.

Try was defined as a syntax transform. Calling it a macro seems reasonable.


panic is a built-in and so was try:

https://golang.org/pkg/builtin/#panic

"The panic built-in function stops normal execution of the current goroutine. [...]"

It's not among the list of keywords: https://golang.org/ref/spec#Keywords

"Q: Why is try a built-in?

A: By making try a built-in, there is no need for a new keyword or operator in Go. Introducing a new keyword is not a backward-compatible language change because the keyword may conflict with identifiers in existing programs. Introducing a new operator requires new syntax, and the choice of a suitable operator, which we would like to avoid. Using ordinary function call syntax has also advantages as explained in the section on Properties of the proposed design. And try can not be an ordinary function, because the number and types of its results depend on its input"

https://github.com/golang/proposal/blob/master/design/32437-...


The Go documentation refers to panic as a "built-in function" but this is a white lie in the documentation. It is not a function; this will not compile:

    var _ func(interface{}) = panic
It is a special form that happens to be spelled like a function.

The distinction the Go documentation is drawing between "keyword" and "built-in" is largely about Go's internal parsing mechanism (i.e. can you have a variable named try, or panic, which you can). When Go says "keyword" they mean what would be more formally a "reserved word". Hence my preference for "special form", but this is not well-known outside the Lisp community.


try doesn't seem like it would have actually been able to replace that many error-checks in practice, since many of them do something other than just blind returning.


This proposal by deanveloper is the best I've seen against try. (apologies for indentation change - not sure why hn can't preserve it)

https://github.com/golang/go/issues/32437#issuecomment-49893...

________________________________________________________________

I actually really like this proposal. However, I do have one criticism. The exit points of functions in Go have always been marked by a return. Panics are also exit points, but those are catastrophic errors that are typically not meant to ever be encountered.

Making an exit point of a function that isn't a return, and is meant to be commonplace, may lead to much less readable code. I had heard about this in a talk and it is hard to unsee the beauty of how this code is structured:

    func CopyFile(src, dst string) error {
      r, err := os.Open(src)
      if err != nil {
        return fmt.Errorf("copy %s %s: %v", src, dst, err)
      }
      defer r.Close()

      w, err := os.Create(dst)
      if err != nil {
        return fmt.Errorf("copy %s %s: %v", src, dst, err)
      }

      if _, err := io.Copy(w, r); err != nil {
        w.Close()
        os.Remove(dst)
        return fmt.Errorf("copy %s %s: %v", src, dst, err)
      }

      if err := w.Close(); err != nil {
        os.Remove(dst)
        return fmt.Errorf("copy %s %s: %v", src, dst, err)
      }

      return nil
    }
This code may look like a big mess (and it was meant to, by the error handling draft), but let's compare it to the same thing with try.

    func CopyFile(src, dst string) (err error) {
      defer func() {
        if err != nil {
          err = fmt.Errorf("copy %s %s: %v", src, dst, err)
        }
      }()
      r := try(os.Open(src))
      defer r.Close()

      w := try(os.Create(dst))
      defer func() {
        w.Close()
        if err != nil {
          os.Remove(dst) // only remove the partial file on failure
        }
      }()
      try(io.Copy(w, r))
      try(w.Close())

      return nil
    }
You may look at this at first glance and think it looks better, because there is a lot less repeated code. However, it was very easy to spot all of the spots where the function returned in the first example. They were all indented and started with return, followed by a space. This is because all conditional returns must be inside conditional blocks, thereby being indented by gofmt standards. return is also, as previously stated, the only way to leave a function without signaling that a catastrophic error occurred. In the second example, there is only a single return, so it looks like the only thing the function should ever return is nil. The last two try calls are easy to see, but the first two are a bit harder, and would be even harder if they were nested somewhere, e.g. something like proc := try(os.FindProcess(try(strconv.Atoi(os.Args[1])))).

Returning from a function seems to be a "sacred" act, which is why I personally think every exit point of a function should be marked by return.
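For what it's worth, the deferred error-wrapping the try version relies on only works with a named return value, since the deferred closure has to rewrite err before the caller sees it. A minimal sketch of that pattern in today's Go (the function name and error text are made up for illustration):

```go
package main

import (
	"fmt"
	"strconv"
)

// parsePort demonstrates the named-return trick: the deferred closure
// rewrites err on the way out, so every failure path gets wrapped once.
func parsePort(s string) (port int, err error) {
	defer func() {
		if err != nil {
			err = fmt.Errorf("parse port %q: %v", s, err)
		}
	}()
	port, err = strconv.Atoi(s)
	if err != nil {
		return 0, err
	}
	return port, nil
}

func main() {
	p, err := parsePort("8080")
	fmt.Println(p, err) // prints: 8080 <nil>
	_, err = parsePort("oops")
	fmt.Println(err) // wrapped strconv error
}
```

If the return value were unnamed, the deferred closure would mutate a local variable that has already been copied out, and the caller would see the unwrapped error.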


At first glance I can tell where the code exits. It happens whenever there is a "return" or a "try".


That's a very good point.


I thought the whole point of go was to remove extraneous features and provide a completely minimal programming language. The argument being that all those extra features are a source of bugs.

Nevertheless, modules and try... it seems like it’s just an effort to add them all back in again.

Surely if this happens go will be indistinguishable from all the other languages it was designed to be different from?


Try was rejected and modules are awesome in my opinion. Easily the best dependency management I’ve worked with, across Ruby (Gems), Python (PIP), Java (Gradle/Maven) etc.

Up to this point, the ratio of useful enhancements to unnecessary cruft has heavily skewed towards usefulness.

The post links to a new questionnaire the core team wants all change proposers to answer - the questions are pretty good and force a bit of intense soul searching for everyone asking for a change. That will probably stifle ideas a bit, but resisting change by default is what we want out of this language anyway.


Those are all awful dependency management examples.

I won't say Go dependency management is terrible, but it's certainly not awesome. At least to someone who has used PHP (Composer), Rust (Cargo), JavaScript (NPM), C# (NuGet).


I don't think NPM is an example of a good dependency management system. It works, but it doesn't spark joy. Some issues I've run into:

- npm install can rewrite package-lock.json to satisfy package.json instead of installing exactly what the lockfile records. The work-around is to use npm ci. https://stackoverflow.com/a/45566871/30900

- Flakiness. An acceptable solution to npm difficulties is `rm -rf node_modules; npm i`. Admittedly, this has improved a lot in recent years.

NPM also inherits the design preferences of the JS ecosystem.

- Simple packages have deep dependency graphs.

- Functionality is spread across multiple packages, sometimes at a granularity of a function per package.

- If you want types, you roughly double the number of packages you need.


Half the issues you mentioned are due to the ecosystem/community (one-liner packages and deep dependency graphs aren't npm's fault, though they may be an unintentional result of how easy it was/is to publish and reuse packages), and the other half I don't notice when using yarn/pnpm.

Getting types is optional and only required if you use TypeScript, which you don't have to. It does improve the editor experience for vanilla JS too, but those packages go under devDependencies.

There are a lot of things that can be improved though.

A lot of packages put their config inside package.json, which is honestly messy. The whole scripts section is a bit restricting. A better approach would have been to follow how mix (Elixir) does it. JSON is limiting as a format: no comments.

Like you mentioned, it inherits the mentality of the JS ecosystem. It doesn't feel like part of Node, but a separate piece of its own.


Why are gems awful? They're one of the best, imo.


Gems are fine-ish... the rubygems infrastructure is really slow, though. Maybe GitHub Packages will be better. And the ton of native code compilation sucks a bit, especially compared to Go. CGO isn't all roses, but it's still a bit less common because you can get comparable performance with pure Go.


Why is Gradle/Maven significantly different from NuGet as far as dependency management goes? The only major difference is that Gradle and Maven also handle a lot of the build management.


I was curious - what's wrong with Maven that is improved in Go modules?

I personally found Maven much more friendly, since there are no odd interactions with your source-control solution, you get all relevant details in the Maven pom.xml. I also find this idea of relying on semver, especially with Go's insistence on renaming packages for major version changes, to be very unpleasant and brittle, especially for internal packages.


Each library needs to be built into a JAR, which isn't the case in Go - in Go you just put in the code URL (git repo / branch / sha1) and you have it. Go also locks the dependency to that sha1 and cryptographically verifies it. So you get the benefits of building an artefact (hash verification, a central repo) without having to do any of the work.


> in Go you just put in the code URL (git repo / branch / sha1) and you have it

It seems you're thinking of the old Go dependency management.

The current Go dependency management, Modules, means that you put in the unique name of the Module (which is its URL) and the Semver-compatible version. Then go mod can resolve the exact code you need.

Sure, it's still source-code based, so you don't need to build a JAR file. Of course, that also means you can't pull in prebuilt binary dependencies - everything required to build your Go module must be available as source code.

Go mod also checks version compatibility based on Semver and chooses the lowest version that satisfies every module's requirement (minimal version selection).
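That "lowest version that satisfies everyone" rule can be counter-intuitive: it works out to the highest of the minimum versions the importers ask for, never anything newer. A toy sketch of the rule (not the real algorithm; minor versions are modeled as plain ints for simplicity):

```go
package main

import "fmt"

// mvs illustrates minimal version selection: each importer states the
// minimum version it needs, and the build uses the highest such minimum -
// the lowest version compatible with every requirement.
func mvs(minimums []int) int {
	selected := 0
	for _, m := range minimums {
		if m > selected {
			selected = m
		}
	}
	return selected
}

func main() {
	// Three importers require at least v1.2, v1.4, and v1.3 of a dependency.
	fmt.Println(mvs([]int{2, 4, 3})) // prints 4, i.e. v1.4 is selected
}
```

The contrast with Maven-style resolvers is that nothing outside the stated requirements (like "latest available") ever enters the decision, which is what makes Go builds reproducible without a lockfile per project.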

Your source-control server must also know how to act as a Go module repository (it needs to respond to some go-mod-specific HTTP calls, as far as I can tell).

Now, when you want to publish a new version in Go, you don't build and publish a specific build to some extra repository. Instead, you need to tag some commit in your repo with a Semver version.

If you want to publish a new major version, you need to do much more than that, since any version higher than v1 becomes part of your module path in import statements (import "github.com/mymod/mypack" becomes import "github.com/mymod/mypack/v2" in any code using your module, including internally).
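Concretely, for a hypothetical module the major bump looks roughly like this (the path is illustrative; go.mod supports // comments):

```
// go.mod of the module after the v2 bump: the module path gains a /v2 suffix.
module github.com/mymod/mypack/v2

go 1.13
```

Every consumer, and every internal import within the module itself, then has to say `import "github.com/mymod/mypack/v2"`, which is the renaming churn the parent comment is complaining about.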


Actually I find it a disadvantage, I don't want to deal with source code from other teams/companies, rather binary artifacts totally separated from their toolchains.


I find it a huge advantage, in fact the lack of blobs may be my favorite Go feature. I'm sure stodgy old companies that want to preserve "IP" via security through obscurity dislike this model.

But as a dev? I can drill down to the core (C-b in GoLand, jump to definition) of ANY import. Even the entire Go toolchain is in Go.

ABI/version management of artifacts is a nightmare, every single time.


That is what debugging symbols are for.

Using binary dependencies doesn't preclude having access to source code if desired.


Same with Java unless they stripped symbols. Libs wouldn't be stripped though.


One of the reasons I use go is the production of static, self contained binaries. Wouldn't binary artifacts make that much more difficult?


Why should it? Static compiling has been a thing for as long as compilers have existed.

Dynamic linking only became mainstream in the mid-90s.


You have been able to build self-contained Java JAR files for a while now.


To your points about integrity verification: maven does this too if you pass the `-C` flag.

One of the major benefits of Maven/Cargo repositories is that they are configured to be immutable. No issues with deleted repos or deleted/overwritten tags. Once your dependency is published, it's there forever.


Why would I want to use your lib if you can't even trivially build it? haha.

That said, Gradle has git source dependency support as well.


One idea I like is being able to depend on a branch. In Java it would be nice to do that rather than dealing with snapshots.

Obviously that comes at the cost of coupling with the VCS. Maybe it's not worth it. Interesting idea though.


The idea of a "branch" is not particularly novel, and I think any useful VCS will have an equivalent. I'd be perfectly fine with relying on that as much as I do any other aspect of a package manager.


Isn't that much harder to do with Go mod than with Maven?

With Maven, you can define your own versioning scheme and easily include the branch as a component of the version "number".

In Go mod, as far as I can tell, you have to have a Semver vMAJOR.MINOR.PATCH version, which is much more difficult to adjust for short-lived branches.
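For what it's worth, go mod can in fact record a branch: asking for `go get example.com/mod@mybranch` (path and branch name are hypothetical) resolves the branch head to a pseudo-version, so go.mod ends up with an entry shaped like this (timestamp and hash are illustrative):

```
require example.com/mod v0.0.0-20200129190000-abcdef123456
```

It pins a single commit rather than tracking the branch, though, so it behaves more like a sha1 dependency than a Maven SNAPSHOT.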


Programming languages are products just like anything else in the software industry.

Either they adapt or eventually fade away.

Hence we get these reboot cycles where new languages are introduced as a revolution against the establishment, and a couple of years later are just as feature-rich as the ones they were "fighting" against.


By "revolution against the establishment", I take you to mean "revolution against the complexity of existing tools". Meaning, a simpler tool. You can build a tool that will do 80% of what the existing tool does, with 20% of the complexity (or maybe even 90/10). And that's great... until you need the ability to do that last 10 or 20%. Then the simple tool has trapped you.

But by then, you've got a lot of code in the new tool. So what you want is a way to do whatever part of the last 10 or 20% of power that you need for your problem. "It's just a small addition!" But there's someone else who needs a different part of the last 10 or 20%, and wants to add that part...

And so you wind up with the new tool becoming as complex as the old tool. And then, as you say, the cycle repeats.

I think that if a tool is going to be an "80% of the power at 20% of the complexity" tool, and remain that, then it has to have an escape mechanism. You've written your 100,000 lines of simple code, and you need 50 lines in a more powerful tool, well, there's a clean way to use code written in a more powerful language for those 50 lines. Then the language can remain one that just has 20% of the complexity (if those in charge of the language can maintain their vision and their stubbornness).


One nice thing about Go is the existence of cgo. Yeah, it's discouraged, and rightly so, but you have that option. The ol' "Give it to C, C will do anything".

The other is IPC. Go is so dang easy at concurrency, managing data flow, async IO, etc, that I find it really lends itself to working as a cog in a larger machine, usually distributed. Don't like solving problem X in Go? Solve it however you want and just talk to your Go process.

So you have 2 escape hatches which were much less tenable as overall approaches even 10 years ago. So hopefully Go can stay lean and mean. I think it also helps that unlike other systems languages, Go doesn't have any intent on being a catch-all language. Graphics, hard real-time, drivers? You ain't gonna reach for Go. Light scripting, data science, machine learning? Also probably not Go.


Yes, I do agree, especially because those 80% are not the same for everybody once the language starts being targeted at domains it wasn't originally designed for.


> Either they adapt or eventually fade away.

The problem is devs are like, "This is a great language! I wish it had all these other features from this other language I've been using."

Well, just go use that other language.


But they want those features without the complexity of the other language. So "just go use that other language" doesn't satisfy them, either.

But at least your response makes it so that they're whining about two languages, instead of just one...


Exactly.

Sadly, it seems inevitable that all languages will eventually become bloated because of this.


That is the "eventually fade away part" of my remark.


Speaking from a personal perspective, modules feels like a real improvement, mostly because GOPATH always felt a bit iffy.

I share your concerns, especially around generics.


As much as I hate to say it...I completely agree.

I'd love to see the reasoning behind breaking minimalist principles; otherwise it looks like complexity drift to me.


The proposals don't change the language; they're just a few very sensible warnings and the tiniest, sensible extension of constant expressions.


>Nevertheless, modules and try... it seems like it’s just an effort to add them all back in again.

The keyword is "extraneous features". Ten years of hard experience showed that neither modules nor a better error handling story (not necessarily "try") are extraneous.

On the contrary, extraneous is what we get when everybody implements their own ad-hoc solution for those.


I agree. I strongly desire the Go team to keep the language as tiny as it is.


Same here. There are tons of other languages to choose from if you don't like what makes Go unique. People seem obsessed with making all languages homogeneous rather than appreciating what makes them different.



