This is very cool and interesting. I have to think about something to do with core.async.
This is also a good example of why macros are just awesome. Go is a language designed to work well with goroutines and channels, but the Clojure code looks just as good and readable. You simply could not have such idiomatic use of these concepts without macros.
Or am I wrong: can a flexible language like Scala or Python be extended to look as good for that use case? I don't know enough of the tricks Scala people use, so I can't judge whether it is possible.
>but the clojure code looks just as good and readable.
But don't mistake this for having the same runtime characteristics. For what it's worth, the Computer Language Benchmarks Game shows Go as generally being faster, using much less memory, and requiring less code.
That is true. I have not looked at much of the Clojure code and none of the Go code, but both are probably written in a very unidiomatic style that one does not normally use.
Also, with Clojure, the actual time-consuming calculations can be done in Java, and that should be at least as fast as Go (with a bit more memory).
So all in all, I value the architectural things much more than pure speed. Go simply has a different target than Clojure.
Agreed on Go being faster and using less memory; however, in practice, I think idiomatic Clojure would have less code than comparable idiomatic Go when writing similar programs.
Are the first two examples equivalent? In the go version, an anonymous function is being declared and then called with the value of the outer 'i'. In the clojure version, it appears that the value of 'i' is part of the closure for that function.
The go version does what it does to avoid a race condition, because the goroutines are being spun up in the background and it's highly likely that it will take longer to spin up at least one of them than it will to finish the loop, so without it you'd likely get output of all nines.
If the clojure version doesn't have that problem, then I think it's a somewhat telling indication of how it's actually working.
TL;DR: The Go version has to work around the race condition because the runtime is doing things "right"; is core.async?
It's a difference of the language. In the Go version, i is a mutable variable that can be read and written to. In Clojure, it's an immutable bound value. The i of one iteration is not in any way directly linked to the i of another iteration.
The i is scoped differently in clojure than in golang so it doesn't have the same potential pitfall. I think golang people consider their scoping here a mistake (and have tooling to check it?).
Are the clojure threads actually lightweight? Thread/sleep is blocking so you'd need to occupy 10 threads right? If you wanted to sleep without blocking a real thread would you use an executor service to write to a channel after delay and block on that?
From what I've read/seen, the go blocks are lightweight thread-like processes multiplexed onto a thread pool. You may be correct about using Thread/sleep, ideally I would have used (timeout ...) and then pulled off the channel. However, I didn't want to introduce the concept of channels too early in the post, so I felt Thread/sleep worked as a compromise.
Thread/sleep will block the entire thread. The thread pool could create more threads, but core.async uses a fixed thread-pool, so, yeah, don't use Thread/sleep in a go block. A timeout channel would be the way to go, or, you can use an alternate core.async implementation, which is a (small) part of the Pulsar project: https://groups.google.com/forum/#!topic/clojure/1xxxTti6Vi0 (I'm the main author)
We will have full API and semantic compatibility when version 0.2 is released next week. Pulsar also has cluster distribution and an Erlang-like actor framework.
Because Pulsar uses instrumentation at the bytecode level, you have more freedom within Go blocks. You wouldn't use Thread/sleep in the Pulsar implementation, either, but Strand/sleep will do the job. It detects whether you're in a go block (implemented as a fiber in Pulsar), in which case it suspends the fiber but doesn't block the thread, or within a normal thread, in which case it will simply call Thread/sleep.
Yeah, for demo purposes, when you want to show something taking a while, Thread/sleep is very convenient. I have no idea; time.Sleep in Go might actually block a real thread too. I'm just curious whether blocking in a go block could starve your thread pool, or if the macro does some magic to correct for even that?
Isn't this a major difference between core.async and go?
If you perform a blocking operation (Thread/sleep there, but could be, for example, a socket read?) will you run the risk of exhausting the thread pool?
The go runtime handles this by detecting how many threads are in blocking syscalls and spawning more as needed (which I guess could be undesirable).
If I've understood correctly, this is perhaps best summarised as "go makes your sync code run well in goroutines, with core.async you can block everything with sync syscalls in your go block"?
Or perhaps just "go will adjust the size of your thread pool dynamically, core.async requires it to be big enough"
Small bug irrelevant to the main point of the article:
The very last golang example has a leak. If the timeout does occur, main will exit after the timeout and the spawned goroutine will hang on the channel send.
I think the easiest fix is to make the channel buffered.
"make(chan string)" => "make(chan string, 1)"
Not sure if there is a more idiomatic golang way to accomplish this.
Quick question for anyone here familiar with core.async:
Would it be possible (and if so what would be the simplest way) to implement something like Python's generators and `yield` statement in Clojure using core.async? I'm thinking something like:
func generator(values ...interface{}) func() interface{} {
    c := make(chan interface{}, len(values))
    for _, v := range values {
        c <- v
    }
    return func() interface{} { return <-c }
}
Oh for sure, more as an exercise in curiosity than anything else. (Although I am also playing with generator- and iterator-like abstractions as part of a resource-scoped foldable stream abstraction to help process big files.)
Looks like I managed to answer the 'is it possible?' part of the question anyway -- something like this:
(defn range ;; NB: shadows clojure.core/range; demo only
  [n]
  (let [c (chan)]
    (go
      (loop [i 0]
        (if (< i n)          ;; check before sending, so we yield 0..n-1
          (do (>! c i)
              (recur (inc i)))
          (close! c))))
    (fn [] (<!! c))))        ;; returns nil once the channel is closed
Yes, but I wouldn't do that. Every time you pull an item out of the channel, you actually submit a task to execute in a separate thread pool (to produce the next number), block your own thread, and wait for the task to complete on its thread and then wake your thread up. That's a lot of task-switching going on just to generate the next consecutive number :)
OTOH, you could use async's coroutine code (used to implement go blocks) to create generators, but because you have lazy sequences, that's not necessary either.
Yep I figured this would probably not be very efficient.
About lazy sequences: sometimes you want to avoid the allocation of lots of intermediate cons cells when mapping/filtering/etc. The reducers framework for example manages to avoid this sort of cost when the data structure supports something faster than first/rest recursion. I'm interested in extending reducers to work nicely over large files and other sequences which don't fit in memory, and some kind of iterable or generator-like abstraction could play a useful role in this.
(Actual coroutine-based generators might not be necessary, but would allow for a neat outward-facing API)
you could do it with channels, but also by extending the go macro itself. The mini compiler behind the go macro is designed to be extensible. In the test suite for core.async the compiler is run through a series of tests using a "runner" implementation of the go macro: https://github.com/clojure/core.async/blob/master/src/test/c...
I've had generators working several times with an approach like this, but as it doesn't really fit with the rest of the library, I removed them.
All that being said, these APIs are internal and could change at any moment.
Aha, thanks for the pointers, that helps. I suspected something simpler might be possible using these ioc macros but was frankly a bit scared off by them at first :)
It'd be nice if the coroutine/inversion-of-control stuff was given a stable public API at some point, because those macros seem very powerful and neat in themselves and might have other interesting uses aside from channels.
Golang is verbose, as modern languages go. In its idiom, it's actually a little more verbose than some modern, tight C (though less verbose than most C code, and much tighter than Java).
It's the difference between "programmable programming languages" (which the lisps are, as are haskell and ruby) versus "languages for programming in" (exemplified by go and java).
Go code is very terse compared to the general run of "languages for programming in". And if you know the language, you can just read it.
On the other hand, you've got some serious digging to do if you want to understand a clojure macro. Metaprogramming like that is seriously brain twisty.
> On the other hand, you've got some serious digging to do if you want to understand a clojure macro. Metaprogramming like that is seriously brain twisty.
I wish this meme would stop. Your "if you know the language, you can just read it" applies just as much here as it does there.
I think you can be pretty proficient in writing regular Clojure and still find macros "twisty". Most Clojurians would agree that "The first rule of Macro Club is Don't Write Macros".
Well, I wish that meme would stop too. It makes macros sound all freakazoid, when really they're just another technique.
The obvious and natural and historically standard guidance is "Don't write a macro when a function will do."
The concrete example is a perfect use case for a macro: In other languages you have to understand and change the compiler implementation to achieve the same effect. That's most likely more work.
Another interpretation is that you can read "programmable programming languages" at the level of the problem domain, rather than at the level of the base language. (Assuming the problem domain is encoded sufficiently decently in a DSL).
Depends on what you compare it to. Go doesn't have macros and idiomatically prefers C-style for loops to higher-level abstractions (except for the built-in "range" construct) for iteration. So in general Lisps like Clojure offer more opportunities for terse code. (Which style is easier to understand is of course a never-ending debate.)
It's a bit more verbose than popular dynamic languages like ruby, python, clojure, etc. but a lot less verbose than Java, C#, or C. To me it feels roughly on par with javascript LOC-wise, though it has much stronger constructs.
That does not follow. The more verbose code is, the more prone to error and less maintainable it is. Verbosity is never valued in prose; it's superfluous, pedantic and boring. Why should code be any different?
Perhaps you don't value verbosity, but fans of Flaubert, Balzac, Henry James, Joyce, Poe, Dostoyevsky and Dickens would disagree with you. Your absolute statement is false when looking at most of the valued literature over the last few hundred years.
> The more verbose code is, the more prone to error and less maintainable it is.
I think this is true, but only to a certain point. Past that point the code becomes more difficult to understand and no easier to maintain (for example, all variables with one-letter names, few newlines). So there's a continuum between terse impenetrable languages and verbose impenetrable languages on either extreme, and you disagree with the OP about where on that continuum Go falls. It's all a matter of opinion, and frankly it's more subjective than objective, depending on things like the standard library and culture of the language far more than the syntax.
There are also many other factors in being maintainable and error prone - verbosity is only one of them. So saying that Go is more verbose than language X doesn't really tell us much about how prone to error or maintainable Go is compared to language X on its own. For what it's worth, I find it comparable to languages like C, Ruby or Python in terms of verbosity, which feels about the right place to stop being terse to me. YMMV.
>Perhaps you don't value verbosity, but fans of Flaubert, Balzac, Henry James, Joyce, Poe, Dostoyevsky and Dickens would disagree with you.
I don't value verbosity and I do appreciate the works of those authors: I do not consider them verbose. Verbosity is not a measure of the absolute length of a document; it's relative to a minimum expression of an idea. Per this definition, all verbosity is unnecessary because it conveys no additional information.
What a curious statement. James did not consider the 'minimum expression of an idea' a virtue, he's almost infamous for his verbosity, as are many of the others in that list. I'll leave you with something from the start of The Ambassadors as an example:
The principle I have just mentioned as operating had been, with the most newly disembarked of the two men, wholly instinctive--the fruit of a sharp sense that, delightful as it would be to find himself looking, after so much separation, into his comrade's face, his business would be a trifle bungled should he simply arrange for this countenance to present itself to the nearing steamer as the first "note," of Europe.
I don't consider that verbose at all. It expresses a great deal more information than you're giving it credit. It would not be easy to pare down without destroying this information. Thus, it is very close to the minimum expression of the intended idea.
One man's verbosity is another's great literary detail.
The same is true for programming languages: the fact that some languages make you write two lines to achieve what another language does in one can itself be a benefit or not. It might force the developer to state the same thing twice, so that a mismatch results in a compiler error. Or it might allow extra space for a mistake to creep in.
It is the perspective of the users and the context that determine whether something is right or not. This project has apparently 14,000 classes, with about 3,500,000 lines of code. It is verbose, but it is also very manageable. A smaller, more 'intelligent', less static language would make working with this a lot harder.
terseness is not a feature Go is striving for. That being said, it's still more terse than most other statically typed languages, simply due to type inference. You do have to write out some loops that other languages provide syntactic sugar for... people from those languages call that verbosity. Gophers call it clarity :)
Well, it's not so much syntactic sugar as having the power to abstract them away via generic functions like map, which I'd argue are both more clear and less error prone.
I'll add the classic response of "once you let your editor just handle it for you you forget about it entirely". Because, really, truly, when you code in lisp you're just traversing a very simple syntax tree. If you use emacs/paredit you literally stop typing and start using keyboard commands to "descend down the left branch" or "prune upward 4 times" or "delete this entire branch".