> I like their TLI ideas, but I am honestly a bit allergic to designers who lack the imagination to realize that sometimes a UI's complexity is a reflection of the complexity of the problem it helps to solve, in conjunction with the level of control you want to give to the user.
I think what's key here, and why I like the direction of the work in the linked post, is that none of the affordances of the UI they're augmenting -- the command line -- are being taken away,* merely made more visible based on context. I think that gets at a hole in the notion that UI complexity mirrors domain complexity -- in the case of Premiere, surely the entirety of what's on the screen doesn't mirror the complexity of the workflow you're executing right now, it's much closer to the sum total of everything you might ever do. Not that a UI based on progressive disclosure would be trivial or maybe even possible to implement effectively for something like a professional video-editing application, but it's worth considering whether something like that is merely a local maximum in the space of all possible UIs.
* I guess in the case where you know exactly what you want to do and the added context might distract or slow you down, that's an affordance being removed, but I think it's outside the scope of what we're talking about here -- especially because, given the scope of the command line, you will inevitably have to do something with it that involves an invocation that isn't right at the tip of your fingers.
I agree that they did the right thing for the domain of the TLI, as your comment captured.
And existing editing GUIs might very well be a local maximum. However, we designers need to be careful not to rush to condemn existing solutions that we don't understand by calling them "unfocused", "slow" or "ugly".
It is the responsibility of any designer to be more aware of tradeoffs like these than everyone else; making it look good is merely a nice-to-have you can achieve on top of that. Making that call for a domain as complex as editing demands at least some degree of awareness of the common problems an editor needs to solve, as your software or UI will have to solve them as well.
Sometimes this comes with a learning curve, as your users will have to learn how to tell the computer what they want. They will have to learn some language that allows them to express this, if they don't want the computer to take the wheel and just assume what they want. And that very language is what makes the tool complex and hard to learn, and there is no real way around it. Might there be a different language that lets you express more things with greater ease while being easier to learn? Sure. But will a language that allows you to express a lot still be harder to learn than a language that allows you to express only a little? Yes, it will.
That being said: I think they made all the right usability choices on their own thing there; my comment was more an aside stemming from my own values as a designer.
I suspect I'm in the minority on this, but I would love for k8s to have hierarchical namespaces. As much as they add complexity, there are a lot of cases where they're just reifying complexity that's already there, like when deployments are namespaced by environment (e.g. "dev-{service}", "prod-{service}", etc.) and so the hierarchy is already present but flattened into an inaccessible string representation. There are other solutions to this, but they all seem to exact their cost in the form of more manual fleet management.
Hey - I'm a member of the multitenancy working group (wg-multitenancy). We're working on a project called the Hierarchical Namespace Controller (aka HNC - read about it at http://bit.ly/38YYhE0). This tries to add some hierarchical behaviour to K8s without actually modifying k/k, which means we're still forced to have unique names for all namespaces in a cluster - e.g., you still need dev-service and prod-service. But it does add a consistent way to talk about hierarchy, some nice integrations, and built-in behaviours.
Do you want to mention anything more about what you're hoping to get out of hierarchy? Is it just a management tool, is it for access control, metering/observability, etc...?
Any reason why you put your link behind a URL shortener besides tracking number of clicks?
Since there are no character limits to worry about here, unlike Twitter, it's better to post the full URL so the community can decide for themselves whether the domain linked to is worth clicking through or not.
Hey, thanks for asking! My interests in it are primarily for quota management -- in my experience, this is inevitably a hierarchical concern, in that you frequently run into the case of wanting to allot a certain cluster-wide quota to a large organizational unit, and similarly subdivide that quota between smaller organizational subunits. Being able to model that hierarchy with namespaces localizes changes more effectively: if you want to increase the larger unit's quota in a flat namespace world, for example, there's no way to talk about that unit's quota except as the sum of all of its constituent namespace quotas.
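Just to make that concrete, here's a tiny sketch of the difference (Haskell purely for brevity, with made-up names; in a real cluster these would be namespaces and ResourceQuota objects):

    -- Hypothetical model: a unit's quota in a flat world vs. in a hierarchy.
    data OrgTree
      = Namespace String Int        -- a leaf namespace with its own quota
      | Unit String Int [OrgTree]   -- a larger unit with a limit of its own, plus children

    -- In a flat-namespace world, the unit's quota only exists as this sum over its leaves:
    flatQuota :: OrgTree -> Int
    flatQuota (Namespace _ q) = q
    flatQuota (Unit _ _ kids) = sum (map flatQuota kids)

    -- With a hierarchy, raising the unit's limit is one local change at its node:
    raiseLimit :: Int -> OrgTree -> OrgTree
    raiseLimit extra (Unit name limit kids) = Unit name (limit + extra) kids
    raiseLimit _     leaf                   = leaf

    example :: OrgTree
    example = Unit "payments" 100
      [ Namespace "dev-service"  20
      , Namespace "prod-service" 80
      ]

    main :: IO ()
    main = print (flatQuota example)  -- 100, but only because the leaf quotas happen to sum to it

In the flat world, "increase the payments unit's quota" can only mean editing every leaf; with the hierarchy, it's the single limit on the parent.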
Thanks! We're not currently planning on implementing a hierarchical resource quota in HNC, but HNC is trying to establish a definition of hierarchy that could certainly be used to create an HRQ. Give me a shout if you're interested in contributing.
In my experience, almost anyone who insists on synchronous, in-person standups is doing so because they want them to function this way, whether consciously or unconsciously.
I also suspect that standups with no active brevity constraint frequently come about not because of a lack of leadership but because they're actually a means for disordered thinkers to feel productive, which I think goes part of the way towards explaining why you see such irrational attachment to standups even among ICs.
> Errors are in your face, instead of having exceptions performing invisible gotos to somewhere far up in the call tree. Explicit error handling is more code, but your error handling is going to be much more robust.
Errors as values is a great idea. Errors as values without sum types or pattern matching, though, gives you twice the tedium and a tenth of the benefit, so once again Golang's slavish adherence to "worse is better" really just makes things...worse. The real problem here, though, is that those invisible gotos still exist: because of how limited Go's type system is, it's very easy to cause a panic with e.g. a nil pointer. Of course, the answer is to rigorously check the cases where a pointer could be nillable, but that trivially contradicts the idea that, when reading Go code, you can feel confident reasoning locally about its resilience if it handles all potential error values appropriately. IMO, this false sense of safety is even worse than just making exceptions first class citizens of the language, and it belies the notion that Go is especially maintainable.
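For contrast, here's a minimal sketch of what errors as values look like once sum types and pattern matching are in the picture (toy names, not from any real codebase): the compiler forces every caller to acknowledge each case, so there's no forgotten path for control flow to fall through silently.

    -- A hypothetical error type as a sum type: every failure mode is enumerated.
    data ParseError = EmptyInput | NotANumber String
      deriving Show

    parseAge :: String -> Either ParseError Int
    parseAge "" = Left EmptyInput
    parseAge s  = case reads s of
      [(n, "")] -> Right n
      _         -> Left (NotANumber s)

    -- Pattern matching makes every outcome explicit at the call site.
    describe :: String -> String
    describe s = case parseAge s of
      Left EmptyInput     -> "no input given"
      Left (NotANumber x) -> "not a number: " ++ x
      Right n             -> "age is " ++ show n

    main :: IO ()
    main = mapM_ (putStrLn . describe) ["", "abc", "42"]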
Accessing nil would be a programming error, not normal error handling, and code logic should inherently avoid the scenario, not hide it or silently cover it up. If it breaks, let the process die.
The cost of simpler tools is that you need to aim for better, simpler designs and refactor. This requires more from programmers but produces better code.
I have a music degree but have also engaged in a lot of self-study, so hopefully I can still make good recommendations for someone interested in learning on their own. The general approach I'd recommend is trampolining between on-paper theory and ear training, with the specific goal at each stage being the ability to identify by listening the structures that you're learning about on paper. The text I'd recommend is Aldwell's Harmony and Voice Leading, which is very focussed on the techniques of classical Western concert music but is still, along with its exercises, the best bet I can think of for building a solid foundation of knowledge that you can bring to other musical styles.
The most important thing I can recommend while starting out is to focus your ear training on hearing scale degrees (e.g., if I play a C major chord and then play a random note from the C major scale, you should be able to identify which note that is out of C, D, E, F, G, A or B). A lot of naive ear training resources recommend learning to hear intervals (e.g. a perfect 5th, which is the interval between C and G or E and B), but intervals only really have meaning in the context of a key, and so understanding them in that context is much more important. A good app for training with this on iOS is Politonus: it'll play a few chords to establish a key, and then it'll play a random note from the key and prompt you to guess which one it was. As you improve with this, you can have it play multiple notes at a time, or even notes outside of the key.
The Midwest is far ahead of at least the West Coast on this. I saw much more variety of brewing methods, beer styles, and flavor profiles when I lived in St. Louis than I do here in the Bay Area. Products from a lot of the great breweries on the other side of the Rockies (Left Hand, Odell, Bell's, etc.) are also difficult or impossible to find here.
One of the biggest conceptual moments in my life was taking an intro class to complex analysis in high school (not through my school itself, this was at a nearby university that runs a weekend program for interested applicants). It's probably not a novel intellectual framework for anyone who's spent time learning math from a theoretical point of view, but the guy who taught it opened with an (admittedly ahistorical, but that wasn't the point) tour of the development of numbers, starting with the intuitive case of counting, moving on to algebra and the question of what type of number could satisfy an equation like x + 5 = 2, and so on. He wasn't taking any philosophical position re the question of discovery vs. invention, but merely inviting us to consider the particular case of operations over a set not being algebraically closed and what it looks like to extend that set to support algebraic closure in a rigorously defensible way. Reading Spivak's Calculus, with the way that it starts off its path of inquiry by showing that, beginning with the basis of a totally-ordered field, the axioms at hand don't suffice to demonstrate the existence of e.g. a number that satisfies x^2 = 2, made me feel right at home again. It's like a detective story.
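To spell out that progression (x + 5 = 2 and x^2 = 2 are the examples from the class and from Spivak; the other two just fill in the standard intermediate steps):

    x + 5 = 2    has no solution in the naturals   ->  extend to the integers   (x = -3)
    2x = 1       has no solution in the integers   ->  extend to the rationals  (x = 1/2)
    x^2 = 2      has no solution in the rationals  ->  extend to the reals      (x = ±√2)
    x^2 = -1     has no solution in the reals      ->  extend to the complex numbers (x = ±i)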
To stick to the more theoretical interpretation of "computer science," and with a focus on theory of computation/programming languages:
- Discrete Mathematics and Its Applications, Kenneth H. Rosen (this is more foundational, but it's definitely targeted at a CS audience and touches on things like automata theory)
- Types and Programming Languages, Benjamin C. Pierce
If you're able to, would you mind linking to some resources about the case you describe in point three, where the kernel needs to deal with an invalid stack pointer? I'm very curious about the underlying causes.
Also, regarding point two, is the cumulative result of those factors you describe (other NMIs being blocked, lack of kernel control, the unreliable heuristic) that a diagnostic NMI may end up never being run?
> If you're able to, would you mind linking to some resources about the case you describe in point three, where the kernel needs to deal with an invalid stack pointer?
Look up SYSCALL in the manual (AMD APM or Intel SDM). SYSCALL switches to kernel mode without changing RSP at all. This means that at least the first few instructions of the kernel’s SYSCALL entry point have a completely bogus RSP. An NMI, MCE, or #DB hitting there is fun. For the latter, see CVE-2018-8897. You can also read the actual NMI entry asm in Linux’s arch/x86/entry/entry_64.S for how the kernel handles an NMI while trying to return from an NMI :)
To some extent, the x86_64 architecture is a pile of kludges all on top of each other. Somehow it all mostly works.
> is the cumulative result of those factors you describe (other NMIs being blocked, lack of kernel control, the unreliable heuristic) that a diagnostic NMI may end up never being run?
No, that’s unrelated. A diagnostic NMI causes the kernel’s NMI vector to be invoked, and that’s all. Suppose that this happens concurrently with a perf NMI. There could be no indication whatsoever that the diagnostic NMI happened: the two NMIs can get coalesced, and, as far as the kernel can tell, only the perf NMI happened.
Once all the weird architectural junk is out of the way, the NMI handler boils down to:
    for each possible NMI cause
        did it happen? if so, handle it.
    If no cause was found, complain.
Amazon’s thing is trying to hit the “complain” part. What they should do is give some readable indication that it happened so it can be added to the list of possible causes.
Functors enable the Fix functor and, from there, the whole universe of recursion schemes, so I'm not sure that I agree that they're not useful on their own.
Sure, but virtually nobody cares about how to finitize a recursive function when they are trying to learn a new programming paradigm. Recursion seems to work just fine in languages that don't have any of these bells and whistles.
"Hey you can implement Fix" is like saying "now you can program in green" for most readers.
Oh, certainly. My assumption was that the GP meant that functors aren't useful on their own _in general_, rather than in the particular context of someone just getting into the typed functional paradigm. And, of course, recursion does work just fine in other languages, but (and I'm saying this more for posterity than as a retort since I assume you're well aware of this point) recursion schemes offer a layer of abstraction over direct recursion that eliminates the need to manually implement various recursive operations for each of the data structures you have at hand. As with a lot of the higher-level constructs in languages like Haskell, that level of abstraction may not have any practical benefits in most domains, but it's nice to have when it does offer a benefit in a particular domain.
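Since it came up, here's roughly what that looks like; the names follow the usual recursion-schemes conventions, but this is a self-contained sketch rather than any particular library's API:

    {-# LANGUAGE DeriveFunctor #-}

    -- Fix ties the recursive knot for any functor f.
    newtype Fix f = Fix { unFix :: f (Fix f) }

    -- A generic fold (catamorphism): written once, it works for every Functor,
    -- so no data structure needs its own hand-rolled recursive traversal.
    cata :: Functor f => (f a -> a) -> Fix f -> a
    cata alg = alg . fmap (cata alg) . unFix

    -- A tiny expression language as a non-recursive "base" functor.
    data ExprF r = Lit Int | Add r r
      deriving Functor

    type Expr = Fix ExprF

    -- Evaluation is just an algebra over one layer; cata supplies the recursion.
    eval :: Expr -> Int
    eval = cata alg
      where
        alg (Lit n)   = n
        alg (Add x y) = x + y

    main :: IO ()
    main = print (eval (Fix (Add (Fix (Lit 1)) (Fix (Lit 2)))))  -- prints 3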