I agree. The pop product is primarily a persona and a brand rather than the art itself (though it can be both), but that's sometimes true of niche artists too; it's just packaged differently and on a different scale.
The issue is that systems don't account for the diversity in how people are motivated, which parts of a system they're sensitive to, and how they're sensitive to them.
By default in the dominant culture, most systems come down to individual incentives for individual drive and shame dynamics for collective drive. That covers a decent chunk of how people are motivated, but it leaves out people who are motivated differently and actively harms those for whom these dynamics are paralyzing.
Right, but people with lower incomes do spend, and mostly on necessities. I think the idea is that most of those necessities would become more expensive (naturally, or artificially through price-fixing) if the poorest suddenly had more financial power. In the system as it stands, it seems to me like it'd just result in a bunch of money going to grocery giants and their suppliers, landlords, medical, etc.
Most of those are commodities, where the price is set by the cost of marginal supply.
Housing prices should go down. Housing is expensive in places with jobs and cheap in places without jobs. UBI gives people the freedom to move from the former to the latter.
Yeah, this is the downstream effect I had in mind. You could say we'd increase supply to meet the demand, but that hasn't really worked out with housing, for example.
Just only ever speak in a language of your own invention that uses both cryptographic and steganographic techniques which you invented while colocated, maybe.
How sure are we that something fishy isn't going on with the models and the alignment research teams and the answers the model is giving? Like maybe Claude's alignment made it worse at trying to mask as Allied Magacomputer than GPT and that's why they're up in arms?
Ontological version is even more interesting, especially if we're talking about a singularity (which may be in the past rather than future if you believe in simulation argument).
Crude form: winning is metaphysically guaranteed because it probably happened or probably will.
Refined: it's metaphysically impossible to tell whether or not it has happened or will have happened, so the distinction is meaningless; it has happened.
So... I guess Weir's Egg falls out of that particular line of thought?
As capable as they get, I still don't see a lot of uses for these things myself. Sometimes, if I'm fundamentally uninspired, I'll have a model roll the dice and then decide what I do or don't like about where it went to create a sense of momentum, but that's the limit. There's never any of its output in my output, even in spirit, unless it managed to go somewhere inspiring; it's just a way to warm up my generation and discrimination muscles. "Someone is wrong on the internet"-as-a-service, basically.
Generally, if I come across an opportunity to produce ideas or output, I want to capitalize on it to grow my skills and produce an individual, authentic artistic expression, with very fine control over the output in a way that prompt-tweak-verify simply cannot provide.
I don't value the parts it fills in that weren't intentional on the part of the prompter; just send me your prompt instead. I'd rather have a crude sketch and a description than a high-fidelity image that obscures them.
But I'm also the kind of person who never enjoyed manufactured pop music or blockbusters unless there's a high concept or technical novelty in addition to the high budget, and I generally prefer experimental indie stuff, so maybe there's something I just can't see.
Yeah, that makes sense. If people don't see uses for AI, they shouldn't use it. But going out of your way to imply that people who use AI cannot think is pretty stupid in itself, imo. I'm not sure how to put this, but to continue with your example: I like a lot of indie stuff as well, but I don't think anyone who watches, say, Fast and Furious cannot think or is stupid, unless they explicitly make it the case by what they say, etc.
So my issue is that you shouldn't dismiss something as trash just because AI has been used. You should dismiss it as trash because it is trash. But the post says you should dismiss it as trash because AI was involved in it somewhere, so I feel that's a very shitty/wrong attitude to have.
I actually do think that people who prefer content of fidelity over content of intent are making a mistake, yes. I don't think they're incapable of thinking, and I don't care to apply any virtue labels to this preference, but they are literally preferring not to think.
LLMs can only produce things by and for people who prefer not to do the work the LLMs are doing for them. Most of the time I do not prefer this.
Like, there was a 2-panel comic that went around the RPG community a while back: something like "Game Master using an LLM to generate 10 pages of backstory for his campaign setting from a paragraph" in the first panel and "Player using an LLM to summarize the 10-page backstory into a paragraph" in the second. Neither of these people cares for the filler (because neither of them produced or consumed it), so it's turned the two-LLM system into a game of telephone.
I think that's what Posthog might be trying to do, but as per the above, there may be a fine line between funny and annoying and/or between useful and useless.
Diversity is good for populations. If you have a tiny pool of individuals with mostly the same traits (in this case I mean things like culture, education, morality, and ethics, rather than class and race, though there are obvious correlations), then you get what some other comments describe as effectively centralized planning with extra steps, rather than a market of competing ideas.