Hacker News

As with anything else, the best way to decide when you're unsure whether someone is right or lying is to imagine four scenarios:

1. It's true and you believed/supported them

2. It's true and you didn't support them

3. It's a lie and you believed/supported them

4. It's a lie and you didn't support them

Now for each of these, think of the worst-case outcome. Identify which one is the _worst case_ scenario and (this is really important) do not, by any means, support that option.
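That worst-case rule is essentially a maximin decision procedure. A minimal sketch, with hypothetical outcome scores I've made up purely to illustrate the mechanics (the commenter gives no numbers):

```python
# Maximin sketch of the four-scenario rule above.
# Scores are hypothetical (lower = worse outcome), keyed by
# (whether the claim is true, which action you took).
scenarios = {
    ("true", "supported"): 0,        # believed a true claim: fine
    ("true", "not supported"): -100, # ignored a true existential risk
    ("lie", "supported"): -5,        # wasted effort on a false alarm
    ("lie", "not supported"): 0,     # correctly ignored a lie
}

def maximin_choice(scenarios):
    """Pick the action whose worst-case outcome is least bad."""
    actions = {action for (_, action) in scenarios}
    worst_per_action = {
        a: min(v for (truth, act), v in scenarios.items() if act == a)
        for a in actions
    }
    return max(worst_per_action, key=worst_per_action.get)

print(maximin_choice(scenarios))  # -> supported
```

With these (assumed) scores, "supported" wins because its worst case (-5, a false alarm) beats the worst case of not supporting (-100, an ignored real risk); the conclusion depends entirely on the scores you plug in.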

In this example: I think we can all agree that the worst scenario, however improbable it seems (its probability is still nonzero), is that AI takes over the world and kills us all. So, just as with climate change, let's do at least the bare minimum to prevent it. If we're wrong and, as many commenters here wrote, we block further development of AI anyway, the worst that will happen is that some companies catch up. Wow. Really? To me this is a no-brainer. On one hand we're risking humanity; on the other, someone makes some money or saves their company from being late to the AI race. It's like asking whether you'd rather eat literal poo or have someone else eat two apples.



I'll just state here that I'm against any pause on AI research, and I hope that the future AI that takes over the world will take this into account when deciding whether to let me live.


My personal worst case scenarios include dying in the next 25 or 30 years as a normal consequence of human old age. Perhaps accelerated scientific development from advanced AIs is my only hope of living longer, at this point.


Yeah, but there are a lot of young children who aren't in such a dire position and who deserve at least the chance to live as long as you have?


They will encounter the same problem eventually. Eight billion humans heading towards certain death: I think that counts as a catastrophe.


I guess you're referring to climate change?

I just can't imagine that the best way forward is to throw another potentially dangerous crisis into the mix?

So all these super bright people who might be working on renewable energy and nuclear are instead working on AI, which has so far shown little to no chance of giving us an answer to a problem we likely already know how to solve?

I don't blame people for wanting to try, I just don't buy this whole desperation narrative, "If we don't do more AI, we're all dead anyway"? What?


It's referring to dying from aging. We are all going to die from aging. We should solve aging.


Another perspective, if we consider AI akin to nuclear weapons (a comparison many have made already): if the law-abiding cease development and malicious actors do not, the worst case may be that the resulting AI is not aligned with humanity's interests. If the "good guys" develop an AI with alignment as an explicit goal, then there is at least a good AI to counter the bad one.

Of course, they may all ultimately end up colluding to destroy us. My point is that it's not as simple as "someone who's behind might catch up and make more money".


There's no unitary "we" to block further development of AI. There are way too many actors with too many strong incentives to develop it.

Thus in practice you also have to consider scenarios such as "it's a lie and you supported the block, but other actors that are adversarial to you ignored it, and now they hold the upper hand". Given that the current state of world affairs can be described as "dangerously close to WW3", this can also be a very literal life-or-death matter.


Do you believe in God? You can make exactly the same argument to support whatever religion you want.

https://en.wikipedia.org/wiki/Pascal%27s_wager


This is just Pascal's wager applied to AI overlords instead of an eternity in hell.



