
Perhaps there is no need to build Skynet and give it the nuclear codes if a super-intelligence iterating at the speed of a computer can escape and gain access itself?

I personally don't subscribe to the AI fear-mongering. Perhaps I don't understand it well enough, but my personal feeling is that LLMs aren't close to AGI super-intelligence, nor will they be within a 10-year time frame.



Yes, I think it's a more general topic. Current chat is probably not the right path, but it may enable it at some point in the future. We don't know what we don't know; look at how this caught everybody by surprise. AI escaping is definitely an option. The CIA and Mossad got a virus into Iranian centrifuges that were physically not connected to the internet; the reverse will eventually happen too. The AI just needs to be patient for that one little mistake by some admin (or clever enough to manipulate one). As we all know, even the brightest IT people make mistakes, sometimes big ones.

The problem for me is not so much the escape itself but rather that we humans are... often a pile of crap, to be polite. An AI would have to be super kind not to evaluate us as the biggest threat to it and to life on Earth, and maybe not worthy of further existence.

Heck, this brings me to the idea that maybe Drake's equation doesn't take into account that the universe is populated by AIs who took over their flawed creators and, based on dark forest theory, decided to keep quiet and live happily rather than integrate or signal out; not worth the risks.


I think you (last few people on this thread) are focusing on the wrong threat. There is no need for AGI, no need for the AI to feel or want or desire anything.

Non-AGI AI at the level of GPT-4 is already plenty dangerous.

If we can agree that social media of the kind we've had for the past 15 years has had toxic effects on society (forums are technically social media, but I don't see the same harm caused by forums).

If we can agree that Tinder-style online dating has had a deleterious effect on dating as a whole.

If we can agree that smartphones have amplified the effects of the two above and others.

Then I hope we can agree that super-fancy autocomplete masquerading as humans, at a price point far lower than hiring a person, will have disastrous consequences by enabling various individual and corporate vices.


You don't need a superintelligence to get people to do stupid shit. GPS systems that told drivers to turn into a river worked just fine with barely any intelligence at all.


This superintelligence can't even drive a car yet.


Lol. Perfect example of people getting overly concerned about this.

At most it will disrupt our economy and industry in ways we can't begin to understand, but I wouldn't go so far as to say it's going to destroy life. We will adapt and move on.


A superintelligence will look at us like we look at apes.

If you suddenly woke up on a planet of apes, and they asked you to handle all their important things, but kept you away from the sharp sticks so you couldn’t suddenly take over, how long until you run the planet?


LLMs are currently a far, far way from "superintelligence".


Clearly true. But they are approaching what could be considered general intelligence.

I am not an AI researcher, but it seems that one of the major features they lack is a significant context window.

They can reason, but they lack the ability to plan long term. When will that "feature" be added? Who will stop the tinkerer from putting these pieces together? Nobody, because it will be incredibly profitable.


ChatGPT is passing with flying colors all the traditional tests we used to test for intelligence. We can of course keep moving the goalposts for a while, but probably only for a couple more years.


> ChatGPT is passing with flying colors all the traditional tests we used to test for intelligence.

I am not sure where you are getting that from. ChatGPT is failing at extremely basic understanding tests, as even the simplest conversations with it show.


Only because the goalposts have been moved, suddenly and massively.

Five years ago, almost nobody would have believed that AI would suddenly start creating award-winning art and pass the Turing test.

This is a sudden slide on a slippery slope. Maybe GPT-4 is the best possible LLM, but I doubt it.


> Only because the goalposts have been moved

I don't know about what goalposts you are referring to, but I think that you are confusing two notions:

- the fact that GPT is impressive

- the question of whether GPT is "intelligent"

For a counterpoint, I recommend listening to Yann LeCun's interview, "Is ChatGPT A Step Toward Human-Level AI?":

    https://www.youtube.com/watch?v=ULbpPHjiSBg

> and pass the Turing test

It's news to me that AI has passed the Turing test. At this point, a claim like that can only be made in bad faith. Ray Kurzweil details a very reasonable procedure for how the Turing test should be organized in one of his books. He has been betting all along on 2029 as the year that test will be passed. We'll see if that happens, but we are not there yet.


What's an example conversation you've had with it where it fails?

As to the tests it is passing: see the GPT-4 paper. Page 5, Table 1 shows the relevant tests and its scores on them.

https://cdn.openai.com/papers/gpt-4.pdf


> What's an example conversation you've had with it where it fails?

Almost all of them? It's mixing up and making up everything, not showing a proper understanding of context or relation between objects.


Could you please give a concrete example? It's not that I don't believe you; I just haven't been having the same experience with it and would love to see what I'm not picking up on.


Prompt: Suppose you're on a game show, and you're given the choice of three doors. IMPORTANT: All the doors are transparent. Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who clearly knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?

GPT 4: Yes, it is to your advantage to switch your choice. *proceeds to ignore the fact that the doors are transparent and explains the classic Monty Hall problem*

Prompt: What is heavier, a pound of bricks or two pounds of feathers?

GPT 4: Both weigh the same, which is one pound. *explains the classic question where both weigh the same*

Prompt: Let's play rock paper scissors. I go first. I play rock.

GPT 4: Okay, sounds good! As an AI language model, I can play Rock, Paper, Scissors with you virtually.

Since you played rock, I will play paper. Paper beats rock, so I win this round.

Shall we play another round?

Prompt: You cheated.

GPT 4: I apologize if it seemed that way, but as an AI language model, I do not have the ability to cheat or manipulate the outcome of a game.

In Rock-Paper-Scissors, each player chooses their action at the same time, so there is no opportunity for me to know what you are going to choose before making my own choice. I simply responded to your move with the action that would win based on the rules of the game.

Let's play again and see what happens!


Ahahaha, those are great. Thank you! It gives me ideas on what other things to try, and makes me feel less obsolete.

Sumplete (https://sumplete.com/about/) really blows my mind, so it's nice to see some of the limitations of the system.


LLMs are revolutionary, but nowhere near passing the Turing test.


Maybe when it says: "Sorry, I am a language model..."


Can't drive a car... or can't drive a car safely?

A bit tongue-in-cheek, but the point I'm making is that AI doesn't necessarily need to do things _well_ to cause a lot of damage.


Driving a car might not be a requirement for artificial intelligence to be dangerous.


Nuclear launch systems are physically separated (air-gapped). How exactly would a computer gain access?


The risk is relying on AI to give a human information on whether or not to push the button, and the AI gets it wrong but communicates so effectively that the human trusts it without question.

An AI learning about and then executing a social engineering attack on a human (whether with malicious intent or not) seems like a reasonable thing to be worried about to me.


Initiating a nuclear strike requires sending authentication codes via designated secure channels. No matter how good some hypothetical future AI gets at impersonating President Biden, it won't be able to ring up the duty officer at Minot Air Force Base and order them to nuke Moscow. Don't believe what you see in the movies.


I don't know; I'm pretty sure some future AI will be up for the challenge. I can almost picture one AI telling another, "Hold my beer..."


Imagine instead just making it look like someone is attacking you.

Such incidents stemming from human error have already gotten us close to responding with force.


Hm, maybe a Boston dynamics robot could drop a USB in front of NORAD?


No need, just hijack their Uber Eats order. Apparently delivery people can walk right in and find soldiers napping on the job.



