The "breakthrough" AI of today is deep learning on massive amounts of data applied to two areas: speech/NLP and vision. How this generalizes to a superintelligence that can take over the planet is a strange leap. Does a child need to look at billions of images to figure out what a chair or a cat is? Will this AI figure out how to select a good business partner?
The problem is that you ask someone who is good in one field (say, electric cars or theoretical physics) to opine on something like AI. The correct response is to say that you don't know anything about AI, but the ego of being a public intellectual prevents that. So what is the safest option for not seeming dumb? Say something like: we need to make sure safeguards are in place to keep AI from becoming dangerous and killing everyone.
Meanwhile, the people who actually build these systems know that they are not generalizable to a variety of tasks (the way humans are) and that they are not intelligent. At best, they augment humans at their tasks.
>Does a child need to look at billions of images to figure out what a chair or cat is?
Of course! Not exclusively images of cats or chairs, but children have absolutely seen billions of images by the time they start to exhibit discernibly human-level intelligence.
Assuming you register an image every 400 ms, which is a good approximation given blinking and the activation of neural pathways, a billion images at that rate works out to roughly 12.7 years of never stopping to watch.
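A quick back-of-the-envelope check of that figure (taking the 400 ms assumption as given):

    # How long does it take to "see" a billion images at one per 400 ms?
    images = 1_000_000_000
    seconds = images * 0.4                     # 4.0e8 seconds
    years = seconds / (60 * 60 * 24 * 365.25)  # convert to years
    print(f"{years:.1f} years")                # -> 12.7 years of nonstop viewing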
There have been systems that learned to generalize after seeing a couple of examples, not thousands (digit recognition).
A child can see just one animal and label it as a monkey. An algorithm could probably do the same with more algorithmic machinery, but we are still not there.
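For a rough illustration of few-example generalization (a minimal sketch assuming scikit-learn and its bundled 8x8 digits dataset, not the actual systems referenced above): a 1-nearest-neighbor classifier given only two labeled examples per digit already scores far above the 10% chance baseline.

    # Few-shot digit recognition sketch: fit 1-NN on just 2 examples per
    # digit class, then evaluate on all remaining samples.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_digits(return_X_y=True)
    rng = np.random.default_rng(0)
    train = np.concatenate(
        [rng.choice(np.where(y == d)[0], size=2, replace=False) for d in range(10)]
    )
    test = np.setdiff1d(np.arange(len(y)), train)

    clf = KNeighborsClassifier(n_neighbors=1).fit(X[train], y[train])
    print(clf.score(X[test], y[test]))  # typically well above the 0.10 chance level

That is nowhere near a child's one-shot labeling of a monkey, but it shows that learning from a handful of examples is not exotic.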
>The "breakthrough" AI of today is deep learning on massive amounts of data applied to two areas: speech/NLP and vision.
What of IBM's Watson? It's one of the premier AI projects in the world, its specialty is neither of those two areas, and it utilizes deep learning on massive amounts of data.
>How this generalizes to a super intelligence that can take over the planet is so strange.
No one's really saying it does, just that recent progress in AI has accelerated such that it seems probable that even more progress is imminent.
>The correct response is to say that you don't know anything about AI.
I really tire of this constant appeal to authority; it's arrogant at best. By that same logic, Nick Bostrom shouldn't have even written his latest book, due to a lack of technical knowledge.
Watson has a very superficial understanding of what is being asked. It would be able to tell you the answer based on the data it was trained on, but it wouldn't really understand what the question meant or what the response meant. I'm sure you'd have a very difficult time having a conversation with Watson or using it for anything other than Jeopardy-style questions.
I never claimed Watson had anything beyond a superficial understanding, nor did I claim it was somehow generally intelligent.
My sole point was that it's a state-of-the-art AI project that utilizes deep learning, and its primary specialization is neither vision nor speech/NLP.
>... or using it for anything other than Jeopardy-style questions.
Given how many verticals IBM is building Watson for, they would likely disagree strongly with that statement.
It's designed to interact with humans using natural language: to break down normal speech and build a cognitive process around it to get answers.
I've always thought of Watson as an automated data analysis platform at its core. However, after doing a bit of research, you're essentially correct.
Watson is technically multi-faceted, but the majority of its focus right now is NLP. In the future, I imagine that will change as they improve upon Watson's vision and structured data support.
I don't agree with the definition of "evil" used in this article.
Eichmann, for instance, didn't kill the Jews because he was "wicked"; he did it because he was following orders. That's evil enough, and he hanged for it.
A while back we wargamed the idea of an "Evil Teddy Ruxpin" that would want to harm you with all its might but wouldn't have much might so it wouldn't be dangerous. It might be fun to battle with, but we figured it wouldn't be safe because it could always start a fire.
Okay, I'll bite. I'm not particularly well-versed in AI issues, but this article is of the end-is-nigh variety and HN tends to be a technologically optimistic community, so I'm hoping someone can debunk this and give us reason to be optimistic rather than terrified of our future as human batteries in The Matrix.
Especially given how we reproduce without bounds, consume all the resources available, and rather than dying we simply find other resources to consume to maintain our steadily growing population.
For the hundred thousandth time, a generation of humans will be confronted with the necessity of incorporating into their society a group of beings which they created and which they love, hate, fear, trust, and, most of all, barely understand. For a million years, that group was "their own biological children," but over the next few decades, that group might also come to include "their mental children: AIs."
In other words, we're damn good at this. We'll make it. :-)
Not really, because AI isn't really our species. And look at all the other creating we've been doing with other animals: taking care of them, nurturing them, and ultimately supporting 150 billion of them dying each year.
Plenty of times when we had a chance to accept what science calls "Homo something" (Neanderthals and the like) as members of our family, we exterminated them for looking weird and behaving differently from us, just because we had some superior characteristics.
We couldn't respect our fellow evolutionary brothers; how can we respect something that is a purely material creation?
It took centuries for slavery to be abolished, and there are still parts of the world where it's accepted.
It'll take a lot for us to shift to seeing the cleaning robot we chat with every single day as something equivalent to us in spirit, deserving of a right to freedom of movement and a right to avoid pain (physical or psychological).
People will consider AIs pure machines while accepting their own biological machine as a miracle; both machines give the spark to intelligence, but one will be worth more than the other.
Damn, we have needs for water, food, shelter, companionship, and procreation, and we understand the world around us (otherwise we wouldn't survive). The same is true for almost every other non-human animal, and we have no problem with their deaths on a massive scale.
I really hope our morals and mentality get the same exponential shift that technology is getting. Past evidence seems to point to bad outcomes, but I guess we're just on the knee of the curve :D
I do not think we will get AI in the near future. What we have at the moment are more like 'Intelligence Amplification' tools for humans to use/direct.
But there are 3 concerns:
(1) Maybe, although the chance is small, the outcome is so bad that probability × payoff is large enough that we ought to worry about and think through it? (A toy sketch of this follows the list.)
(2) If it happens that jobs lost to AI are not replaced by other work (hard to say), then we have unemployment and social problems.
(3) The current deep learning breakthroughs in image recognition, speech recognition, etc., make it much easier to process all the surveillance data that is being gathered. When surveillance tools (and drones, for that matter) can be controlled by small numbers of humans, you should be worried.
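A toy sketch of the probability × payoff point in (1), with entirely made-up numbers, just to show why a small chance of a huge loss can still be worth thinking about:

    # Toy expected-value calculation; both inputs are illustrative assumptions.
    p_catastrophe = 0.001            # assume a 0.1% chance of the worst case
    lives_at_stake = 8_000_000_000   # roughly everyone alive today
    expected_loss = p_catastrophe * lives_at_stake
    print(f"{expected_loss:,.0f} expected lives lost")  # -> 8,000,000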
Historically, governments have usually required the support of a fair fraction of their populace in order to stay in power. Ordering soldiers to shoot their fellow citizens has always been risky for governments. Soon that might not be the case.
In the past, a nation's power depended on its level of technology, its capital equipment, and the number and skills of its population. There was an incentive to have a skilled, well fed, and content populace.
Maybe a large part of the populace will no longer be 'needed'?
I guess you could counter points 2 & 3 by saying "Yes, but our democratic institutions are strong and our politicians are caring and intelligent - our societies will deal with these changes."
For myself, (3) scares me. You should be afraid of ending up like the Scottish Highlanders turfed out of their homes by chiefs who replaced them with sheep, or like the cart-horses that were replaced by the internal combustion engine (and were shot). There is no need to fear an AI taking over; it is humans you need to be afraid of.
> 6. Once artificial intelligence gets smarter than humans, we've got very little chance of understanding it
Is that really so...? My gut feeling is that it's probably not. I don't know exactly where this gut feeling comes from, but I think the underlying reasoning goes something like this: Richard Feynman was a hell of a lot smarter than I am, but I can still understand his ideas. Of course an AI could construct incredibly long mathematical proofs, and similar, that no human could verify, but that wouldn't be much like the difference between man and ape. Is there really an entirely different way of understanding the universe out there, one that is radically more productive than ours? I doubt it.
Another way to put it I guess is: I'm simply not sure the marginal utility of (raw) intelligence is that great. In fact I remember once telling my friends that my life would be so much better if I was just a little smarter. It was meant as a joke.
Yet another way to think about it is to ask what's holding back our understanding of the universe. I'd say it's not really "intelligence" at all, but rather "money". Take gravitational waves, for instance: Einstein predicted them some hundred years ago(!) and they were only detected just now, after spending I don't know how many millions/billions of dollars...
But either way, this is probably one of the most interesting philosophical questions of our time.
Wasn't it Hofstadter who hypothesized that an intelligent system will not have access to its own lower levels? I can't put my hands on the exact quote at the moment. If anyone remembers, I'd appreciate the info.
> It's entirely possible that the reason we've never met aliens is because they invented artificial intelligence before they could build spaceships capable of interstellar travel, and that discovery caused their extinction.
This is really not so clear, because it would require the AIs themselves to never invent space travel, measurable large-scale structures, or detectable signals, which seems unlikely (assuming these sorts of things are possible in the first place). And if astronomical evidence of these sorts is physically impossible, then there is no need to invoke AI to explain the Great Filter at all.