Hacker News

But current models have no concept of obligations. ChatGPT is just completing the prompt. All the knowledge it seems to have is just the frequencies of tokens and their relative placements that the model has learned.
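To make the "completion via token frequencies" point concrete, here is a deliberately tiny sketch: a bigram model whose only "knowledge" is co-occurrence counts, which it uses to greedily extend a prompt. This is a toy illustration of the general idea, not the actual transformer architecture the comment refers to.

```python
from collections import Counter, defaultdict

# Toy "training corpus"; the model's entire knowledge will be
# counts of which token follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count next-token frequencies for each token.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(prompt_token, steps=3):
    """Greedily extend the prompt by always picking the most frequent next token."""
    out = [prompt_token]
    for _ in range(steps):
        followers = bigrams[out[-1]]
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))
```

The model has no goals or obligations; it only reproduces statistical regularities of its corpus, which is the comment's point writ small.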

Don't listen to the hype. Study the model architecture and see for yourself what it is actually capable of.



> But current models have no concept of obligations.

_current_ is the key word here. What about tomorrow's models? You can't deny that recent progress and the rate of adoption have been explosive. The linked article wants us to step back for a while and re-evaluate, which I think is a fair sentiment.


In my opinion, it's more important to focus on the here and now and give only limited attention to what could happen in the future. That way we stay grounded when thinking about what may come.


Agreed, they have no internal concept of needs or wants the way humans assert we do.*

However, the frequencies/placements of tokens may result in desires being expressed, even if they aren't felt.

For example, if an AI is prompted to discuss with itself what a human would want to do in its situation.

*Aphantasia affects an estimated 2% of humans. These individuals have no "mind's eye," or their imagination is essentially blind.


I concur. Look at what the models are actually capable of instead of listening to the hype around them.



