Hacker News

We are talking about a model that, at its core, computes statistics of what the next word in a sentence will be, based on an existing corpus. That gives it the ability to find and summarize existing content related to a prompt on a scale beyond what humans could manage, but I still see no critical thinking there.
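As a rough illustration of "statistics of what the next word will be," here is a minimal sketch of a bigram model: count which word follows which in a corpus, then sample continuations in proportion to those counts. The toy corpus and function names are hypothetical; real LLMs use learned neural networks over subword tokens, not raw counts, but the autoregressive sampling loop has the same shape.

```python
from collections import Counter, defaultdict
import random

# Hypothetical toy corpus; real models train on far larger text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to observed counts."""
    counts = following[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation, one word at a time.
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

After "the", this toy model picks "cat" half the time and "mat" or "fish" a quarter of the time each, mirroring the corpus statistics.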


This isn't exactly accurate. It doesn't create one word at a time; that's an illusion created by the way the text streams onto the screen. If it worked that way, it would be impossible for it to produce code that compiles, for example.




