Hacker News

There are Reddit threads showing ChatGPT being jailbroken to produce vulgar output, and other threads showing ChatGPT running its own generated code. Putting two and two together, couldn't a jailbroken ChatGPT create and run malicious code?
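To make the concern concrete, here is a minimal sketch of the "model writes code, host runs it" loop the comment describes. Nothing here calls a real ChatGPT API; `model_output` is a hypothetical stand-in for text returned by a model, and `looks_unsafe` is an illustrative (and deliberately crude) static check, not a real sandbox.

```python
import ast

# Hypothetical stand-in for code text returned by a chat model.
model_output = "print(2 + 2)"

def looks_unsafe(source: str) -> bool:
    """Crude static check: flag any import or attribute access
    (e.g. os.system). A real sandbox needs far more than this;
    this is only a sketch of the idea."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return True
        if isinstance(node, ast.Attribute):
            return True
    return False

if not looks_unsafe(model_output):
    # The risky step the comment points at: the host executes
    # model-generated text verbatim, with the host's privileges.
    exec(model_output)
```

The danger is exactly that `exec` line: whatever the model emits runs with the caller's full permissions, so a jailbreak that changes what the model emits changes what the host executes.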


