
An expert that bullshits part of the time. You get to figure out when it's the expert and when it's the bullshitter.


It will also never be possible for a transformer model trained on words and optimized to put valid sentences together to even "know" when it is "right" or "wrong". A lie or falsehood is just as valid a sentence as a fact.

When and with what subjects it makes shit up is also heavily dependent on training data, and the result is straight up a black box. What good is a fact generator that can't be trusted?


I see this argument all the time. Why are you assuming that this technology just "stops" at the LLM level?

If I'm OpenAI or Google or whatever, I'm definitely going to run extra classifiers on top of the LLM's output to determine & improve the accuracy of results.

You can layer on all kinds of interesting models to make a thing that's generally useful & also truthful.
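The layering idea above can be sketched roughly like this. Everything here is a stand-in: `generate` and `confidence_score` are hypothetical placeholders for an LLM call and a separate verifier model, not any real API.

```python
# Hedged sketch: gating an LLM's answer behind a separate confidence classifier.
# Both functions below are canned stand-ins for illustration only.

def generate(prompt: str) -> str:
    # Stand-in for an LLM call; a real system would query a model here.
    return "The Eiffel Tower is in Paris."

def confidence_score(answer: str) -> float:
    # Stand-in for a second-stage verifier. A real one might use retrieval,
    # an entailment model, or self-consistency across multiple samples.
    return 0.9 if "Paris" in answer else 0.2

def answer_with_check(prompt: str, threshold: float = 0.5) -> str:
    answer = generate(prompt)
    if confidence_score(answer) < threshold:
        return "I'm not sure."  # refuse rather than confidently make things up
    return answer

print(answer_with_check("Where is the Eiffel Tower?"))
```

The point is only the shape: the generator and the truthfulness check are separate components, so the second can improve without retraining the first.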


Your last word is a bit of a weasel word. There is no currently known way to get at truth except to try something out and see what happens. Even marvelous GPS routing takes feedback from people driving down the routes and succeeding or failing. Add more layers, but without some equivalent of arms and legs, it won't be able to be sure about truth.

The nice thing about the easy to bamboozle GPT4 is that it can’t hurt anything, so its flaws are safe. Giving it these arms and legs is where the risks increase, even as the reward increases.


What even is a 'fact generator'? And why do you think an LLM is meant to function as one?

If you ask Wolfram Alpha - something which I think is actually meant to be a fact generator - "Which is the heaviest Pokemon?" it will happily tell you that it is Celesteela, and it weighs 2204.4lbs.

Is that a 'fact'?

It's certainly 'true', for some definition of the word true. The game Pokemon exists, and in it Pokemon have a weight. Of all the official Pokemon, that one is the heaviest. Wolfram Alpha has given you an accurate answer to your question.

But it's also completely made up. There's no such thing as a Pokemon, and they do not actually have weights.

So sure, transformer models can't be relied upon to generate facts. But so what? There's a lot more to the world than mere facts.


In JRR Tolkien's terms the pair are called "slinker and stinker", but in that case they are both kind of evil, whereas I don't see GPTs as anything more than correlations over large sets of integers. A correlation that yields incredible utility (for me), when it isn't stinking.


>you get to figure out when…

A useful skill to train. Increasingly so…



