
There used to be a Decision Science class at Stanford, where this exact scheme was used. Students were warned all the time to never indicate 100% certainty on any question, because if you ever did that and turned out to be wrong, you would fail the entire course because of that one question, even if it happened to be a minor homework assignment. I always thought this was a great way to teach people the lesson that you should (almost) never claim 100% certainty in anything, and that you should view knowledge through a probabilistic perspective.
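A minimal sketch of how such a rule might work, assuming plain log-probability scoring (the actual course rubric isn't specified here); the function name and the example numbers are just illustrative:

    import math

    def score(probs_assigned_to_correct_answer):
        # Sum of log probabilities. Assigning 0 to the correct answer
        # (i.e. 100% certainty in a wrong answer) sinks the whole total.
        total = 0.0
        for p in probs_assigned_to_correct_answer:
            if p == 0.0:
                return float("-inf")  # certain and wrong: instant fail
            total += math.log(p)
        return total

    print(score([0.9, 0.8, 0.95, 0.7]))  # ~ -0.74, an ordinary finite score
    print(score([0.9, 0.8, 0.95, 0.0]))  # -inf: one overconfident miss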


Unless you're arguing with someone who will take any missing confidence as a sign that you're completely wrong.


That's equivocation; the sort of confidence that people look for, and criticize you for lacking, is not the same thing as mathematical confidence. Mathematical confidence is basically entirely inhuman; we do not think that way naturally, and generally can't think that way even if trained.

There is no contradiction in being 90% confident of something and acting entirely humanly confident in it. Whether it is wise depends too much on context to make a snap decision.


>> "Unless you're arguing with someone that will take any missing confidence as sign that you're completely wrong."

This is 90% of people.


No. You have to stick to your guns that uncertainty is a valid disposition (maybe).


Just say that you're 101% sure, so the score will underflow into a positive score--or at least NaN--if you're actually wrong, and you will also get one bonus point for additional confidence from any person who does not understand math.


Yes but that guy over there says he's 110% certain you're wrong, so I'm going with his answer.


whyyy can't you delete your own posts? bah.


I'm disappointed that you don't score positive infinity for getting an answer correct with 100% certainty. Pass the entire course if you find one question you're really sure of. It would stop the teachers from putting in softball questions.


Probability doesn't work like that. Likelihood has an upper bound at 1, and log likelihood has an upper bound at 0. Likelihood just represents the probability you assigned to the actual outcome. You can't have a probability higher than 1.


You can't have a probability lower than zero, but you're still able to obtain negative infinity points. Why is there an upper bound but not a lower bound?


That's just the way log probabilities work. Log(0) is negative infinity, but log(1) is just 0.

You can have infinitely high scores if you use log odds though: http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/

But that can only happen if you get every question 100% right. Whereas getting a single question 100% wrong is easy to do.
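A small sketch of what a log-odds score looks like, per the linked post (illustrative only, not the course's actual rule):

    import math

    def log_odds(p):
        # log-odds of the probability assigned to the correct answer:
        # 0 at p = 0.5, +inf at p = 1, -inf at p = 0
        if p == 1.0:
            return float("inf")
        if p == 0.0:
            return float("-inf")
        return math.log(p / (1 - p))

    print(log_odds(0.5))   # 0.0
    print(log_odds(0.99))  # ~ +4.6
    print(log_odds(1.0))   # inf: certain and right
    print(log_odds(0.0))   # -inf: certain and wrong

Summed over questions, a single certain-and-wrong answer still drags the total to negative infinity, so the only route to an infinitely high total is being certain and right on every single question.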


Why is log probability the correct way to evaluate this?


It's convenient, because multiplying probabilities together quickly produces very small numbers, often smaller than can even be represented in floating point.

Whereas logarithms turn multiplication into addition, and the results shrink very slowly.

Personally I prefer geometric mean. Which is equivalent to the average log likelihood. It has an upper bound at 1 and lower bound at 0, and represents the "average" probability you assigned the correct answer.
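For what it's worth, here's a quick illustrative check of that equivalence (the numbers are made up):

    import math

    # probabilities assigned to the correct answers
    probs = [0.9, 0.8, 0.95, 0.7]

    geometric_mean = math.prod(probs) ** (1 / len(probs))
    exp_avg_log = math.exp(sum(math.log(p) for p in probs) / len(probs))

    print(geometric_mean, exp_avg_log)  # both ~ 0.832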


I think I read a blog post about that somewhere.


Because it takes a lot of skill to be "always wrong" (you would have to know the right answer in order to answer incorrectly). No skill is represented by getting 50% right/wrong, and indeed it scores you 0 points.


I don't think that answers mikeash's question. He is not talking about always being wrong. He is pointing out that the incentives are not symmetric. If you say that you are 100% certain, and you are wrong, then you fail the class. If the incentives were symmetric, if you said you were 100% certain and you were correct, then you would automatically pass the class.

A potential answer is: this asymmetry is similar to incentives outside the classroom. Claiming 100% certainty and being wrong can be disastrous to your reputation, and likely much more negative than the positive benefits of claiming 100% certainty and being correct.


Getting 50% right/wrong while expressing 100% confidence in your answers scores you negative infinity points. Indeed, getting 99.9% right/wrong while expressing 100% confidence scores you negative infinity points. But getting 100% right with 100% confidence only scores you a finite positive number of points.


mikeash was not advocating for a probability higher than 1. Rather, he was advocating for an infinite reward for indicating a probability of 1. We could reframe his suggestion to be, why are the incentives not symmetric? (I think there are good reasons for this, but it's a valid question.)


I was just saying that's the way the math works out. It's not an arbitrary decision; it just happens that the natural way to score probabilities allows negative infinity from a single question.

You could make an arbitrary scoring metric that does whatever you want. But it wouldn't be principled, or have nice mathematical properties like this.
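One of those nice properties, sketched numerically under the assumption that we're talking about the log score: it's a proper scoring rule, so if you actually believe something with probability q, your expected score is maximized by reporting exactly q rather than hedging toward 50% or exaggerating toward 100%.

    import math

    def expected_log_score(q, reported):
        # q: your true belief; reported: the probability you write down
        return q * math.log(reported) + (1 - q) * math.log(1 - reported)

    q = 0.8
    for reported in (0.6, 0.7, 0.8, 0.9, 0.99):
        print(reported, round(expected_log_score(q, reported), 4))
    # 0.6   -0.5919
    # 0.7   -0.5261
    # 0.8   -0.5004  <- maximum, at the true belief
    # 0.9   -0.5448
    # 0.99  -0.9291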


The math was chosen arbitrarily so every outcome is also arbitrary.


That would make the class too easy to game. Just answer one question you feel most confident about, with 100% certainty, and if you happen to be right, you can blow off the entire rest of the course and get an instant A+.


The thing is that every time you drive down the highway, you're betting your life that the direction you're pointing the steering wheel is correct (X degrees too far and you've hit the concrete barrier). So effectively people choose 100% on a regular basis in real life.


I've had an unfortunate encounter with a concrete barrier when my vehicle stopped traveling in the direction I was pointing the steering wheel. Neither I nor my passenger lost a life. My point is that we are also placing side bets (and trust) in crumple zones, seatbelts, airbags, and angled dividers that redirect forward momentum (and car roofs that hold up when you're upside-down on the freeway). Thankfully those side bets paid off for me and my passenger.


The overconfidence effect [0] is real. I see it both in made-up games (90% of a class thinks they'll score in the top 50% on a test, or 90% of people think they're better-than-average drivers) and in real situations where people overestimate their ability to meet deadlines. In complex software development projects, this overconfidence has a lot of second- and third-order effects.

[0] https://en.wikipedia.org/wiki/Overconfidence_effect


> you should (almost) never claim 100% certainty in anything

Sort of off-topic, but this is especially good advice when giving testimony. I gave a deposition in a patent case once and, as I remember it, I was specifically instructed to qualify every answer with a statement of how certain I was. You can get into all kinds of trouble if you say "X is Y" and it turns out that X is actually Z. But if you say "I believe X is Y," well, no one can argue with that!


Good for a career in politics.



