
Assuming your win rate goes from 2% -> 4%, you need around 200 samples to be 90% confident that something is up, 270 to be 95% confident, and 450 to be 99% confident, if you start with a Beta(2, 98) prior. Obviously results will change somewhat depending on the model.
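A minimal sketch of that calculation, assuming the Beta(2, 98) prior from above and conjugate updating (the function name and the use of scipy are mine, not from the comment): after k wins in n games the posterior is Beta(2 + k, 98 + n − k), and the confidence that the true win rate exceeds 2% is the posterior's upper tail at 0.02.

```python
from scipy.stats import beta

def prob_win_rate_above(threshold, wins, games, a0=2.0, b0=98.0):
    """Posterior P(true win rate > threshold), starting from a Beta(a0, b0) prior.

    With a conjugate Beta prior, observing `wins` successes in `games` trials
    gives a Beta(a0 + wins, b0 + games - wins) posterior; sf() is its upper tail.
    """
    return beta.sf(threshold, a0 + wins, b0 + games - wins)

# At a true win rate of 4%, 200 games yield about 8 wins on average,
# and the posterior confidence that p > 0.02 comes out around 0.9:
print(prob_win_rate_above(0.02, wins=8, games=200))
```

The 90%/95%/99% thresholds in the comment correspond to where this tail probability crosses those values as `games` grows.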


From the article:

> Parthenocarpy:

> Well that explains me going from a 2% winrate to 17.95% in the span of two weeks

In that case, as soon as you got more than one win within the first handful of games you should have noticed an abnormality: a back-of-the-napkin frequentist estimate says about 8 trials are enough to perceive the difference.
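To make that napkin math concrete (my own sketch, using only the stdlib): under the honest 2% win rate, the chance of seeing two or more wins in 8 games is the binomial upper tail, which lands around 1%, i.e. a classical p-value small enough to flag.

```python
from math import comb

def binom_tail(n, p, k_min):
    """P(X >= k_min) for X ~ Binomial(n, p), summed directly."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

# Probability of 2+ wins in 8 games at a true 2% win rate: about 0.01.
print(binom_tail(8, 0.02, 2))
```

At the buffed 17.95% rate, by contrast, two wins in 8 games is close to the expected outcome, so the discrepancy shows up almost immediately.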

Also, that assumes you only get information from wins and losses. In reality, every interaction should provide information: monsters dying too easily, abnormal damage numbers, and so on.


For a difference that drastic, you need to play only about 30 games on average to be 99% sure that p_win > 0.02 in the Bayesian model.
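A Monte Carlo sketch of that claim, under my own assumptions (the Beta(2, 98) prior from the earlier comment, a true win rate of 17.95%, and a stopping rule that halts once the posterior tail first exceeds 0.99):

```python
import random
from scipy.stats import beta

def games_until_sure(true_p=0.1795, threshold=0.02, target=0.99,
                     a0=2.0, b0=98.0, max_games=500, rng=None):
    """Simulate games at `true_p`; return the first n at which the
    posterior P(p > threshold) exceeds `target` under a Beta(a0, b0) prior."""
    rng = rng or random.Random()
    wins = 0
    for n in range(1, max_games + 1):
        wins += rng.random() < true_p
        if beta.sf(threshold, a0 + wins, b0 + n - wins) > target:
            return n
    return max_games

rng = random.Random(0)
runs = [games_until_sure(rng=rng) for _ in range(500)]
print(sum(runs) / len(runs))  # averages on the order of 30 games
```

The average stopping time is what the "about 30 games" figure refers to; individual runs vary with how quickly the first few wins arrive.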

Agreed that win rates are not the only data available, and that players probably should have noticed the difference. Attributing it to a bug is arguably a harder problem.



