How our minds always over-generalize
Lightning talk - in English
We systematically overestimate the quality of our insights and overfit to very small sample sizes. How many boyfriends do people need before concluding that men are all the same? Two? Maybe three?
While this might have been a good strategy when dealing with saber-toothed tigers, cave bears, and mammoths, where you rarely got a second chance when you were wrong, it is rarely useful in the modern world.
We will look at examples from the Premier League, Machine Learning, and people in general to investigate when this approach can go badly wrong and when it does not matter much.
Finally, we will talk about strategies for avoiding these kinds of issues.