So far, I’ve dismissed intuition on the basis of bad examples of it. Real cases of intuition aren’t like that, and are usually far from pernicious. For example, someone might have an intuition that you're a vegetarian, or an intuition that a third-grade class will respond well to a particular activity. These count as intuitions if the person can’t put their finger on exactly what led them to those conclusions, but they have an internal sense that it’s not just a guess.
This could seem to be simply fuzzy or ungrounded thinking, but machine learning is giving us a different type of model.
Sometimes the output from machine learning models is both correct and inexplicable. Machine learning can be inexplicable in a number of related ways, but a common one is the multidimensional nature of the relationships it finds. In a one-dimensional relationship, you’re comparing things according to a single property: the paints are either water- or oil-based. If you're simultaneously comparing them in terms of price, color, wall coverage, customer ratings, manufacturer, and how much they’ll glare, you're putting them in a multidimensional relationship. Machine learning doesn't care how many dimensions it needs in order to usefully sort matters out or to make accurate predictions. It will happily consider millions of dimensions if the machine has enough power. Try wrapping your puny earthling brain around a million-dimensional model!
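To make that contrast concrete, here is a minimal sketch in Python, assuming scikit-learn is available and using invented paint data: each paint becomes a vector of features, and the model is free to weigh all of those dimensions at once when predicting something like a customer rating. The column names and numbers are purely illustrative, not real measurements.

```python
# A hypothetical sketch: paints as points in a multidimensional space.
# The data is invented for illustration; scikit-learn is assumed to be installed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# One-dimensional comparison: a single property (water- vs. oil-based).
base = ["water", "oil", "water", "oil"]

# Multidimensional comparison: each paint described by many properties at once.
# Columns: price ($/gal), coverage (sq ft/gal), glare (0-1), is_oil_based (0/1)
features = np.array([
    [28.0, 400, 0.20, 0],
    [45.0, 350, 0.70, 1],
    [33.0, 420, 0.30, 0],
    [52.0, 300, 0.80, 1],
])
customer_rating = np.array([4.1, 3.6, 4.4, 3.2])  # what we want to predict

# The model uses every dimension it is given; adding more columns just means
# it works in a higher-dimensional space, whether 4 dimensions or 4 million.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(features, customer_rating)

new_paint = np.array([[30.0, 410, 0.25, 0]])
print(model.predict(new_paint))  # a prediction, with no tidy one-line explanation
```

The point of the sketch is only that nothing in the model limits how many columns you can add; the prediction comes out the same way whether the space has four dimensions or far more than a person could visualize.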