Un-Surprise

[This is not a post about quantum mechanics]

Imagine a world in which humans understood quantum mechanics.

Understood it intuitively, that is. Understood it in the same way that you can look at a face and recognize that it is not your mother. There are a lot of neurons in your brain working up a storm, going through the calculations that end up with you knowing it is not your mother. But “you” (the conscious, thinking part of you) have no direct access to these computations; you are just aware of the end result.

Imagine that, in a similar way, neurons evolved to carry out the computations underlying quantum mechanics. If you asked someone what would happen when setting up a fine grating just so and firing a beam of photons at it just so, they would say, “you’d get a diffraction pattern, obviously”. If you showed people an image of elementary particles interacting, with some of the pieces missing, they would squint and say, “That looks funny”. But they couldn’t say how they knew it.

[Image: a Feynman diagram]
“Well, obviously”

Let’s call this imaginary world Q-Earth. Also for the sake of argument, let’s suppose that on Q-Earth the math for actual quantum mechanics has not been invented yet. People know of the phenomena of elementary particles, but do not yet have the formal tools to explain them.

With all these assumptions in place, the initial question is: What would coming up with a quantitative model of quantum mechanics look like, on Q-Earth?

Suppose one day, a clever Q-Earth researcher does come up with a model for quantum mechanics. It is horribly complicated and not fully worked out, and includes all sorts of things like “Matrices” and “Hermitian operators”[1]. This clever person, let’s call her Quinn, is terribly excited about her discovery. After triple-checking her math, she’s finally ready to bring it up with some colleagues.

“I’ve got a formal model for all this fundamental particle business,” Quinn declares over lunch.

Some raised eyebrows.

“Oh? Does it make any predictions?” one of them offers. It’s Quinn’s office-mate, Quido.

“Yes! Look,” Quinn grabs a salt shaker and some napkins, “Let’s say I put a magnetic field horizontally here, and I send electrons through it like this…my model says you’d get this binary pattern as a result.”

Quinn looks up from her napkin-and-salt heap, beaming.

People are exchanging bored looks.

“That’s interesting,” Quido says, in a tone suggesting it isn’t, “But it’s not really surprising, is it? I mean, any of us could have told you that’s what would happen.”

Quinn stabs a napkin with her finger.

“It’s also what the math predicts.”

“Sure,” Quido shrugs, “But does the math make any surprising predictions?”

“I don’t know…What if we added a few blocking arrays and a horizontal field?”

Quido closes his eyes for a moment.

“You’d get a reversal of the pattern.”

“Right! But isn’t that odd? How did you know it would turn out that way?”

“It’s just obvious. I mean, how could it not, when you think about it? Anyway, what’s your alternative? Does your matrix-whatever-it-is, does it capture the variance better than some alternative model?”

“What, you mean like some Newtonian mechanics model?”

“No no no, that’s just a straw man. Everyone knows Newton doesn’t explain particle phenomena. I mean a serious alternative, you know?”

Quido notices Quinn’s dejection.

“I’m not trying to be an ogre here, Quinn, this is the sort of stuff reviewers will bring up too – if you ever get around to writing this up.”

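(An aside for Regular-Earth readers who want to see the flavor of math Quinn is scribbling: her napkin apparatus is fictional, but a standard spin-1/2 setup, in the spirit of sequential Stern-Gerlach experiments, gives the gist; the specific arrangement below is my gloss, not part of the story. Measuring spin along the z-axis has exactly two outcomes, Quinn’s “binary pattern”:

\[
S_z\,\lvert\uparrow\rangle = +\tfrac{\hbar}{2}\,\lvert\uparrow\rangle, \qquad
S_z\,\lvert\downarrow\rangle = -\tfrac{\hbar}{2}\,\lvert\downarrow\rangle,
\]

while the eigenstates along a perpendicular axis are superpositions of both:

\[
\lvert\pm_x\rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert\uparrow\rangle \pm \lvert\downarrow\rangle\bigr).
\]

So if you block the down-beam, pass what remains through a horizontal field, keep one of the two x-outcomes, and then measure along z again, the outcome you blocked reappears, since \(\lvert\langle\downarrow\vert +_x\rangle\rvert^2 = \tfrac{1}{2}\). On Q-Earth, Quido’s neurons hand him that answer directly.)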
===

The point of Q-Earth is this: A surprising prediction is a good measure for evaluating scientific models. If two models are otherwise equally appealing, the one that makes surprising (true) predictions should be preferred. This idea has philosophy-of-science underpinnings as well: Popper prized risky predictions, and in Bayesian terms, evidence that is improbable under rival hypotheses confirms a theory more strongly.

But that does not mean surprise should be a required part of a model, especially not when it comes to cognitive models of common human behavior. By definition, these models are trying to capture the kind of behavior that we can all produce effortlessly, intuitively, without necessarily understanding how we do it. We should not expect these models to predict something bewildering, because they are trying to formalize exactly what we find natural. We should expect them to predict the same sort of things a normal human would, while supplying the underlying quantitative, mechanical reasoning for the behavioral pattern: the sort of reasoning that can be implemented in a computer.

To give an example, consider intuitive psychology. This term refers to the way people quickly and consistently come up with explanations for other people’s actions, explanations based on hidden psychological variables like goals, intentions, and beliefs. People even attribute such mental motivations to simple two-dimensional shapes moving about for a few seconds, as in Heider and Simmel’s classic animation.

[Image: still from the Heider-Simmel animation]
“The circle is hiding from the big guy”

People make these attributions effortlessly, and they can come up with post-hoc explanations for how they did it, but those explanations are no more convincing than the ones people give for how they recognized an image of a horse. The goal of formalizing this ability is to give the same predictions and explanations that people do, even for new situations the formal model has not seen before. Now imagine the frustration of showing someone how your model can take in some new situation (say, a circle that shoves a square and then takes off at some speed, only to be blocked by a triangle) and predict what people would say (say, that the triangle was reasonably thwarting the circle from escaping after its misdeed), and hearing “well, that’s not surprising, is it? I mean, I could have told you that’s what people would say.” It misses the point of what the model is trying to do in the first place.

===

In reality, such comments have not been made specifically about the example above. But this post is a thinly-veiled rant about quite similar comments that I have heard over the years – less about my work and more about related work, mainly because I’m not that important. Of course, since I am in the business of building cognitive models of normal human behavior, Q-Earth is a particularly self-serving example: we on Regular-Earth know Quinn is right, whereas we don’t actually know that any of our cognitive models are right. A more accurate analogy for the current situation in cognitive science might be the competing pre-Maxwellian electromagnetic theories (some of which were pretty wacky and/or wrong). But the basic point would have been the same: it would have been even trickier to come up with ‘surprising predictions’ for a formal explanation of electromagnetic phenomena if our neurons could implement approximations to Maxwell’s equations.

Tomer Ullman is a post-doctoral associate in the Dept. of Psychology at Harvard and the Dept. of Brain and Cognitive Sciences at MIT.

[1] Matrix operations might not sound outlandish to some of us now, and they seem like an obvious and natural part of quantum mechanics. But keep in mind that when Heisenberg had the epiphany that led him to quantum matrix mechanics, he had no idea what a matrix was. That is, the math behind matrices had been around for a while, but physicists weren’t using it. Heisenberg had re-invented arrays and non-commutative matrix multiplication without knowing what he had found. When he described his ideas to Born, it took Born a few days to recall a lecture from his student days about matrices, and Heisenberg complained that he had no idea what a matrix was when it was explained to him. The point of bringing this up is that the Q-Earth situation is one in which the math needed to describe a set of phenomena may have been invented a while back, but has not yet been applied to those phenomena. It’s also just a cool anecdote (see ‘Quantum’ by Manjit Kumar for more).
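For anyone who hasn’t multiplied matrices in a while, here is a two-line illustration of why the order matters (my example, not one from Kumar’s book): take

\[
A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad
B = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.
\]

Then

\[
AB = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \neq
\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = BA.
\]

This order-dependence, which Heisenberg stumbled into by hand, is the same non-commutativity that later crystallized into the canonical relation \(\hat{q}\hat{p} - \hat{p}\hat{q} = i\hbar\).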
