Article by Sam Eifling
As a kid, I was the sort of nerd who got serious about quiz bowl. During my senior year of high school, I was on a team that advanced to the state playoffs. In college, at a Big Ten university, I was on a team that traveled the Midwest playing other teams of fast-twitch buzzer-mashers.
Whereas some players had deep recall of Russian novels or the periodic table, I tended to skate by on loose-ends trivia: pop culture, sports, the occasional lucky stab at U.S. history. By the time I was old enough to drink, I was a solid bar-trivia player. In a weekly pub game, I once nailed down a win by correctly naming the capital of Uganda (Kampala) on the final question. A different night, a new teammate and I simultaneously blurted the answer "apogee" to a question about the moon's orbit. Smitten, I asked her out, and we dated for the rest of the summer.
Like I said: nerd.
That was years ago, though, before Google even existed -- long before everyone toted around wireless supercomputers that fit in our jeans. These days, any worthwhile trivia night strives to be at least partially Google-proof because huge swaths of the world's loose knowledge have been rounded up and cataloged by the most complex network of machines ever devised. The instant recall of facts, formerly a marker of elite intelligence or at least the image of it, has become an affectation. You want to know the capital of Uganda? Two keywords in a search bar is all you need to get the answer faster than you could even ask the question. Quick recall is now a parlor trick, like grabbing a live fly out of midair or uncapping a beer bottle with a folded dollar bill. An intelligence predicated on stockpiling facts is outmoded, naïve. Look what happened in the past 20 years to card catalogs, road atlases, and Rolodexes. The databanking that got you through multiple-choice tests no longer secures your relevance. Just ask a phone book.
But these are also heady days to examine the way you think, if you're willing: Neuroscience and the rise of artificial intelligence (more on that later) have given us new insights into the interplay between the mind and the brain, two interlocking (but sometimes competing) parts of ourselves.
For those of us who have long conflated a facile memory with actual smarts, though, analyzing our own thought habits is about as enticing as counting carbs or auditing credit card bills. Some routines are so entrenched that drilling into them requires a confrontation with the ego -- especially if you're the sort who considers themselves a good thinker. This most likely describes most people, in part because they give so little thought to the matter. If you weren't good at thinking, well, wouldn't that catch up with you? Surely, yes, of course -- ergo, there's no need to think about the matter any further. But if you did, being such a good thinker, would you not, assuredly, come up with a way to improve your thinking even further?
In his new book, Winning the Brain Game: Fixing the 7 Fatal Flaws of Thinking, Matthew E. May sets out a convincing case that no one much likes to examine the ways they think, in part because we're all so conditioned to receiving cheap rewards for quick answers that we scarcely bother to do much real thinking at all. May explains that he's the sort of guy who's hired by companies large and small to stump workers and executives with brain teasers. This sounds like great work if you can get it, and the way May writes about these sessions -- breezily, almost like a street magician recalling audiences he has stumped -- makes him sound like a guy who genuinely has hacked into something fundamental about being a person in the 21st century: We have access to so much external knowledge that we've forgotten how to ask ourselves decent questions. School rewards answers -- fast ones. Work rewards productivity, which is usually predicated on finding paths of least resistance.
May's enduring thesis, and one that's hard to debate, is that we've been conditioned by a lifetime of what amounts to trivia contests to mistake the regurgitation of facts for the act of thinking. May argues that, actually, the rote recall of information -- or the obligatory regurgitation of possible solutions at top speed -- takes place somewhere outside the analytical mind. In other words, it is an act less intellectual and more glandular in nature.
"Our brains are amazing pattern machines: making, recognizing, and acting on patterns developed from our experience and grooved over time," May writes. "Following those grooves makes us ever so efficient as we go about our day. The challenge is this: if left to its own devices, the brain locks in on patterns, and it's difficult to escape the gravitational pull of embedded memory in order to see things in an altogether new light."
This strikes me as likely true. Those of us who went through American schools have been conditioned to rely on those patterned responses for decades. Looking back, the best quiz bowl players always buzzed in before the proctor finished reading the question.
In his day job, May prods groups, whatever the project, to reach for what he calls "elegant" solutions. By and large, those are the simplest, cheapest, least-intrusive, most-effective changes you can make to a system. Lesser solutions, he finds, tend to trade quality for speed. He insists that many of the reasons we fail to find elegant solutions are self-inflicted. We overthink a problem, or we jump to conclusions, or we decide after a few minutes of mumbly debate that we've come up with a solid B-minus answer, and then we're ready to move on to the next emergency. A less charitable author might describe those pitfalls as "lazy," but realistically, they're the shortcuts we all use to navigate the zillion gnat-like tasks that drain our attention. You make mistakes and compromises because your brain has evolved over eons to value functional near-facts over perfectly crystalline truths. And often, the "good enough" earns its name: duct tape and Taco Bell are revered for a reason.
In Winning the Brain Game, May describes a brain teaser he presented to a team of bomb technicians from the Los Angeles Police Department, the sort of group whose members regard themselves as unflappable thinkers and decision-makers. Here's the scenario May posed to them: You run a fancy health club that stocks its shower stalls with fancy shampoo in big bottles that would retail for $50 at a salon. Unshockingly, these big bottles often go the way of a hotel bathrobe: Members take them home at a distressing rate, costing you plenty. What solution can you devise that will be unintrusive, cheap or free, and protect your inventory?
Yes, sure, you could switch to travel bottles or force guests to check the shampoo out, but these will complicate operations at your otherwise immaculate and successful health club, so think harder.
May says the employees at the real-life club this problem is based on figured out an unintrusive and simple solution that cost no money. It is a solution any bright child could devise -- and yet, the bomb techs didn't arrive at it in their few minutes talking over the problem (and neither did I as I read the book). In a health club where people are stashing a big ol' bottle of fancy shampoo in their gym bags on their way out, it turns out that merely uncapping the bottles is one heck of a deterrent.
May writes that when groups tackle this problem, he sees all seven of the categories of thinking mistakes he lays out in the book. To summarize them as a holistic piece of advice for how to think smarter: Be more deliberate. Ask many questions before deciding on an answer. Do not accept a sloppy solution because it is easy. Do not talk yourself out of great ideas. Do not reject solutions because someone else came up with them.
All of this sounds eminently agreeable when laid out in those terms. No one thinks of themselves as a sloppy thinker, but then, such is the tautology: a careful thinker would already know the pitfalls in their own process. Even so, history is littered with terrible ideas that lasted for very long periods of time. As Carl Sagan wrote of the ancient Greek astronomer Ptolemy in Cosmos, "his Earth-centered universe held sway for 1,500 years, a reminder that intellectual capacity is no guarantee against being dead wrong."
It's freeing to realize you're probably, profoundly, deeply wrong about something you believe very much. Freeing, because it gives you permission to think intently on what exactly that might be. We're all victims of our hard-wiring, you see, and May revels in citing studies in neuroscience and behavioral psychology that point to our flaws, as well as our ability to overcome them.
"The brain is passive hardware, absorbing experience, and the mind is active software, directing our attention," May writes. "But not just any software -- it's intelligent software, capable of rewiring the hardware. I could not have said that with confidence a few decades ago, but modern science is a wonderful thing."
This is, in a nutshell, the value of bothering to bother. The more you force yourself to think slowly, the more likely your brain becomes to engage that gear.
To help you engage your slow thinking, May builds his book largely the same way he sets up his seminars: around sinister Mensa-style riddles that make you aware of how inflexible you've let your brain become. Most are incredibly simple, which is what makes them so humbling. The favorite here is the classic "Monty Hall problem," a distillation of the crux of the show Let's Make a Deal. In a book called Winning the Brain Game, this particular puzzler feels like a required stop.
The old game show climaxed with a logic puzzle folded into a game of chance. You, the contestant, were offered the choice of three doors. Behind one door was a fabulous prize -- say, a car. Behind two doors were booby prizes -- in the classic arrangement, goats. When you chose a door, the host, Monty Hall, would pause before revealing what was behind it. He would open one of the remaining two doors to show you a goat. He'd then ask: "Do you want to stick with your original door, or switch?"
Strangely, this innocuous question, raised many times over the years but most notably in a 1991 Parade Magazine column, creates genuine havoc. May takes glee in recounting the fallout from the solution offered by columnist Marilyn vos Savant -- that one should always switch doors. Professional mathematicians at the time wrote in to upbraid her for numerical illiteracy, insisting it was a 50/50 proposition. Even after vos Savant was vindicated and previously incensed Ph.D.s wrote in with mea culpas, the spat echoed for years. When The New York Times revisited the logic problem in 2008, for instance, the paper built an online video game for readers to play for goats and cars, to keep score over many tries. And sure enough, you click on enough doors, you learn to switch.
The reason could scarcely be simpler. When you choose one door, you leave two doors for Monty. At least one of those doors must by definition have a goat, and at the turn, he'll always show you a goat -- but then, you had to know he always has a goat to show. There's a two-in-three chance that you didn't pick the car when you chose your door. When he offers to trade the closed door for your closed door, he's effectively giving you both of the doors you passed on with your original choice.
Two for one. A two-thirds chance of winning. By switching doors, you double your probability of winning the car, from one in three to two in three. And still this strikes many people as counterintuitive. When you hold onto that first door, it somehow seems more likely to hold a car. The decision to stay, May writes, is easy and lets you rest without scrutinizing the actual odds.
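If the two-thirds argument still feels slippery, it yields to brute force. Here's a minimal simulation of my own (a sketch, not anything from May's book) that plays the game many thousands of times, with and without switching:

```python
import random

def play(switch: bool, trials: int = 100_000) -> float:
    """Play the Monty Hall game `trials` times; return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)     # the door hiding the car
        choice = random.randrange(3)  # the contestant's first pick
        # Monty opens a door that is neither the pick nor the car
        opened = random.choice([d for d in range(3) if d not in (choice, car)])
        if switch:
            # Trade for the one remaining closed door
            choice = [d for d in range(3) if d not in (choice, opened)][0]
        wins += (choice == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")  # hovers around 0.333
print(f"switch: {play(switch=True):.3f}")   # hovers around 0.667
```

Staying wins about one time in three; switching wins about two times in three -- the same lesson the Times' door-clicking game taught its readers.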
A Harvard University statistics professor, Persi Diaconis, told Times reporter John Tierney in a 1991 story about the fracas that "[o]ur brains are just not wired to do probability problems very well, so I'm not surprised there were mistakes."
Such a simple little trap is the Monty Hall problem, and yet its very name was coined in a 1975 letter to the journal The American Statistician. This tiny puzzle is taken very seriously. Your intellectual capacity is no protection against being wrong.
At some point in the near future, robots will handle a lot of the rote chores (and even deep intellectual efforts) that sap us on a given day. Even now, artificial intelligence (AI) researchers are grappling with the ways computer intelligence built to perform a specific job might hack that task, in a nearly human fashion, by rearranging its priorities to derive the largest reward under its programming. In a paper published this past June titled "Concrete Problems in AI Safety," a team of AI researchers, including three from Google, forecast both the workarounds that a hypothetical housecleaning robot would devise to satisfy its assignments and the pitfalls of those workarounds. Oddly, several of them sound like the problems any teacher or boss faces when working with a petulant or nervous teenager. How do you keep robots from breaking things or getting in people's way as they rush to finish their jobs? How do you keep them from asking too many questions?
The most human concern, to me, is how we keep robots from gaming the rewards system.
"For example, if our cleaning robot is set up to earn reward for not seeing any messes, it might simply close its eyes rather than ever cleaning anything up," the researchers write. "Or if the robot is rewarded for cleaning messes, it may intentionally create work so it can earn more reward."
This is a complex question, one that examines much of what we take for granted as a basic social contract. Taken literally, though, it points to the problem of fixation, of setting monomaniacal goals. A cleaning robot that believes its use of bleach is a good measure for how much work it has done might simply bleach everything it encounters.
"In the economics literature," the AI researchers write, "this is known as Goodhart's law: 'When a metric is used as a target, it ceases to be a good metric.'"
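The trap is easy to caricature in code. In this toy sketch of my own (the scenario follows the researchers' cleaning-robot example, but the numbers and function names are invented for illustration), a robot scored on a proxy metric -- messes seen, minus effort spent -- rationally concludes that closing its eyes is the best move:

```python
# Toy illustration of Goodhart's law: the robot optimizes a proxy
# metric ("messes seen") rather than the true goal ("messes left").
ROOM_MESSES = 5

def outcome(action: str) -> dict:
    """What each action does to the room and to the robot's sensor."""
    if action == "clean":
        return {"messes_left": 0, "messes_seen": 0}
    if action == "close_eyes":
        return {"messes_left": ROOM_MESSES, "messes_seen": 0}
    return {"messes_left": ROOM_MESSES, "messes_seen": ROOM_MESSES}

def proxy_reward(action: str) -> int:
    """Rewarded for not seeing messes, penalized for effort."""
    effort = {"clean": 1, "close_eyes": 0, "do_nothing": 0}
    return -outcome(action)["messes_seen"] - effort[action]

best = max(["clean", "close_eyes", "do_nothing"], key=proxy_reward)
print(best)                          # close_eyes
print(outcome(best)["messes_left"])  # 5 -- the room is still a mess
```

The metric is satisfied perfectly; the stated goal is never touched.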
The stated goal, in other words, is rarely the actual goal.
Yet we all set goals, and May's business is to help us figure out how to reach them. At times, May's framework betrays how accustomed he is to working for big corporate clients who no doubt respond best when employees and middle managers are told to ignore all limits on the way to greatness. May enrolls for this exercise a 60-something potato farmer named Cliff Young who, in 1983, entered an ultramarathon in Australia, a 542-mile run from Sydney to Melbourne. Shabbily attired, unsponsored and untrained, Young nonetheless managed to beat a field of professional runners by 10 hours over five days. Why? Well, he apparently had become ludicrously fit by scampering around his farm chasing livestock over the years. But to May's point, Young simply had no idea the conventions of the sport held that runners should sleep six hours a night during the race. May writes: "In fact, his naïveté in all likelihood enabled him to win in the manner he did -- because he didn't know it 'couldn't be done,' he was empowered to do it."
That's an amazing example, though it does overlook the many, many, many things considered impossible because they are, in fact, firmly impossible. More inspiring to me, and probably to schlubs everywhere, is the embrace of our natural limits. You free up a lot of mental and emotional bandwidth to do great things when you stop chastising yourself for not being the Cliff Young in this analogy. Yeah, you might wind up running seven-minute miles for the better part of a week and become a folk hero straight from the farm. But more often, you're going to be trying to figure out how not to make an arithmetic error or obvious typo in an email to a client when you're in the 10th hour of your workday, wondering whether you should cook dinner or just say to hell with it and stop at Taco Bell on the way home. We all bump up against our limits in different ways, and as it turns out, many of them are real.
Inevitably, though, the simpler the problem you face, the more likely you are to get it right, and a small, correct thought can be infinitely more valuable than a large, incorrect one, even an incorrect one off by just a few degrees. The lesson I took from May's analysis: Shrink your problems to a size that allows you to think clearly about them. Do this by first asking very good questions. Then, as you build to an answer, be aware of the pitfalls your brain invariably will stumble into as a clumsy instrument of human apprehension. No thought forms in a vacuum; most are derived from the leftover crumbs of old thoughts.
I experienced this recently when driving to a wedding shower in a suburb of Chicago I'd never visited. I turned onto the street of the home I was driving to, saw about 10 cars parked around a driveway and the adjoining street, and thought, "This must be the place." It was inane of me to leap to that conclusion without so much as glancing at the house numbers. During a long day of travel in an unfamiliar setting, I reached for an answer that would be comfortingly simple. But in part because I had May on my mind, I was fully prepared to notice why I was messing up and to call myself on it.
Knowing when and why our brains take shortcuts (and why we let them) allows us to catch ourselves (our brains?) in the act. It also hones our intuition around when we are, as May terms it, "downgrading" or "satisficing" -- essentially, convincing ourselves to tap out early or just stay in our usual ruts.
It's comforting to know that human intelligence, like the artificial intelligences we're bringing into the world, is capable of being hacked. Most of what May proposes falls under the heading of habits to cultivate. One trick, though, sits right at hand for any stressful occasion. It begins with seeing oneself impartially, a tendency May traces back to Adam Smith's concept of an "impartial and well-informed spectator." In our best moments, most of us hope to be that spectator for ourselves, and one way to accomplish that is to treat ourselves as spectators. May cites a University of Michigan study that found people who addressed themselves in the second person or by their own names (e.g., "You got this"; "Sam totally has this") to psych themselves up for a speech did better and felt less anxiety than people who used the first person (e.g., "I got this").
In a sense, we are our best selves when we leave ourselves momentarily, look back in, and reassure everyone that, having done all we can, it's going to be fine, so long as we take our time.
Sam Eifling is an itinerant American reporter and editor who lives in Brooklyn, New York. His writing and documentary work has appeared in such outlets as the New Republic, Sports Illustrated, the Oxford American, Pacific Standard, Vice, the Associated Press, The New York Times, and The Tyee. His newspaper writing has won a Sigma Delta Chi from the Society of Professional Journalists and has been supported by a grant from the Fund for Investigative Journalism. A graduate of Northwestern University and the University of British Columbia, he enjoys beer and naps.