Linking Out Loud #10: The Ideological Turing Test

By David Laing

Dear readers,

Consider your position on abortion. If you’re pro-life, imagine you’re doing some research at a pro-choice event and you want to conceal your true position. Suddenly, someone hands you a microphone and a spotlight shines on you—the crowd wants to hear you say a few words about why it’s so important to defend women’s right to choose. (If you’re pro-choice, imagine you’re at a pro-life event and you have to say why it’s so important to protect the life of an embryo or fetus.)

Would you be able to pass for a proponent of a belief you don’t hold? Or would the true proponents know you’re a fraud because your arguments were too superficial or too one-dimensional for a reasonable person to believe?

If you can fool the believers into thinking you’re one of them, you have passed the Ideological Turing Test, a concept from the economist Bryan Caplan. (The idea is derived from the original Turing Test, which tests a computer’s ability to pass for a human.)

The Ideological Turing Test is one of the most-consulted concepts in my mental toolkit. It’s useful every time I’m in a disagreement. If I find that I can’t summarize my interlocutor’s position to their complete satisfaction, I try to back off on persuading them and focus more on understanding them. Of course, summarizing their position accurately isn’t the same as passing the Ideological Turing Test, because they already know my true position. But the Ideological Turing Test is a good target to keep in mind, because it’s strict enough that it could be empirically verified if I cared enough to set up the right conditions.

The Ideological Turing Test is not only socially useful, but also personally edifying. When learning about a controversial topic, I do my best to learn enough about each position on that topic that I could pass the Ideological Turing Test for each of them. If I haven’t gone through this exercise for a given topic, I try not to opine on it—at least, not very stridently.

Importantly, the Ideological Turing Test is not the same thing as finding better arguments for foreign positions than their proponents actually offer—a useful but distinct practice sometimes referred to as steelmanning. Instead, the goal is to internalize the perspectives of real people who see something differently than you do. If you can pass the Ideological Turing Test for those perspectives, you will reach a much deeper understanding of the issue than you would if you were to focus only on how the various arguments play upon your own mind.

Yours comprehensively,

David

Linking Out Loud #9: Deep Laziness

By David Laing

Dear readers,

Laziness is usually frowned upon. We get paid, thanked, and admired for producing: for transforming chaos into order, for giving more than we take, for creating value rather than consuming it. Society doesn’t reward us for slacking off. 

But not all laziness is bad. The kind of laziness we frown upon is shallow—guilt-ridden and opportunistic, like when we take a too-long break from work while scrolling through social media. Shallow laziness is the kind that you can’t quite settle into. But there is another flavor of laziness that is worth aspiring to—a kind that is wholesome, even exquisite. I’ve started referring to it as deep laziness, after reading Sarah Perry’s post on the subject.

To be deeply lazy is to be truly settled. Mainly, it is to eliminate the distance between your ideal and your actual self. It is not so much about relaxation or effort as it is about the absence of internal conflict. Where shallow laziness is failing to check the time and showing up late, deep laziness is walking exactly as slowly as you want to because you know you’re on time. Where shallow laziness is ordering pizza for the third night in a row, deep laziness is knowing how to make an easy and healthy meal that you love. Where shallow laziness is aimlessly browsing the internet all weekend, deep laziness is cozying up and watching the entire Lord of the Rings trilogy because you know that nothing else would make you happier.

These examples may suggest that deep laziness is a simple matter of getting your shit together enough that you can be self-assured in your indulgences. But it involves more than that. Perry recommends that we start by identifying the handful of behaviors that come to us most naturally, and then slowly elaborate on those behaviors in ways that preserve and reinforce the desirable feelings that the behaviors elicit.

For example, last year I started playing online blitz chess. The feeling that chess elicits for me is that of flow, a state of immersive focus on a problem. Flow is one of the feelings I enjoy the most, so in pursuit of deep laziness, I have slowly made a series of modifications to my chess-playing habits, both in my style of play and in the sub-variants of blitz that I play most often. Each of these modifications has made chess a better source of flow for me. As a result I feel more like my true self, more deeply lazy for chess’s presence in my life.

The point is not to do more things that elicit flow, per se, although I expect that many of you readers are like me in that flow is one of the feelings you pursue the most. The point is to reflect on the things you do when you’re bored, and ask, “What feeling am I chasing when I do this thing?” Having answered that question, you can let your behaviors evolve toward a richer, more resonant eliciting of that feeling.

Deep laziness is not deciding abstractly what sort of person to be, and then imposing your behaviors from the top down. Deep laziness is constructing yourself from the bottom up; it’s taking the self-enriching behaviors that are already present, and gradually deepening the grooves that they carve through your days.

Yours lazily,

David

Linking Out Loud #8: Theory of Constraints

By David Laing

Dear readers,

You have likely heard the expression, “a chain is only as strong as its weakest link.” This saying is the mantra of a management paradigm called the Theory of Constraints. The argument, which I first read in Tiago Forte’s short, provocative primer on the subject, goes as follows.

Premise #1: in every system there is a constraint that is more limiting than all others. Every chain has a weakest link, every hose has a tightest bottleneck, etc.

Premise #2: the most limiting constraint is what determines the throughput of the system. A chain can’t support a weight heavier than what its weakest link can support; a hose can’t pass more water than its tightest bottleneck can pass; etc.

Conclusion: the only way to increase the throughput of the system is to relieve the most limiting constraint. The only way to make a chain stronger is to strengthen its weakest link; the only way to make a hose pass more water is to expand its tightest bottleneck; etc.
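
To make the arithmetic concrete, here is a toy sketch in Python (the stage names and numbers are invented purely for illustration): model the system as a chain of stages, and its throughput is simply the minimum of the stage capacities, so improving anything other than the bottleneck leaves the total unchanged.

    # Toy model of a chain of dependent stages; names and numbers are made up.
    def throughput(capacities):
        # The chain can pass no more than its most limited stage allows.
        return min(capacities)

    stages = {"prep": 120, "assembly": 45, "packaging": 80}
    print(throughput(stages.values()))   # 45: assembly is the bottleneck

    stages["packaging"] = 200            # improving a non-bottleneck stage...
    print(throughput(stages.values()))   # ...throughput is still 45

    stages["assembly"] = 70              # relieving the bottleneck...
    print(throughput(stages.values()))   # ...raises the whole system to 70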

If you take it seriously, this conclusion bends the mind. Have I been wasting my time by trying to make improvements to constraints that aren’t the most limiting one? Is my current ability at rock climbing limited entirely by my finger strength, such that no improvements in general fitness or mindset will make any difference? Is the speed of the software I build at work limited entirely by my choice of programming language, such that no improvements can be made by choosing different algorithms?

The Theory of Constraints doesn’t tell you which constraint is the most limiting, only that one of them must be the most limiting. Everything else is irrelevant.

My initial reaction to the Theory of Constraints was that if we want to improve our systems, we ought to spend much more energy identifying which constraints are the most limiting. We should be skeptical of the more common approach of making scattershot improvements wherever we happen to notice deficiencies in a system.

After thinking about it more, I’ve come to two conclusions.

One is that the Theory of Constraints applies only to systems that take the form of a sequence of dependencies. Hoses and chains are good examples of such systems, as are production lines and algorithms and recipes. But many of the systems we care about have independent components, and those systems require different models. Consider two highways:

  • A one-lane highway.

  • A two-lane highway that has a short segment with just one lane.

If you apply the Theory of Constraints to these systems, you might predict that they have the same throughput, since both are constrained by a one-lane segment. However, on the mostly two-lane highway, faster cars can pass slower ones everywhere except along the short one-lane stretch, so it will have the greater throughput. Whenever a system has independent components like this, we need to think more broadly about how and where to make improvements.

My other conclusion is that even when dealing with a sequence of dependencies, we might sometimes be better off making scattershot improvements, without considering which constraint is the most limiting. One reason for this is that identifying the most limiting constraint is often costly. Another reason is that as long as we continually make improvements in multiple areas, we will occasionally make improvements to the most limiting constraint. At such times, we don’t just unlock the improvements from that specific constraint; we also unlock the pent-up improvements from all the other constraints that we relieved previously.

I trained as a competitive swimmer for eight years. Like many athletes, I often had the experience of hitting a plateau for eight months, in which I made seemingly no progress at all, and then suddenly breaking through the plateau and making a lot of progress all at once. Having reflected on the Theory of Constraints, I think I now have a better mental model for those seemingly incomprehensible patterns of stagnation and improvement. When you’re stuck at a plateau, it’s probably not the case that you’re making no improvements at all. Instead, you’re just making improvements that won’t manifest until one day when, by chance, you relieve your most limiting constraint. The floodgates open wide, and the cycle begins again.
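
Here is a toy simulation of that dynamic, with entirely made-up numbers (it is not a model of swimming, just an illustration): overall performance is capped by the weakest of several capacities, and each week one capacity, chosen at random, improves a little. The printed performance stays flat for long stretches, then jumps whenever the weakest capacity happens to be the one that improved.

    import random

    # Toy illustration of plateaus and breakthroughs; the capacities and
    # numbers are invented. Overall performance is capped by the weakest
    # capacity, and each week one randomly chosen capacity improves.
    random.seed(0)
    capacities = {"technique": 50.0, "endurance": 62.0, "strength": 58.0, "mindset": 71.0}

    for week in range(1, 25):
        improved = random.choice(list(capacities))
        capacities[improved] += random.uniform(1, 3)
        overall = min(capacities.values())
        print(f"week {week:2d}: overall {overall:5.1f}  (improved: {improved})")
    # The output is flat for stretches, then jumps when the current weakest
    # capacity happens to be the one that improves.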

Yours unlimitedly,

David

Linking Out Loud #7: Pushing and pulling goals

By David Laing

Dear readers,

At least once a month, I think of Scott Alexander's distinction between "pushing" and "pulling" goals. I'll let him define the terms:

A pulling goal is when you want to achieve something, so you come up with a plan and a structure. For example, you want to cure cancer, so you become a biologist and set up a lab and do cancer research.

A pushing goal is when you have a plan and a structure, and you’re trying to figure out what to do with it. For example, you’re studying biology in college, your professor says you need to do a research project to graduate, and so you start looking for research to do.

I often noticed these two types of goals when I was training to become a data scientist. Some of my colleagues approached their projects by asking, "What is a problem I want to solve?" Then they would look for relevant data. This is the pulling goal approach. Many other people, myself included, would start by asking, "What data do I have access to?" Then we would look for things to do with it. This is the pushing goal approach.

We pushing people would complete our projects more reliably than the pulling people, but our completed projects often felt like zombies—husks with no real substance to them. 

Alexander is critical of pushing goals, making the point that they often feel inauthentic and meaningless. I used to spend a lot of time brainstorming ideas for novels, so this passage hit close to home:

Sometimes on Reddit’s /r/writing I see people asking “How do you come up with ideas for things to write about?” and I feel a sort of horror. So you want to write a novel, but…you don’t have anything to write about? And you just sit there thinking “Maybe it should be about romance…no, war…no, the ennui of the working classes…or maybe hobbits.” I can understand this in theory – you want to be A Writer – but it still weirds me out.

Around the time I read this post, I also read Stephen King's memoir, On Writing. He describes how much he loves writing—not Being An Author, but the act of telling stories. King's novels are pulling goals, then. By contrast, I could see that my own stories were pushing goals—zombie projects with no inner spark. I decided to set aside my aspirations to be a novelist, at least for the time being.

One potential lesson here is that you should only work on projects that are immediately meaningful to you as ends in themselves. Write if you have things to say, or if you have stories to tell, but not because you want to be A Writer. Surely there is some wisdom in this.

Then again, can any project feel truly meaningful at all times and at every stage? Surely Stephen King experiences periods of writer’s block between novels, and needs to deliberately seek out his next source of inspiration. He must also have moments in his writing when he lacks the motivation to get through a particular chapter and needs to set a word target for the day. This would suggest that pulling goals may not be enough on their own.

Perhaps the trick is to identify your pulling goals and then set up smaller pushing goals that will support you along the way. I have found that identifying pulling goals isn't a trivial task that can be completed all at once, so my strategy is to collect them passively; a few months ago I started to compile a list of things I want that don't exist. The hard part is still achieving the goals, of course. But by continually collecting my pulling goals, I hope I'll at least aim at the right targets when I sit down to work. Overall, the fewer zombie projects the better.

Yours substantively,

David

Linking Out Loud #6: Prototyping at the speed of thought

By David Laing

Hello!

Research and development—R&D—are usually lumped together and abbreviated, so it’s easy to forget that they are distinct activities. This often causes us to misapply norms and practices that are appropriate for one but not the other.

I didn’t appreciate this fact until I saw Tom Chi’s talk from 2012, Rapid Prototyping X, in which he describes the R&D work that he led for Google Glass, the smart glasses that Google discontinued and later revamped. It’s fair to be skeptical of lessons derived from a product that is for many people a byword for failure, but I think Chi’s points are insightful.

His advice is that when you’re trying to invent something, it’s fruitful to separate your process into two phases. In the first phase, research, your main objective is to maximize your rate of learning. In the second phase, development, your main objective is to maximize quality.

In particular, it’s valuable to start by identifying a medium for experimentation that allows you to explore the greatest number of potential avenues in the shortest amount of time. As Chi says, you want to be able to prototype “at the speed of thought.” In the case of Google Glass, this meant building dozens of form factors in a matter of hours using just modeling wire and clay.

Rapid prototyping allowed Chi’s team to learn something that no company had learned before in attempts to make smart glasses: if the added weight of the computer is distributed across the wearer’s ears rather than their nose, the glasses are much more comfortable, not to mention more stylish. The smart glasses that came before Google Glass all settled on the wrong answer, after their makers had invested thousands of engineering hours.

It’s not just physical products that benefit from rapid prototyping. When developing software, it’s common for designers to create “wireframes” of the user interface—visual representations of the final product, with no actual functionality—before the developers write a single line of code. Many writers and thinkers like to hone their ideas in low-effort formats before developing them further. The blogger David Perell describes what he calls the “Content Triangle,” a method for gathering and iterating on ideas until the best ones bubble to the surface, like an evolutionary algorithm.

Too many of us make the mistake of skipping the earliest phases of research, investing too soon in ideas that could have been cheaply invalidated if we had found a medium for faster prototyping. Yet another reason why speed matters.

Yours prototypically,

David
