Problems or Puzzles

On important matters, people often put more effort into figuring out the right decision. But what counts as the right decision depends on who is putting in the effort. And what if “right” is defined as the option that won an experimental split test rather than the option that seems best for the overall system?

Part of product testing (and even product concept testing) depends on generating demand data by running variations in front of customers. Run lots of variations, find the ones that perform better based on the metrics you value, and repeat.
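The mechanics are simple enough to sketch. The snippet below is not any particular company’s system; it is a minimal illustration of the loop just described, with invented variant names and conversion counts: show variants, score them on the metric you value, keep the winner, and repeat.

```python
import math

# Hypothetical results from showing two offer variants to customers.
# Variant names and counts are invented for illustration only.
variants = {
    "offer_a": {"shown": 10_000, "converted": 520},
    "offer_b": {"shown": 10_000, "converted": 575},
}

def conversion_rate(v):
    return v["converted"] / v["shown"]

def z_score(a, b):
    """Two-proportion z-test: how surprising is the gap between variants?"""
    pooled = (a["converted"] + b["converted"]) / (a["shown"] + b["shown"])
    se = math.sqrt(pooled * (1 - pooled) * (1 / a["shown"] + 1 / b["shown"]))
    return (conversion_rate(b) - conversion_rate(a)) / se

# Pick the winner purely by the metric; nothing here asks why it won.
winner = max(variants, key=lambda name: conversion_rate(variants[name]))
print(f"winner: {winner}")
print(f"z-score: {z_score(variants['offer_a'], variants['offer_b']):.2f}")
```

Run this loop over enough variants and enough metrics and you have the “test and learn” machinery described later in this piece. What the loop never evaluates is whether the metric itself is the right one.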

I recently read a description of large-scale product experimentation for a financial services company. The business’ product experimentation had direct financial outcomes — namely improved customer retention and customer lifetime value. But there were costs too.

The following quote is from the article “I Worked at Capital One for Five Years. This Is How We Justified Piling Debt on Poor Customers.”

“It was common to hear analysts say things like, ‘I just love to solve problems.’ But what they were really doing was solving something closer to puzzles. It’s clear to me, for example, that the janitor at my middle school solved problems when she cleaned up trash. It’s far less clear whether analysts at Capital One are solving problems or creating them. In either event, the work culture at this well-appointed lender of dwindling resort is pretty much designed to encourage former students of engineering or math to let their minds drift for a few years and forget whether the equations in front of them represent the laws of thermodynamics or single moms who want to pay for their kids’ Christmas gifts without having to default on their rent or utilities payments.”

How much of our productive effort goes toward solving problems — or puzzles — with skewed outcomes?

The author goes on to describe a series of social engineering tweaks business analysts made to test customer behavior. Successful experiments, including some that increased the fee-producing balances customers carried, resulted in more revenue for the business, which is usually what a business is supposed to produce. And this was done at scale, both in dollars and in the size of the customer group affected.

The experience described above made me think about the differences between problems and puzzles.

What is a problem?

I use this definition of problem: “a matter or situation regarded as unwelcome or harmful and needing to be dealt with and overcome.”

The world is full of problems. Yet if you pay attention as you walk around, you probably notice that some problems attract attention, and sometimes solutions, while others attract little attention and remain “just the way the world is.” Some problems are sexy and some are not, irrespective of the number of people affected or our ability to solve them. Many problems are messy, ill-defined, and complex: the type of thing where improving one part may lead to worse outcomes somewhere else.

What is a puzzle?

I use this definition of puzzle: “a game, toy, or problem designed to test ingenuity or knowledge.”

A puzzle is a little different. The rules must be known and understandable, and the goal achievable. Those who play must know whether they solved it or failed to. Often, they get to try again and again.

Sometimes the sexy problems described above get redefined as puzzles, temporarily attracting superficial attention and producing superficial outcomes.

Playing with a problem as if it were a puzzle can lead to chaos.

Playing with a puzzle as if it were a problem can lead to smart people — like those analysts above at Capital One — finding solutions that lead to good business outcomes, but outcomes that also saddle your customers with debt.

The problem–puzzle division reminds me that this may be why chess masters, used to playing a well-defined, repeatable game, are not necessarily masters of strategy off the chess board.

What you measure

You get what you measure. But here the measuring is done by the businesses, not the customers. So is it natural for the business–customer relationship to get out of control?

From “Building a Change Capability at Capital One Financial”:

“The information-based approach Fairbank [the CEO] created thrives on data, experiments, and analysis… It results in a robust ‘test and learn’ strategy, and it works like this. Terabytes of consumer data are analyzed statistically to generate potential risk profiles. For example, someone who responds to an invitation is a lesser credit risk than someone who calls up on the phone and asks for credit. Combined with guesses about how the environment is changing, a profile or hypothesis can be tested with an offer – interest rate, payment options, perquisites, or rewards – for credit services. Over the course of a year, Capital One’s managers can conduct over 50,000 of these ‘tests.’”

And this technique of large-scale testing is now common, whether the companies discuss it or not. From UVA Today:

“The term ‘big data’ really isn’t new for us… We ran thousands of tests to learn about what customers wanted…”

“We are always on the lookout for top technical talent, especially students in the STEM field. Specifically, in the IT group we’re looking for individuals with degrees in computer science, computer engineering or other IT-related degrees.”

But learning “what customers wanted” isn’t exactly what happens with large-scale, iterative testing. Running large numbers of tests will also produce winning versions of ad copy and visual design that happen to convert at a higher rate for reasons unrelated to what customers want. We’ve heard the example of something as simple as a color change to a buy button resulting in more purchase orders… That is what this type of experimenting can be: a search for whatever converts, not necessarily a solution to a problem customers had with credit.
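One way to see how winners can appear for reasons unrelated to customers is an A/A comparison, where both “variants” are identical. The simulation below is a sketch with invented rates, sample sizes, and thresholds: run enough naive tests and some of them will still declare a winner, even though there is nothing for customers to prefer.

```python
import math
import random

random.seed(0)

def naive_winner(n=5_000, true_rate=0.05, z_threshold=1.96):
    """One A/A test: both 'variants' are identical, so any declared
    winner is statistical noise, not customer preference."""
    a = sum(random.random() < true_rate for _ in range(n))
    b = sum(random.random() < true_rate for _ in range(n))
    pooled = (a + b) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * (2 / n))
    z = abs(a - b) / n / se
    return z > z_threshold  # the naive analyst ships the "winner"

tests = 1_000  # a small slice of a 50,000-tests-per-year program
false_winners = sum(naive_winner() for _ in range(tests))
print(f"{false_winners} of {tests} identical-variant tests produced a 'winner'")
```

At a 1.96 threshold, roughly five percent of such tests come back “significant” by chance alone, which at the scale of tens of thousands of tests a year is a steady supply of wins that have nothing to do with what customers want.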

And what is meant by the phrase “what customers wanted”?

The same goes for the call above to hire STEM majors. In the Capital One setting, rather than focusing their skill sets on problems, STEM grads focus on solving puzzles that convert customers and retain them at higher rates, regardless of the product delivered.

This is a situation where one side becomes too good at what they do, with consequences for the other side.

And elsewhere

When it comes to social media, the products are often free, except in terms of attention.

Attention is the metric because attention means users return to the platforms more often, which means more advertising revenue for the parent companies. Where past information businesses (newspapers, TV broadcasters, radio) were structured so that editors approved every story, that is impossible now with user-generated content.

As stated by Tristan Harris, Executive Director of the Center for Humane Technology and former Google design ethicist, in his recent testimony before the US House Subcommittee on Consumer Protection and Commerce:

“These private companies have become the eyes, ears, and mouth by which we each navigate, communicate and make sense of the world. Technology companies manipulate our sense of identity, self-worth, relationships, beliefs, actions, attention, memory, physiology and even habit-formation processes, without proper responsibility.” — from the Harris testimony

But really, this isn’t quite right. Companies are not forced to have this responsibility (though maybe they will be in the future). These outcomes are the result of what we’ve discussed in earlier articles, like scaling effects, addiction-supported business models, and many flows of information to niche audiences rather than few flows of information to mainstream audiences.

A 2017 interview with Sean Parker, Facebook’s founding president, quoted him as saying:

“[W]hen Facebook was being developed the objective was: ‘How do we consume as much of your time and conscious attention as possible?’ It was this mindset that led to the creation of features such as the ‘like’ button that would give users ‘a little dopamine hit’ to encourage them to upload more content.

“It’s a social-validation feedback loop … exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.”

So too with the business experiments at Capital One, Facebook, and others, which can impact large populations through iterative testing.

The methods and tools have already been developed to swing customers (or users) in whatever direction benefits the business. Where do you see yourself exposed to these methods and tools today?

Consider

  • Are you solving a problem or a puzzle?
  • Are parts of your life trapped in puzzle solving?
  • What problems do we create when we become too good at solving puzzles? What can be done to minimize the effects of puzzle solving?
  • Where do you notice exposure to puzzle solving today? And where might you be exposed even if you haven’t noticed it?