Typically, we view problems as unexpected past events. Framing problems this way is unproductive because:
- It’s already happened — The event is in the past, and we can’t change it.
- It’s subjective — Often, it’s just an opinion, either from an individual or a group, and that limited perspective may not offer real solutions.
But what if it’s more than just an opinion? What if a thorough postmortem was conducted and we know exactly why the event happened? What if we learn from a past event? Can we use this information to prevent similar events in the future? The answer: it’s possible, but not probable.
There’s a famous quote, often attributed to Mark Twain, that resonates with me:
History doesn’t repeat itself, but it often rhymes.
— Mark Twain
The point here is that future events are influenced by a myriad of random variations, making it unreliable to base our solutions solely on past occurrences.
A New Way of Seeing Problems
Instead of looking backward, I propose we view problems as predicted future events that are open to being refuted or validated. This approach reframes problems as hypotheses: ideas that can be tested, argued, and ultimately confirmed or disproved as the future unfolds.
But there’s more. After diving into David Deutsch’s works, I discovered an essential criterion for a meaningful explanation: it must be hard to vary. When an explanation is hard to change, it becomes systemic and holistic. It’s no longer just an opinion—it’s an idea that holds strong across various contexts, making it more reliable.
Defining the Problem
To define a problem, I’ve established a few key principles. A problem must be:
- Hard to Vary — It shouldn’t be easily manipulated to fit just any situation.
- Refutable — It should be possible to challenge the explanation or prediction.
- Validatable — The event or explanation can be tested and confirmed.
- A Predicted Future Event — The problem isn’t rooted in the past but in what we anticipate may happen.
But there’s one final element: desire. Do we want this event to happen? If the answer is yes, then it’s not a problem at all. But if the event is undesired, then we have a true problem.
However, we must also understand that we cannot predict everything correctly. There will be errors in our understanding, and there will always be factors beyond our knowledge.
This is where we must be prepared: not just to act when something bad happens, but also to upgrade our understanding so that it leads to better future predictions.
This attitude of predict but be prepared helps us make steady progress, no matter the field we’re working in. By embracing this mindset, we remain flexible and capable of improving our problem-solving over time.
Let’s look at some examples to understand this better:
Example 1: Missed Project Deadlines
Usual View: A project misses its deadline. The project manager blames poor planning, unexpected obstacles, or team miscommunication. The team then implements more planning tools or tries to reduce similar roadblocks for the next project.
New Perspective: The team reframes the issue as a future problem: “We predict that future deadlines will continue to be missed if our current workflow remains inefficient.” By shifting the focus to what will happen next, the team builds a hard-to-vary solution that addresses the underlying systemic issues like team capacity, resource management, and communication bottlenecks. This prediction is refutable (testing new workflows) and validatable by measuring performance in future projects.
Prepared: The team must also be prepared to adapt their understanding of workflow management as projects evolve, ensuring they can handle new challenges that might impact future deadlines.
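To make the refutable and validatable part of this prediction concrete, here is a minimal sketch in Python, using hypothetical project names and dates, of how a team might write the prediction down up front and then check it against the actual outcome:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class DeadlinePrediction:
    """One testable prediction: a project and the deadline we expect it to slip past."""
    project: str
    predicted_deadline: date
    actual_finish: date | None = None  # filled in once the project actually ships

    def is_refuted(self) -> bool:
        # The prediction "this deadline will be missed" is refuted if we finish on time.
        return self.actual_finish is not None and self.actual_finish <= self.predicted_deadline


# Record the prediction up front...
prediction = DeadlinePrediction("billing-migration", predicted_deadline=date(2025, 6, 30))

# ...then validate or refute it once reality arrives.
prediction.actual_finish = date(2025, 7, 14)

if prediction.is_refuted():
    print(f"{prediction.project}: prediction refuted, delivered on time")
else:
    print(f"{prediction.project}: prediction validated, deadline missed again")
```

The point isn’t the data structure; it’s that the prediction is recorded before the project ends, so the team can’t quietly vary it after the fact.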
Example 2: Software Bug
Usual View: A bug in a software system causes downtime. The development team scrambles to find the root cause, determines it was a rare edge case that wasn’t accounted for, and patches the code. The problem is considered solved by preventing this specific bug from happening again.
New Perspective: Instead of treating the bug as a past event, the team reframes it as a predicted future event: “We predict that certain types of edge cases will continue to cause system failures if our current design isn’t robust enough to handle unexpected inputs.” The theory behind the prediction is hard to vary if the team builds a deep understanding of the system’s architecture and predicts where future failures might occur. By making the system more resilient, they focus on preventing future bugs, rather than reacting to individual cases.
Prepared: The team should also be prepared to enhance their understanding of edge cases as they appear. Monitoring and updating the system architecture based on new failures will ensure continued reliability in the future.
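As one illustration of this shift (not the team’s actual fix, which isn’t specified here), consider a small Python sketch: instead of patching a single edge case, the input handler rejects whole classes of malformed input, and the prediction that unexpected inputs will keep causing failures becomes a check the team can run to refute or validate it:

```python
def parse_quantity(raw: str) -> int:
    """Parse a user-supplied quantity, rejecting the whole class of bad inputs
    (blank strings, non-numbers, negatives) rather than patching one bug at a time."""
    cleaned = raw.strip()
    if not cleaned.lstrip("-").isdigit():
        raise ValueError(f"not an integer: {raw!r}")
    value = int(cleaned)
    if value < 0:
        raise ValueError(f"quantity cannot be negative: {value}")
    return value


# The prediction "unexpected inputs will keep causing failures" becomes a test:
# if any of these inputs crashes the system instead of being rejected cleanly,
# the prediction still stands and the design needs more work.
suspect_inputs = ["", "  ", "abc", "-1", "3.5", "9999999999"]

for raw in suspect_inputs:
    try:
        parse_quantity(raw)
        print(f"accepted: {raw!r}")
    except ValueError as err:
        print(f"rejected cleanly: {raw!r} ({err})")
```

This is also where property-based testing or fuzzing fits naturally: instead of enumerating known-bad inputs by hand, a tool generates them, and any crash counts as the prediction being validated.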
Summary
In summary, here’s how I define a problem:
An Undesired, Hard to Vary, Refutable, Validatable, Predicted Future Event.
and Predict but be Prepared.
By approaching problems this way, we shift from reactive thinking to proactive problem-solving, grounded in systemic understanding and a focus on future possibilities.