Borrowed Laws: 15 Scientific Mental Models for Business and AI Strategy
Fifteen mental models from science, mapped to business and AI strategy
Most strategy writing runs on conviction. A leader picks a direction, sells the room, and the organisation moves. When it works, it looks like vision. When it fails, it looks like hubris. The difference often has nothing to do with the leader and everything to do with whether the underlying model of reality was any good.
Science offers a different habit of mind. Define a system. State the assumptions. Identify where the model breaks. Test early.
The models below come from physics, biology, and game theory. They are not decorative metaphors. They are disciplined analogies, each with known limits. Used well, they sharpen decisions on hiring, product design, pricing, AI deployment, and market positioning. Used carelessly, they become expensive parables.
Each model includes what it predicts, where it applies, and where it breaks. A signal to watch and an action trigger follow each one.
I. Memory and Thresholds
Some systems absorb stress gracefully, then snap. Others never fully recover from damage that looked temporary. The first cluster deals with systems that remember.
Hysteresis: The System Keeps Score
A magnetised iron bar does not lose its magnetism the moment the external field is removed. The bar remembers. Returning inputs to prior levels does not return outputs to prior levels.
Trust, morale, and creator supply follow the same curve. A platform that shifts ranking to favour short-term engagement will see low-quality posts rise, top creators leave, and audience quality fall. Reverting the ranking settings does not reverse the damage, because the creators who left are not sitting in a waiting room. They found other platforms, other audiences, other rhythms. Recovery, if it comes, requires explicit overcorrection: direct outreach, preferential terms, public accountability.
Not every decline is hysteresis. Some losses reverse quickly when switching costs are low. The test: track high-value participant return rate after a policy reversal. If recovery stays below target for two quarters, the system has memory and organic rebound is not coming.
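The asymmetry can be sketched in a few lines of Python. The exit and return rates here are illustrative assumptions, not estimates; the point is the shape of the curve, not the numbers.

```python
def step(supply, policy_quality, exit_rate=0.30, return_rate=0.05):
    """One period of creator supply. Creators leave quickly when policy
    quality falls below tolerance, and return only slowly after it
    recovers, because they have built audiences elsewhere."""
    if policy_quality < 0.5:
        return supply * (1 - exit_rate)           # fast exit under bad policy
    return supply + (1.0 - supply) * return_rate  # slow organic return

supply = 1.0
for _ in range(4):                 # four quarters of bad policy
    supply = step(supply, policy_quality=0.3)
low_point = supply
for _ in range(4):                 # policy reverted for four quarters
    supply = step(supply, policy_quality=0.9)

print(f"after damage: {low_point:.2f}, after reversal: {supply:.2f}")
```

Inputs are back to normal after the reversal; outputs are not. That gap is the memory, and closing it is what explicit overcorrection pays for.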
Phase Transitions: Calm Until It Is Not
Water does not get "a little bit frozen" at one degree below zero. It is liquid, and then it is ice. Many business systems behave the same way: pressure accumulates invisibly, then the state changes all at once.
A workflow assistant inside a large firm shows weak adoption at 15 percent team coverage. Usage looks disappointing. Then docs, ticketing, chat, and calendar integrations connect, and the tool enters daily loops. Usage jumps not because the product improved but because connectivity crossed a percolation threshold, the point where isolated clusters become one connected network.
This is a genuinely useful model, and also a genuinely dangerous one. Threshold stories can defend weak products indefinitely. "We just need more adoption" is the strategic equivalent of "the cheque is in the post." The discipline: track cross-workflow coverage and weekly retained teams together. If coverage passes the pre-set threshold and retention still stalls, treat product value as the bottleneck, not adoption.
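The percolation jump itself is easy to see in a toy random graph: below the connectivity threshold the network is isolated islands, just above it one giant cluster appears. Node count, edge probabilities, and the seed are arbitrary choices for illustration.

```python
import random

def largest_cluster(n, p, seed=0):
    """Size of the largest connected component of a random graph on n
    nodes where each possible edge is present with probability p
    (union-find with path halving)."""
    rng = random.Random(seed)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                parent[find(i)] = find(j)   # merge the two clusters

    sizes = {}
    for i in range(n):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

n = 200
print(largest_cluster(n, 0.5 / n))  # below threshold: small islands
print(largest_cluster(n, 2.0 / n))  # above threshold: one giant cluster
```

Doubling edge probability does not double the largest cluster; near the threshold it transforms it. That is why adoption curves in percolating systems look flat, then vertical.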
Error Catastrophe: Noise Has a Hard Ceiling
Replicating systems tolerate error up to a point. Beyond it, information degrades faster than correction can restore it. Viruses that mutate past this ceiling lose coherence and collapse.
The parallel in AI development is direct. A team trains on increasingly synthetic web text. Short-term benchmarks hold, then reasoning consistency drops across tasks because the training signal has slipped below a workable floor. The same logic applies to codebases with compounding technical debt and to organisations with runaway staff turnover: each departure takes institutional knowledge that the next hire cannot fully absorb.
Track source provenance mix and regression drift by capability class. If synthetic share or drift passes agreed bounds, pause scaling and repair the corpus first. The instinct to push through is strong and almost always wrong past the threshold.
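The ceiling can be made concrete with a stripped-down quasispecies model (the standard formalism behind viral error catastrophe; genome length and fitness advantage below are illustrative). Faithful copies persist only while their replication advantage outruns the per-copy error load.

```python
def master_fraction(u, L, s, generations=200):
    """Fraction of faithful copies in a replicating population.
    Q = (1-u)**L is the chance a length-L genome copies without error;
    s is the fitness advantage of the intact genome. The intact type
    survives only while s*Q > 1 (the error threshold)."""
    Q = (1 - u) ** L
    x = 0.5
    for _ in range(generations):
        mean_fitness = s * x + (1 - x)
        x = s * Q * x / mean_fitness   # only faithful copies count
    return x

L, s = 50, 3.0                       # threshold at u ~ ln(s)/L ~ 0.022
print(master_fraction(0.01, L, s))   # below ceiling: information persists
print(master_fraction(0.04, L, s))   # above ceiling: information collapses
```

Note the discontinuity: a 0.01 error rate settles at a healthy steady state, while 0.04 decays to zero. There is no graceful degradation past the threshold, which is why "push through" fails there.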
II. Landscapes and Evolution
Biological systems do not optimise toward a fixed goal. They adapt to shifting fitness landscapes where today's peak can become tomorrow's valley. Business strategy has the same property.
Fitness Landscapes: Local Peaks Trap Good Teams
Natural selection climbs the nearest hill. It cannot see a taller peak on the other side of a valley because reaching it requires getting temporarily worse.
A mature SaaS product runs hundreds of A/B tests per quarter and reliably gains 0.5 to 1.5 percent on conversion. Revenue rises slightly. Strategic position erodes. The product never crosses into a new job-to-be-done category because every experiment is evaluated against the current peak, and category shifts require a deliberate, funded descent.
The failure mode is real: valley-crossing can become a licence for undisciplined spending. The operating discipline is to track median experiment uplift over rolling quarters. When uplift compresses below a set floor and strategic goals stay unmet, allocate a bounded portfolio to non-incremental designs with explicit kill gates and time limits.
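A greedy optimiser's trap is visible on even a hypothetical two-peak landscape. The function below is invented for illustration: a short hill near x=2 and a taller one near x=8, separated by a valley.

```python
import math

def landscape(x):
    """Toy fitness landscape: local peak near x=2, taller peak near x=8."""
    return 5 * math.exp(-(x - 2) ** 2) + 9 * math.exp(-0.5 * (x - 8) ** 2)

def hill_climb(x, step=0.1):
    """Greedy ascent: move only while the next step improves fitness."""
    while landscape(x + step) > landscape(x):
        x += step
    return x

peak = hill_climb(0.0)   # A/B-test regime: climbs the nearest hill
leap = hill_climb(5.0)   # funded descent: restarts across the valley
print(round(peak, 1), round(landscape(peak), 2))
print(round(leap, 1), round(landscape(leap), 2))
```

The greedy climber stops near x=2 at a fitness of about 5; starting from the valley floor, the same procedure reaches the taller peak near x=8. Every individual step of the first run was correct, which is exactly what makes the trap comfortable.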
Exaptation: Repurpose Before Building
Bird feathers evolved for thermoregulation. Flight came later. A trait built for one function gets co-opted for another, and the second function can become the more important one.
GPUs were built for graphics rendering. Their parallel arithmetic profile later fit neural network training, which shifted the hardware centre of AI development. mRNA vaccine technology, later deployed against COVID-19, was originally developed for cancer treatment. The lesson is not "repurpose everything." It is: existing assets can solve adjacent problems at lower time and cost than net-new builds, and the highest-value reuse is often non-obvious.
Run a quarterly asset reuse audit: list underused capabilities and score adjacent use cases by time-to-value and switching cost. The failure mode is overfit: bolting new purposes onto old architecture can block cleaner designs that a fresh build would have found.
Red Queen Dynamics: Running to Stay Still
In Lewis Carroll's Through the Looking-Glass, the Red Queen tells Alice: "It takes all the running you can do, to keep in the same place." In co-evolutionary biology, predators and prey improve in lockstep. Neither gains lasting advantage; both must keep moving to avoid losing ground.
Fraud detection, abuse prevention, model quality, and search ranking are all Red Queen races. A payment firm cuts fraud losses with a new model. Adversaries study the rejection patterns, adapt within months, and loss rates return unless features, policies, and review loops keep evolving.
Not every domain is a Red Queen race. Overreacting wastes capital in stable markets. Track opponent adaptation lag: the time from defence release to attack recovery. If lag shrinks below the planning horizon, continuous update cycles are the only viable posture.
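Adaptation lag can be estimated from a simple recovery model. The recovery-rate numbers here are hypothetical; the useful output is how lag compares to your planning horizon.

```python
def adaptation_lag(recovery_rate, baseline=100.0, drop=0.3, threshold=0.9):
    """Periods until adversary gains recover to `threshold` of baseline
    after a defence cuts them to `drop` of baseline. Assumes the gap
    closes by a fixed fraction each period (illustrative dynamics)."""
    level, t = baseline * drop, 0
    while level < baseline * threshold:
        level += (baseline - level) * recovery_rate   # adversary adapts
        t += 1
    return t

print(adaptation_lag(0.10))  # slow adversary: long lag, room to plan
print(adaptation_lag(0.50))  # fast adversary: continuous updates only
```

A lag of 19 periods against a quarterly planning cycle permits discrete releases; a lag of 3 does not, and the posture has to change accordingly.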
III. Interfaces and Structure
Components rarely fail in isolation. Systems fail at seams: the handoff between two teams, the integration between two tools, the mismatch between a product and its context.
Impedance Matching: Good Parts, Bad Joints
In electrical engineering, maximum power transfers between components when their impedance profiles match. Connect a high-impedance source to a low-impedance load and most of the energy dissipates as waste.
A reasoning-heavy AI model is connected to a chat interface with strict low-latency expectations. The model is capable. The interface is well-built. Users perceive poor quality because response timing and cognitive depth are mismatched. The interface demands speed; the model needs time.
Track handoff error rates and latency-budget violations at system boundaries. If boundary metrics dominate incident volume, redesign interfaces before replacing components. The same logic applies to org design: when two high-performing teams produce poor joint output, the problem is almost always at the seam.
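The electrical version is worth computing once, because the shape is the lesson. Assuming a purely resistive source and load, power delivered to the load peaks exactly when the load matches the source, and mismatch in either direction wastes it symmetrically.

```python
def delivered_power(V, R_source, R_load):
    """Power dissipated in the load when a source of voltage V with
    internal resistance R_source drives a load of resistance R_load
    (maximum power transfer theorem: peak at R_load == R_source)."""
    I = V / (R_source + R_load)   # series current
    return I ** 2 * R_load

for R_load in (1, 10, 50, 250):
    print(R_load, round(delivered_power(10.0, 50.0, R_load), 3))
```

A load of 10 ohms and one of 250 ohms waste the same power against a 50-ohm source: mismatch hurts whether the component is "too fast" or "too slow" for its counterpart, which is the org-design point as well.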
Braess's Paradox: More Capacity, Worse Throughput
Add a new road to a congested network and total travel time can increase. Each driver rationally takes the shortcut, the shortcut jams, and everyone moves slower than before.
A company adds a cross-team escalation channel. Soon every issue routes through a small leadership group. Decision latency rises even though communication paths increased. The new channel did not remove the bottleneck; it moved the bottleneck to a node with less capacity.
Track queue depth and cycle time at each handoff point. If one node absorbs rising share after a capacity addition, redesign routing rules before adding more capacity. Not every slowdown fits this pattern; staffing and priority conflicts may be the true constraint.
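The road-network arithmetic behind the paradox fits in a few lines, using the textbook configuration: two routes, each with one congestible leg (time proportional to flow) and one fixed leg, then a free shortcut joining the two congestible legs.

```python
def braess_times(drivers=4000):
    """Classic Braess network. Each route: one congestible leg
    (minutes = flow/100) plus one fixed 45-minute leg. Without the
    shortcut, traffic splits evenly. With a free shortcut, taking both
    congestible legs dominates (flow/100 <= 40 < 45 for any flow), so
    every driver funnels through both of them."""
    half = drivers / 2
    without = half / 100 + 45                       # even-split equilibrium
    with_shortcut = drivers / 100 + drivers / 100   # all flow on both fast legs
    return without, with_shortcut

print(braess_times())
```

Travel time rises from 65 to 80 minutes after capacity is added. No driver is irrational; the routing rules are, which is why the fix is routing redesign rather than more capacity.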
Competitive Exclusion: Sameness Is a Losing Position
Two species competing for the same niche in the same environment cannot stably coexist. One will eventually displace the other or both will diverge.
Benchmark parity is not a moat. A model provider that matches a rival on public scores but offers no structural differentiation on cost, latency, domain fit, distribution, or trust wins no durable share. Buyers stay with the incumbent because integration and procurement patterns remain unchanged, and the challenger offers no reason to absorb switching costs.
Track win reasons in competitive deals. If losses cite sameness, force a positioning fork tied to one structural advantage that the competitor cannot replicate without changing its own architecture.
IV. Signals and Incentives
Markets are games. Participants respond to incentives, interpret signals, and adapt their behaviour. The models in this cluster treat strategy as a design problem: build the rules so that good behaviour is the rational choice.
Costly Signaling: Credibility Has a Price
A peacock's tail is expensive to grow and maintain. That expense is the point. Only a genuinely healthy bird can afford the cost, which makes the signal reliable. Cheap signals carry no information because low-quality actors can mimic them.
As generated content becomes near-free, firms need trust signals tied to real cost and accountability. Two AI vendors claim enterprise readiness. One publishes external security audits, incident logs, and contractual service penalties. The other offers polished marketing materials. Regulated buyers choose the first because the signal is hard to fake.
Track conversion by proof tier: claims only, internal proof, external proof with contractual liability. If high-value deals cluster in the top tier, shift budget from content volume to verifiable commitments.
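The separating condition is one inequality: the signal must be worth paying for if you are genuine and not worth faking if you are not. The numbers below are invented to show the two cases.

```python
def signal_separates(benefit, cost_genuine, cost_pretender):
    """A signal is credible only when it pays for genuinely capable
    senders and does not pay for pretenders (separating condition)."""
    return cost_genuine < benefit < cost_pretender

# Illustrative: an external audit regime costs a mature vendor little
# beyond what it already does, and an immature one a great deal.
print(signal_separates(benefit=10, cost_genuine=3, cost_pretender=25))  # credible
print(signal_separates(benefit=10, cost_genuine=3, cost_pretender=6))   # cheap to fake
```

When the second case holds, the signal carries no information regardless of how loudly it is broadcast; budget spent amplifying it is wasted.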
Mechanism Design: Write Rules So Good Behaviour Wins
Traditional game theory asks: given these rules, how will rational actors behave? Mechanism design inverts the question: given a desired outcome, what rules produce it?
A distributed compute network pays providers for completed jobs. Without verification, providers can submit fake outputs. The payment rule is redesigned so that verified work carries clear positive return while fraud carries guaranteed loss. Rational actors now choose honesty not from goodness but from arithmetic.
Track profitability of attack paths. If abuse remains net-positive after controls, the mechanism is incomplete. The problem is design, not enforcement.
Stag Hunt and Schelling Points: Coordinating Without a Script
Two hunters can chase a stag together for a large shared reward or chase rabbits alone for a small guaranteed one. The stag pays more, but only if both commit. Defection by either party leaves the cooperator worse off than if both had chased rabbits.
Platform transitions and category bets often carry this structure. A media company shifts from licensed content toward original AI-assisted production. The payoff is high only if distribution partners and rights holders align on terms and timing. If one critical party has better outside options, defection is rational, and the plan collapses.
A related problem: how do uncoordinated actors agree on anything? Schelling points are focal defaults that groups converge on when communication is weak. Independent tool vendors in an AI stack settle on JSON and markdown for agent handoffs without a formal treaty. Shared defaults lower integration friction.
Map required counterpart commitments before launch. If a critical partner has a strong outside option, price defection risk directly into the plan. For standards, track integration time by protocol; if one default repeatedly reduces integration time across partners, formalise it.
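The stag hunt's payoff structure can be written down directly, using the standard illustrative numbers. Both mutual cooperation and mutual defection are stable: each player's best reply simply mirrors what they expect the other to do.

```python
# Payoffs as (row, column) for actions S (commit to the stag) and
# R (hunt rabbits alone). Standard illustrative values.
PAYOFF = {
    ("S", "S"): (4, 4),   # both commit: large shared reward
    ("S", "R"): (0, 3),   # lone cooperator gets nothing
    ("R", "S"): (3, 0),
    ("R", "R"): (3, 3),   # safe but small
}

def best_response(opponent_action):
    """Row player's best reply to the opponent's expected action."""
    return max(("S", "R"), key=lambda a: PAYOFF[(a, opponent_action)][0])

print(best_response("S"))  # cooperate if the partner surely will
print(best_response("R"))  # defect if the partner might not
```

This is why pricing a critical partner's outside option matters: the plan's payoff matrix does not change anyone's mind, their belief about the other parties does.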
V. Scale and Rights
The final cluster addresses what happens when the unit of analysis is wrong or when too many parties hold a veto.
Renormalisation: Change the Lens, Find the Signal
In physics, renormalisation is the practice of changing the scale of observation to expose stable patterns hidden by micro-level noise.
A support organisation chases per-ticket resolution speed and sees little gain in customer outcomes. After shifting to customer-week resolution loops and repeat-contact rates, priorities change and total workload falls. The tickets were the wrong unit. The customer journey was the right one.
For each executive metric, require one coarse view and one fairness or risk slice. If conclusions conflict at different scales, hold decisions until the metric scale is reconciled. Coarse metrics can hide harmful sub-group effects; fine metrics can miss structural trends. Neither alone is safe.
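The two-lens requirement can be demonstrated on a hypothetical five-ticket dataset: the per-ticket view looks healthy while the customer view shows the same issue being closed four times.

```python
tickets = [
    # (customer, hours_to_close, is_repeat_contact_for_same_issue)
    ("A", 2, False), ("A", 2, True), ("A", 2, True), ("A", 2, True),
    ("B", 8, False),
]

mean_close = sum(h for _, h, _ in tickets) / len(tickets)
repeat_rate = sum(r for _, _, r in tickets) / len(tickets)

print(f"mean hours to close a ticket: {mean_close:.1f}")  # looks fast
print(f"repeat-contact rate: {repeat_rate:.0%}")          # hidden at ticket scale
```

Mean closure time is 3.2 hours, which rewards exactly the behaviour producing a 60 percent repeat-contact rate. Same data, different scale, opposite conclusion: hold the decision until the scales agree.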
Tragedy of the Anticommons: Too Many Vetoes Freeze Value
Private property rights exist to prevent the tragedy of the commons: when no one owns a resource, everyone overuses it. The anticommons is the mirror problem: when too many parties hold exclusion rights, no one can use the resource at all.
A multimodal training programme requires rights clearance across text, image, and audio owners. Transaction costs and veto risk stretch timelines until the project loses its market window. The product was technically feasible. It was legally frozen.
Track time-to-clear-rights and rights fragmentation index for key datasets. If clearance time exceeds product cycle length, shift to pooled licences, narrow-domain corpora, or first-party data creation. The failure mode of this framing: anticommons language can be misused to dismiss legitimate rights or safety obligations. Both risks are real.
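The freeze compounds multiplicatively. Assuming, for illustration, that each rightsholder independently approves with the same probability, the odds of clearing every veto collapse as holders multiply.

```python
def clearance_odds(approval_prob, n_rightsholders):
    """Chance a project clears every veto when each of n independent
    rightsholders approves with probability approval_prob (simplifying
    assumption: approvals are independent and identically likely)."""
    return approval_prob ** n_rightsholders

for n in (3, 10, 30):
    print(n, round(clearance_odds(0.9, n), 3))
```

Even with each party 90 percent likely to say yes, thirty veto holders leave roughly a 4 percent chance of full clearance. Pooled licences and first-party data work by shrinking n, not by improving any single negotiation.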
Using the Set
These models work best as a diagnostic stack, not a reading list. For a live decision:
- State the decision: pricing, hiring, release cadence, platform scope, or partnership structure.
- Pick three models: one from thresholds, one from incentives, one from interfaces.
- Write testable predictions: what should change in 30, 90, and 180 days.
- Attach metrics and triggers: pre-commit to actions before dashboard debates begin.
- List analogy limits: one reason each model may not apply in this case.
A strategy memo built this way carries two strengths: stated assumptions and early error detection. That combination, not rhetorical confidence, separates useful strategy from polished guesswork.
Fifteen models. Borrowed from science. Returned with interest.
Transparency Note
The ideas, arguments, and structure in this essay originated with the author. AI tools were used to assist with drafting, research, and revision. All claims, sources, framing, and final wording reflect the author's own thinking and were reviewed for accuracy before publication.
This essay is for informational and educational purposes only and does not constitute professional advice of any kind, including financial, legal, medical, or otherwise. The author makes no guarantees regarding accuracy or completeness. Readers should consult a qualified professional before acting on any information contained here. The author accepts no liability for decisions made based on this content.