Saturday, February 28, 2009

What If?

The point of simulations, and models in general, is to get some understanding about what will happen if we try something. By understanding the consequences of our actions, we can optimize them to have the best chance of getting the results we want. The process of doing this is called scenario analysis. Here’s a short tutorial and an example “ripped from today’s headlines”.

Scenario Analysis
Scenarios tell us what will happen if factors outside our control act in our favor, or against it. These factors might be specific competitors or adversaries, the inanimate world (e.g. natural disasters), or the emergent behavior of the collection of others just like us (e.g. the economy). Whatever the nature of these external influences, we want to know how these will affect our plan and the attainment of its goals.

Our ability to forecast the future using models and scenarios depends on three things:
  1. The accuracy of the model
  2. The amount of uncertainty in how we expect influencing factors to behave
  3. The number of influencing factors we are including
So, even if we have a perfectly accurate model for history, we can't have a perfect forecast because of #2. Furthermore, any model we create is a simplified approximation of the real world, which guarantees that (#1) our model is not perfect and (#3) we will omit some influencing factors.

Though not perfect, scenario analysis allows us to develop
  • a sufficient understanding of the risks and rewards of several alternative courses of action,
  • an understanding of what we need to watch out for, and
  • contingency plans in case a problem arises.
In his now classic book The Art of the Long View, Peter Schwartz describes the purpose of scenario analysis as the systematic exploration of several possible futures. Schwartz notes that scenarios are tools "for ordering one's perceptions about alternative future environments. The end result... is not an accurate picture of tomorrow, but better decisions about the future." No matter how things might actually turn out, both the analyst and the policy maker will have "on the shelf" a scenario (or story) that resembles a given future and that will have helped them think through both the opportunities and the consequences of that future. Such a story "resonates with what [people] already know, and leads them from that resonance to re-perceive the world... Scenarios open up your mind to policies [and choices] that you might not otherwise consider."
There are a number of techniques for scenario analysis. A few of the more common ones are described below.

Worst-Case Analysis
You describe what you think is most likely to happen (known as the “baseline”), and what will happen if things go really badly (the “worst case”). Of course, things can go really well too, but that’s a situation we don't usually spend a lot of time analyzing.

Monte Carlo Analysis

You identify all the variables that influence your results, and the likely range (including good and bad) that they will cover. You then generate a large number of scenarios (hundreds to millions, depending on the number of variables) by picking values at random for each of the variables. You then evaluate your plan for each of the scenarios, average all the results and call that the “expected value.” If you compare the expected value (the return) and standard deviation (the risk) among multiple plans, you can choose the point you want to be on in the risk/reward spectrum. Typically a higher expected value has more risk associated with it.
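The recipe above can be sketched in a few lines of Python. The payoff function and the variable ranges here are invented purely for illustration; a real analysis would substitute your own plan model and your own best estimates of each variable's range:

```python
import random
import statistics

def plan_return(price, demand, cost):
    """Toy payoff model for one scenario: revenue minus cost."""
    return price * demand - cost

def random_scenario():
    """Pick a value at random for each influencing variable (hypothetical ranges)."""
    return {
        "price": random.uniform(8.0, 12.0),
        "demand": random.uniform(800, 1200),
        "cost": random.uniform(7000, 9000),
    }

def evaluate(n_scenarios=10_000, seed=42):
    """Run many scenarios; mean is the 'return', std deviation is the 'risk'."""
    random.seed(seed)
    results = [plan_return(**random_scenario()) for _ in range(n_scenarios)]
    return statistics.mean(results), statistics.stdev(results)

ev, risk = evaluate()
print(f"expected value: {ev:,.0f}  risk (std dev): {risk:,.0f}")
```

Running `evaluate()` for each candidate plan and comparing the (return, risk) pairs is what lets you pick your spot on the risk/reward spectrum.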

War Gaming
This is typically a multi-player and turn-based situation in which real people take on the roles of competitors or adversaries (hence the “war” part of the name) and compete with each other in a virtual environment. This is an excellent approach for strategic decisions, because real people who are experienced in a domain (a market or a theatre of war) will pose much more realistic challenges to your plan. Furthermore, the turn-based aspect of war gaming means that the players can adapt to changes.

Vignettes
In this approach, people work together to identify the factors that can affect a system. These drivers are put into a common framework, which might be conceptual or a concrete software model. There are usually too many variables (influencing factors) initially, and the team works to pare this list down in two ways:
  1. Developing a consensus as to which variables are independent (as opposed to variables that directly influence each other)
  2. Determining which of the variables from #1 are most influential in the behavior of the system.
Once there is a reasonable number of variables, the scenarios are fleshed out. Usually the variables are boiled down to two, which form the x and y axes of a plane. Each quadrant of the plane is given a name. For example, the x axis might range from dictatorship to democracy, and the y axis from socialism to capitalism, giving you a socialist dictatorship, a capitalistic democracy, and so on.
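As a toy illustration of the two-axis framework, here is a sketch that maps scores on the two drivers to named quadrants. The axis names and quadrant labels come from the example above; the numeric scoring is invented:

```python
def quadrant_name(governance, economy):
    """Map two drivers, each scored from -1 to +1, to a named scenario quadrant.

    governance: -1 = dictatorship, +1 = democracy  (the x axis)
    economy:    -1 = socialism,    +1 = capitalism (the y axis)
    """
    names = {
        (False, False): "socialist dictatorship",
        (False, True):  "capitalist dictatorship",
        (True,  False): "socialist democracy",
        (True,  True):  "capitalist democracy",
    }
    return names[(governance > 0, economy > 0)]

print(quadrant_name(-0.8, -0.5))  # → socialist dictatorship
print(quadrant_name(0.6, 0.9))    # → capitalist democracy
```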

With these scenarios defined, the team discusses what the implications are in these situations. What opportunities and risks arise? Which do we want? How do we get there? Etc.

A Relevant Example
The current economic crisis provides a great real-time case study of scenario analysis. This is because the US government will, over the next several weeks, be subjecting major banks to a “capital assessment”, more commonly called the “Stress Test”. This is a “What If?” analysis in which:
"Each participating financial institution has been instructed to analyze potential firm-wide losses, including in its loan and securities portfolios, as well as from any off-balance sheet commitments and contingent liabilities/exposures… The capital assessment will cover two economic scenarios: a baseline scenario and a more adverse scenario."
The two scenarios are known as “baseline” and “more adverse”. The baseline scenario “is intended to represent a consensus view about the depth and duration of the recession.” The variables in the scenarios are GDP (Gross Domestic Product, a measure of the total economic output of the US), unemployment, and housing prices. The more adverse scenario looks at situations that are believed to be 10% likely for each individual variable, and assumes that all three variables reach this pessimistic value simultaneously. The charts below show the two scenarios for each of the three variables.
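A drastically simplified sketch of this kind of two-scenario analysis, in Python. The scenario figures below approximate the published 2009 assumptions but should be treated as illustrative, and the loss-rate rule is entirely invented; the real capital assessment models each portfolio in far more detail:

```python
# A toy "stress test": project loan losses for a bank under two scenarios.
scenarios = {
    # GDP growth (%), unemployment (%), change in house prices (%)
    "baseline":     {"gdp": -2.0, "unemployment": 8.4, "housing": -14.0},
    "more adverse": {"gdp": -3.3, "unemployment": 8.9, "housing": -22.0},
}

def projected_loss(portfolio, scenario):
    """Invented rule: loss rate rises with unemployment and falling house prices."""
    loss_rate = (0.02
                 + 0.005 * (scenario["unemployment"] - 5.0)
                 - 0.001 * scenario["housing"])
    return portfolio * loss_rate

portfolio = 100e9  # a $100B loan book
for name, s in scenarios.items():
    print(f"{name}: projected loss ${projected_loss(portfolio, s) / 1e9:.1f}B")
```

The question the regulators then ask is whether the bank's capital cushion can absorb the "more adverse" loss and still leave it solvent.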
The government chose worst-case analysis for a few reasons.
  • First of all, evaluating a $100B bank’s balance sheet against a scenario is a very involved task, and time and manpower are short. So, the number of scenarios needs to be limited and Monte Carlo is out.
  • Since the government is the only “agent” that can take any action to stabilize the economy, there isn’t really any competitor or adversary (other than the “invisible hand”). So war-gaming is out.
  • Finally, there’s no need to explore where the system (the economy) might go—that’s pretty well known. The influencing factors are well known too. So no need for Vignettes.

What will the government do when the stress test is complete? Stay tuned as we all find out…

Sunday, February 22, 2009

Not quite a simulation...

... but better than words alone, this animated explanation of the genesis of the financial meltdown is both entertaining and explanatory. We see the motivations of the various agents, the behaviors these motivations generate, the interactions that ensue, and the unintended consequences when the whole thing is put into action. Truly inspired, and inspiring!


The Crisis of Credit Visualized from Jonathan Jarvis on Vimeo.

Imagine if a simulation generated this kind of an output.

Saturday, February 21, 2009

The problem with words…

The WorldWatch Institute recently published the article “Our Panarchic Future” by Thomas Homer-Dixon, adapted from his book “The Upside of Down: Catastrophe, Creativity and the Renewal of Civilization”. The article primarily discusses the work and thinking of the eminent ecologist Crawford Stanley (Buzz) Holling, specifically his concept of “panarchy theory”.

The article gives a cogent description of the panarchy theory; here’s a synopsis: As a complex system like a forest develops, it self-optimizes to minimize redundancy (e.g. fewer species occupying each specific niche) and maximize efficiency (increasing percentages of the total water and nutrient flow are used). The resulting forest is a highly interconnected supply chain (food web), with decreasing ability to handle a disruption (the loss of a species), because each species is playing a unique and essential role.

The efficient, but decreasingly resilient, system can be severely impacted by an external shock, like a wildfire. While disrupting the system to the extent that its original structure (e.g. species) may not be able to reestablish itself, these shocks also clear out space for new species and a new structure to emerge. In the less common case of several simultaneous shocks, such as a wildfire during a drought, the system might never recover.

The article draws the parallel between ecosystems and societies, suggesting that as societies become more complex, they become less resilient and more likely to be disrupted by external shocks (like climate change), or permanently snuffed out by multiple simultaneous shocks. An argument is then made that this is likely the reason that Rome fell.

Holling’s experience in ecology is strong and deep, and his views on the evolution of forest systems are drawn from a lifetime of work and probably some of the best understanding we have. And Homer-Dixon’s attempt to draw the parallel to civilizations is thought provoking, and possibly scary. But how can we tell if this is an accurate application of the analogy? Civilizations are self-aware and can diagnose problems and take proactive steps to fix them. Yes, Rome did decline, but how much of their situation and its potential progression into the future were they able to understand at the time?

Hopefully the article and the book will stimulate a good discussion, but words can only take us so far. Suppose we could build models of ecosystems and civilizations, and compare and contrast them to see where the analogy holds and where it doesn’t; see each other’s assumptions in the light of day, put them into motion, find out where they break down, revise them and try again. Yes, these models are complex and limited in accuracy because the world is so much more complex and unmeasured in so many ways, but discussing in models can take us so much further than discussing in words.

Thursday, February 19, 2009

The Power of Agent-Based Modeling

This post briefly describes several different approaches to software modeling. My specialty is in Agent-Based Modeling (ABM), where systems are described from the “bottom up.” This means looking at the system from the perspective of the individual actors in it—it might be consumers in a product category, cars on a road network, or plants and animals in an ecosystem. ABM stands in contrast to top-down modeling, where one looks at a system as a whole and describes its behavior with equations.

The top down approach can work when
  1. You know the behavior patterns for the system and
  2. You can write (and solve) equations for the system
For a real-world system like a container of oxygen molecules, this approach works. There are laws in chemistry, such as the ideal gas law, that enable any high school student to calculate what happens to the gas when you compress or heat up the container.
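For instance, a few lines of Python suffice to apply the ideal gas law (PV = nRT) top-down, with no reference to any individual molecule:

```python
R = 8.314  # gas constant, J/(mol·K)

def pressure(n_moles, volume_m3, temp_k):
    """Ideal gas law solved for pressure: P = nRT / V."""
    return n_moles * R * temp_k / volume_m3

p1 = pressure(1.0, 0.0224, 273.15)       # one mole at STP, roughly 1 atm
p2 = pressure(1.0, 0.0224 / 2, 273.15)   # compress to half the volume...
print(p2 / p1)  # → 2.0 (pressure doubles)
```

A single equation captures the behavior of ~10²³ molecules, which is exactly the situation where top-down modeling shines.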

However, many systems of interest are not that simple. Sometimes you don’t know the behavior patterns that might occur. Take the banking industry for example. When the regulations were put in place (or removed) regarding how home loans were made, no one anticipated the sub-prime meltdown. Even allowing that those who wrote the regulations were expert and diligent, it appears they were not aware of all the possible patterns of behavior in the system.

Suppose you built a model at the level of individual home buyers and banks, imposed the regulations upon them, gave these “agents” the motivation to buy houses and make money, and enabled them to explore any strategy that was legal (or they could get away with because of inability to enforce regulations). Wind this up and set it in motion.

I would bet that if you ran it a thousand times, under different initial conditions and external factors, you would find some scenarios in which things play out like they have in the real world. These results could be used to refine the regulations (or determine the amount of regulatory oversight necessary) to keep the system operating in a zone where we get to keep our jobs, our homes and our 401Ks.
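Here is a minimal sketch of what such a model might look like. Every class, parameter, and behavior rule below is a placeholder invented for illustration; a serious model would need real income distributions, realistic pricing dynamics, defaults, securitization, and much more:

```python
import random

class Buyer:
    """An agent who wants a house and has a randomly drawn income."""
    def __init__(self):
        self.income = random.lognormvariate(11, 0.5)  # annual income, dollars

    def max_loan(self, lending_multiple):
        return self.income * lending_multiple

class Bank:
    """An agent that lends whenever the regulation (lending multiple) allows."""
    def __init__(self, lending_multiple):
        self.lending_multiple = lending_multiple  # the "regulation"
        self.loans = []

    def lend(self, buyer, house_price):
        if house_price <= buyer.max_loan(self.lending_multiple):
            self.loans.append((buyer, house_price))
            return True
        return False

def run_once(lending_multiple, n_buyers=1000):
    """Wind it up and set it in motion: each sale nudges prices up (feedback)."""
    bank = Bank(lending_multiple)
    price = 200_000.0
    for _ in range(n_buyers):
        if bank.lend(Buyer(), price):
            price *= 1.001
    return price

# Run the model many times under two regulatory regimes.
random.seed(1)
tight = [run_once(3.0) for _ in range(100)]
loose = [run_once(6.0) for _ in range(100)]
print(f"tight lending, mean final price: {sum(tight) / 100:,.0f}")
print(f"loose lending, mean final price: {sum(loose) / 100:,.0f}")
```

Even this toy version exhibits the qualitative pattern of interest: loosening the lending rule inflates prices through the feedback between sales and prices, without anyone having written "bubble" into the code.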

This is the power of agent-based modeling.

Wednesday, February 18, 2009

Why Simulate?

Simulation is one of the things that makes us human. Thinking is what happens when our brains host virtual objects acting and interacting via behavioral “rules”. Computer simulation brings those mental models into the light of day, infuses them with real world data, and allows us to explore the envelope of the future. When done right, simulation brings the best of both worlds together to help us make smarter decisions in complex situations—as individuals and as a society.

Computers have done a lot to help us deal with digital data, from mundane accounting to viral YouTube videos. Most of these applications involve moving, transforming, and/or displaying data, all of which could be done before computers, though far less efficiently. There is one thing though, that only computers can do, and that is to host a dynamic process that responds when we interact with it. And the biggest computer of all that does this is the real world.

This is a rather information-centric point of view, and I’m making several rather strong assertions I can’t easily back up—but I think it is a worthwhile perspective to try on.

Computer games of course lead the way in exploiting simulation, and provide increasingly realistic and immersive experiences. They entertain and increasingly train, allowing us to learn by doing in a safe environment. Less holistic and more accurate are industrial simulations used in areas such as mechanical modeling and integrated circuit design. But games don’t have to be accurate and industrial simulators can stick to well characterized physical systems.

There are other domains that are much less well characterized, more broad in their reach, and moving forward in a much more uncertain environment—traffic, markets, economies, societies. Despite our best efforts to manage these, we have traffic jams, failed products, recessions and non-sustainable lifestyles. We can do better, and simulation can help.

Tuesday, February 17, 2009

Simulations augment, don't replace, thinking

This article by Irving Wladawsky-Berger discusses the promise and pitfalls of simulations of large complex systems. Starting with the vision of Psychohistory from Asimov's Foundation trilogy, the author notes that a highly unexpected event ("the Mule" for Asimov, fashionably called the "Black Swan" today) can violate the key assumptions of a model and generate an unanticipated outcome, such as the current financial crisis. The solution he proposes is to keep humans in the loop to provide the insight, broader experience and sanity checking that can't be built into any model.

This is a very important point for those who build and use simulations. A model is a tool that augments, but does not replace, the human ability to think. There are models for lay persons and models for experts; models for everyday work flow and models for the exploration of the possible and the very unlikely. You need the right tool for the right use (and user).

But don't throw the black swan out with the pond water: even though one can't anticipate the unanticipatable, it is still possible to use models to test systems for robustness:
  • Identify the weak spots and strengthen them.
  • Identify the early warning signs that the model is going off track and build alarm systems.
  • Give your model to someone else to drive. They will do something you didn't anticipate.
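The first two bullets can be sketched as a robustness probe: perturb the model's inputs over a wider range than you expect, and raise an alarm whenever the output leaves the band you planned for. The stand-in model, ranges, and band below are all hypothetical:

```python
import random

def model(shock):
    """Stand-in for any simulation: output drifts with the shock, plus noise."""
    return 100 * (1 + shock) + random.gauss(0, 2)

random.seed(0)
planned_band = (80, 120)   # the zone we designed the plan to operate in
alarms = []
for _ in range(1000):
    shock = random.uniform(-0.5, 0.5)  # deliberately wider than we "expect"
    out = model(shock)
    if not planned_band[0] <= out <= planned_band[1]:
        alarms.append((shock, out))    # a weak spot worth investigating

print(f"{len(alarms)} of 1000 runs left the planned band")
```

Each alarm is a candidate weak spot to strengthen, and the shock level at which alarms begin is a natural threshold for an early-warning system.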

Sunday, February 15, 2009

What is Simulation?

Simulation, as used in this blog, is a dynamic model of a real-world system, implemented in software. Let's look at each of the important words in this definition:

Dynamic: There is an element of time.
  • The model receives inputs at specific points in time.
  • These inputs might affect the outputs of the model at the time the inputs are received, or at a later time.
  • If the inputs affect the outputs at a later time, then the model must be able to store inputs for later use.
  • The way that inputs affect the model is via behaviors, which are rules.
  • The outputs of a model at a certain time point may be used as inputs to the model in the next time point. This is called feedback.
  • The delayed effect of inputs via feedback allows simulations to exhibit behavior that is not explicitly described by the model. This is known as emergent behavior.
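The list above can be made concrete with a tiny time-stepped model: a "stock" that stores its inputs, leaks a fraction each step, and feeds its output back in as the next step's state. The rule and numbers are invented for illustration:

```python
def step(state, inflow):
    """Behavior rule: store inputs, but leak 10% of the stock each step."""
    return state * 0.9 + inflow

state = 0.0
history = []
for t in range(50):
    inflow = 1.0                  # input received at this time point
    state = step(state, inflow)   # output fed back as the next step's state
    history.append(state)

print(round(history[-1], 2))  # approaches the equilibrium 1.0 / (1 - 0.9) = 10
```

Nothing in the rule mentions an equilibrium at 10, yet the feedback produces one: a small example of behavior emerging from, rather than being written into, the model.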

Model: A simplified description of behavior. There are many ways to create models. The dominant approaches are
  • Equations, which try to capture some fundamental truth about the universe, such as the way two objects attract each other via gravity.
  • Top-down system-dynamics models describe behavior in terms of boxes that store some quantity and arrows that allow the quantity to flow between the boxes.
  • Regression is a technique that compares inputs to outputs and looks for correlations between the two. Unlike equations and system dynamics, regression does not assume any sort of behavior beyond correlation.
  • Bottom-up agent-based models create the individual objects that comprise a system, give these objects behaviors, and allow the objects to interact.
This blog deals with agent-based models (ABM) specifically. In another post I will discuss why I feel these are the most effective way to model complex, real-world systems.

Real-World: The physical world in which we exist.

System: A set of interacting objects.

Software: A dynamic model could be implemented in three ways:
  1. In a tangible physical system like an orrery (a mechanical model of the solar system).
  2. In a purely information-based model hosted in a computer.
  3. In a purely information-based model hosted in a human mind.