Thursday, March 15, 2012

The Importance of Less

In a nutshell, the author proposes that making something more efficient lowers its cost, and when cost is lowered, more of the thing is used.  The unintended consequence is that making cars, appliances, transistors, manufacturing processes, etc. more efficient allows us to drive more, air condition our homes, have more sophisticated gadgets, and in general buy more stuff.

In the best case we might expect the increase in consumption to be balanced out by the increase in efficiency.  But even if this were true, a growing population means that overall consumption increases. Moreover, as products become more affordable, they become accessible to the developing world population, further increasing growth in consumption.  And when products cost less in the developed world, they are more likely to be thrown away.  Finally, even when we have enough, we want more.  For example, the average house in the US is twice as large today as the average size in 1950, despite a decline in family sizes.  

The overall message of the book is that there is no magic technology bullet that will solve this problem.  The ultimate answer is to use less, period.

While the book is a valuable read, it could be better organized.  It has 34 chapters and tends to bounce from topic to topic, as if it were written as a series of articles.

An overview of some of the interesting themes:

The answer is not to buy more efficient stuff 
Efficiency drives more consumption:
  • More efficient planes lead to more air travel 
  • More efficient engines lead to more driving (and more cars).  The Ford Model T, manufactured between 1908 and 1927, had a fuel economy between 13 and 21 miles per gallon, similar to today's cars.  Of course today's cars are vastly safer, more luxurious, and can travel at much higher speeds.  All of which leads to tolerance for more driving and longer commutes.
  • More efficient roads and public transit lead to less congestion and more cars rushing in to fill the empty space
  • More efficient parking in cities leads to less use of public transit

The answer is not to eat locally grown food
How far you live from your grocery store is of far greater environmental significance than how far you live from the places where your food is grown, because moving small amounts of food from the grocery store to your home is inefficient in comparison to moving food in bulk from the farm to the grocery store.
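The arithmetic behind this is easy to sketch. The energy intensities and distances below are rough illustrative assumptions of mine, not figures from the book:

```python
# Rough energy cost per kilogram of food: bulk freight vs. a grocery run.
# Both intensity figures are assumed ballpark values, for illustration only.
truck_mj_per_tonne_km = 0.9    # assumed energy intensity of a loaded semi
car_mj_per_km = 2.7            # assumed ~30 mpg gasoline car

farm_to_store_km = 1000        # long haul, but shared across tonnes of cargo
store_round_trip_km = 10       # short hop, but carrying one family's food
grocery_load_kg = 20

truck_mj_per_kg = truck_mj_per_tonne_km * farm_to_store_km / 1000.0
car_mj_per_kg = car_mj_per_km * store_round_trip_km / grocery_load_kg

print(f"farm to store, 1000 km by truck: {truck_mj_per_kg:.2f} MJ/kg")
print(f"store to home, 10 km by car:     {car_mj_per_kg:.2f} MJ/kg")
```

Under these assumptions the ten-kilometer grocery run costs more energy per kilogram of food than a thousand kilometers of trucking, which is the author's point: bulk haulage spreads its fuel over many tonnes of cargo.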

The answer is not more efficient energy production
The human race currently consumes energy at an average rate of 16 terawatts - the equivalent of 160 billion light bulbs burning all the time.  Capping atmospheric CO2 at 450 parts per million - 15% higher than today and consistent with a 2 degree C rise in global temperature - would require freezing global energy consumption at current levels despite a projected increase in global population from 7 to 9 billion people.  It would also require replacing 13 of the 16 terawatts with the equivalent of all of the following:
  • 100 square meters of solar cells, 50 square meters of solar thermal reflectors, and one Olympic-sized swimming pool of algae for biofuel, every second for the next 25 years.
  • One 300 foot diameter wind turbine every five minutes
  • One 100 megawatt geothermal-powered steam turbine every eight hours
  • One 3 gigawatt nuclear power plant every week
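A quick back-of-envelope check shows how those build-out rates add up over 25 years.  The per-turbine wind capacity is my own assumption (roughly 2 MW for a 300-foot machine); the other figures come straight from the list above, and capacity factors are ignored, which would make the real requirement even larger:

```python
# Cumulative nameplate capacity implied by the build-out rates above.
YEARS = 25
HOURS = YEARS * 365 * 24

nuclear_gw    = 3.0   * HOURS / (24 * 7)   # one 3 GW plant every week
geothermal_gw = 0.1   * HOURS / 8          # one 100 MW turbine every 8 hours
wind_gw       = 0.002 * HOURS * 60 / 5     # assumed 2 MW turbine every 5 minutes

print(f"nuclear:    {nuclear_gw / 1000:.1f} TW")
print(f"geothermal: {geothermal_gw / 1000:.1f} TW")
print(f"wind:       {wind_gw / 1000:.1f} TW")
```

Those three streams alone come to roughly 12 terawatts; with the solar and biofuel build-out on top, the program lands in the neighborhood of the 13 terawatts to be replaced, which is what makes the scale of the challenge so sobering.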

The answer is to change how we live
We need to live smaller, live closer, and drive less.  The most sustainable lifestyle is a high density urban environment like Manhattan.  This unfortunately goes against many of our cultural visions of utopia as a cocoon of high quality belongings in a bucolic setting.  

How to get there: Frugality First
Herman E. Daly, an ecological economist and professor emeritus at the School of Public Policy at the University of Maryland, writes that "frugality first induces efficiency second; efficiency first dissipates itself by making frugality appear less necessary.  Frugality [e.g. artificially increasing energy's scarcity through caps or taxes] keeps the economy at a sustainable scale; efficiency of allocation [i.e. artificially increasing energy's abundance] helps us live better at any scale, but does not help us set the scale itself."

A better title for the book might be:  "Frugality First: The Importance of Less."  But who would want to buy that?

Saturday, January 21, 2012

Coming to an iPad near you in 2012...

A fusion of data which will allow you to get the whole picture of how we live in America -- how our lifestyle choices shape our environment, our quality of life, and our future.  Through interactive visualizations you will be able to examine any place, from the smallest township to the entire nation, including where YOU live.

Here are a few early screenshots to give you an idea of what you will be able to explore (click on any image for full-size).

Electricity Generation
See which energy sources drive our power grid, in total and one by one, at the national, state, or county level.  What do we do with that energy?  How does use vary in different climates?  At different income levels?  Cities versus suburbs?

Experience the bounty and beauty of our natural resources.  Below you see rainfall, lakes, and waterways; see where and why rivers are dammed, how we use water, and the tension between the environment and our desire to live everywhere.

Gigabytes of Data at Your Fingertips
Below is a zoom into New Orleans and the surrounding area.  The city is selected with a tap (and highlighted in red), giving you the ability to drill down and find out more.

Saturday, August 22, 2009

Productizing Human Behavior Models

I was alerted to the article The Sims Meet Science by Franz Dill's great blog on emerging technology. The application domain here is the movement of people through space, for urban and architectural planning, but many of the issues are common to all of the agent-based modeling work I've been involved in...

A big part of making the simulations more accurate lies in increasing the intelligence of individual agents representing pedestrians. Xiaolin Hu, assistant professor at Georgia State University, said... The engineering challenge lies in figuring out how to balance complexity, accuracy, and performance. To create more accuracy, researchers want to make the decision-making model of the agents more complex, but this affects performance. Hu said, "You may end up where you add all this extra computation on the decision-making part, but don't gain much from the results point of view."
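To make that tradeoff concrete, here is a toy pedestrian agent of my own (not Hu's model): each agent steers toward a goal and is pushed away from any neighbor that crowds inside its personal space.  Every additional rule in `step` buys realism at the price of more computation per agent per tick, plus another parameter (like `personal_space`) that needs calibration data:

```python
import math, random

class Pedestrian:
    def __init__(self, x, y, goal):
        self.x, self.y, self.goal = x, y, goal

    def step(self, others, speed=1.0, personal_space=1.0):
        # Attraction: unit vector toward the goal.
        gx, gy = self.goal[0] - self.x, self.goal[1] - self.y
        d = math.hypot(gx, gy) or 1e-9
        vx, vy = gx / d, gy / d
        # Repulsion: push away from any neighbor closer than personal_space.
        # This is the "decision-making" part; each extra rule costs an O(n)
        # scan per agent and adds a parameter to calibrate.
        for o in others:
            if o is not self:
                ox, oy = self.x - o.x, self.y - o.y
                od = math.hypot(ox, oy)
                if 0 < od < personal_space:
                    vx += ox / od
                    vy += oy / od
        n = math.hypot(vx, vy) or 1e-9
        self.x += speed * vx / n
        self.y += speed * vy / n

random.seed(0)
crowd = [Pedestrian(random.uniform(0, 5), random.uniform(0, 5), (50.0, 50.0))
         for _ in range(10)]
for _ in range(200):
    for p in crowd:
        p.step(crowd)
# After 200 ticks the crowd has clustered near the goal.
print(max(math.hypot(p.x - 50, p.y - 50) for p in crowd))
```

Even this two-rule agent already raises Hu's question: does the repulsion term change any result a planner cares about, or is it computation that "doesn't gain much from the results point of view"?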


Aside from not adding anything useful to the results, extra complexity introduced to bring a model closer to reality creates more parameters that need to be calibrated, and obtaining the data for calibration is the Achilles' heel of any behavioral model:
"When you need to know how a pedestrian space will work in a particular part of the world, you need to have the measurement data from that part of the world. Otherwise, how can you have confidence in your simulations?" explained Kevin Mannion, CEO of Legion.


One of the challenges in building a business around agent-based modeling is, as with any new technology, customer adoption. If a model is going to influence the behavior of decision-makers, they need to have confidence in the model's accuracy. A prerequisite to building this confidence is engaging the decision-makers through an experience of the model. This helps make the abstract concept of mathematical modeling more concrete, building a bridge to the everyday real world:
One of the more exciting trends combines pedestrian simulation with 3D animation. More illustrative and realistic uses of pedestrian simulation could help improve the architectural process by showing a decision maker how people move through a building. [Alex] Schmid [managing director of Savannah Simulations] explained, "First and foremost, these are engineering tools designed for runtime analysis. The graphics are added on after the simulation to make it easier to sell a concept or new design alternative."

Another challenge in selling modeling is being able to build models at a cost that yields a profit relative to the customer's perception of the value. One thing that would help is standardization of the underlying simulation platform, so that domain experts can focus on modeling the domain:

At the moment, all commercial pedestrian-simulation packages are proprietary and run on a Windows PC. But researchers are looking at ways to abstract away the different modeling system levels to give modelers the same freedom that computer operating systems give programmers across different hardware implementations.
Just imagine if every time a business wanted to do something with their data, they had to write their own database software! That's where we are today with simulation.

The challenge, in my experience, with standardized platforms for agent-based modeling is that they don't provide enough leverage to be worth using. Handling simulation events, while not trivial, is a small part of the problem. It's like saying you need to be able to save files to implement a database.

A much larger issue is to find an effective way to represent behaviors, modify them, and make them clearly observable to others who have not developed the model. The database analogy would be graphical data modeling tools.

There are some intermediate steps which hold great potential in building the use of agent-based models, and human behavior simulation models in general: Connecting together the models that exist today. I'll discuss that in another post.

Sunday, July 5, 2009

Crowdsourced Simulation

One of the challenges in creating a simulation model of a society is that it is complex, on several fronts:

  1. There are many systems—finance, government, healthcare, education, etc.—each with its own domain of expertise.
  2. There is no one correct way to model human behavior—anything we model in a computer has to be a gross simplification of reality. And we don't even know exactly how minds generate decisions.
  3. Models are often created from a particular point of view, and can have unintentional (or intentional) biases and/or omissions.
  4. Models can have mistakes. Mistakes can be easy to spot when the correct behavior of the model is clear, such as simulating the mechanical stresses on a bridge, but not so easy to spot when modeling, for example, how people will respond to a change in tax policy.

I propose addressing these challenges via crowdsourcing. That is, in the spirit of Wikipedia and open-source software, providing a platform on which many people can contribute to a model.

Levels of contribution

There are several levels at which people could contribute to a model, listed below in order of increasing requirements on the skills of the contributor:

  • Testing: Take an existing model and explore how it responds. This may entail examining metrics currently provided by the model, or adding new metrics.
  • Scenarios: A scenario provides a set of initial conditions and external factors for a model. Scenarios may be forecast-oriented, in which case the initial conditions are likely to represent the present, and the external factors represent a possible future. Scenarios might also be education oriented, in which case the initial conditions and external factors might represent a period in real world history, or might be fictional to illustrate a point or test the model.
  • System Structure: Simulation models are often made up of interacting objects, also known as agents or actors. These actors have some type of distribution in space (and perhaps time) for a given simulation. For example, the mortgage model has a population of households and banks, as well as a single national market for mortgages. An example of a structural change would be to change this model to have a separate mortgage market for each state.
  • Behaviors: The actors in a model have a specific set of behaviors, such as buying and selling homes, and refinancing to get better terms or take money out of the home equity. To such a model, someone might want to add a behavior to the household that modeled life events (births, college, marriages, new cars, vacations, etc.) and use these life events to trigger refinancing actions that draw on home equity.
  • Actors: Changing the arrangement or number of actors was mentioned above in “System Structure”. Another way to modify a model is to create new kinds of actors. For example, one might want to add a secondary mortgage market (where banks buy and sell mortgages from other banks).
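One way a platform could support the behavior level of contribution is to make behaviors pluggable functions that contributors attach to actors. The sketch below uses my own names (`Household`, `add_behavior`); it illustrates the idea, not any existing platform's API:

```python
class Household:
    """A toy actor whose behaviors are contributed as plain functions."""
    def __init__(self, equity=50_000):
        self.equity = equity
        self.behaviors = []

    def add_behavior(self, behavior):
        self.behaviors.append(behavior)

    def step(self, year):
        # Each simulated year, run every contributed behavior.
        for behavior in self.behaviors:
            behavior(self, year)

def life_events(household, year):
    # A contributed behavior: every fifth year, a life event (birth,
    # college, new car, ...) triggers a $20,000 draw on home equity.
    if year % 5 == 0:
        household.equity -= 20_000

h = Household()
h.add_behavior(life_events)
for year in range(1, 11):
    h.step(year)
print(h.equity)   # two life events over ten years: 50_000 - 40_000 = 10_000
```

A contributor who wants households to refinance on life events only has to write and register a function like `life_events`; the rest of the model is untouched, which is what could make behavior-level contributions tractable for domain experts who are not platform engineers.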

Each of these levels of contribution requires tools with specific capabilities. I’ll explore ideas around these tools in future posts, and also welcome comments.

Version Control

One challenge that a crowdsourced simulation model poses is version control. The typical open-source software project is convergent—there may be multiple solutions proposed to a particular problem (like file system structure), but a single solution is chosen and built upon by others.

Wikipedia is different—any particular article is constantly changing, and while there is one current version, past versions are available as well. This presents challenges when trying to build something upon Wikipedia content, as the content can change underneath your edifice.

I’m finding it a bit more challenging to think of an example where a crowdsourced effort branches and is not convergent, but this is certainly conceivable in simulation models. For example, there could be two groups with different views on politics (socialism versus capitalism) that build different behaviors into their models. Or, there could be multiple domain specific models (e.g. mortgages and healthcare) which each dive into detail for their specific domain while simplifying or ignoring the other domain.

It seems like we will need to pick one of two approaches to “version control”:

  • Convergent: Some curatorial body will work to integrate the best thinking into a single version of “truth”. Or perhaps a small set of domain specific versions.
  • Embracing diversity: In this case, there would be little or no effort at integrating separate models built on the same platform. Perhaps there would be a way to integrate the response of many different models to a specific scenario. One would still want some level of quality control when choosing models whose results are integrated.

The Power of Crowdsourcing

I believe a crowdsourced approach to a model of society is essential.

  • Different domains each have their own experts
  • Different approaches to any single aspect of a model each have pros and cons
  • Many sets of eyes can vet a model, uncovering biases, omissions and errors

There are some additional advantages that, in my experience, are very important if not essential in using models to effect changes in behavior:

  • Transparency: People must understand a model before they can trust it. A model that can be tested and changed is transparent.
  • Engagement: Interacting personally with a model, whether simply changing a few external factors or parameters, or building new behaviors, enhances people’s interest in the model. It gets them thinking about it, talking about it, contributing to it and ultimately acting in the real world.
  • Understanding: With engagement and interaction comes understanding. One of the best ways to learn is by “doing”, and one of the best ways to “do” is by building a dynamic model.

I welcome comments on crowdsourcing a simulation model, on this blog as well as via email. Have you ever seen this? Have you tried it? Would you participate? How would you like it to work?

Sunday, June 14, 2009

Skill Set for Simulation Designers

This article in the NY Times (ht Bill Parks) provides a concise and cogent description of the skills of successful simulation designers.

The profession draws on expertise in a number of areas and does not fit neatly into any single category... Simulation “overlaps engineering, math and computer science, but it isn’t the same as any one of those,” .... [S]kills include … an aptitude for “conceptualizing the world,” he said. Developing a simulation requires enough native intelligence to view a problem abstractly, research the issues and tease out the myriad key elements. Then they must be incorporated into a model in which they are poked, prodded and tweaked to reach useful solutions.

Two other very important skills for those who use simulations are noted.

- First, as one pares down the real world to its essential elements for a specific model, an equally important skill is understanding the limits of the model—what is left out, what is not well understood, and what is “uncalibrated” (that is, not numerically reconciled with real-world data).

- Second, one needs to be very clear in communicating the correct way in which to interpret results.

Too often there is too much confidence placed in the results of a model, as illustrated in this article, also in the NY Times (ht Franz Dill). In this case, models underestimated the spread of the Swine Flu, which was officially declared a pandemic by the WHO last week.

Disappointment and back-pedaling occur when too much confidence is placed in a model. This lowers the credibility of modeling, and also dampens the desire to apply resources to improving the model.

A model should not be an end in itself, but a conversation. It is only through the crucible of testing, critique, discussion and refinement that a model can converge on reality. Key to this process is engaging others to use a model themselves, understand how that model works, and be able to suggest (and ideally implement) changes to improve the model. This is an important goal of the platform we’re developing.

Sunday, May 17, 2009

Economic Model Update

Though I have not posted for a while, a lot of work has been done on the model.

The first step was to build the simulation engine, which has been architected and implemented by Steve Noble. This engine is the platform within which we create populations of the objects: households (people), banks and homes, as well as the markets, government and external factors.

The next step was to create the behaviors for the various objects. We currently have a mortgage market up and running, in which the households request, and banks offer, mortgages. On a given simulated day, each bank evaluates all current household requests for a mortgage, based on the bank’s lending policy, and each requesting household evaluates any mortgages for which it is qualified. Each household and bank has its own unique way to evaluate a mortgage. I’ll describe these in a future posting.

We don't have houses in the model yet—we simply have the households request mortgages and, if the request is filled (the mortgage is preapproved), then the household has a probability of actually buying a house at the preapproved amount of the mortgage. The buying household then pays the down payment to the seller, and pays any points on the loan to the bank. The buyer then makes mortgage payments until the loan is paid off. The current loans are simple fixed interest rate loans.
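For the simple fixed-rate loans, the monthly payment follows the standard annuity formula. This is the textbook calculation, not necessarily the exact code inside our model:

```python
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization: the payment that retires the loan in n months."""
    r = annual_rate / 12.0          # monthly interest rate
    n = years * 12                  # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

payment = monthly_payment(200_000, 0.06, 30)
print(f"${payment:,.2f} per month")   # about $1,199.10 on a $200k, 30-year, 6% loan
```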

While the addition of houses will allow for a supply-and-demand variation in housing prices, the current simple model already provides interesting behavior with respect to the business models of the banks. Banks make money on loan interest paid, but to make loans a bank needs deposits and has to pay interest on the deposits. These interest payments eat into the loan profits. The banks are competing for the households’ loan business, so they can’t charge too high an amount for loan interest. But if the deal on the mortgage is too “good” from the perspective of the borrower then the bank will get lots of business but lose money.

This example highlights that even when there is no risk of default in the loans, the banks still must balance offering attractive loans with making enough money to pay interest and turn a profit. If there is just one bank it is easy, but when there are multiple banks competing for the borrowers’ business it is not clear ahead of time which bank will win.
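The tension is easy to see in a toy version of a bank’s economics. The demand curve below, where a bank’s share of the loan market shrinks as its rate climbs above what depositors are paid, is purely my own illustrative assumption:

```python
def annual_profit(loan_rate, deposit_rate=0.03, deposits=100e6):
    # Toy demand curve (assumed): the share of deposits the bank manages
    # to lend out shrinks as its loan rate climbs above the deposit rate.
    share_lent = max(0.0, 1.0 - 15 * (loan_rate - deposit_rate))
    loans = deposits * share_lent
    # Profit = interest earned on loans - interest paid on deposits.
    return loans * loan_rate - deposits * deposit_rate

for rate in (0.03, 0.05, 0.07, 0.09):
    print(f"loan rate {rate:.0%}: profit ${annual_profit(rate) / 1e6:+.2f}M")
```

Lend at the deposit rate and the bank earns nothing; charge too much and borrowers go elsewhere, so deposit interest swamps loan income. The interesting question in the simulation is where each bank’s policy lands between those extremes once the banks compete.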

We have some more work to do to finish this first simulation, and plan to put it online within the next two to three weeks. Watch this space for an announcement when we do!

Meanwhile, here's a screenshot of the work in progress...

Thursday, April 16, 2009

Market Signals & Financial Engineering

One of the side effects of building dynamic models is that you learn new things about the system you were modeling. I’m not an economist by training, so while what I’m learning is new to me, I’m not claiming it is new to everyone. Please do keep in mind, though, that an important goal of this modeling work is to help society, most of whose members are not experts in economics (or energy or healthcare or …), develop an understanding of how society works and how to make it work better.

This posting at a blog I follow, Franz Dill’s The Eponymous Pickle, pointed to an article Re-thinking Risk Management: Why the Mindset Matters More Than the Model. This got me thinking about how financial firms think about risk.

The Path to the Insight

Since we will be modeling “structured finance” products like Mortgage-Backed Securities (MBS) and Credit Default Swaps (CDS), I need to understand what these are, how they work, and why they are desirable to the parties involved. This led me to the concept of price signals in markets.

An example of a price signal is the price of gas. When it is high, it sends a signal to drivers to drive less in the short term, and consider more efficient vehicles and public transportation in the long term. It is possible for government to send the same signal by a gas tax, even if the price of imported oil is low. The key point is that in a market, signals influence behavior.

Investments have two key signals, price and risk. These are traditionally unified—if an asset is perceived to be riskier, buyers of the asset will demand a higher return to offset the risk (think of “junk bonds”). The risk signal is set by a credit rating agency such as Moody’s, which assigns ratings ranging from AAA for the safest investments (such as a money market fund) down to speculative, below-investment-grade ratings for junk bonds.

In the article Regulatory Malfeasance and the Financial System Collapse, economist Joseph R. Mason, Senior Fellow at the Wharton School, writes that it is important to understand “the terms and triggers of securitizations to recognize perverse incentives apparent in selling the AAA securities but keeping the risk.”

What structured finance products do, whether intentionally or not, is obscure the risk signal associated with an investment.  This occurs because risk is separated out from the underlying investment.  When an MBS is insured by a CDS, the risk is apparently moved from the firm holding the MBS to the firm that sold the CDS.  But the decoupling of risk and return is an illusion, because the MBS and CDS are both linked to the risk of the underlying real asset—the mortgage.  If the mortgage fails, the MBS loses value, and the CDS is supposed to pay.  But why did the mortgage fail?  Because the economy has problems, and these same problems also affect the ability of the CDS seller to pay when the mortgage fails.  So, while the risk signal of the MBS was changed via financial engineering, the underlying risk remained with the MBS.
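My own small Monte Carlo sketch of this argument (the probabilities are invented for illustration): the MBS holder loses only when the mortgage defaults and the insurance fails to pay, but if the insurer fails in the same downturns that cause defaults, that joint event is far more likely than independent failure rates would suggest:

```python
import random

def expected_loss(correlated, trials=100_000, seed=1):
    rng = random.Random(seed)
    losses = 0
    for _ in range(trials):
        downturn = rng.random() < 0.10           # 10% chance of a bad economy
        mortgage_defaults = downturn and rng.random() < 0.5
        if correlated:
            # The insurer fails only in the same downturn (50% chance then).
            insurer_fails = downturn and rng.random() < 0.5
        else:
            # Hypothetical truly independent insurer, 5% failure rate.
            insurer_fails = rng.random() < 0.05
        # The MBS holder loses only when the mortgage defaults AND the
        # "insurance" fails to pay.
        if mortgage_defaults and insurer_fails:
            losses += 1
    return losses / trials

print(f"independent insurer: {expected_loss(False):.4f}")
print(f"correlated insurer:  {expected_loss(True):.4f}")
```

Note that both insurers fail about 5% of the time overall; only the correlation with the downturn differs, yet the expected loss is roughly ten times higher in the correlated case. That is the sense in which the CDS changed the risk signal without moving the risk.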

The Insight

The insight is that financial risk obeys a kind of “conservation of matter” law. That is, “risk can neither be created nor destroyed”. So, while financial engineering can move risk around, it behooves anyone considering purchasing a structured finance product to understand exactly where the risk is being moved to, and to ensure that the destination of the risk is truly unconnected with the structured finance product being purchased. My hunch is that you can't ever guarantee decoupling, and that structured finance is essentially just an exercise in obfuscation. Once we have the model running, I look forward to tracking the flow of risk.

A note on Financial Engineering

The practice of creating structured finance products is called “financial engineering”. I am trained as an engineer, and grew up viewing engineering as synergistically combining elements from a palette of technologies to create a new capability. These new capabilities add value to the economy, because they either solve a problem that has value or enable some new activity or product, which also has value.

Financial engineering, in contrast, seems to only move things (e.g. risk) around without creating new value. One may argue that securitizing debt (e.g. MBS) allowed many more institutions to sell debt to individual people in the form of mortgages and credit cards, which expanded homeownership and consumer spending. But it seems to me that the primary motivation behind securitizing debt was to have new financial products to trade, with a commission being made on every trade. Hosting the trading of assets is a great business model, because “the house always wins” in that every trade generates a commission, whether the buyers and sellers win or lose. In other words, securitization of debt doesn’t add any real value to the world; it just allows the institutions that facilitate the trades to siphon value out as commissions.

That doesn't sound like engineering to me.