October 16, 2012
A mention of the “Robust-Yet-Fragile” label by resilience author @andrew_zolli led me to John Doyle’s research at Caltech. Andrew Zolli writes:
We rightfully add safety systems to things like planes and oil rigs, and hedge the bets of major banks, in an effort to encourage them to run safely yet ever-more efficiently. Each of these safety features, however, also increases the complexity of the whole. Add enough of them, and soon these otherwise beneficial features become potential sources of risk themselves, as the number of possible interactions — both anticipated and unanticipated — between various components becomes incomprehensibly large.
This, in turn, amplifies uncertainty when things go wrong, making crises harder to correct: Is that flashing alert signaling a genuine emergency? Is it a false alarm? Or is it the result of some complex interaction nobody has ever seen before? [....]
Caltech system scientist John Doyle has coined a term for such systems: he calls them Robust-Yet-Fragile — and one of their hallmark features is that they are good at dealing with anticipated threats, but terrible at dealing with unanticipated ones. As the complexity of these systems grows, both the sources and the severity of possible disruptions increase, even as the size required for potential ‘triggering events’ decreases — it can take only a tiny event, at the wrong place or at the wrong time, to spark a calamity.
In a 2007 Discover Magazine article, Carl Zimmer provides a simplified description of this work, contrasting research built on a theoretical foundation (scale-free networks) with research grounded in the empirical practicalities of control engineering.
In the 1990s, studying complex systems of all sorts became something of a fad following the emergence of “chaos theory.” Competing versions of this theory were emerging left and right; chaos was being touted as the science of the future. Doyle was unimpressed by most of the new ideas. “It was clear to me that they were just so far off the mark,” he says. Doyle made up a name that combined all the trendy buzzwords he came across: “emergilent chaoplexity.”
One reason that Doyle loathes emergilent chaoplexity is that it relies on superficial patterns. Doyle, by contrast, insists that his analyses draw from the gritty details of how things actually work.
As an example, Doyle points to what are known as scale-free networks. Many of these networks—interlinked sets of airports, friends, nerves in the body, and so on—have the same basic structure. A few nodes are highly connected hubs, while most other nodes have only a few connections. Any given small-city airport probably connects to just a few others. Passengers rely on being able to transfer at a hub to reach most other places. But if you live in Chicago, you can take a direct flight from O’Hare Airport to hundreds of destinations.
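The hub-heavy structure Zimmer describes can be sketched with a toy preferential-attachment model, the growth mechanism Barabási proposed for scale-free networks. This is an illustrative sketch only; the function name, parameters, and thresholds below are my own, not from the research discussed:

```python
import random

def preferential_attachment(n, m=2, seed=42):
    """Grow a network where each new node links to m existing nodes,
    chosen with probability proportional to their current degree."""
    random.seed(seed)
    # Start from a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'targets' lists each node once per incident edge, so a uniform
    # choice from it samples nodes proportionally to their degree.
    targets = [node for edge in edges for node in edge]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]
    return edges

edges = preferential_attachment(200)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

# The "rich get richer" dynamic yields a few hubs and many
# sparsely connected nodes, like the airport network above.
hubs = sum(1 for d in degree.values() if d >= 10)
low = sum(1 for d in degree.values() if d <= 3)
print(hubs, low)
```

Running this, the handful of early nodes accumulate most of the links, while the bulk of later arrivals keep only the two or three they started with.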
Some researchers, like Albert-László Barabási at the University of Notre Dame, have argued that the Internet shares a similar structure and that this accounts for why the Internet keeps humming even when some of its systems fail. Since hubs are rare, failures involving them are even rarer. But should a hub fail, researchers warned, it would lead to catastrophe. Their warning made headlines, with CNN reporting in 2000: “Scientists Spot Achilles’ Heel of Internet.”
Doyle was not impressed. “Everybody who knew how the Internet worked was puzzled by all this,” he says. He decided to test the Achilles’ heel theory by joining up with a group of collaborators and mapping a section of the Internet in unprecedented detail.
In that map, they found no Achilles’ heel. The Internet does have a few large servers at its core, but those servers are actually not very well connected. Each one has only a few links, mainly to other large servers through high-bandwidth connections. Much of the activity that occurs on the Internet actually lies out on its edges, where computers are linked by relatively low-bandwidth connections to small servers; think about how many e-mails office workers send to people in their building compared with how many they send overseas. If one of the big links at the core of the Internet crashed, Doyle and his colleagues discovered, it would not take the Internet down with it. Traffic could simply be rerouted through other big links.
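The rerouting behavior Doyle's team observed can be illustrated with a toy topology (all node names here are hypothetical): two core routers joined by parallel high-bandwidth links, with edge networks hanging off each. If one core link fails, a route search simply falls back to the other.

```python
from collections import deque

# Hypothetical toy topology: parallel core links modeled as
# intermediate nodes between two core routers.
graph = {
    "edge-A": {"core-1"},
    "edge-B": {"core-2"},
    "core-1": {"edge-A", "link-1", "link-2"},
    "core-2": {"edge-B", "link-1", "link-2"},
    "link-1": {"core-1", "core-2"},
    "link-2": {"core-1", "core-2"},
}

def find_route(graph, src, dst, down=frozenset()):
    """Breadth-first search for a path, ignoring failed ('down') nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]] - seen - down:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no surviving route

print(find_route(graph, "edge-A", "edge-B"))
# After link-1 fails, traffic reroutes through link-2:
print(find_route(graph, "edge-A", "edge-B", down={"link-1"}))
```

Only when every parallel core link is down does the route search fail, which mirrors the point in the article: losing one big link does not take the network with it.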
The Internet works spectacularly well, despite the fact that over the past 30 years it has expanded a million-fold, absorbing new technology from BlackBerries to the iTunes music store with hardly any major changes to the basic rules it uses to move data. Doyle now knows why. It’s not just the physical arrangement of cables and servers that makes the Net so robust. Doyle and his colleagues showed that the software that runs the Internet uses feedback, in much the same way a jetliner’s computer does. The Internet can sense changing conditions and adjust itself.
The Internet has two kinds of feedback. It maintains a constantly updated picture of the entire network so that messages can be directed along the fastest routes. It also breaks down those messages and encapsulates them inside standardized packets of data, a little like using the standardized waybills and boxes provided by FedEx. Each packet can take its own path through the Internet. As packets arrive at the recipient’s computer, the message fragments in each packet are extracted and reassembled. Critically, as each packet arrives, it sends back a receipt to the sender’s computer. In heavy traffic, some packets get lost. In response to lost packets, computers slow down the rate at which they send their data, reducing congestion.
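The second feedback loop in that passage (slow down when packets go missing) is, in rough outline, TCP's additive-increase/multiplicative-decrease rule. A highly simplified sketch, not real TCP; the window sizes and constants are illustrative:

```python
def aimd(loss_events, cwnd=1.0, increase=1.0, decrease=0.5):
    """Additive-increase/multiplicative-decrease: grow the send
    window steadily while receipts come back, halve it whenever
    a lost packet signals congestion."""
    history = []
    for lost in loss_events:
        if lost:
            cwnd = max(1.0, cwnd * decrease)  # back off on congestion
        else:
            cwnd += increase                  # probe for more bandwidth
        history.append(cwnd)
    return history

# Eight rounds of successful receipts, then a loss on round nine:
print(aimd([False] * 8 + [True]))
```

The window ramps up linearly while the network cooperates and collapses the moment it does not, which is the self-adjusting behavior the article attributes to the Internet's feedback design.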
Together, these two types of feedback give the Internet a robustness more powerful than anyone anticipated. “These Internet engineers weren’t control theorists, but they built this incredibly robust network,” Doyle says. “Man, that’s awesome.” Then again, the engineers were doing something that evolution figured out long ago.
Looking at the original 2005 PNAS paper, Doyle and his coauthors create two models of networks and then compare them to the real Internet. Read more...