Scale by Geoffrey West
I listened to a podcast with Geoffrey West and thought his ideas were compelling enough that I should delve deeper. "Scale" covers a lot of ground, but the main focus is what West calls allometric (as opposed to isometric) scaling laws: nonlinear relationships between an organism's size and its other traits.
True to his physicist roots, West looks for a unified theory of everything. According to him, biologists in general lack an appreciation for theory, and biology has us “drowning in a sea of data and thirsting for some theoretical framework with which to understand it”. The same can be said about the other, even less scientific fields subject to West's analysis: cities, economies, and companies.
West invokes colorful characters like Isambard Kingdom Brunel (what a name!) and his shipbuilding adventures. His insight was that scaling laws favor large ships: the main factor in drag is cross-sectional area, which grows like length squared, but the thing to optimize is cargo volume, which grows like length cubed, so efficiency improves with size.
One of the earliest formulations of allometric scaling is Kleiber's law: animal metabolic rates scale as the ¾ power of the animal's mass. Many other traits follow similar power laws with exponents that are integer multiples of ¼. This pattern holds in many other examples:
- Biomass produced by insects ~ Mass of colony ^ 3/4
- Heart rates of animals ~ Mass of animal ^ -1/4
- White matter volume ~ Gray matter volume ^ 5/4
- Metabolic rate for cells ~ Mass of cell ^ 3/4
West poses the question: why is ¼ such an important and recurring ratio? He turns to network theory for an explanation: "these networks are constrained by three postulates: they are space filling, have invariant terminal units, and minimize the energy needed to pump fluid through the system."
The physics of blood flow was first understood by Thomas Young, the same polymath most famous for his double-slit experiment. Interestingly, blood pressure is invariant across all animals regardless of size. Another characteristic: in both animal pipes (veins and arteries) and plants (fiber bundles), the sum of cross-sectional areas flowing into a branch point has to equal the sum of cross-sectional areas flowing out.
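The area-conservation rule pins down how vessel radii shrink at each branch point. A minimal sketch, assuming a symmetric split into n children:

```python
import math

# Area-preserving branching: total cross-sectional area is conserved
# across a branch point. For a symmetric split into n children:
#   pi * r_parent**2 == n * pi * r_child**2
#   =>  r_child = r_parent / sqrt(n)

def child_radius(r_parent, n_children=2):
    """Radius of each child vessel in a symmetric, area-preserving split."""
    return r_parent / math.sqrt(n_children)

r = child_radius(1.0)  # each child has radius 1/sqrt(2) ~ 0.707
area_parent = math.pi * 1.0 ** 2
area_children = 2 * math.pi * r ** 2
print(math.isclose(area_parent, area_children))  # True
```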
West tries really hard to make the book accessible to a wide range of readers, and is careful to never use any equations in the book. One unfortunate side effect of this is that he isn't able to complete the argument and actually derive the reason why multiples of ¼ show up so often. I had to look up the math later.
Something I hadn't thought about: blood flow goes from pulsatile (near the heart) to non-pulsatile (at the capillaries). How does this transition occur? I'd love to do an explorable explanation around this. Arbitrarily small mammals can't exist because pulsatile transmission is more efficient but requires a minimum vessel size. Is this related to why AC is more effective at travelling long distances than DC?
Next, he turns to fractals. When measuring the length of a border between countries, the higher the resolution, the longer the measured length. In non-fractal situations, higher resolution causes the measurement to converge to a value. In fractal situations, higher resolution causes the value to increase indefinitely. You can quantify how fractal something is by looking at how quickly its measured length grows as a function of resolution. So it’s basically:
fractality = d(log length) / d(log resolution)
This was first noticed when trying to measure the lengths of coastlines, leading to the coastline paradox, identified by Richardson. A straight line has fractal dimension 1, but the crinkly, fjord-filled coastline of Norway has fractality 0.5 by the above definition, and so a fractal dimension of 1 + 0.5 = 1.5. The crinkliest line would have a dimension close to 2, which is also the fractal dimension of a perfectly smooth surface. The crinkliest surface would have a dimension close to 3, corresponding to a volume. This is fascinating since now we’re talking about a continuum in R^n, where n is no longer an integer!
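Richardson's method can be sketched in a few lines: measured length L as a function of ruler size s follows L(s) ~ s^(1-D), so the dimension D falls out of a log-log slope. The coastline data below is synthetic, generated to have D = 1.5:

```python
import math

# Richardson's method: measured length L(s) with ruler size s follows
#   L(s) ~ s**(1 - D)
# so the fractal dimension is D = 1 - (slope of log L vs log s).

def fractal_dimension(ruler_sizes, lengths):
    """Estimate fractal dimension via least-squares slope in log-log space."""
    xs = [math.log(s) for s in ruler_sizes]
    ys = [math.log(l) for l in lengths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1 - slope

# Synthetic "Norway-like" coastline with D = 1.5: L(s) = s**(1 - 1.5)
rulers = [1.0, 0.5, 0.25, 0.125]
lengths = [s ** -0.5 for s in rulers]
print(round(fractal_dimension(rulers, lengths), 2))  # 1.5
```

A straight line would give lengths independent of ruler size, a zero slope, and hence dimension 1.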
I wonder... What would fractal dimension greater than 3 look like?
Even time series can be fractal: you can’t tell the time scale by looking at a snapshot of the value of a stock over time. I wondered if there were clear fractals in nature, as observed in satellite imagery, and found this site. Also, it would be very neat to have an AxiDraw generate fractal drawings. Going on a tangent: the way to do this would be with L-systems.
“Smooth shapes are very rare in the wild but extremely important in the ivory tower and factory.” - Mandelbrot
The reason growth stops is that at some point there are too many cells to support, so there is no surplus energy left for creating net new ones. This holds true for individuals, colonies of organisms, and tumors. In terms of aging and human limits, average lifespan has been going up, but the maximum seems to be converging at 125. Most age-related damage happens at the terminal units (capillaries, mitochondria), which are space-filling and thus evenly distributed throughout the body. The larger the animal, the slower the metabolic rate, so there is less damage at the terminals and thus a longer lifespan.
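The underlying growth model comes from West's research papers (the book keeps it equation-free): energy intake scales as m^(3/4) while maintenance cost scales linearly with m, so growth stops where the two balance, at an asymptotic mass of (a/b)^4. A quick Euler-integration sketch with made-up coefficients:

```python
# West-style ontogenetic growth model (coefficients a, b are illustrative):
#   dm/dt = a * m**0.75 - b * m
# Growth stops when intake equals maintenance: a * M**0.75 == b * M,
# giving asymptotic mass M = (a / b)**4.

def simulate_growth(m0, a, b, dt=0.01, steps=200_000):
    """Forward-Euler integration of the growth equation."""
    m = m0
    for _ in range(steps):
        m += (a * m ** 0.75 - b * m) * dt
    return m

a, b = 1.0, 0.5
print(round(simulate_growth(m0=0.1, a=a, b=b), 2))  # 16.0, i.e. (a/b)**4
```

The sublinear supply term against the linear demand term is the whole story: the same structure explains why tumors and colonies plateau too.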
Caloric restriction and temperature reduction are both viable ways of increasing lifespan but obviously have negative side effects. These methods also yield only moderate improvement, on the order of 10%.
“Even if every cause of death were eliminated, all humans are destined to die before they reach 125.”
Cities also have scaling patterns associated with them. There’s an economy of scale in many infrastructure elements: the number of gas stations, length of roads, and electrical, water, and gas lines all scale as population ^ 0.85. At the same time, socioeconomic metrics scale superlinearly: the number of patents, wages, crime rates, and number of restaurants all scale as population ^ 1.15.
There are inherent limits to how many people you can know: 5 intimates, 15 close friends, 50 acquaintances, 150 familiars. This is Dunbar’s number, which seems to be fundamental and spans across societies. Another pattern is Zipf's law, which originated with word usage: frequency varies inversely with rank. Example: “the” is most frequently used at 7%, next is “of” at 3.5% (1/2), then “and” at 2.3% (1/3). The same heavy-tailed, non-normal distribution applies to the sizes of cities and companies.
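The 1/rank pattern is easy to check against the word frequencies West cites:

```python
# Zipf's law: the rank-r item's frequency is roughly the top item's
# frequency divided by r. Checking against the quoted word frequencies.

top = 0.07  # "the"
observed = {1: 0.07, 2: 0.035, 3: 0.023}  # "the", "of", "and"
for rank, freq in observed.items():
    predicted = top / rank
    print(rank, round(predicted, 3), freq)  # predictions match the data
```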
Socioeconomic interactions in a city are the sum of interactions between people. If everyone knew everyone, the total would scale as a power law with exponent 2. But because of limits like Dunbar’s number and Zipf's law, the relationship is still superlinear, with an exponent closer to 1 than to 2. But why does infrastructure scale sublinearly while socioeconomic activity scales superlinearly? In infrastructure, the biggest flows are in the main arteries: highways, major sewers, aortas; the nodes have capillary-like access with small flows. In socioeconomics, the biggest flows are between the terminals, i.e. people; everything else just facilitates people getting together, and so carries a smaller relative flow of information.
Movement in cities in general is not random but very structured: most commonly, going from home to work and back, or home to another place and back. And there are clear patterns, such as this inverse-square one: the number of people visiting a location from a distance r, f times per month, is given by n ~ (r f) ^ -2.
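A tiny sketch of what that visitation law implies (the prefactor n0 is a placeholder):

```python
# Inverse-square visitation law: the number of people who travel
# distance r to a location f times per month falls off as (r * f)**-2.
# n0 is an illustrative prefactor.

def visitors(r, f, n0=1.0):
    """Expected number of people visiting from distance r, f times/month."""
    return n0 * (r * f) ** -2

# Doubling either the distance or the visit frequency cuts the
# expected number of such visitors by 4x.
print(visitors(1, 1) / visitors(2, 1))  # 4.0
print(visitors(1, 1) / visitors(1, 2))  # 4.0
```

The symmetry between r and f is the striking part: traveling twice as far and visiting twice as often trade off identically.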
West presents an interesting critique of per-capita metrics in cities: “it implicitly assumes that the baseline for any urban characteristic is that it scales linearly with population”, contrary to the thesis of this book. Instead, per-capita metrics should be compared to the expected values of those metrics based on the size of the city and the scaling law. He proposes a measure called Scale-Adjusted Metropolitan Indicators (SAMIs). Apparently San Francisco has a really good SAMI for innovation. I wonder how Seattle stacks up?
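The idea reduces to a residual against the scaling-law baseline. A minimal sketch, with all the numbers below made up for illustration:

```python
import math

# Sketch of a scale-adjusted indicator: compare a city's actual metric
# to the value the scaling law predicts for a city of its size.
# prefactor and the example numbers are illustrative, not real data.

def sami(actual, population, prefactor, exponent=1.15):
    """Log-residual from the superlinear baseline; positive = over-performing."""
    expected = prefactor * population ** exponent
    return math.log(actual / expected)

# A city whose patent count beats its size-adjusted expectation:
print(sami(actual=5000, population=800_000, prefactor=5e-4) > 0)  # True
```

A naive per-capita comparison would systematically flatter big cities, since the baseline itself grows superlinearly.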
Organisms vs. cities vs. companies:
- Organisms: sublinear scaling and economies of scale dominating biology lead to stable bounded growth and slowing pace of life.
- Cities: superlinear scaling dominating socioeconomic activity leads to unbounded growth and increasing pace of life.
- Companies scale more like organisms than like cities. There is an analogy between metabolism and revenue, between maintenance and expenses.
Just like organisms have a maximum size, companies also stop growing relative to the market. There’s a temporal aspect too: by 20 years, 85% of companies have disappeared through bankruptcy or liquidation.
Speculation on why companies die: the R&D fraction goes down over time; rules and process accumulate, ossifying operations; focus shifts to short-term results and a “tried and true” strategy. (Google appears to be good at mitigating a lot of this, and may in some sense be more city-like.)
I don’t understand why superexponential growth implies finite-time singularities. It seems to me a function could satisfy e^x < f(x) < ∞, sitting between the two without ever diverging at a finite time. West's overall view is that major paradigm shifts happen increasingly frequently. Either we will need to be more and more innovative or learn to “be content with what we’ve got and find some new way of defining progress”.
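Looking up the math afterwards, the standard argument (which the equation-free book omits) is that "superexponential" here means the growth *rate* scales superlinearly with the current size, not merely that the trajectory exceeds e^t:

```latex
\frac{dN}{dt} = r N^{1+\epsilon}, \quad \epsilon > 0
\quad\Longrightarrow\quad
N(t) = \frac{N_0}{\left(1 - \epsilon\, r\, N_0^{\epsilon}\, t\right)^{1/\epsilon}}
```

which blows up at the finite time t_c = 1/(ε r N_0^ε). A function merely larger than e^t, by contrast, is never forced to diverge at any finite time, so my intuition was right about the inequality but wrong about what "superexponential" means in West's model.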
A surprisingly zenful end to a whirlwind of a book. It opened up my eyes to a lot of interesting ideas. I recommend it!