4. The elephant on the other side of the wall

by anyarchitect

When I was very young, I once pestered my father for a sketch. He drew very well, but he was always busy. So he took a piece of paper and drew a horizontal line across it. Then he drew an arc above the horizontal line. He gave that to me, telling me that he had now given me a drawing. I was perplexed.

I refused to accept the sketch. I told him that the drawing was incomplete, or didn't make sense, and that he was just trying to get away from me. "It makes perfect sense," he replied. Then he asked me to guess what he had drawn. I could not. He answered: "An elephant." I was still perplexed. He then clarified: "An elephant on the other side of a wall."

We'll get back to this in a moment. Talking about elephants and curves and horizontal lines, here is something I discovered on the net:

Take a flexible chain of uniform linear mass density and suspend it from its two ends. What is the curve formed by the chain? Galileo Galilei said that it was a parabola, and perhaps you made the same guess. This time Galileo was not correct: the curve is called a catenary. However, it is easy to see how he could arrive at this answer through casual observation and incomplete deduction. Look at the similarities between a parabola and the curve of the chain. For this comparison, assume that the parabola is concave upward. Both curves have a single low point. They both have a vertical line of symmetry. They at least appear to be continuous and differentiable throughout. The slope becomes steeper as we move away from the low point, but it never becomes vertical. We can get back to the chain solution later. First, consider this extension: what about the curve formed by the cables of a suspension bridge? Is it too a catenary? No, it is a parabola. So, what gives? How can this be a parabola while the other one is not?
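(The short answer: the bridge cable carries a deck whose weight is uniform per unit of horizontal distance, which gives a parabola, while the free chain carries only its own weight, uniform per unit of its own length, which gives a catenary.) For the programmatically inclined, here is a small numerical sketch in Python that shows how close the two curves are near the low point and how they part company towards the ends. The catenary parameter a and the sampling range are my own assumed round numbers; the parabola is the one that matches the catenary's value and curvature at the vertex:

```python
import numpy as np

a = 1.0                      # assumed catenary parameter (tension / weight per unit length)
x = np.linspace(-3, 3, 13)   # sample points along the horizontal

catenary = a * np.cosh(x / a)     # y = a*cosh(x/a), the true hanging-chain curve
parabola = a + x**2 / (2 * a)     # parabola matching value and curvature at the vertex

# Near the low point the two agree closely; towards the ends they diverge.
for xi, yc, yp in zip(x, catenary, parabola):
    print(f"x={xi:+5.2f}  catenary={yc:7.3f}  parabola={yp:7.3f}  difference={yc - yp:6.3f}")
```

Run it and you will see the difference is a few thousandths near the middle of the span but grows to several multiples of a towards the ends, which is precisely the part of the curve hidden "behind the wall".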

(Warning: Much as I hate using analogies, I still end up using them, as I may not be a refined writer with a cornucopia of capabilities: I don't seem to have at hand other literary devices that bring out my points equally well. So if you see an analogy here, please pause and apply your own thoughts.)

Much of science is peppered with such approximations. Who makes them? Or, an even more important question: who sanctions them? Do we know that the approximation has not removed some crucial aspect of the thingamajig being spoken about, so that it no longer makes sense? In the case of the elephant, as in the example of the suspended chain, the ends are markedly different from the geometry of the central portion of the curve, which is the only portion where it may resemble the real thing! If we removed the wall, the little boy would have seen some remarkably different geometries at either end of the elephant. And he would also have noticed many things that may not even fit into the domain of geometry or art! Similarly, if we continue the curve of the chain towards either end, the catenary deviates from the parabola.

I believe that in some situations the issues the world throws at us cannot be handled empirically. We instead represent the world outside as convenient abstract constructs. This act of representation is also called modeling. As explained in another article, modeling can also be looked upon as setting up a mathematical system. The crucial question is: what do we retain and what do we throw out in the models we construct? There is no final answer to this question, as it leads to the problem of infinite regress that obsesses many philosophers: any explanation of the real world around us raises even more questions, for each question that is asked.

It is often believed that the state of modern science and mathematics is sufficient for building reasonable models. I don't believe so. The most remarkable failure of the philosophical approach behind modern science and its modeling methods was demonstrated by the discovery of Heisenberg's uncertainty principle. Steven Weinberg's book "Dreams of a Final Theory" has a good explanation (pages 72-73), accessible to the lay public, which I would like to quote:

Heisenberg considered the problems that are encountered when a physicist sets out to measure the position and momentum of an electron. In order to make an accurate measurement of position it is necessary to use light of short wavelength, because diffraction always blurs images of anything smaller than a wavelength of light. But light of short wavelength consists of photons with correspondingly high momentum, and, when photons of high momentum are used to observe an electron, the electron necessarily recoils from the impact, carrying off some fraction of the photon's momentum. Thus the more accurately we try to measure the position of an electron, the less we know after the measurement about the electron's momentum. This rule has come to be known as the Heisenberg uncertainty principle.* … *To be a little more precise, because the wavelength of light equals Planck's constant divided by the photon momentum, the uncertainty in any particle's position cannot be less than Planck's constant divided by the uncertainty in its momentum. We do not notice this uncertainty for ordinary objects like billiard balls because Planck's constant is so small… a decimal point followed by twenty-six zeros and 6626. Planck's constant is so small that the wavelength of a billiard ball rolling across a table is much less than the size of an atomic nucleus, so there is no difficulty in making quite accurate measurements of the ball's position and momentum at the same time.
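Weinberg's point can be checked on the back of an envelope. Here is a small Python sketch of that arithmetic; the billiard-ball mass and the speeds are my own assumed round numbers, used only to show the orders of magnitude:

```python
# Order-of-magnitude sketch of Weinberg's point: the quantum "blur" (de Broglie
# wavelength = h / momentum) is absurdly small for a billiard ball, but not for an electron.
h = 6.626e-34                              # Planck's constant in J*s

# Billiard ball: assumed mass 0.17 kg rolling at 1 m/s
ball_wavelength = h / (0.17 * 1.0)
print(f"billiard ball wavelength ~ {ball_wavelength:.1e} m")   # ~ 4e-33 m, far below a nucleus (~1e-15 m)

# Electron: mass 9.11e-31 kg moving at an assumed 1e6 m/s
electron_wavelength = h / (9.11e-31 * 1e6)
print(f"electron wavelength      ~ {electron_wavelength:.1e} m")  # ~ 7e-10 m, roughly atomic size
```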

Today, when I look back on Heisenberg's historic discovery, I wonder why it did not lead to a re-assessment of the state of intellectual affairs. And what was that state? Intellects have now reached a cul-de-sac where they have no choice but to contend with juxtapositions hitherto considered immiscible. Even measurement devices have started interfering with the thing being measured! In earlier societies, people could possibly get by even when catenaries were confused with parabolas. But no longer. Earlier, the actions carried out by people in various societies were quite amorphous. People lived in separate geographic areas, in their own separate worlds, in their respective societies. But now the amorphous has started becoming the homogeneous. This has started happening not only in abstract theories (where fields have started influencing one another) but also out there in the empirical world. Now, in many instances, the wall in front of the elephant is getting demolished… and it is not just the Berlin Wall I am talking about.

An equally dramatic discovery was made by Lorenz when he was trying to build mathematical models of the weather (described in fair detail in James Gleick's well-written book for the lay person, 'Chaos'). He found that even small rounding-off errors (which are often scientifically forgiven), carried from one step of an algorithm to another, can make a huge impact on the overall weather prediction. "Sensitive dependence on initial conditions" became a buzz-phrase in scientific circles. I believe complexity theory may offer some insight into the philosophical questions sparked off by Heisenberg's uncertainty principle. It took almost forty years for scientists to acknowledge the presence of complexity. Maybe once scientists realized that what was hitherto complex need not be completely intractable, it became polite to acknowledge its existence. Complexity theory is also called chaos theory, but I do not like the term 'chaos' because it is often confused with 'haphazard' and 'incomprehensible'. A complex thing need not be incomprehensible.
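Lorenz's trouble with rounding can be reproduced with something far simpler than a weather model. Here is a sketch in Python using the logistic map, a standard textbook example of sensitive dependence; the perturbation of one part in a million stands in for Lorenz's truncated printout, and the starting value and parameter are my own choices:

```python
# Two trajectories of the logistic map x -> r*x*(1-x), started a hair's breadth apart.
r = 4.0                 # a parameter value at which the map is chaotic
x1 = 0.2
x2 = 0.2 + 1e-6         # the "rounding error": one millionth

for step in range(1, 41):
    x1 = r * x1 * (1 - x1)
    x2 = r * x2 * (1 - x2)
    if step % 10 == 0:
        print(f"step {step:2d}: x1={x1:.6f}  x2={x2:.6f}  gap={abs(x1 - x2):.6f}")

# After a few dozen steps the gap is as large as the values themselves:
# the initial one-millionth difference has been amplified beyond recognition.
```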

The mathematical tool called fractals, whose discovery is attributed to Mandelbrot, is often used to find patterns in complexity. (It sounds somewhat contradictory to use the words "pattern" and "complexity" in the same phrase.) Note that the terms "complexity theory" and "fractals" are not synonymous. The former talks about seemingly intractable and unpredictable phenomena that are distinguished by their sensitive dependence on initial conditions; such phenomena are observed in all aspects of nature. The latter is just a mathematical tool that can (sometimes) lay down roads into the seemingly intractable territory of complexity. Fractals are often used in graphics to represent rough textures and curves that display the property of self-similarity. There are popular references connecting fractals to architecture through their use in graphics, but I think the power of fractals lies much beyond that. To tell you the truth, my experience with computer graphics has taught me that fractals are just a coy toy there; CG has much more interesting and important things. But more on that in another article.
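For concreteness, here is a minimal sketch in Python of the escape-time iteration behind the Mandelbrot set, the pattern-in-complexity that fractals usually bring to mind. The grid size, character ramp and iteration cap are arbitrary choices of mine, just enough to print a recognisable picture in a terminal:

```python
# Minimal escape-time rendering of the Mandelbrot set as ASCII art.
# For each point c in the complex plane, iterate z -> z*z + c and see how quickly it escapes.
width, height, max_iter = 72, 28, 40

for row in range(height):
    line = ""
    for col in range(width):
        c = complex(-2.0 + 3.0 * col / width, -1.2 + 2.4 * row / height)
        z = 0j
        n = 0
        while abs(z) <= 2 and n < max_iter:
            z = z * z + c
            n += 1
        # Points that never escape (inside the set) are drawn '#'; others shaded by escape time.
        line += "#" if n == max_iter else " .:-=+*"[min(n // 6, 6)]
    print(line)
```

Zooming into the boundary of this figure reveals the self-similarity the paragraph above refers to: the same motifs reappear at every magnification.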

Scientists were used to theoretical approaches that can be traced back to the reductionist philosophies of Descartes and others. The wall drawn by my father in front of the elephant was the classic reductionist strategy: it was falsely believed that once we concentrate on the known, the unknowns will take care of themselves. There was another aspect of reductionism: when in difficulty, divide and conquer. It used to be believed that complexity is just an extrapolation of simpler parts. It was also believed that this extrapolation could be carried forward over time once a fair idea of the initial conditions was known. (This particular philosophy was known as "determinism.")

But with the discoveries of Heisenberg, Lorenz, Feigenbaum and others, all such notions are no longer held to be true all the time. For example, the equations that describe the streamlined motion of a fluid cannot be used to describe what happens to that fluid in a turbulent state. As the wall gets demolished from in front of the elephant, we need to tackle all kinds of complexities that happen at either end of the curve. With fractals, it becomes possible to explore parts of the curve beyond the nice reductionist segment that was visible above the wall.

After this protracted introduction, I now need to quickly get to how all this may be relevant to social sciences such as architecture and urban design. But evidently, I am not done as yet :-)

This article (like other introductory articles in this book) does not claim to lay out any foundational mathematics of complexity theory. I am asking what I think are important questions, the answers to which would lead to the actual mathematics. Richard Feynman, in yet another interview, brushed off the social sciences as hardly being scientific, though he did concede that they could become so. However, I got the feeling that he probably did not believe they had any means of becoming so. It disturbed me enormously: I do respect Feynman, and it is often not easy to listen to words that ring true. I believe the ball is squarely in our court to correct the situation. Where do we start? I can give some simple pointers:

As per the FAQ page on chaos theory (see the references below), any system that displays sensitive dependence on initial conditions is said to be chaotic. This situation is also known as the "butterfly effect" (…a breeze caused by a butterfly fluttering in Mumbai causing a sandstorm in the Sahara…). A clearer example is this little nursery rhyme:

"For want of a nail, the shoe was lost; For want of a shoe, the horse was lost; For want of a horse, the rider was lost; For want of a rider, a message was lost; For want of a message the battle was lost; For want of a battle, the kingdom was lost!"

We know of many situations where we can apply that rhyme anecdotally. Unfortunately, science is not anecdotal. We cannot look at the haphazard nature of the architecture around us and then proclaim chaos theory on all of it (just the way we cannot look at all squiggly lines and rough terrains and proclaim fractals on those). For example, one should not confuse the butterfly effect with what is called the domino effect (like a house of cards that falls down). In the domino effect, one event iteratively, but predictably, leads to another. In the butterfly effect, the magnitude of each event is non-deterministically multiplied by the event or events preceding it.

Scientists explain this using a concept called a phase diagram, with which it is possible to determine whether a system is indeed scientifically chaotic (i.e. sensitively dependent on initial conditions). Those phase diagrams remind me of a toy I used to play with as a kid, the spirograph, which produced fascinating patterns with a ball-point pen. (There is an interactive Java version at http://www.wordsmith.org/~anu/java/spirograph.html.) Just as one starts rotating the spirograph, it may look as if it is producing some chaotic pattern… but no, the pattern ends up being quite geometrical. In a chaotic phase diagram, on the other hand, the system never really settles down, but hovers around what are best described as strange attractors.
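To make the phase-diagram idea concrete, here is a small Python sketch that integrates Lorenz's own three weather-model equations with a crude Euler step and records a few points of the trajectory. Plotted, those points trace the famous butterfly-shaped strange attractor rather than closing into a neat spirograph-like loop. The parameters are the standard textbook values; the step size, starting point and sampling interval are my own assumptions:

```python
# Crude Euler integration of the Lorenz system. The sampled (x, z) pairs, if plotted,
# keep visiting the same region of the plane without ever repeating exactly.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # Lorenz's classic parameters
x, y, z = 1.0, 1.0, 1.0                    # assumed starting point
dt = 0.01

points = []
for step in range(5000):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
    if step % 500 == 0:
        points.append((round(x, 2), round(z, 2)))

print(points)   # bounded, wandering, never settling: the signature of a strange attractor
```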

The challenge then is to define such phase diagrams in the field of architecture. My suspicion is that they are there all right; we simply have not been articulate enough to describe them clearly. In chemistry, two Russian scientists discovered the Belousov-Zhabotinsky reaction: certain chemical mixtures display visual patterns that grow organically right in front of your eyes. They remind me of the patterns that form in a slum (but be wary of my analogies!). The recipe for the reaction is explained in the eminently readable book "The Collapse of Chaos" (which, I think, is more intense than James Gleick's). It is also reproduced here: http://www.zoo.co.uk/~z0001246/zhab.html#zhab (warning: the chemical compounds are poisonous!).

Another promising method of investigating chaos in architecture is the use of cellular automata. Stephen Wolfram, the eminent author, scientist and entrepreneur, predicts much use of cellular automata in future scientific pursuits. Here is the Wikipedia introduction: http://en.wikipedia.org/wiki/Cellular_automata

A cellular automaton (plural: cellular automata) is a discrete model studied in computability theory and mathematics. It consists of an infinite, regular grid of cells, each in one of a finite number of states. The grid can be in any finite number of dimensions. Time is also discrete, and the state of a cell at time t is a function of the states of a finite number of cells, called its neighborhood, at time t-1. These neighbors are a selection of cells relative to the specified cell, and do not change (though the cell itself may be in its neighborhood, it is not usually considered a neighbor). Every cell has the same rule for updating, based on the values in this neighborhood. Each time the rules are applied to the whole grid, a new generation is produced. One example of a cellular automaton (CA) would be an infinite sheet of graph paper, where each square is a cell, each cell has two possible states (black and white), and the neighbors of a cell are the 8 squares touching it. Then there are 2^9 = 512 possible patterns for a cell and its neighbors. The rule for the cellular automaton could be given as a table: for each of the 512 possible patterns, the table would state whether the center cell will be black or white on the next time step. This is an example of a two-dimensional cellular automaton.

Wolfram and others have shown that in some situations cellular automata show chaos-like features. A popular example is Conway's Game of Life (interactive Java versions are available on the web), which demonstrates one such cellular automaton.
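For readers who want to experiment, here is a minimal Python sketch of the Game of Life on a small wrapped grid; it is exactly the kind of two-state, 8-neighbor automaton described in the quoted definition. The grid size, number of generations and the glider starting pattern are my own arbitrary choices:

```python
# Minimal Conway's Game of Life on a small wrapped grid: every cell follows the same
# rule, looking at its 8 neighbours (the Moore neighbourhood of the definition above).
SIZE, GENERATIONS = 10, 4

# Starting pattern: a "glider" on an otherwise empty grid (a set of live (row, col) cells).
live = {(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)}

def step(cells):
    counts = {}
    for (r, c) in cells:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    nbr = ((r + dr) % SIZE, (c + dc) % SIZE)
                    counts[nbr] = counts.get(nbr, 0) + 1
    # Birth with exactly 3 live neighbours; survival with 2 or 3.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in cells)}

for gen in range(GENERATIONS):
    print(f"generation {gen}")
    for r in range(SIZE):
        print("".join("#" if (r, c) in live else "." for c in range(SIZE)))
    live = step(live)
```

Despite the utter simplicity of the rule, seeding the grid differently produces gliders, oscillators, and patterns whose long-term fate is very hard to predict by inspection, which is why Life is so often used to illustrate complexity growing out of simple parts.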

To summarize, I believe that society is moving towards more homogeneity because of the Internet and many other things. Juxtapositions of the strangest of attractors are inevitable. Complexity theory could offer some help in this story of the six blind men and the elephant. We could use fractals, cellular automata, phase diagrams, etc. to cut through the confusion.

References:

  1. This site explains the question "What is chaos?" quite elegantly.
  2. http://www.hypography.com/go.cfm?url=12021 has a great introduction to Chaos theory. I would strongly urge the reader to go through that page at the very least.
  3. http://www.duke.edu/~mjd/chaos/chaosh.html has a more in depth introduction.
  4. Meta links: http://www.hypography.com/topics/chaostheory.cfm
  5. http://www.hypography.com/go.cfm?url=12022
  6. Lorenz: http://www.newton.cam.ac.uk/wmy2kposters/march/
  7. Simple FAQ: http://www.vismath.org/faq/chaosfaq.html
  8. Book: The Collapse of Chaos, by Jack Cohen and Ian Stewart. Viking Penguin, 1994.
  9. Book: Dreams of a Final Theory: The Scientist's Search for the Ultimate Laws of Nature, by Steven Weinberg. Vintage Books, 1992.


