Category Archives: Concepts

Parsimony

Parsimony means efficient simplicity. In science and philosophy, it often guides the quest for the shortest, cleanest path or explanation. The word comes from the Latin verb parcere – to spare. It is usually a safe bet that most series of events proceed along ‘common sense’ paths in which waste of energy and/or time is minimized. Understanding and theorizing about such processes can benefit from the heuristic of parsimony.

One example of parsimony-guided analysis is found in an area of evolutionary biology called phylogenetics. Here, the closeness of relationships is determined by counting the number of evolutionary changes between taxa (groups of organisms). For example, DNA base-pair differences can be enumerated to hypothesize a most likely ancestral tree, with speciation, hybridization, and extinction defining the branches. Even for just a few taxa, many different phylogenetic trees are possible. Applying parsimony, the most likely tree is the one that requires the fewest genetic changes. This scheme offers the hope that an entire systematic taxonomy (tree of life) can be compiled.
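As a rough illustration of parsimony scoring, here is a minimal sketch in Python using the classic Fitch counting approach; the four taxa, their short sequences, and the two candidate trees are invented for the example.

```python
# A minimal sketch of parsimony scoring (Fitch's algorithm), assuming a tiny
# hypothetical data set: each taxon is reduced to a short aligned DNA sequence,
# and candidate trees are written as nested tuples of taxon names.

sequences = {          # hypothetical aligned sites for four taxa
    "A": "ACGT",
    "B": "ACGA",
    "C": "TCGA",
    "D": "TCGT",
}

def fitch_site(tree, site):
    """Return (possible ancestral states, change count) for one alignment column."""
    if isinstance(tree, str):                 # leaf: its observed base
        return {sequences[tree][site]}, 0
    left, right = tree
    lset, lcost = fitch_site(left, site)
    rset, rcost = fitch_site(right, site)
    common = lset & rset
    if common:                                # children agree: no extra change
        return common, lcost + rcost
    return lset | rset, lcost + rcost + 1     # disagreement: one more change

def parsimony_score(tree):
    """Total number of changes required over all sites."""
    length = len(next(iter(sequences.values())))
    return sum(fitch_site(tree, i)[1] for i in range(length))

# Compare two candidate trees; the lower score is the more parsimonious one.
for tree in [(("A", "B"), ("C", "D")), (("A", "C"), ("B", "D"))]:
    print(tree, parsimony_score(tree))
```

The tree with the lower score is the one preferred under the parsimony heuristic.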

Parsimony is a valid heuristic for comparing phylogenetic trees because each tree results from the same theory (evolution) in general, and ancestry in particular. However, parsimony must not be applied when comparing competing theories; doing so can give misleading, and sometimes completely wrong, results. The only basis for comparing theories is each one’s ability to explain current observations and make correct predictions, not its elegance or parsimony. Parsimony cannot be used as a logical principle. For example, it is a fallacy to use parsimony to argue against what is in fact a fundamental requirement for life: complexity.

Expert System

One of the more mature and successful applications of AI (Artificial Intelligence) is the Expert System, in which a knowledge base (domain) is stored on a computer and then delivered back to users via an ‘inference engine’. An inference engine is a sort of active decision tree, wherein branches can be taken according to current conditions, and thinking can even be ‘backtracked’. Backtracking is necessary if the current conditions change or if an unfruitful path is taken (perhaps a wrong guess). The goal is to have a machine-based, portable ‘expert’ that can make decisions within this domain, using varying problem parameters, as a human can. One person or group of people contributes the knowledge, and a different set of people can then use it. These two groups may be widely separated in time and location, perhaps even in areas of sparse population or hazardous conditions. Interestingly, the user is not necessarily another human. Expert Systems are sometimes employed in automated systems to enable machines to make (artificially) intelligent decisions.

Expert systems are usually created within a ‘shell’. This is a framework that allows rules to be defined and stored along with the facts that define the domain. The human expert imparts her knowledge using the development tools in the shell. Once the system is created, users can apply that stored expertise, by means of an interactive dialog, to new sets of problems within the domain.
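As a rough sketch of the rules-plus-facts idea, here is a minimal forward-chaining inference loop in Python. The domain, facts, and rules are entirely hypothetical, and a real shell would add an interactive dialog, backward chaining, and the backtracking described above.

```python
# A minimal sketch of a rule-based inference engine, assuming a toy diagnostic
# domain; the facts, conditions, and conclusions are all invented placeholders.

facts = {"fever", "cough"}            # findings supplied by the user

rules = [                             # IF all conditions are known THEN add conclusion
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "short_illness"}, "recommend_rest"),
    ({"rash"}, "possible_allergy"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all established facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# e.g. {'fever', 'cough', 'possible_flu'}
```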

Of course, machines lack the ability to apply emotional, cultural, and social context. This can be a hindrance or a benefit depending on the application.

Water Chemistry and Life

Life on earth (and presumably elsewhere) depends on water. The biochemistry of life takes place in aqueous solution. Even organisms that live on dry land are mostly water (cells are liquid internally). Water has several key properties that life depends on.

The water molecule (H2O) is polarized: the hydrogen end carries a slight positive charge and the oxygen end a slight negative charge. This makes water a powerful solvent (it is commonly known as the ‘universal solvent’). This is important because, once in solution, chemicals can easily interact.

The two hydrogen atoms form an angle of 105° with the oxygen atom. This angle allows water to form a loose lattice with each molecule bonding to three or four others. These are very weak bonds that are constantly breaking and re-forming. This gives water some remarkable properties. First, it’s transparent to visible light. This lets sunlight penetrate the top few meters of the ocean, giving photosynthesis some working space. However, water absorbs harmful radiation like UV and microwaves, which would damage or even prevent life. Next, water has a very high specific heat, which moderates (buffers) the earth’s surface temperature, keeping it from getting too hot or too cold – again, important for life. Next, water actually gets less dense as it freezes into a solid, which is strange. Ice floats. Imagine what would happen if it didn’t. Layers of ice that formed on the surface would sink to the bottom of the ocean, where they would stay frozen. Eventually, the whole ocean would freeze solid, and life wouldn’t be possible at all. Next, water has a high surface tension and resulting capillary action. This allows water to be carried up plant stems, against gravity. Next, water can act as either a base or an acid in chemical reactions.

Water is so fundamental for life that it’s what we look for whenever we search for life on other planets or even in deep space. The temperature range where water is liquid is the ‘sweet spot’ for life.

Collective Intelligence

Collective behaviour allows a group of individuals to behave as a single unit, e.g. a flock, school, or swarm. This has the advantages of a shared, wider awareness of food and predators, as well as confusing predators (safety in numbers). The behaviour is achieved by each individual following simple rules, such as: stay aligned with neighbours, match neighbours’ velocity, and maintain a fixed distance from neighbours. These local rules produce the illusion of orchestration.
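As a rough illustration of how such local rules can produce flocking, here is a minimal boids-style sketch in Python; the rule weights, neighbourhood radius, and flock size are hypothetical tuning parameters chosen only for the example.

```python
# A minimal sketch of flocking from simple local rules (a "boids"-style model).
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, (30, 2))      # 30 individuals scattered on a 2-D plane
vel = rng.uniform(-1, 1, (30, 2))

def step(pos, vel, radius=2.0, dt=0.1):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d < radius) & (d > 0)          # neighbours within the radius
        if not near.any():
            continue
        cohesion   = pos[near].mean(axis=0) - pos[i]     # drift toward neighbours
        alignment  = vel[near].mean(axis=0) - vel[i]     # match neighbours' velocity
        separation = (pos[i] - pos[near]).mean(axis=0)   # keep distance from neighbours
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    return pos + new_vel * dt, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)

print(pos.std(axis=0))   # spread of the flock after the simulated steps
```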

Collective action allows a group of individuals to coordinate their actions. An example is quorum sensing, which enables honeybees to choose a new hive location. The same mechanism is used by bacteria to coordinate the release of digestive enzymes and toxins and the formation of biofilms.

Swarm intelligence allows a group of individuals to make better decisions than any single individual could. It has three main components: independence, diversity, and positive feedback. An example is an ant colony. One of the tasks of the colony is to locate and fetch food with a minimum expenditure of time and energy. Since no single ant is intelligent enough, or well-informed enough, to direct this effort, swarm intelligence is employed. A few ants go out and explore for food. Individual ants explore independent routes, which makes for a very diverse and widespread search. Although most of them will find low-value, inefficient routes, a few will find more efficient routes to food sources. These will be the first to return to the nest, thus inspiring others to follow their path. The current hypothesis is that subsequent trekkers lay down pheromone at key junctions, reinforcing the best path(s) via positive feedback. Very quickly, the entire effort of the colony is concentrated on the most efficient route(s). A good decision has been made.
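As a rough illustration of this positive feedback, here is a minimal sketch in Python, assuming just two hypothetical routes of different lengths; the pheromone deposit and evaporation numbers are invented for the example, not taken from real ant behaviour.

```python
# A minimal sketch of pheromone-based positive feedback on two candidate routes.
import random

lengths = {"short": 1.0, "long": 2.0}     # travel cost of each route
pheromone = {"short": 1.0, "long": 1.0}   # start with no preference

def choose(pheromone):
    """Pick a route with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    for route, level in pheromone.items():
        r -= level
        if r <= 0:
            return route
    return route

for ant in range(200):
    route = choose(pheromone)
    pheromone[route] += 1.0 / lengths[route]   # cheaper trips deposit more per unit time
    for r in pheromone:                        # evaporation lets old trails fade
        pheromone[r] *= 0.99

print(pheromone)
```

After a couple of hundred simulated ants, the short route carries most of the pheromone – the colony’s ‘good decision’ described above.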

How does such swarm intelligence evolve? In the particular case of ants, why would scouts evolve to lead such dangerous, solitary, often fruitless lives? Isn’t that the antithesis of Darwinism?

The answer is that evolution applies not to individual ants, but to the gene pool of the species. It is the ants’ DNA that is evolving. Natural selection screens for those mutations that are beneficial to the colony. The more independent the scouts, the more diverse the search for food, and the greater the likelihood that the colony will succeed and reproduce. The colony’s gene pool will then propagate the trait of independence in scouts.

Human intelligence allows a single person to be sentient. It incorporates mechanisms such as inference, pattern-seeking, goal-seeking, perception, complex memory, imagination, and language. However, it has been speculated that the basis of this higher intelligence may be some form of collective intelligence amongst neurons.

Evolution

Living things reproduce with variation and are subject to natural selection. Reproductive variation results from random genetic mutations. Natural selection is the non-random screening (via death or survival of individuals) of those mutations which aid the life form in its struggle to survive long enough to reproduce. For example, even a slight improvement in eyesight will tend to aid in survival and thus help to propagate the genes that carry that trait within the gene pool of a species. Eventually, a single species will diverge into new ones (speciation), each better adapted to its particular environment, which includes geography, competition, predation, etc. Over geological time, this process has produced all the species that have ever existed (until humans began artificial selection and even direct synthesis of new species).
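As a rough illustration of variation plus selection, here is a minimal sketch in Python, assuming a toy population in which a single number stands in for a heritable trait such as visual acuity; the population size, mutation size, and selection rule are all invented for the example.

```python
# A minimal sketch of selection acting on random variation.
import random

population = [random.gauss(0.0, 1.0) for _ in range(200)]   # initial trait values

for generation in range(50):
    # Reproduction with variation: each offspring inherits a parent's trait
    # plus a small random mutation.
    offspring = [parent + random.gauss(0.0, 0.1)
                 for parent in population
                 for _ in range(2)]
    # Non-random selection: individuals with higher trait values are more
    # likely to survive long enough to reproduce.
    offspring.sort(reverse=True)
    population = offspring[:200]

print(sum(population) / len(population))   # the mean trait drifts steadily upward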

Evolution is perhaps the simplest, most profound, and most verified scientific theory ever conceived.

Analog Computer

A computer is a device that performs calculations on input data to produce output. The digital computer is the type in common use today. However, there is a much older type – the analog computer. Mechanical analog computers have been used since ancient times. The slide rule (invented in the 1600s) is one example. Electronic analog computers are still fairly widely used today, but mainly for very specific tasks (e.g. control systems).

In an analog computer, processing is done using continuous, real numbers instead of discrete, digital numbers. Real numbers can represent any value to the precision allowed by the physical device’s tolerances. Thus an analog computer can work on calculus problems directly. While a digital computer requires many simple transistors (each holding a 1 or 0) to store one discrete value, an analog computer can use a single capacitor to store one continuous (real) value. Analog computers can provide good models for the physical world. The mathematics governing masses, springs, fluid flow, etc., can be applied directly to an electronic circuit based on operational amplifiers. Also, all processing is done in parallel, as opposed to the sequential nature of digital computers. Output varies with input in nearly ‘real time’, making analog computers useful in many control systems.
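As a rough illustration of working on calculus directly, here is a minimal sketch in Python that imitates, with a small time step, the two integrators an analog machine would patch together to solve a mass-spring system; the component values and time step are hypothetical.

```python
# A minimal sketch of how an analog computer would be "patched" to solve a
# mass-spring system m*x'' = -k*x: a summing stage feeding two integrators in
# a loop, imitated here numerically with a small time step.
m, k = 1.0, 4.0            # mass and spring constant (illustrative values)
x, v = 1.0, 0.0            # initial displacement and velocity
dt = 0.001

for _ in range(5000):      # simulate 5 seconds
    a = -(k / m) * x       # summing amplifier forms the acceleration
    v += a * dt            # first integrator: acceleration -> velocity
    x += v * dt            # second integrator: velocity -> displacement

print(x)                   # the output oscillates at angular frequency sqrt(k/m)
```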

During World War II, analog computers grew in complexity and power as controllers for weapons systems. However, the advent of the modern electronic digital computer after the war largely led to their obsolescence. Digital computers offer great advantages. Miniaturization allows millions of simple binary transistors to be placed on a single chip. Digital numbers can easily separate mantissa and exponent (scientific notation), creating virtually unlimited dynamic range. Digital numbers are also largely immune to noise and signal loss.

Double-Blind Study

One of the greatest steps forward in science in the last several decades has been the widespread acceptance and use of the double-blind study. This has more to do with human nature than with the scientific study at hand, but science is done by people, so the statement holds. Direct experience of an event may not be as reliable as reasoning about evidence (scientific inference), even after the fact. This is a somewhat surprising notion, and one that has truly profound implications, not just for science, but for any field of testable knowledge.

A double-blind study is so called because neither the subjects nor the researchers administering the test are aware of the objective details of the test. For example, when testing a new drug, neither the patients nor the researchers recording the results know whether a given patient received the real drug being tested or a placebo. Such a study eliminates conscious and unconscious bias on the part of both the subjects and the researchers.
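As a rough sketch of how such blinding can be arranged, here is a minimal Python example for a hypothetical trial: participants receive opaque codes, outcomes are recorded against the codes alone, and the key linking codes to arms is opened only after the data are locked in. All names and outcomes here are invented placeholders.

```python
# A minimal sketch of blinded allocation for a hypothetical drug trial.
import random

participants = [f"P{i:03d}" for i in range(20)]
arms = ["drug", "placebo"] * (len(participants) // 2)
random.shuffle(arms)
blinding_key = dict(zip(participants, arms))   # sealed away from subjects and researchers

# Researchers record outcomes against the codes only; neither they nor the
# subjects know who received which arm while the study is running.
# (Outcomes here are random stand-ins for real measurements.)
recorded_outcomes = {code: random.choice(["improved", "no change"])
                     for code in participants}

# Only after all outcomes are locked in is the key opened and the arms compared.
for arm in ("drug", "placebo"):
    improved = sum(recorded_outcomes[c] == "improved"
                   for c, a in blinding_key.items() if a == arm)
    print(arm, improved, "of", arms.count(arm))
```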

Another example is the analysis of the results of a particle physics experiment. Single-blind is sufficient in this case, as the subjects (particles) are not capable of bias. One methodology is to subdivide the analysis into fractions whose sizes are unknown to the analysts. They would then have no expectation of particular results for each fraction. Once the fractions have all been analyzed in this blind fashion, they can be assembled systematically, and without bias, into a complete (double-blind) study. Several other methodologies are used, each one carefully denying enough information to the analysts to ensure their biases are neutralized (knowledge of current theory, knowledge of the apparatus used, knowledge of colleagues, etc.).

This observer bias can have many causes, not the least of which is the pattern-seeking nature of the human mind. Our experience, expectations, and even desires can alter and augment our perceptions. In addition to observer bias, a well-designed double-blind study can also minimize statistical illusions and false cause-and-effect conclusions.

Gravitational Waves

First postulated in 1916 by Albert Einstein, gravitational waves are disturbances in the fabric of spacetime. They are caused by, and radiate outward from, a mass distribution with an accelerating quadrupole moment. A natural example of such a system, one that might generate detectable gravitational waves, is a pair of neutron stars in a decaying orbit around their common center of mass. The orbital decay results from the loss of energy being radiated away as gravitational waves.

The effect of these passing gravitational waves would be to strain (stretch and squeeze) space. The magnitude of this strain would be on the order of 1 part in 10^20 (a strain of about 10^-20), roughly the ratio of a sub-atomic distance to an interplanetary one.

Several projects are currently trying to detect these waves directly. They use infrared laser interferometry along one or two multi-km vacuum tubes (made effectively even longer by bouncing the light back and forth several times) to measure local disturbances in space with (hopefully) sufficient accuracy. Low-sensitivity runs have been completed (data analysis is ongoing), and plans for high-sensitivity runs are underway.
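To get a feel for the numbers, here is a rough back-of-the-envelope calculation in Python, assuming a hypothetical arm length of a few kilometres and the strain magnitude quoted above.

```python
# A rough check of the length change an interferometer must resolve, assuming
# a hypothetical 4 km arm and a strain of about 1e-20 (figures are illustrative).
arm_length = 4_000.0          # metres (a multi-km interferometer arm)
strain = 1e-20                # dimensionless stretch/squeeze of space

delta_L = strain * arm_length
print(delta_L)                # ~4e-17 m, far smaller than an atomic nucleus
```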

Detection of gravitational waves would open up a new avenue of astronomy. Firstly, such quadrupoles would emit gravitational radiation even though they might not emit electromagnetic radiation (e.g. some black hole configurations). Therefore, some objects that were previously unobservable would become visible. Secondly, unlike electromagnetic radiation, gravitational radiation is not absorbed or scattered significantly by matter between us and the observed object. Thirdly, the amplitude of such waves diminishes only with the inverse of distance, not with the inverse square of distance as light intensity does. Thus, much more remote views would be possible, perhaps all the way back to the big bang. Such detection would also be a strong positive test of the General Theory of Relativity.

If such detection does not happen, the implications are equally seismic. In a similar fashion to the observations that required the replacement of the Newtonian model, non-detection of gravitational waves would require the replacement of the Einsteinian model.

Origami of Life

In his current book, “The Greatest Show On Earth”, Richard Dawkins includes a fascinating look at embryology. His intention, indeed his accomplishment, is to show that assembly of complex structure can work on a very short time scale. This is valuable, since the concept of geological time is one of the greatest stumbling blocks to understanding evolution. It’s a non-starter for young-earth (6,000 year) creationists, but it’s also a big problem for everyone else because it’s well beyond staggering – it’s actually incomprehensible.

Dawkins discusses several analogies between natural construction and human activities. One of these is the art of paper folding known as origami. In origami, small steps are taken to achieve intermediate ‘larval’ stages. Emphasis is on the simple folding task at hand, not on the finished design. This is similar to the local rules obeyed by cells during embryonic development, by proteins as they fold into 3-D molecules, and by viruses as they ‘self-assemble’. No overall blueprint is ever used, or indeed is ever necessary. Each embryonic cell may have a complete copy of the organism’s DNA, but each cell type will have a slightly different subset of these genes turned on for use. This is how successive generations of cells gradually differentiate.

Thus, local, simple rules govern local changes that ripple, augment, and feed back within the embryo. The origami of each small part contributes to the eventual production of the finished, complex organism. This complexity is emergent and is very much unlike top-down architecture. Dawkins rejects the commonly used DNA-as-blueprint analogy because emergent complexity is not reversible: one could draw a blueprint of a finished building by taking measurements, but there is no way to reconstruct DNA from a finished organism. (Extraction is not reconstruction.)

To his credit, Dawkins honestly and clearly states that embryology is not evolution. Embryology merely utilizes a pre-existing genetic code while evolution is the process that assembled that code over hundreds of millions of years.