Wednesday, November 11, 2009

Software and business method patents: at least four justices see through the Christmas ornament loophole

Several years ago, before section 101 of the U.S. patent statute became fashionable again, I wrote a paper on it, "Elemental Subject Matter." I remember several professors and patent attorneys, who shall remain nameless, telling me that section 101, which defines what kinds of subject matter are patentable and what kinds are not, was a useless topic to explore -- these issues, they said, had all been resolved and the legal excitement was elsewhere. I thought otherwise. I did the research, and in my paper I described the basic loophole that made software patents possible. Algorithms are "laws of nature" or "abstract ideas" and as such are supposed to be unpatentable. Patent lawyers, being clever, got around this by tacking an extra fig-leaf or Christmas-ornament element onto patent claims: the patent was for process X,Y,Z "and a computer", where X, Y, and Z make up the novel and non-obvious algorithm and "computer" is just your general-purpose computer. Under a long line of high court precedents, starting with the old English case of Neilson v. Harford and continuing through many Supreme Court cases, this was an invalid claim: {X,Y,Z}, the part of the patent that makes it novel and non-obvious, must itself be patentable subject matter, i.e. not just an algorithm or law of nature or abstract idea. But the Federal Circuit, which hears all U.S. patent appeals and thus dominates U.S. patent law, ignored Neilson. Software became patentable because lawyers could trivially tack "computer" or "memory" onto software claims, turning abstract algorithms into patentable "machines." Still later, the Federal Circuit allowed even these fig leaves to be dropped from software patents; they were implicitly understood. The issue has never come before the U.S. Supreme Court. Until now.

At least four Supreme Court justices brought up the issue in Monday's oral arguments in Bilski v. Kappos, a business method patent case. The main patent claim reads as follows:
A method for managing the consumption risk costs of a commodity sold by a commodity provider at a fixed price comprising the steps of:

(a) initiating a series of transactions between said commodity provider and consumers of said commodity wherein said consumers purchase said commodity at a fixed rate based upon historical averages, said fixed rate corresponding to a risk position of said consumer;

(b) identifying market participants for said commodity having a counter-risk position to said consumers; and

(c) initiating a series of transactions between said commodity provider and said market participants at a second fixed rate such that said series of market participant transactions balances the risk position of said series of consumer transactions
(Forget for the moment that this claim is not even novel, much less non-obvious. When the Federal Circuit allows claims in areas where they previously weren't allowed, the U.S. Patent Office's examiners are incompetent to analyze techniques in the new area or to search for prior art, and indeed a search of prior patents, which is almost all they know how to do, naturally turns up no prior art. Thus the many preposterously obvious software and business method patents we've seen. The case is being heard on the assumption, absurd as it is, that the patent examiner was correct to declare this claim novel and non-obvious; the issue is thus focused on whether such business methods are patentable subject matter under section 101 of the patent code.)

These four justices seem to agree with the view of my paper that the Christmas ornament loophole lies at the heart of software and business method patents:

JUSTICE STEVENS: I don't understand how that can be a patent on a machine if the only thing novel is the process that the machine is using. Isn't -- isn't the question -- really, the question there was whether the new process was patentable.
(p. 42)

(in reply to Justice Stevens repeating the above point)
JUSTICE KENNEDY: That's -- that's a problem I have.
(p. 44)

JUSTICE BREYER: But then all we do is every example that I just gave, that I thought were examples that certainly would not be patented, you simply patent them. All you do is just have a set of instructions for saying how to set a computer to do it. Anyone can do that. Now, it's a machine. So all the business patents are all right back in...all you do is you get somebody who knows computers, and you turn every business patent into a setting of switches on the machine because there are no businesses that don't use those machines.
(p. 46)

This is also what Chief Justice Roberts is clumsily getting at on pg. 35:

CHIEF JUSTICE ROBERTS: ...that involves the most tangential and insignificant use of a machine. And yet you say that might be enough to take something from patentability to not patentable.

I'd like to think that somebody over there in the Supreme Court building has been reading my paper, but more likely, yet remarkably, Justice John Paul Stevens, the author of Parker v. Flook, the last case to apply Neilson v. Harford properly, and the only justice left from that 1978 court, still remembers Neilson and has taught a whole new generation of justices its meaning.

The implications of this view may seem harshly radical (if you rely on software patents) or pleasantly reactionary (if you fondly remember the days when we didn't have them). The patent bar and software patent holders have been in a tizzy since Monday, fearing that the Court's hostility to business method patents will lead to a ruling that will spill over to invalidate the recent non-ornamented software patents they have been drafting and the USPTO has negligently been approving. And software engineers have been dreaming that they will finally be freed from some of the increasingly dense patent thicket. But if the Court, as the above comments suggest, returns to Neilson, the result could be even more dramatic than is hoped or feared. Taking the Neilson logic to its conclusion would invalidate practically all software-only and business method patents, including ornamented ones. Those who want software patents would have to go do what they should have done in the first place -- get Congress to pass a statute expanding patentable subject matter to software, and, very importantly, command the USPTO to recruit and train computer scientists and people who know how to search the non-patent software literature for prior art, so that software claims that don't make sense won't pass muster. Then, if this experiment works, a few decades later try the same method for business method patents. And if the experiment doesn't work, scrap software patents. At this point, the Federal Circuit's illegitimate experiment with software and business method patents is failing miserably. Let's hope the Supreme Court takes this opportunity to restore its old patent jurisprudence that the Federal Circuit so shamelessly flouted.

Thursday, November 05, 2009

The auction and the sword

Anno Domini 193 is often called the Year of the Five Emperors after the five who ruled as princeps ("first citizen") in all or major parts of the Roman Empire: Pertinax, Didius Julianus, Pescennius Niger, Clodius Albinus, and Septimius Severus. Indeed, counting the Emperor Commodus, who died at the end of 192, the Empire saw six emperors in the space of five months.

The Roman imperial succession was supposed to proceed by adoption of the most competent possible successor [3]. This followed the example of Julius Caesar's adoption of Octavian as his heir, and Octavian's subsequent taking on the title of princeps as Augustus Caesar. In practice, however, at least three other factors often intervened: first, emperors tended to favor their natural sons over their adopted ones; second, the Praetorian Guard, the emperor's bodyguard, often exercised life-or-death control over the succession; and third, Roman legions were often motivated to intervene. Combining this rickety system of succession with the awful power of the autocratic emperor, whose "will was law", made successions an all-or-nothing, win-or-die struggle of often devastating violence. The Year of the Five Emperors witnessed more than its share of such violence. It gave rise to the Severan dynasty and, more importantly, to its legal authorities, who are cited in courts of law today, millennia after the emperors themselves have been forgotten. The Severans' jurists also voiced political ideas that would echo down to our time, as we shall see in future articles.

Commodus, the incompetent and unpopular natural son and successor of Marcus Aurelius, was poisoned by his mistress Marcia (not, I'm afraid to tell fans of Gladiator, slain by Russell Crowe in the Colosseum). Apparently this assassination was a plot that included the Praetorian prefect Laetus and the urban prefect Pertinax. The urban prefect was something like the mayor of the city of Rome: he supervised all the collegia (corporations and guilds) in the city, supervised maintenance of its aqueducts and sewers, supervised the import and doling of grain, supervised a force of police and night watchmen, and performed other such administrative tasks. The Praetorian prefect was the head of the emperor's bodyguard, the Praetorian Guard, which also (as here) often had the power to make or break emperors.

The Guard declared Pertinax emperor. After only three months in power, as the great historian Cassius Dio reports, the Praetorians, unsatisfied with the funds Pertinax had provided them and fearing persecution, turned against Pertinax:
But Laetus...proceeded to put out of the way many of the soldiers, pretending that it was by the emperor's orders. The others, when they became aware of it, feared that they, too, should perish, and made a disturbance; but two hundred, bolder than their fellows, actually invaded the palace with drawn swords. Pertinax had no warning of their approach until they were already up on the hill; then his wife rushed in and informed him of what had happened. On learning this he behaved in a manner that one will call noble, or senseless, or whatever one pleases. For, even though he could in all probability have killed his assailants,— as he had in the night-guard and the cavalry at hand to protect him, and as there were also many people in the palace at the time,— or might at least have concealed himself and made his escape to some place or other, by closing the gates of the palace and the other intervening doors, he nevertheless adopted neither of these courses. Instead, hoping to overawe them by his appearance and to win them over by his words, he went to meet the approaching band, which was already inside the palace; for no one of their fellow-soldiers had barred the way, and the porters and other freedmen, so far from making any door fast, had actually opened absolutely all the entrances.[1]
The soldiers dispatched Pertinax, and the Praetorians then decided to make their pecuniary preferences far more clear before they chose the next emperor:
Meanwhile Didius Julianus, at once an insatiate money-getter and a wanton spendthrift, who was always eager for revolution and hence had been exiled by Commodus to his native city of Mediolanum, now, when he heard of the death of Pertinax, hastily made his way to the camp, and, standing at the gates of the enclosure, made bids to the soldiers for the rule over the Romans. Then ensued a most disgraceful business and one unworthy of Rome. For, just as if it had been in some market or auction-room, both the City and its entire empire were auctioned off. The sellers were the ones who had slain their emperor, and the would-be buyers were Sulpicianus and Julianus, who vied to outbid each other, one from the inside, the other from the outside. They gradually raised their bids up to twenty thousand sesterces per soldier. Some of the soldiers would carry word to Julianus, "Sulpicianus offers so much; how much more do you make it?" And to Sulpicianus in turn, "Julianus promises so much; how much do you raise him?" Sulpicianus would have won the day, being inside and being prefect of the city and also the first to name the figure twenty thousand, had not Julianus raised his bid no longer by a small amount but by five thousand at one time, both shouting it in a loud voice and also indicating the amount with his fingers. So the soldiers, captivated by this excessive bid and at the same time fearing that Sulpicianus might avenge Pertinax (an idea that Julianus put into their heads), received Julianus inside and declared him emperor.[1]
But this was politics, not voluntary commerce, and the military hierarchy of the Roman legions proved to be mightier than the highest bidder. Three governors (commanding several legions each), Albinus of Britain, Severus of Pannonia (south-central Europe), and Niger of Syria, declared themselves emperor, suspended forwarding of tax revenues to Rome, and started marching on Rome to dethrone what they considered to be a corruptly selected emperor. Severus got there first:
Severus, after winning over everything in Europe except Byzantium, was hastening against Rome. He did not venture outside the protection of arms, but having selected his six hundred most valiant men, he passed his time day and night in their midst; these did not once put off their breastplates until they were in Rome.[1]
The security precautions of the Praetorians proved to be no match for Severus' legions, and this was so obvious that the city and the Praetorian rank-and-file basically rebelled against Didius Julianus and the Praetorian leaders, and turned the city and the emperorship over to Severus:
Julianus, on learning of [Severus' approach to Rome], caused the senate to declare Severus a public enemy, and proceeded to prepare against him. In the suburbs he constructed a rampart, provided with gates, so that he might take up a position out there and fight from that base. The city during these days became nothing more nor less than a camp, in the enemy's country, as it were. Great was the turmoil on the part of the various forces that were encamped and drilling,— men, horses, and elephants,— and great, also, was the fear inspired in the rest of the population by the armed troops, because the latter hated them. Yet at times we would be overcome by laughter; the Pretorians did nothing worthy of their name and of their promise, for they had learned to live delicately; the sailors summoned from the fleet stationed at Misenum did not even know how to drill; and the elephants found their towers burdensome and would not even carry their drivers any longer, but threw them off, too. But what caused us the greatest amusement was his fortifying of the palace with latticed gates and strong doors. For, inasmuch as it seemed probable that the soldiers would never have slain Pertinax so easily if the doors had been securely locked, Julianus believed that in case of defeat he would be able to shut himself up there and survive.

But Severus presently reached Italy, and took possession of Ravenna without striking a blow. Moreover, the men whom Julianus kept sending against him, either to persuade him to turn back or to block his advance, were going over to Severus' side; and the Pretorians, in whom Julianus reposed most confidence, were becoming worn out by their constant toil and were becoming greatly alarmed at the report of Severus' near approach. At this juncture Julianus called us together and bade us appoint Severus to share his throne. But the soldiers, convinced by letters of Severus that if they surrendered the slayers of Pertinax and themselves kept the peace they would suffer no harm, arrested the men who had killed Pertinax, and announced this fact to Silius Messalla, who was then consul. The latter assembled us in the Athenaeum, so named from the educational activities that were carried on in it, and informed us of the soldiers' action. We thereupon sentenced Julianus to death, named Severus emperor, and bestowed divine honours on Pertinax. And so it came about that Julianus was slain as he was reclining in the palace itself; his only words were, "But what evil have I done? Whom have I killed?" He had lived sixty years, four months, and the same number of days, out of which he had reigned sixty-six days.[1]
Severus "inflicted the death penalty" on the plotters against Pertinax and "murdered" a number of Senators, after swearing a sacred oath not to harm any Senators. (The quoted language is Cassius Dio's [2] in translation). So what were Severus' homicides -- legal executions or illegal murders? This was question of legal procedure. Under the old Republican legal tradition, still nominally enforce but in practice long defunct where the emperor was concerned, most of these killings would have been considered extrajudicial, i.e. murders. As we shall see, under the laws codified under the Severan dynasty, "the emperor's will was law" -- he by definition could never murder, only execute, and his oaths were by definition not binding on his future self.

Major civil war ensued as the Severan legions went up against those of Albinus and Niger. The terrific battles included a spectacular siege of Byzantium -- later to become Constantinople, but already a mighty fortress strategically placed on the Bosporus, controlling the maritime traffic between the Mediterranean and Black Seas. After four years of civil war between Roman legions [3], Severus came out the winner. I will examine the reign of the Severan dynasty, and in particular the effects of the military structure of the victorious legions on the political structure and legal procedures of Rome, in subsequent posts.

References

[1] Cassius Dio, Roman History, book 74.

[2] Cassius Dio, Roman History, book 75.

[3] Tony Honore, Ulpian, Oxford University Press (second edition 2002).

Commencing a history of Roman political and legal institutions

Most modern governments have political structures and legal procedures derived in a long evolution from those of the ancient Roman emperors, with a shallow overlay of modern democracy. The main exceptions, the Anglo-American countries, have legal procedures derived primarily from a partially independent evolution in England, but still with substantial influences from the old Roman autocrats. Political ideas and legal procedures are closely related, and versions of these derived from the Roman Empire have dominated most of European history.

I have started writing a history of this legal and political tradition. It starts with the Year of the Five Emperors, the rise of the Severan dynasty, and under that dynasty the first two major jurists (legal authorities) in the later Roman legal tradition, Papinian and Ulpian. It continues through the famous Codes of the emperor Justinian (as compiled by his jurist Tribonian), to the birth of universities in Western Europe upon the rediscovery of Justinian's codes, through the political philosophies of Bodin and Hobbes, to the Reception of Roman law into Western Europe, to the Code Napoleon, the German and Russian legal codes, and modern dictatorships based on the political and legal ideas of Rome. This will be a sprawling history and indeed I will probably never finish it. But meanwhile I will post a good bit of it to this blog, starting with the next post. I expect to proceed largely in temporal order, but no guarantees. Quite a few of my blog posts over the next two years may be part of this series. It should be quite enjoyable as well as provide unique insights into the history of political forms and constitutions.

Saturday, October 31, 2009

Incentives

Moral hazard and risk compensation for hikers.
If they had not been toting the device that works like Onstar for hikers, "we would have never attempted this hike," one of them said after the third rescue crew forced them to board their chopper.
I hope everybody by now knows about the moral hazard and risk compensation that comes with securitization, and the central role of novel mortgage securitization in the recent financial crisis that led to the current recession. If not, here's a good example:
By buying his mortgages and thus freeing up his capital to solicit even more business, Fannie and Freddie are a big reason Mr. Mozilo has driven [now-defunct sub-prime lender] Countrywide past the Citigroups and the Wells Fargos to the top of the mortgage heap. "If it wasn't for them," he said of Fannie and Freddie, "Wells knows they'd have us."
Here's my analysis of incentives and clocks:
Mechanical clocks, bell towers, and sandglasses provided the world’s first fair and fungible measure of sacrifice. So many of the things we sacrifice for are not fungible, but we can arrange our affairs around the measurement of the sacrifice rather than its results. Merchants and workers alike used the new precision of clock time to prove, brag, and complain about their sacrifices.
If you want something in an emergency you can wait in line, pay through the nose, or do without: choose one. Similar lessons apply to the current health care debate:
In emergencies rationing becomes extreme: people wait in long lines, pay "extortionate" prices, or, even worse, do without. We are thrown into economically unfamiliar territory and transaction costs balloon. Goods will always be rationed in one or more of the above four ways, and in an emergency the rationing can be quite severe. Our charitable spirit can temporarily overcome self-interest, but it can't overcome the knowledge problem or the scarcity of goods.
Even in emergencies, when charity is most likely to spring forth, we need incentives. For example, doctors in emergencies are an interesting exception to the general rule in contract law against officious intermeddlers: if you are a patient who is in no position to consent or decline treatment, a doctor can go ahead and treat you and bill you. An implied-in-law or "quasi" contract has been formed. The same is not true in almost any other case: if that annoying windshield-washing guy starts cleaning your window without your consent, or if the neighbor kid comes along one day and mows your lawn without permission, you don't legally owe them a thing.

Sunday, October 25, 2009

Our great...grandmother was a proton-powered rock

I've just read a very compelling theory of the origin of life. This pegs my "that explains so many things!" meter in a very big way. Alkaline vents -- the tame cousins of black smokers -- were common underneath earth's early oceans, but were chemically different from today's. They were a chemical engineer's utopia: high temperatures and pressures, iron-sulfur mineral catalysts, a substantial proton gradient (alkaline vent water to soda-water-like ocean), and vast amounts of surface area formed by microbubbles. A new theory posits that Peter Mitchell's revolutionary discovery, proton-gradient manufacture of ATP, "the energy currency of life", was the original energy source of life, and that early evolution from primordial nucleic acids to the common ancestors of archaea, bacteria, and ourselves occurred in these sea-vent microbubbles. A proton gradient across a membrane simply means that one side is more acidic (it contains more naked protons) and the other side is more alkaline (it contains more water molecules missing a proton, called hydroxide ions).

Because of Mitchell's discovery we now know that all known life uses membranes with proton gradients across them to convert energy into ATP molecules. Wherever the energy comes from -- from light, from carbohydrates stolen from other organisms (i.e. eating food), wherever -- in every living thing it gets converted into a proton gradient that then is tapped to manufacture ATP. In higher animals ATP is made from a proton gradient that is in turn made from "burning" blood sugar with oxygen, and this ATP powers our muscles and brains. In plants ATP is central to photosynthesis: light striking chlorophyll generates a proton gradient, and that proton gradient is used to manufacture ATP, which in turn is used to make sugars and other plant carbohydrates. In all life ATP powers the energy-using chemical reactions needed to make proteins, DNA, and RNA, the complex chemicals of life. (For biochemists reading this, relax, this is a summary: I've necessarily left out a very large number of complex steps, many still not fully understood).

The new theory of the origin of life recognizes that proton gradients existed on a massive scale in alkaline vents. The primordial, carbon-dioxide-rich oceans were acidic like Coca-Cola: they contained too many protons. These soda-water oceans were out of balance with the alkaline vent water, which contained water molecules with protons missing (hydroxide ions). Protons streamed across this gradient, with the protons from the soda-water ocean filling up the proton-deficient hydroxide ions to create normal water molecules. This stream of protons was a massive energy source that could be tapped to drive vast numbers of energy-consuming chemical reactions. Large amounts and varieties of chemicals were made on the vast surface areas of the microbubbles, eventually leading to the immensely complex chemicals and reaction pathways (metabolisms) that became life.
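
To get a feel for the energies involved, here is a back-of-the-envelope sketch in Python. The pH values and temperature are illustrative assumptions of mine, not figures from the theory; the point is only that a few units of pH difference yield chemically useful energy per proton, on the order of what ATP synthesis requires.

```python
# Back-of-the-envelope: free energy released per mole of protons
# crossing a pH gradient. All numbers are illustrative assumptions.
import math

R = 8.314        # gas constant, J/(mol*K)
T = 350.0        # assumed warm vent temperature, K
ocean_pH = 5.5   # assumed acidic, CO2-rich early ocean
vent_pH = 9.5    # assumed alkaline vent fluid

delta_pH = vent_pH - ocean_pH
# Chemical term of the proton-motive force (any electrical
# potential across the interface is ignored in this sketch):
dG = math.log(10) * R * T * delta_pH  # J per mole of protons
print(f"~{dG / 1000:.0f} kJ per mole of protons")  # ~27 kJ/mol

# ATP synthesis costs roughly 50 kJ/mol under cellular conditions,
# so more than one proton must cross per ATP made.
print(f"protons per ATP: ~{50e3 / dG:.1f}")
```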

When, much later, plants evolved, they pulled almost all of the carbon dioxide out of the air and oceans, converting it into carbohydrates and oxygen. Then animals evolved that could breathe the oxygen, "burning" it with carbohydrates from eating the plants. Yet these very different energy sources get converted by plants and animals alike into the same thing -- proton gradients across membranes, which are used to make ATP, the energy currency of life.

Ironically, by burning fossil fuels we humans are putting a small fraction of this ancient carbon dioxide, which plants removed from the air and oceans, back into the air, where it not only may be causing a bit of global warming but is also dissolving back into the oceans and turning them a bit more acidic -- a tiny step back toward the primordial conditions that made the origin of life possible, when carbon dioxide concentrations were vastly higher than our puny modern levels.

The theory's ten-step recipe for life:
1. Water percolated down into newly formed rock under the seafloor, where it reacted with minerals such as olivine, producing a warm alkaline fluid rich in hydrogen, sulphides and other chemicals - a process called serpentinisation.

This hot fluid welled up at alkaline hydrothermal vents like those at the Lost City, a vent system discovered near the Mid-Atlantic Ridge in 2000.

2. Unlike today's seas, the early ocean was acidic and rich in dissolved iron. When upwelling hydrothermal fluids reacted with this primordial seawater, they produced carbonate rocks riddled with tiny pores and a "foam" of iron-sulphur bubbles.

3. Inside the iron-sulphur bubbles, hydrogen reacted with carbon dioxide, forming simple organic molecules such as methane, formate and acetate. Some of these reactions were catalysed by the iron-sulphur minerals. Similar iron-sulphur catalysts are still found at the heart of many proteins today.

4. The electrochemical gradient between the alkaline vent fluid and the acidic seawater leads to the spontaneous formation of acetyl phosphate and pyrophosphate, which act just like adenosine triphosphate or ATP, the chemical that powers living cells.

These molecules drove the formation of amino acids – the building blocks of proteins – and nucleotides, the building blocks for RNA and DNA.

5. Thermal currents and diffusion within the vent pores concentrated larger molecules like nucleotides, driving the formation of RNA and DNA – and providing an ideal setting for their evolution into the world of DNA and proteins. Evolution got under way, with sets of molecules capable of producing more of themselves starting to dominate.

6. Fatty molecules coated the iron-sulphur froth and spontaneously formed cell-like bubbles. Some of these bubbles would have enclosed self-replicating sets of molecules – the first organic cells. The earliest protocells may have been elusive entities, though, often dissolving and reforming as they circulated within the vents.

7. The evolution of an enzyme called pyrophosphatase, which catalyses the production of pyrophosphate, allowed the protocells to extract more energy from the gradient between the alkaline vent fluid and the acidic ocean. This ancient enzyme is still found in many bacteria and archaea, the first two branches on the tree of life.

8. Some protocells started using ATP as well as acetyl phosphate and pyrophosphate. The production of ATP using energy from the electrochemical gradient is perfected with the evolution of the enzyme ATP synthase, found within all life today.

9. Protocells further from the main vent axis, where the natural electrochemical gradient is weaker, started to generate their own gradient by pumping protons across their membranes, using the energy released when carbon dioxide reacts with hydrogen.

This reaction yields only a small amount of energy, not enough to make ATP. By repeating the reaction and storing the energy in the form of an electrochemical gradient, however, protocells "saved up" enough energy for ATP production.

10. Once protocells could generate their own electrochemical gradient, they were no longer tied to the vents. Cells left the vents on two separate occasions, with one exodus giving rise to bacteria and the other to archaea.
More here.

Dendritic carbonate growths on the Lost City alkaline vent

Given the vast complexity of the genes and metabolism that would likely have existed in the common rock-bubble ancestor of archaea and bacteria, I suspect it will be a long time before all but the simplest of these steps are recreated in a lab. Still, this is by far the most compelling theory of the origin of life I've ever seen.

Peter Mitchell, discoverer of the proton-gradient manufacture of ATP, was a fascinating character: instead of entering the "publish or perish" and "clique review" rat-race of government-funded science, he dropped out of mainstream scientific culture and set up his own charitable company (nonprofit in U.S. lingo), Glyn Research Ltd. His discoveries were compelling enough to win over the early "he's a wingnut" skeptics and are now the centerpiece of our understanding of biological energetics. My essay "The Trouble With Science" suggests why this kind of independence is good for science. Here's more about Mitchell's theory of proton-powered life, called chemiosmosis. The ten-step process above is the theory of William Martin and Michael Russell, and is an extension of Günter Wächtershäuser's iron-sulfur world theory.

Tuesday, October 20, 2009

Non-market but voluntary economic institutions

Often in political parlance the phrase "the market" is used quite broadly to cover a wide variety of voluntary economic institutions, including firms, non-profit organizations, families, and so on, in addition to markets proper. But traditional neoclassical economics is about ideal markets proper: instantaneous buying and selling on a costless spot exchange. Ronald Coase started expanding the scope of economics with his work on the firm, and this line of thinking has developed into a school, often called the "new institutional economics" or NIE, that focuses on non-market or partial-market voluntary economic institutions, as well as on the conditions that must be satisfied for efficient markets to be possible. The economics Nobel committee has finally recognized the study of non-market but voluntary economic institutions with its awards this year to Oliver Williamson and Elinor Ostrom.

Williamson and his fellow travelers Oliver Hart, Yoram Barzel, Steven Cheung, and Janet Landa have long influenced my thinking about measuring value, mental transaction costs, smart contracts, the origins of money, and more.

The new institutional economics school in a nutshell holds that transaction costs are often too high for spot markets to work properly. If spot markets were perfectly efficient we would not need firms or long-term contracts, for example, but in fact we have those and many other institutions besides pure markets. The NIE studies, and has started to explain, the functions of institutions that are not markets proper, such as long-term contracts and firms, as well as the legal underpinnings of market economies, especially property and contracts. Contracts and property are the main formal expressions of economic relations recognized by the NIE, which makes this school especially interesting to someone like me, interested in the economic role of contracts and property and in how to adapt these institutions to (and even, to some extent, incorporate them into) evolving technology.

Note that these institutions are "voluntary" in the sense of the traditional common-law principle of non-initiation of force, and assume a sophisticated legal framework. When this assumption doesn't hold, these principles usually work in a very different way or don't work at all, and one has to be very careful applying them. (See here and here for more on the problem of coercive externalities).

Meanwhile, here is a good article introducing the other economics Nobel winner this year, Elinor Ostrom.

Saturday, October 17, 2009

How to save yourself from chasing futuristic red herrings

For many people, the often outlandish proposals and predictions of futurists are just obviously impractical and are to be laughed off. This attitude, irrational as it may seem to futurists of the stripe who take outlandish ideas very seriously, is itself not to be sneered at -- automatic unbelievers in the alien save themselves from chasing many red herrings. Those who laugh at futurism because they are unimaginative dolts I will not try to defend, but those who laugh at futurism when futurists take themselves too seriously are usually spot-on. For those of a more serious nature and intellect who want to actually figure out the flaws in futuristic ideas, here are some heuristics:

(1) Find the easier thing. If there is an easier way to get much of the value from a proposal, ask yourself, why hasn't somebody pursued this easier way? For example, seasteading proposes the creation of novel structures for people to settle permanently in the ocean. Ask yourself, why don't there already exist communities that live permanently on cruise ships? Why haven't oil companies moved the families of their offshore platform workers out to live where the work is?

(2) Look to see if the futurists have proposed experiments that can be done much sooner and more cheaply that would verify or falsify the proposal or prediction. Many of the "most important", in terms of perceived future impact, hyper-futuristic ideas are conveniently unfalsifiable: artificial intelligence, uploading of consciousness, and so on. There are a near-infinite number of unfalsifiable theories that our imaginations could dream up, making the odds that any given such theory is true about zero. The ability to conduct such dispositive experiments, the ability to prove a hypothesized event false if certain conditions occur, paradoxically makes that event far more likely. A related heuristic is to be very leery of ideas that, as is said of fusion power, are "always thirty years in the future". If the futurist can't explain why the futurist of 30 years ago who predicted something similar was wrong, that futurist should indeed be laughed at, early and often. Far too many futurists are so futuristic that they know little about the past which they purport to be projecting. Some don't even know when predictions similar to theirs were already made decades ago, and were already supposed to have come true. At the same time, be wary of futurists who are not willing to make short-term predictions, lest we obtain a track record of the vast uncertainty involved in their brand of futurism.

(3) Except for rare phenomena of high predictability, such as the orbits of planets, past performance does not guarantee future results. Futurists often chart exponential curves of growth in some measure of technology: the speed of transport, the number of transistors that can fit on a chip, and so on. The first half of a logistic curve looks much like an exponential curve. You can fit an exponential curve to the data points, but it's really a logistic curve, which in the long run, and possibly even in the short run, will lead to a radically different kind of future. Because of physical limits and human psychology, reality far more closely resembles logistic curves than exponential ones. For example, world population growth seemed to follow an exponential curve until about the 1960s, when it flipped into a quite different mathematical regime. This transition to sub-exponential growth started much sooner in the developed world, which should have been, but was not, a clue for the population alarmists. As for physical limits, a good example is transport speed: it seemed to be growing exponentially until, in the latter half of the twentieth century, it hit the sound barrier in earth's atmosphere and the implacable nature of earth's gravity well beyond it. More on the dubious nature of exponential projections here. (A short numerical sketch at the end of this list shows how an exponential fit to early logistic data goes wrong.)

(4) Beware of the prophets of false certainty. These are people who focus on one out of many possible outcomes, or take very seriously unfalsifiable predictions, or follow exponential projections, or have neglected to find the easier thing, and pretend, because nobody has proven them wrong, that their version of the future has a high probability. We have, for example, the Bayesiologists, who, while to their credit are at least aware of first-order uncertainty (known unknowns), neglect the higher-order uncertainties (unknown unknowns) inherent in most futurism and demand that we make some intuitive guess as to the numerical probability of their predicted event. (When asked for an intuitive numerical guess about some hyper-futuristic prediction, "50%, plus or minus 50%, distribution function unknown" is usually the best answer).

(5) Look at interests. You may not understand the science involved, but individual and institutional interests are human universals. Take astrobiology, for example. Here we have a science without a subject. Now the astrobiologists to a man argue that extraterrestrial life must be common, indeed that it may well be right around the corner, underneath the ice of Enceladus or Europa or on one of those exciting new exoplanets. There appears to be, as many activists like to say about global warming, a "consensus" among the astrobiologists about the ubiquity of life in the universe. But only primitive life, of course -- otherwise the uncomfortable fact that we have never observed the signs of any artificial surfaces, despite observing billions of stars in our own galaxy and billions of other galaxies, would rear its inconvenient head. Thus the Rare Earth Hypothesis, in which for clever reasons life is supposed to almost always stop evolving beyond some primitive stage, in sharp contrast to the ongoing evolution of life to higher complexity in the only history of life we have actually observed. Does the astrobiologists' consensus reflect their expertise and your ignorance in astronomical and biological matters, or does it reflect something else? Consider this -- if you were skeptical about this astrobiological thesis, why would you become an astrobiologist in the first place, risking your career on a science that has no subject? If the politicians and academic boards who fund them ever became convinced that extraterrestrial life probably does not exist anywhere we will be able to observe it before they retire, astrobiologists would have to find new jobs. This is a career for true believers. Beyond this rather dramatic selection effect, we have individual and institutional self-interest to keep the argument going -- to fund their careers, astrobiologists must persuade us that life in the universe is common, common enough that we should fund multibillion-dollar telescopes and spacecraft and, of course, grant copious amounts of research funding to them in order to look for it within or astronomically very near our solar system, which is as far as we can observe the signs of primitive life. Even if you know nothing whatsoever about either astronomy or biology, but do understand a thing or two about humans, you are wise to be highly skeptical of the claims of astrobiologists.

(6) Be especially skeptical of political futurism. From NASA's Shuttle and Space Station, which were supposed to revolutionize space industry, to the politicization of doom-and-gloom scenarios such as overpopulation and the supposed dire consequences of global warming, politics mixed with futurism has a very poor track record. By contrast, private entities like the Singularity Institute, Foresight Institute, and so forth, while even more outlandish and preposterously self-serious, can provide creative starting points for brainstorming towards more practical ideas and are relatively harmless.

(That leads me to my last heuristic -- (7) avoid futurists who can't laugh at themselves).
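
Here is the numerical sketch promised in heuristic (3), a minimal Python illustration with made-up parameters: a logistic curve tracks an exponential of the same rate almost exactly through its first half, then falls ever farther behind, so an exponential fit to early data wildly overshoots.

```python
# A logistic curve is nearly indistinguishable from an exponential
# at first, then saturates. Parameters are made up for illustration.
import math

K = 1000.0  # carrying capacity (the ceiling the exponential ignores)
r = 0.05    # growth rate per time step
x0 = 1.0    # initial value

def logistic(t):
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

def exponential(t):
    return x0 * math.exp(r * t)

for t in range(0, 301, 50):
    print(f"t={t:3d}  logistic={logistic(t):8.1f}  "
          f"exponential={exponential(t):12.1f}")
# Early points agree within a few percent; by t=300 the exponential
# projection is off by more than three orders of magnitude.
```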

Futurism at its best is a creative and entertaining game of ideas. Playing with outlandish ideas can be very productive -- for example, the Easier Thing on occasion may turn out to actually be a good idea you can implement now, and you arrive at the easier thing by starting with an outlandish idea. I occasionally explore outlandish futuristic ideas here at Unenumerated, which prides itself on an unending variety of topics. There is nothing to sneer at about futurism as fun unless you have an unimaginative rock for a brain. However, those who take these ideas too seriously, or have created a false sense of certainty about them, do deserve a few guffaws.

Monday, October 12, 2009

The tax collector's problem

Here's an edited excerpt from my old essay "Measuring Value":

Tax collection is the most efficient department of government. Its efficiency rivals that of many private sector institutions.

From the point of view of many taxpayers this is an incredible claim, given that tax collectors take money we ourselves know how to spend quite well, thank you, and often spend it on amazingly wasteful activities. And the rules by which they take it often seem quite arbitrary. Tax rules are usually complex but nevertheless fail to let us account for many events important to the earning of our incomes that differentiate us from other taxpayers.

How the money gets spent is quite outside the scope of my claim that tax collectors are uncommonly efficient. It is the collection process itself that is the subject of that claim, and the tax collection rules. This essay will demonstrate the efficiency of tax collectors' rules by two arguments:

(1) First, we will show why tax collectors have an incentive to be efficient (and what "efficiency" means in this context).

(2) Second, we will explore the problem of creating tax rules, and see how the difficulty of measuring value rears its ugly head. Tax rules solve the value measurement problem through brilliant, often very non-obvious solutions similar to solutions developed in the private and legal sectors. Often (as, for example, with accounting) tax collectors share solutions used to measure value in private relationships (such as the absentee investor-management relationship in joint stock corporations). It is in making these very difficult and unintuitive trade-offs, and then executing them in a series of queries, audits, and collection actions, that tax collectors efficiently optimize their revenue, even if the results seem quite wasteful to the taxpayer.

The tax collector's incentives are aligned with those of the other branches of government in a task that benefits all associated with the government, namely the collection of its revenue. No organization of any type collects more revenue with fewer expenditures than tax collection agencies. Of course, they have the advantage of coercion, but they must overcome measurement problems that are often the same as those facing other users of accounting systems, such as owners of large companies. It is not surprising, then, that tax collectors have sometimes pioneered value measurement techniques, and often have been the first to bring them into large-scale use.

Like other kinds of auditors, the tax collector's measurement problem is tougher than it looks. Investment manager Terry Coxon has described it well[1]. Bad measures or inaccurate measurements allow some industries to understate their income, while forcing others to pay taxes on income they haven't really earned. Coxon describes the result: the industries that are hurt tend to shrink. The industries that benefit pay fewer taxes than could be extracted. In both cases, less revenue is generated for the tax man than he might be able to get with better rules.

This is an application of the Laffer curve to the fortunes of specific industries. On this curve, developed by the brilliant economist Arthur Laffer, as the tax rate increases, revenue increases, but more slowly than the tax rate, due to increased avoidance, evasion, and most of all disincentive to engage in the taxed activity. At a certain rate, because of these effects, tax revenue is maximized. Hiking the tax rate beyond the Laffer optimum results in lower rather than higher revenues for the government. Ironically, the Laffer curve was used by advocates for lower taxes, even though it is a theory of tax collection optimal for government revenue, not a theory of tax collection optimal for social welfare or individual preference satisfaction.
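
The shape is easy to see in a toy model. In the Python sketch below, the response of the taxed activity to the rate is an assumption invented for illustration, not an empirical estimate; any sufficiently downward-sloping response yields the same qualitative hump, with revenue rising, peaking, and then falling as avoidance and disincentives dominate.

```python
# Toy Laffer curve: revenue = rate * taxed activity, where activity
# shrinks as the rate rises (avoidance, evasion, disincentives).
# The response function is an illustrative assumption, not data.

def taxed_activity(rate, elasticity=2.0):
    """Fraction of the potential tax base still undertaken/reported."""
    return max(0.0, 1.0 - rate) ** elasticity

def revenue(rate):
    return rate * taxed_activity(rate)

rates = [i / 20 for i in range(21)]
for r in rates:
    print(f"rate={r:4.2f}  revenue={revenue(r):5.3f}  "
          + "#" * int(80 * revenue(r)))
best = max(rates, key=revenue)
print(f"revenue-maximizing rate in this toy model: {best:.2f}")
```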

On a larger scale, the Laffer curve may be the most important economic law of political history. Adams[2] uses it to explain the rise and fall of empires. The most successful governments have been implicitly guided by their own incentives – both their short-term desire for revenue and their long-term success against other governments -- to optimize their revenues according to the Laffer Curve. Governments that overburdened their taxpayers, such as the Soviet Union and later Roman Empire, ended up on the dust-heap of history, while governments that collected below the optimum were often conquered by their better-funded neighbors. Democratic governments may maintain high tax revenues over historical time by more peaceful means than conquering underfunded states. They are the first states in history with tax revenues so high relative to external threats that they have the luxury of spending most of the money in non-military areas. Their tax regimes have operated closer to the Laffer optimum than those of most previous kinds of governments. (Alternatively, this luxury may be made possible by the efficiency of nuclear weapons in deterring attack rather than the increased incentives of democracies to optimize to tax collection).

When we apply the Laffer curve to examining the relative impact of tax rules on various industries, we conclude that the desire to optimize tax revenues causes tax collectors to want to accurately measure the income or wealth being taxed. Measuring value is crucial to determining the taxpayer's incentives to avoid or evade the tax or opt out of the taxed activity. For their part, taxpayers can and do spoof these measurements in various ways. Most tax shelter schemes, for example, are based on the taxpayer minimizing reported value while optimizing actual, private value. Tax collection involves a measurement game with unaligned incentives, similar to but even more severe than measurement games between owner and employee, investor and management, store and shopper, and plaintiff-defendant (or judge-guilty party).

As with accounting rules, legal damage rules, or contractual terms, the choice of tax rules involves trading off complexity (or, more generally, the costs of measurement) for more accurate measures of value. And worst of all, as with the other rule-making problems, rule choices ultimately ground out on subjective measures of value. Thus a vast number of cases are left where the tax code is unfair or can be avoided. Since tax collectors are not mind readers, tax rules and judgments must substitute, for actual subjective values, judgments of what the "reasonable" or "average" person's preferences would be in the situation. Coxon provides the following example. Imagine that we wanted to optimize the personal income tax rules to measure income as accurately as possible. We might start reasoning along these lines:

... look a little closer and you find that an individual incurs costs and expenses in earning a salary. He has to pay for transportation to and from work. He may spend money on clothes he wouldn't otherwise buy and on lunches that would cost less at home. And he may have spent thousands of dollars acquiring the skills and knowledge he uses in this work.

Ideal, precise rules for measuring his income would, somehow, take all these and other costs into account. The rules would deduct the cost of commuting (unless he enjoys traveling about town early in the morning and later in the afternoon). They would deduct the cost of the clothes he wouldn't otherwise buy (to the extent it exceeds the cost of the clothes he would buy anyway). They would deduct the difference between the cost of eating lunch at work and the cost of lunch at home (unless he would eat lunch out anyway). And each year these ideal rules would deduct a portion of the cost of his education (unless he didn't learn anything useful in school or had enough fun to offset the cost).

[Because there are limits to complexity, and] because tax agents can't read minds, the government gives them arbitrary rules to follow: no deductions are allowed for commuting expenses, for clothing that is suitable for wearing outside of work, for lunches that aren't part of the “business entertainment” or for the cost of acquiring the skills a job requires (although you can deduct the cost of improving your skills).

The resulting rules often seem arbitrary, but they are not. They are trade-offs, often non-obvious but brilliant, between the costs of measuring more value with greater accuracy and the extra revenue extracted thereby. However, the value measurement problem is hardly unique to tax collection. It is endemic when assessing damages in contract and tort law, and when devising fines and punishments in administrative and criminal law. Many private-sector rules found in contracts, accounting, and other institutions also have the quality that they use highly non-obvious measures of value that turn out, upon close examination, to be brilliant solutions to seemingly intractable problems of mind-reading and the unacceptable complexity of covering all cases or contingencies. Such measurement problems occur in every kind of economic system or relationship. The best solutions civilization has developed to solve them are in most institutions brilliant but highly imperfect. There is vast room for improvement, but failed large-scale experiments in attempts to improve these measures can be devastating.

The Laffer curve and measurement costs can also be used to analyze the relative benefits of various tax collection schemes to government. Prior to the industrial revolution, for example, the income tax was infeasible. Most taxes were on the prices of commodities sold, or on various ad-hoc measures of wealth such as the frontage of one's house. (This measurement game resulted in the very tall and deep but narrow houses that can still be found in some European cities such as Amsterdam. The stairs are so narrow that even normal furniture has to be hauled up to the upper story and then through a window with a small crane, itself a common feature on these houses).


Taxes distorted the economy of the Netherlands -- quite literally. Here are some houses in Amsterdam built in the 17th and 18th centuries, and a typical narrow staircase. Furniture and other large objects must be hauled up by the small cranes seen above the top-story windows.

Prior to the industrial revolution, incomes were often a very private matter. However, starting in England in the early nineteenth century, large firms grew to an increasing proportion of the economy. Broadly speaking, large firms and joint-stock companies were made possible by two phases of accounting advances. The first phase, double-entry bookkeeping, was developed for the trading banks and "super companies" of early fourteenth-century Italy. The second phase comprised the accounting and reporting techniques developed for the larger joint-stock companies of the Netherlands and England, starting with the East India companies in the seventeenth century.

Accounting allowed manager-owners to keep track of employees and (in the second phase) non-management owners to keep track of managers. These accounting techniques, along with the rise of literacy and numeracy among workers, provided a new way for tax collectors to measure value. Once these larger companies came to handle a sufficient fraction of a jurisdiction's value of transactions, it was rational for governments to take advantage of their measurement techniques, and they did so -- the result being the most lucrative tax scheme ever, the income tax.
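
A minimal sketch of the double-entry invariant, in Python with made-up accounts and amounts: every transaction posts debits equal to credits, so the books carry a built-in consistency check. It is this checkability that let absentee owners audit managers, and later let the tax man audit the firm.

```python
# Minimal double-entry sketch: every transaction must balance,
# so the ledger is internally checkable. Entries are made up.
from collections import defaultdict

class Ledger:
    def __init__(self):
        self.balances = defaultdict(float)

    def post(self, entries):
        """entries: list of (account, amount) pairs; debits positive,
        credits negative. An unbalanced transaction is rejected."""
        if abs(sum(amount for _, amount in entries)) > 1e-9:
            raise ValueError("unbalanced transaction")
        for account, amount in entries:
            self.balances[account] += amount

    def trial_balance(self):
        return sum(self.balances.values())  # must always be zero

ledger = Ledger()
# Sell for 100 in cash goods that cost 60 to acquire:
ledger.post([("cash", 100.0), ("revenue", -100.0)])
ledger.post([("cost_of_goods", 60.0), ("inventory", -60.0)])
assert ledger.trial_balance() == 0.0
print(dict(ledger.balances))
```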

References


[1] Coxon, Terry, Keep What You Earn, Times Business/Random House, 1996.

[2] Adams, Charles, For Good and Evil: The Impact of Taxes on Civilization.

Friday, October 09, 2009

When does citizen's arrest become battery?

Here's a rough citizen's arrest caught on film. Plenty of action during the "more than 7-8 minutes" between when the photographer started shooting and the on-duty police arrived, despite a police station reportedly two blocks away. Some interesting comments from locals (apparently most of them police officers) here. Although freelance photographer Mike Anzaldi was there and I wasn't, I'm not completely without doubt about his claim that all the blows administered by the Asian fellow (including a kick to the stomach not pictured here) were justified to "calm him down". You can see some of the ambiguity between controlling a resisting arrestee and battery. (For more of the story, hear Anzaldi's commentary and see more pictures at the first link above). My kudos to Anzaldi for his great documentation of this event. Not so much to the (presumably, since anonymous) cop bloggers with bad attitudes at the second link above, although they do raise one interesting issue: the victim here did not end up pressing charges -- as a result, the alleged purse-snatcher was only charged with a misdemeanor. Victims often lack an incentive to press charges, and police often try to motivate victims to press charges by withholding stolen property they have recovered. If victims won't act out of a public spirit, or at least out of revenge, to help punish criminals, how can criminals be incapacitated and deterred? By contrast, the volunteer citizen arrestors here seem to be acting in a public or at least gallant spirit. I'm not going to take it on faith, as the anonymous bloggers apparently do, that the police have great incentives here either; rather, this does seem to raise a public goods issue, to which police forces are a very imperfect, and in this case quite belated, solution. (AFAIK, BTW, no charges have been filed against any of the arrestors in this case, and I doubt any will be -- they seem to be near but not over the line in using reasonable force to control a resisting arrestee, and even if they were a bit over the line, police sympathize with their situation and would be loath to arrest, and even if arrested and prosecuted, a jury would probably let them off.)

Incidentally, if it had been police officers making the arrest here, those bites would probably constitute battery on a police officer. But there seems to be no analogous protection for citizens making an arrest -- we should consider adding such protection where the arrest is legitimate. (Often, BTW, the burden of proof on citizen arrest is much higher than for police -- in many states the arrestor must have actually seen the crime being committed, a greater burden than the typical police officer's burden of probable cause. I am skeptical of this discrepancy, too).

I previously posted on another case with a video showing a store owner shooting a robber, the first time in proper self-defense (or defense of others) under a castle law, but the second time apparently over the line between such proper force and murder.

Thursday, September 24, 2009

Staving off the Cosmic Malthus

Robin Hanson has a good argument about the inevitability of Malthusian economics in our future -- here refined by Anders Sandberg. This Malthusian future is distant in human terms but an eyeblink in cosmic terms. Sandberg observes that our cosmic environment, while very large, is finite and dispersed. Emigration beyond our solar system can expand these resources only polynomially, which can't keep up with exponential economic or population growth. Therefore our current boom era of exponential economic growth, in which manufacturing productivity has grown at about the same positive percentage rate per year since late medieval Western Europe, is historically exceptional and must eventually come to an end. Furthermore, Hanson argues that the specter of Malthus, banished in the late 20th century by declining fertility, will return: Darwinian genetic adaptation to modern fertility-reducing conditions and technologies will eventually push population growth back to exponentially positive rates, until once again resource limits are reached and most humans (or posthumans) live at subsistence levels -- very extreme poverty by the standards of modern developed countries.

The Darwinian argument may be overcome if culture keeps evolving faster than genes and thereby can keep overcoming future genetic adaptations. (Richard Dawkins argues that we can overcome our selfish genes.) It may be rebutted that units of culture (what Dawkins calls "memes") are themselves Darwinian competitors and thus also face Malthusian limits, or that future computerized minds may reproduce very quickly and evolve as fast as culture. I won't elaborate on these arguments further here, as Robin has got me into a hyper-futuristic mood and I'd like to suggest another way in which we might achieve more "room at the bottom".

Hanson counts atoms in order to estimate the density of information (or of minds) that might be created. But, just as Freeman Dyson, Gerard O'Neill, and others showed that planets are a waste of mass, so that technologically mature civilizations won't have planets, I'll argue here that atoms are a waste of mass-energy, and technologically mature civilizations may not have very many of them. Instead information may be stored in photons and collections of electrons and positrons (for example geonium) may handle most information processing and form the substrate of minds.

Given that a photon can come in a vast number of possibly distinguishable frequencies -- the spectrum spans more than 20 orders of magnitude -- we may be able to store at least 10^30 bits per photon. One approach to creating photons is simply to capture the energy of solar nuclear fusion as photons, as we already know how to do; this should give us about 10^95 bits' worth of photons with the average energy of blue light. But we'd have to either wait billions of years for all these fusion reactions to occur naturally in the sun or accelerate them somehow. More completely, the neutrons and protons in the sun, if converted into such photons, would give us 10^97 bits, and we may not have to wait billions of years if we can figure out how to bring about this hypothetical conversion. This is a fascinating but very speculative bit of physics which I will explore further below.
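To make the shape of this estimate explicit, here is a back-of-envelope sketch in Python. The 3 eV photon energy and the bits-per-photon figure are stand-ins of mine, not the post's exact inputs; the resulting powers of ten depend entirely on those assumptions, so treat the outputs as orders of magnitude only:

```python
import math

# Illustrative constants -- my stand-ins, not the post's exact inputs.
M_SUN = 2.0e30              # solar mass, kg
C = 3.0e8                   # speed of light, m/s
E_BLUE = 3.0 * 1.602e-19    # a ~3 eV "blue" photon, in joules
FUSION_FRACTION = 0.007     # ~0.7% of rest mass released by H -> He fusion

def oom(x):
    """Order of magnitude: the power of ten of x."""
    return math.floor(math.log10(x))

fusion_photons = FUSION_FRACTION * M_SUN * C**2 / E_BLUE
total_photons = M_SUN * C**2 / E_BLUE
print(f"photons from fusion energy alone: ~10^{oom(fusion_photons)}")
print(f"photons from total conversion:    ~10^{oom(total_photons)}")

# Total storage is photon count times per-photon capacity, left here
# as a free parameter: it depends on how finely photon frequencies
# can be distinguished.
BITS_PER_PHOTON = 1e30      # the post's speculative figure
print(f"bits from total conversion: ~10^{oom(total_photons * BITS_PER_PHOTON)}")
```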

Of course, we will still need some electrons or positrons around to actually process that information and recycle photons. And we still need some neutrons and protons to fuse for energy, to make up for waste heat to the extent that geonium computations are less than perfectly reversible. Unless we are very clever and figure out how to make, out of electrons and positrons, solid structures that don't blow up, we will need some magnetic tanks made of traditional heavy atoms to hold the geonium. Worse, the strong tendency for baryon number to be conserved makes cracking protons difficult and perhaps impossible. Protons are made of three quarks, and while cracking quarks is quite possible (particles with two quarks but zero net baryon number decay spontaneously into particles with no quarks), the conservation of baryon number at the energy levels of current particle accelerators suggests that cracking the proton, if we can figure out how to do it at all, may require vast amounts of energy -- so that only a tiny fraction of the sun's neutrons and protons might be converted before we run out of energy from fusing the remaining nuclei. Right now we know how to crack the neutron into a proton and an electron (plus an antineutrino), but we don't know how to crack the proton. To be feasible, we will have to discover a way to "catalyze" proton decay, by analogy to how catalysts lower the activation energies of chemical reactions.

If feasible, converting wasteful atoms into more useful photons would give us many orders of magnitude more room at the bottom. Staving off Malthus then becomes a question of how much information can be stored in a photon, and of how quickly electrons or positrons can process those photons.

We still face Heisenberg uncertainty as a limit on how quickly these photonic memories can be recalled. The uncertainties in a photon's measured time of arrival and its measured energy (and thus in the number of distinguishable frequencies) have a fixed minimum product: if we measure the time with greater precision, we can distinguish fewer frequencies, and vice versa. This sets a finite limit on the rate at which we can process the information stored in the photons. Seth Lloyd has calculated that 1 kilogram of mass converted into energy can perform at most about 10^50 operations per second. So going photonic could only stave off Malthus temporarily -- Malthus will still eventually catch up, assuming Darwinian competition in reproduction remains.
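Lloyd's figure follows from the Margolus-Levitin bound, which limits the rate of elementary operations by the available energy; a quick check for one kilogram fully converted to energy:

\[
\frac{2E}{\pi\hbar} = \frac{2\,(1\ \mathrm{kg})\,c^{2}}{\pi\hbar}
\approx \frac{2 \times 9\times 10^{16}\ \mathrm{J}}{3.3\times 10^{-34}\ \mathrm{J\,s}}
\approx 5\times 10^{50}\ \text{operations per second.}
\]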

In addition to classical bits stored as photon frequencies, an exponentially higher number of quantum bits (qubits) might be stored in the entangled states of these photons. However, to use some number of these qubits requires destroying an exponentially larger number of them. Thus, against exponential population growth, memory storage itself remains cheap, but recalling memories or thinking about things becomes exponentially expensive. Qubit minds might stave off Malthus by hibernating for exponentially longer periods of time, waking up only to observe an exponentially decreasing number of interesting events.

My argument that we may figure out how to crack three-quark particles like neutrons and protons into photons relies on the probability that baryon number (a property of quarks) is not necessarily conserved, given the imbalance of protons and anti-protons (and of neutrons and anti-neutrons) in the observable universe -- and it is falsifiable in that sense. If, for example, we discover with better telescopes that the amount of antimatter in the universe equals the amount of matter, that will at least strongly suggest that baryon number is conserved even at Big Bang energies, rendering the possibility of ever converting the quarks that constitute most of the mass of neutrons and protons into non-quarkish things (like electrons, positrons, or photons) extremely unlikely. It is also somewhat testable in the near term: if the LHC and similar colliders continue to fail to crack the proton, that further dims the prospects. Feasibility, however, is not so testable: one could argue that even if baryon number was not conserved in the Big Bang, and even if we soon discover how to crack the proton in high-energy colliders, we may never figure out a method, analogous to catalysis in chemical reactions, to crack protons at economically low energies, or to productively recycle the energies used to perform the conversions rather than having them dispersed as waste heat.

(h/t: the phrase "Cosmic Malthus" to describe Hanson's theory is from commenter Norman at Robin's blog).

Wednesday, September 16, 2009

Nondeterminism and legal procedure

Two of the most important characteristics of legal procedure are local coercion and nondeterminism. I've written plenty about coercion recently, so here I'll put forward some thoughts about nondeterminism.

A deterministic process is one in which, for any state of the world -- a state being a theoretical description of everything that might change the future -- there is only one possible next state. The omniscient Laplace's demon could in principle predict everything about the future of a deterministic universe. In a nondeterministic process there can be more than one possible future state, and not even Laplace's demon can know for sure, and may not know at all, which one will happen. We can model simple processes as "state machines": in the present the process is in one state; in the next instant it may have transitioned to another state; and so on.
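As a minimal sketch (in Python, with states and transitions invented purely for illustration): a deterministic machine maps each state to exactly one successor, while a nondeterministic machine maps each state to a set of possible successors, any of which may occur:

```python
import random

# Deterministic: each state has exactly one successor.
deterministic = {"A": "B", "B": "C", "C": "C"}

# Nondeterministic: a state may have several possible successors,
# and no observer (not even Laplace's demon) knows which will occur.
nondeterministic = {"A": {"B", "C"}, "B": {"C"}, "C": {"A", "B", "C"}}

def run_deterministic(state, steps):
    history = [state]
    for _ in range(steps):
        state = deterministic[state]
        history.append(state)
    return history          # always the same, given the same start

def run_nondeterministic(state, steps):
    history = [state]
    for _ in range(steps):
        state = random.choice(sorted(nondeterministic[state]))
        history.append(state)
    return history          # may differ on every run

print(run_deterministic("A", 5))     # ['A', 'B', 'C', 'C', 'C', 'C']
print(run_nondeterministic("A", 5))  # one of many possible histories
```

Randomness here is only a modeling convenience: the nondeterministic machine is defined by its transition relation, not by any particular probability distribution over the branches.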

Here's a picture of a deterministic process -- one with only one possible future:

[Figure: a chain of states, each with exactly one successor.]

Here's a picture of a nondeterministic process:

[Figure: a state branching to several possible successor states.]

If, as in the picture above, there are more than two possible future states, this can also be modeled as a sequence of binary events:

[Figure: a multi-way branch decomposed into a sequence of two-way branches.]
An event can be natural or a human act. If it's a human act, the decision to act often can or should be based on good estimates of which state(s) the world is or was in. In legal procedure, for example, an arrest generally should be made only based on an estimate that the person arrested in fact committed a specific crime.

If causally related nondeterministic processes repeat themselves often enough, we can develop a probabilistic model of them. Physicists have developed probability density models for very small-scale phenomena in quantum mechanics, for example.

Practical nondeterminism stems from at least four sources: (1) some of the physical world is inherently nondeterministic, (2) even where deterministic in principle, the configuration of physical states in the world is vastly more complex than we can completely describe -- nobody and no computer comes anywhere close to resembling Laplace's demon, (3) people are far, far more complex than we can mutually comprehend -- especially if you get more than a Dunbar number of us together, and (4) the words and phrases we communicate with are often ambiguous.

Most of the nondeterminism in legal procedure stems from questions of who did what, when, and where, and of the legal consequences that should ensue under codes and judicial opinions written in ambiguous language. Law summarizes this uncertainty with a number of qualitative probabilities often called "burdens of proof". The following list is roughly -- but not strictly, as the burdens come from different legal domains and are not always comparable -- in order from lesser to greater burden of proof:


  • Colorability
  • Air of Reality
  • Reasonable suspicion
  • Prima facie case
  • Probable cause
  • Preponderance of the evidence
  • Clear and convincing evidence
  • Beyond reasonable doubt
  • (To reverse a jury verdict) No reasonable jury could have reached the verdict

These label the probabilities -- not in the sense of numbers between 0 and 1, but in the sense of kinds of evidence and degrees of convincing argument -- required for various decisions of legal procedure to be made: for a search warrant to issue, for an arrest to be made, for property to be confiscated, for a legal motion to be accepted or denied, for liability to accrue in a civil trial, for a sentence of guilty in a criminal trial, for decisions about jurisdiction, and so on.

It's useful to look at these, not merely as classical probabilities, but in the style of quantum physics, as a superposition of states. When a nondeterministic event -- or a deterministic event for which we lack important knowledge -- has happened in the past, we can treat it as a superposition of all the possible events that might have happened. When a person or persons undertakes a procedural act -- arrests a person, issues a verdict, and so on -- under law they should be doing so based on a judgment, to some degree of certainty, that one particular set of facts occurred that justifies the act. We can thus see a criminal defendant, for example, as in the state "guilty or not guilty" until a jury "collapses" this probability distribution to a verdict (a collapse which, unlike quantum mechanics, can sometimes be reversed by an appeals court if deemed erroneous). A suspect is in the state "beyond reasonable suspicion" or "not beyond reasonable suspicion" until a police officer acts, for example to pull over your car on the highway, in a way that requires reasonable suspicion. In principle, at least, this decision too should be reversible (for example, if the officer pulled over your car without reasonable suspicion and noticed an open beer bottle, that evidence could be thrown out of court based on the lack of reasonable suspicion in the original stop).
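The collapse-and-reversal analogy can be sketched as a tiny state model. The states and acts below are hypothetical stand-ins of mine, and note that, as said above, the burdens label kinds of evidence rather than numeric probabilities:

```python
# A sketch of legal "superposition": before a verdict, the defendant's
# status is the whole set of possibilities; a procedural act collapses
# it; an appeals court can sometimes reverse the collapse.
class Defendant:
    def __init__(self):
        self.status = {"guilty", "not guilty"}   # superposed until verdict

    def jury_verdict(self, verdict):
        # Collapse: the jury selects one state, under the
        # "beyond reasonable doubt" burden for a guilty verdict.
        self.status = {verdict}

    def appellate_reversal(self):
        # Unlike quantum collapse, a legal collapse can be undone, but
        # only under the much higher burden that "no reasonable jury
        # could have reached the verdict."
        self.status = {"guilty", "not guilty"}

d = Defendant()
print(d.status)            # {'guilty', 'not guilty'}
d.jury_verdict("guilty")
print(d.status)            # {'guilty'}
d.appellate_reversal()
print(d.status)            # back to the superposed state, pending retrial
```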

Legal procedure needs to control nondeterminism so that people can operate in an environment of reasonably reliable legal rights. Think, for example, how inefficient the economy would be if most property were in imminent danger of being taken from one owner and given to another by frequent decisions reversing old court cases, or how full of worry our lives would be if we could be taken off the street and put in jail at random. Thus there is, for example, a strong presumption in English-law countries that a jury's decision is final, and this is effected by setting the burden of proof for reversing the decision high -- "no reasonable jury could have reached the verdict" -- a burden in a criminal case much higher than the jury's own "beyond reasonable doubt."

Saturday, September 05, 2009

The Coase Theorem in action

Dilbert.com

A nice illustration of the major flaw I have described in the Coase Theorem. Much of what is valuable in the above link I actually wrote in the comments and will now foreground with some minor edits:
A music store is next door to a doctor's office. The music store would prefer (if the office of a rich doctor who wants quiet for his patients did not exist next door) to let its customers test its electric guitars at volume VM1 > 0. The doctor prefers it to be quieter (VD < VM1). Coase theory assumes that the only possible choices are within the range [VD, VM1], i.e. any volume of electric-guitar testing between or including these two preferences. In the absence of transaction costs, and given only this range, one can indeed conclude that the music store and the doctor will bargain to an efficient outcome. But these aren't the only choices. The music store can, at additional cost C to itself, turn up the volume knobs on its amplifiers and play the music at volume VM2 > VM1. If the doctor is willing to pay the music store P1 to change the volume from VM1 to VD, and P2 > P1 + C to turn the volume down from VM2 to VD, the music store has an incentive to play the music at volume VM2 instead of VM1, or to threaten to, in order to extract for itself a greater benefit from the situation.

In other words, the same physical effect that produced the externality gives rise to an opportunity and incentive to play a negative-sum game. Here it changes the music store's preferred volume from VM1 (its preference in the absence of a rich doctor next door) to VM2 > VM1, due to the opportunity to extort extra payments from the doctor by creating an even less bearable din, which the doctor is willing to pay even more to avoid. The music store is willing to incur an extra cost C to itself in order to extract the greater payment P2 from the doctor. For the overall game, the payment P2 is a wash and C makes it negative-sum. (In the music store example, the cost C comes from the music store chasing away some of its own customers -- albeit at a slower rate than it chases away the doctor's customers -- by testing its guitars more noisily than it would prefer in the absence of the doctor.)
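A toy payoff calculation makes the negative-sum point concrete. All the numbers below are hypothetical, chosen only to satisfy the example's inequality P2 > P1 + C; the payments are transfers between the parties, but the cost C is a dead loss:

```python
# Hypothetical payoffs for the music store example (VM2 > VM1 > VD).
P1 = 100   # doctor's payment for the store to go from VM1 down to VD
P2 = 180   # doctor's payment for the store to go from VM2 down to VD
C  = 50    # store's own cost of (threatening) VM2: it chases away
           # some of its own customers

# Coaseian bargain from the store's true preference VM1:
store_honest, doctor_honest = P1, -P1        # payment is a pure transfer
# Extortionate bargain from the inflated volume VM2:
store_extort, doctor_extort = P2 - C, -P2    # transfer plus dead loss C

print(store_honest + doctor_honest)   # 0: the transfer alone is zero-sum
print(store_extort + doctor_extort)   # -50: negative-sum by exactly C
print(store_extort > store_honest)    # True: the store profits anyway
```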

If, as in reality, there are transaction costs causing bargains sometimes not to be reached, the outcome is even worse, as noise VM2 is costlier, perhaps far costlier, to the doctor's practice than VM1: under transaction costs such outcomes are often far worse than the range of possible outcomes that Coaseians contemplate.

Of course, more generally in the absence of proper prior legal allocations of rights the doctor and music store could threaten each other in other ways: the doctor could threaten to poison the guitar frets, the music store could call in the mob on the doctor, etc.

(Furthermore, even with tort law preventing these other negative-sum games, the music store has an incentive to falsely "reveal" preference VM2 instead of VM1 to the doctor and to the judge -- a common problem that good tort law usually, but hardly with perfection, tackles.)

The example of the music store and its amplifier volume shows that the externality itself contains potential or actual coercion. The same physical effect that causes the externality often makes negative-sum games possible, and in the absence of any prior legal limits on the externality, opportunities and incentives for coercive negative-sum games are inherent in it. Analyses of such externalities with the Coase Theorem, which assumes such games don't exist, will therefore often lead to misleading or false conclusions.

The game being played here by the music store is negative-sum for the same reasons a tax is: firstly, because the music store's coercion distorts the behavior of the doctor and his patients. Assuming the doctor is helpless to stop the noise without making the payoff (e.g. we artificially assume he can't order a mob hit on the music store, or poison its customers, or emit any other such "extreme" externality to avenge or deter the music store's excess externality), he will go golfing more, and see fewer patients, if he is paying P2 to the store instead of P1. Fewer patients will be healed: a net loss of welfare. Since we assume the music store is rational, it will demand only the Laffer-maximum amount of extortion, but Laffer-maximum taxes still have plenty of distortive effects that produce inefficiencies compared to the no-taxation case. Secondly, the behavior of the music store is also distorted, because it has excess profits to spend. It will invest its extra money in opening new music stores and concert halls next to other doctors' offices, nursing homes, and the like, because that is a lucrative source of profit, and so other activities that would prefer quiet will be distorted in turn. It is often unreasonable to assume that Coaseian payees spend their extra money efficiently. Interestingly, Gary Becker assumed that the Coaseian payor's behavior was not distorted and that the Coaseian payee was spending its extra profits efficiently, and used this Coaseian reasoning to argue that governments themselves are efficient outcomes of Coaseian bargaining. Becker's argument is wrong for the same reason that [anarcho-capitalist David] Friedman's [Coaseian] argument is wrong for legal protection agencies: it doesn't account for the economic distortions caused by coercion.

To see where these negative-sum games lead, let's take the case of roving loudspeakers. Pickup trucks drive through the city, parking in front of every business in turn and demanding large payments to take their noise elsewhere. The optimal extortion for the extortors in this case is nearly 100% of all business wealth in the city (again assuming the victims are defenseless), because if extortor A doesn't extort any remaining wealth, extortor B will be happy to come in and take it. The economy becomes so distorted that practically nothing gets produced or distributed, and the city's economy collapses. This is the "roving bandit" case studied by Mancur Olson. Two stores next to each other, neither able to move, constitute "stationary bandits," as do gangs or governments with "monopolies of coercion" over fixed territories. As the roving-loudspeaker case illustrates, rational stationary bandits collect a far lower percentage of their victims' profits in taxes than roving bandits do. (But stationary bandits, at a rate much lower than 100%, end up collecting a far higher absolute amount -- recall the Laffer curve.) If on the other hand we assume the victims are not defenseless, we have negative-sum games like hawk/dove, negative tit-for-tat, etc., which again are paradigmatically very different from voluntary Coaseian bargains.
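A stylized model, with numbers entirely of my own invention, shows why the stationary bandit moderates: output falls as the extortion rate rises, so the per-period take is hump-shaped for a bandit who must come back next year, while a roving bandit, with no stake in the future, grabs everything once:

```python
# A stylized Laffer curve: the victims' output shrinks as the
# extortion rate t rises, because high rates distort and discourage
# production.
def output(t):
    return 1000.0 * (1.0 - t)      # production collapses as t -> 1

def revenue_per_period(t):
    return t * output(t)           # the bandit's take from one period

# A stationary bandit expects to be back next period, so he maximizes
# the per-period take -- which peaks well below a 100% rate.
rates = [i / 100 for i in range(101)]
best_rate = max(rates, key=revenue_per_period)
print(best_rate, revenue_per_period(best_rate))   # 0.5 250.0

# A roving bandit has no stake in next period's output, so (like the
# roving loudspeakers) he seizes ~100% of the existing stock of wealth
# once -- leaving an economy that produces nothing to tax again, and
# collecting far less over time than the patient stationary bandit.
```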

We can measure the effectiveness of an excess (or coercive) externality for extracting super-Coaseian payoffs by how great a harm the externality can produce for the least cost to the emitter. The ubiquity in our world of technology that is very effective at producing the greatest harm for the least cost -- i.e., weapons -- should be a very good clue that our world is not Coaseian. Music volume, spark emission, and so on beyond the "preferred" level that Coaseians falsely assume to be maximal are logically weapons. Their harm/cost ratio is lower than that of guns, tanks, bombers, missiles, flamethrowers, herbicides, and so on, but they have the advantage of being physically hard to distinguish from merely Coaseian externalities -- which would come in handy in a world where judges and other lawmakers actually based law on the Coase Theorem (the good news is that they mostly don't).

Saturday, August 22, 2009

Praying to Gliese 581d

COSMOS magazine of Australia is teaming up with the Australian national government, the Jet Propulsion Laboratory (JPL), and NASA in the United States to send a ream of messages gathered from the public, via the JPL-run Deep Space Network (on which I once worked), to the nearby star system Gliese 581, which includes the recently discovered extrasolar planet Gliese 581d. The messages are scheduled to be sent on Friday, August 28th. According to COSMOS's editor, Wilson da Silva,
Yes, it is a "stunt": the purpose is to engage the public during Science Week in Australia and get them thinking about the big questions: are we alone, is life common in the universe, how often does intelligent life arise, how big is space, etc etc.
It will apparently also be good for COSMOS' advertising and subscription revenue:
So far, it has been very successful: more than 1,000 newspapers and other media have published online stories all over the world, it has been featured on 9,000 blogs and more than 1.17 million pages of the site have been read in the past 10 days.
I can't say whether it will help or harm the involved government agencies' and contractors' quests for more taxpayer money.

Alas, this admitted publicity stunt gets the public thinking about "the big questions" in a way that is rather prejudiced about the answers. The very act of sending a message to a specific star suggests to our newly attracted pupils that there is some substantial probability of extraterrestrial intelligence (ETI) at the other end, when astronomers have observed over 10 billion galaxies and have never seen any sign of ETI. The odds of even one other ETI civilization existing in our galaxy -- much less specifically around Gliese 581, one of the two hundred billion stars in our galaxy -- are rather remote. Just how remote we will now explore; then we shall take a look at the prayers to Gliese 581d.

The mindset at work in the COSMOS transmission is similar to that behind the Drake Equation web calculators I've seen. The Drake Equation supposedly brings together all the main probabilities relevant to calculating how many ETI we might expect to find in our galaxy: the expected number of habitable planets, the probability of the origin of life given a habitable planet, the probability of intelligence evolving given life, and so on. You are supposed to input your own assumptions into these calculators, and they spit out the expected number of ETI in our galaxy based on those assumptions. Real scientists observe the universe and fit their equations and parameters to what they see rather than to what they wish were true, but these "science education" sites invite the visitor to plug in whatever they wish -- unless it doesn't fit what the web site authors wish.

From these sites we learn something very interesting -- not about how science should be done or what it has observed, but about the hopes and wishes of their authors. These calculators don't allow just any numbers to be entered, but only numbers within ranges defined by the authors. They don't even allow the numbers most consistent with what astronomers have actually observed in the universe, i.e. the ubiquitous naturalness and lack of artificiality everywhere they look. Let's use the least pathological of these calculators as an example.

A reasonable guess, given the improbability of actual existential threats between the invention of printing (with the permanence it brings to civilization) and the end of the universe, is that most causally connected series of civilizations will achieve lifetimes substantially greater than 1 billion years. (In other words, while many civilizations might rise and fall, and subsequent intelligent species might even replace prior ones, once a civilization achieves printing this causal chain of civilizations is unlikely to be permanently terminated, and will probably move beyond the home planet and, within a few tens of millions of years, spread across its home galaxy.) The largest value this calculator allows for average civilization lifetime is 1 billion years, but even putting in this too-small value makes it impossible to enter at least one other value consistent with our observations. (Update: since I wrote this section on the Drake calculator for a private list a few months ago, they've updated the calculator and it now supports lifetimes up to 5 billion years, but the other limitations remain.)

Astronomers have looked far and wide in the skies; builders and miners and geologists and archaeologists have dug and examined millions of places on our own planet; and none have seen any alien civilization, or even its remains, on or near our planet or anywhere in our galaxy. Aliens would likely have long since spread across our galaxy, a process that would take only a few tens of millions of years, and would have blotted out the stars to keep their energy from going to waste. Our observations of other galaxies -- in which extremely few have spectra shifted deep into the infrared, as would be consistent with a space-faring civilization efficiently harvesting the energy of its stars -- strongly suggest that the number of civilizations is far less than 1 per galaxy. The naturally rare molecules used in the artificial surfaces of such massive constructions would also stand out in spectra against the naturally common molecules astronomers actually observe in dust clouds, planetary nebulae, and the like.

But even sticking with the order of magnitude of between 0.1 and 1 per galaxy, this "model" does not allow the input of values consistent with both this order of magnitude and with what we observe about life on our own planet.

One has to enter the minimum allowed value for both the fraction of habitable planets that develop life and the fraction of inhabited planets that achieve intelligent life to reach this order of magnitude. Based on the commonality of near-intelligent life on our planet, the latter number is probably much higher than the minimum allowed value of 1/10^6. Based on the extremely improbable genetic complexity of even the simplest known self-sufficient microbial ecosystem, the former number is probably much lower than 1/10^6. But the program prevents input of the kinds of values most consistent with our observations.
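To make this concrete, here is a minimal Drake-style calculation in Python. The parameter values are my illustrative stand-ins for the qualitative claims above (intelligence-given-life well above 10^-6, life-given-habitability far below 10^-6, and otherwise conventional textbook values); the point is only that such inputs drive the expected number far below one per galaxy:

```python
# Classic Drake form: N = R* * fp * ne * fl * fi * fc * L
R_STAR = 7.0      # star formation rate, stars/year (common textbook value)
F_P    = 0.5      # fraction of stars with planets
N_E    = 2.0      # habitable planets per such planetary system
F_L    = 1e-12    # life per habitable planet: far below the 1e-6 floor
F_I    = 1e-3     # intelligence per inhabited planet: above the 1e-6 floor
F_C    = 0.1      # fraction that become detectable civilizations
L      = 1e9      # lifetime of a causal chain of civilizations, years

N = R_STAR * F_P * N_E * F_L * F_I * F_C * L
print(f"expected civilizations per galaxy: {N:.1e}")   # ~7e-07
```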

Other Drake calculators I have looked at are far worse still, not allowing even the most reasonable values to be placed into the Drake Equation. They are not teaching science -- numerology, or as we might call it here, Bayesiology, is not science -- they are selling a belief: the belief that our galaxy is filled with morally advanced beings we can talk to. How are these grossly misleading "educational tools," and publicity stunts like COSMOS's, helpful in teaching the questions of "are we alone, is life common in the universe, how often does intelligent life arise, how big is space, etc."?

An Enid News & Eagle piece republished by COSMOS invokes another prejudiced cliche of the SETI (Search for ETI) crowd: ETI living in a heavenly utopia:
Me [Human interviewer]: "... So you don’t elect leaders? Then who keeps you safe?"

Bleem: "From what?"

Me: "From criminals, from other countries that declare war on you."

Bleem: "Explain criminals."

Me: "People who steal other people’s things or hurt them. Some even kill other people."

Bleem: "Explain steal."

Me: "Taking things that don’t belong to you."

Bleem: "Explain kill."

Me: "To terminate one’s existence."
This nonsense neatly avoids an important question that Drs. Jared Diamond and David Brin have raised: if, per COSMOS' assumption that ETI is common, these creatures, likely far more ancient and powerful than humans, do receive our message and thereby discover us, may that not put humanity in severe danger? Instead of "shouting at the cosmos," shouldn't we put reasonable restrictions on the power, focus, and targets of transmissions until we learn whether and what kinds of threats might exist? (I realize Gliese 581 probably doesn't raise this issue -- being within about 20 light-years, they would probably have already detected our oxygen spectra, our "I Love Lucy" and "Seinfeld" broadcasts, our nuclear tests, and much else -- but the [update: COSMOS' own, as well as the EN&E's] articles, supposedly exercises in education, don't even raise the issue and explain this.)

By blithely ignoring this issue while it sends the messages, COSMOS again answers the question with extreme prejudice, by assuming it is safe. They even have a theological justification: ETI wouldn't harm a flea, because the only thing these innocents can understand is their seraphic utopia. Apparently no living thing up there in the heavens eats any other living thing -- our ETI are puzzled by the very concept of "kill". Our beatific interlocutors were apparently created by an omniscient and omnibenevolent god to dwell together in heavenly communal bliss rather than evolved through Darwinian evolution. Children of Australia, there's your biology lesson for the day.

COSMOS has rejected "inappropriate" messages to Gliese 581d, but it does not describe its criteria for "inappropriate". How can any of us humans predict the reaction of a genetically completely unrelated creature -- even assuming it exists, in a culture about which we know absolutely nothing -- to any given message? COSMOS has no basis for deciding that any of the stupid or insulting messages it lets through (and there are plenty) is better than any other, beyond its own particular human, 21st-century, Australian reactions.

Many submitters are sending normal chatty messages, while some are quite properly treating the whole thing as a joke, but it is worth thinking about how closely many of the messages in this "science" project resemble prayers (all errors in the original):

"I hope when you recieve these messages that you will come and visit and bring a new age to the human race. LIVE LONG AND PROSPER."

"All things work together for good."

"We're live in one universe,so we just like a family.We can share our happy with you."

"We are so small."

"Please help us to stop the obesity problem that haunts our world!"

"Please come visit us on Earth as soon as you can.We've been waiting a long time to see you.Don't make us wait any longer! LIVE LONG & PROSPER!!!"

"I know we are not alone.You are watching us."

"Don't let humans colonize habitable extrasolar planets, contact us before then please. Thanks for not colonizing Earth long ago and allowing humans to evolve."

"Bring some peace to the earth."

"I am told if you say something to the universe it may come true."

"We know that we are not alone,hope to hear from you."

"Your technologies must be advanced than humans by millons of years.let us share all good things and we shall be friends."

"Hello God, are you there?"

And that's just from the first page of messages. These interstellar tweets that COSMOS collected and NASA obligingly plans to send via its Deep Space Network to Gliese 581, the earlier "Teen Message" sent by Dr. Alexander Zaitsev and his team from the Evpatoria dish in Ukraine, and the Drake calculators that invite you to believe your wishes are scientifically true (unless they disagree with the authors') are exercises not in science education, but in "educating" the public in the tenets of an often twisted faith.