Oct 01

If Guns Are Outlawed, Outlaws Will Use Crossbows

This
happened about 15 minutes from where I live:

Police in West Chester are looking for an assailant they believe used
a crossbow to shoot a pedestrian from a passing SUV.

The victim, a restaurant worker who was walking home along High
Street early Sunday morning, was shot in the stomach with a 16-inch
hunting arrow. He was released Wednesday from the University of
Pennsylvania Hospital in Philadelphia.

Benito Vargas told police he was at the corner of High and Barnard
Streets at about 1 a.m. when he saw the white SUV’s driver-side window
slide down, revealing the front part of a crossbow just inside. Seconds
later, he was lying on the ground.

[…]

“This thing would be silent. You wouldn’t hear any noise,” West
Chester Detective Thomas Yarnall said. […] Yarnall said the
shooting appeared random […]

Gives a whole old meaning to the phrase “looking for a quarrel”;
“quarrel”, in fact, originally referred to a crossbow bolt.

UPDATE: (Well, maybe. Some etymologists think the noun quarrel and the verb
quarrel have separate origins.)

Blogspot comments

Sep 29

Statism — Love It Or Leave It

For many years I’ve been seeing proposals for implementing
libertarian reforms that look superficially appealing and plausible,
but on closer examination run hard aground either on some pesky
reality of politics as it is or the extreme difficulty of waging a
successful revolution. Since I’m a libertarian,
you may well imagine that I find this annoying. How do we get there
from here?

For the first time, I think I’ve seen a path that is both
principled and practical. Not the whole path, but some firm steps
that both accomplish good in themselves and open up great
possibilities. And the best part is that it’s a path most statists
can’t object to, one that uses the premises of the existing federal
system to achieve a fair first test of libertarian ideas within that
system. Even opponents of libertarianism, if they are fair-minded,
should welcome this reality check. Libertarians should cheer it on
and join it.

I’ve had troubles with other libertarians recently. Too many have
retreated into isolationism in the face of a war with terrorism that I
do not believe we can or should evade. The isolationists judge that
empowering the State when we use it as an instrument of self-defense
has consequences for the long term that are more dangerous than
terrorists’ aims are in the short term. I sympathize with this view,
but when all is said and done, Al-Qaeda shahids with backpack nukes
from the ‘stans are more of a danger than John Ashcroft has ever been.
I have done my homework and if anything, I believe the U.S. Government
is understating the danger we face.

But the dangers of empowering the State to fight a necessary war
make it more, not less urgent that we pursue all possibilities for
libertarian reform at home. Now, I think I see a workable one. What
if, by perfectly legal and proper means, we could take over a small
American state and actually try out our ideas there?

Yes, I thought it was a crazy idea when I first heard it. An
entire state? How? But the Free State Project has
done the math. I’ve looked at their arguments and trend curves, and
I’m pretty much convinced. It can be done. We can do it. The
key is very simple: enough of us just have to move
there. Vote with our feet, and then vote in a bloc. And why
a state? Because that’s the only intermediate level of government
with enough autonomy to make a good laboratory.

The Free State Project identified ten small states where 20,000
active libertarians would be a critically large voting bloc. They are
signing up libertarians and like-minded people to vote on the target
state and to move there when the group passes 20,000. The winning
state will be announced on 1st October; they’ve signed up about 5400
people so far, on a classic exponential growth curve with a six-month
doubling time that should get them there in late 2004.
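The projection is simple doubling arithmetic. As a rough sanity check, using only the figures the post itself gives (about 5,400 signups, a six-month doubling time, and a 20,000-member threshold), the crossover point can be estimated like this:

```python
import math

# Figures quoted in the post (inputs of this sketch, not new data):
current_members = 5400   # signups so far
target_members = 20000   # threshold that triggers the move
doubling_months = 6      # observed doubling time

# Exponential growth: current * 2**(t / doubling) = target.
# Solving for t gives the months until the threshold is crossed.
months_needed = doubling_months * math.log2(target_members / current_members)

print(f"{months_needed:.1f} months")  # about 11.3 months
```

Counting forward from a fall-2003 signup figure, that lands in the second half of 2004, consistent with the post’s “late 2004” projection.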

What could be more American than migrating to a thinly-settled area
to experiment with liberty? And this time we won’t have to kill off the
natives, because they’re not going to be organizing any scalping parties.
Most of the states under consideration have a strong local
libertarian tradition, and none of them are going to look askance at
the sort of bright, hardworking, highly-skilled people most likely to
be pro-freedom activists.

Some people won’t like this idea, though. The national media
establishment, which is statist down to its bones even in the few
crevices where it isn’t leftist, will inevitably try to portray the
Free State migrants as a bunch of racist conservative redneck gun-nuts
(all these terms being effectively synonymous in the national media)
intent on turning the poor victim state into one gigantic Aryan
Nations compound (especially if it’s Idaho, as it could be). Expect
network-news interviews with locals teary-eyed with worry that the
incomers will be hosting regular cross-burnings on the courthouse
lawn. Awkward little inconsistencies like the libertarian opposition
to drug laws, censorship, and theocracy will be ignored. This prospect
is especially ironic because, in most of the possible target states,
it is our lifestyle liberalism that is actually most likely to produce
a culture clash with the natives.

The more intelligent members of the political class won’t like this
either. The brighter one is, and the better able to extrapolate
second- and third-order effects, the more the potential success of
libertarianism at the state level is likely to scare them —
conservatives nearly as much as liberals, and conservatives perhaps
more so when we challenge them to emulate our success with
small-government policies that they preach but don’t really mean.

But I don’t think this will be easy to stop. Libertarian
demographics being what they are, 20,000 of us in a small state will
be a huge concentration of technical, creative and
entrepreneurial talent. We’ll found software businesses, studios,
innovative light-manufacturing shops and engineering companies
by the bucketload. We’ll create favorable regulatory conditions
for old-line businesses like financial-services houses and for
bleeding-edge ones like the private space-launch industry.
We’ll attract more people like us. The lucky state, especially
if it’s depressed and mostly rural like a lot of the candidates, will
experience a renaissance. And we’ll get to make the difference.

The real fun will start when Americans elsewhere start asking “Why
can’t our state be more like this?”

Liberty in our lifetime? I think this might be how to get there.

Blogspot comments

Sep 11

One year later…

One year ago today, the World Trade Center fell in flames. And that very day, just a few hours after the event, I wrote the following:

QUOTE BEGINS

Some friends have asked me to step outside my normal role as a technology evangelist today, to point out in public that a political panic reaction to the 9/11 terrorist attack could do a great deal more damage than the attack itself.

Today will not have been a victory for terrorism unless we make it one. If we reward in any way the Palestinians who are now celebrating this hideous crime in the streets of the West Bank, that will have been a victory for terrorism. If we accept “anti-terrorism” measures that do further damage to our Constitutional freedoms, that will have been a victory for terrorism. But if we learn the right lessons, if we make policies that preserve freedom and offer terrorists no result but a rapid and futile death, that will have been a victory for the rest of us.

We have learned today that airport security is not the answer. At least four separate terror teams were able to sail right past all the elaborate obstacles — the demand for IDs, the metal detectors, the video cameras, the X-ray machines, the gunpowder sniffers, the gate agents and security people trained to spot terrorists by profile. There have been no reports that any other terror units were successfully prevented from achieving their objectives by these measures. In fact, the early evidence is that all these police-state-like impositions on freedom were exactly useless — and in the smoldering ruins of the World Trade Center lies the proof of their failure.

We have learned today that increased surveillance is not the answer. The FBI’s “Carnivore” tap on the U.S.’s Internet service providers didn’t spot or prevent this disaster; nor did the NSA’s illegal Echelon wiretaps on international telecommunications. Video monitoring of public areas could have accomplished exactly nothing against terrorists taking even elementary concealment measures. If we could somehow extend airport-level security to the entire U.S., it would be just as useless against any determined and even marginally competent enemy.

We have learned today that trying to keep civilian weapons out of airplanes and other areas vulnerable to terrorist attack is not the answer either — indeed, it is arguable that the lawmakers who disarmed all the non-terrorists on those four airplanes, leaving them no chance to stop the hijackers, bear part of the moral responsibility for this catastrophe.

I expect that in the next few months, far too many politicians and pundits will press for draconian “anti-terrorist” laws and regulations. Those who do so will be, whether intentionally or not, cooperating with the terrorists in their attempt to destroy our way of life — and we should all remember that fact come election time.

As an Internet technologist, I have learned that distributed problems require distributed solutions — that centralization of power, the first resort of politicians who feed on crisis, is actually worse than useless, because centralizers regard the more effective coping strategies as threats and act to thwart them.

Perhaps it is too much to hope that we will respond to this shattering tragedy as well as the Israelis, who have a long history of preventing similar atrocities by encouraging their civilians to carry concealed weapons and to shoot back at criminals and terrorists. But it is in that policy of a distributed response to a distributed threat, with every single citizen taking personal responsibility for the defense of life and freedom, that our best hope for preventing recurrences of today’s mass murders almost certainly lies.

If we learn that lesson, perhaps today’s deaths will not have been in vain.

END QUOTE

As I reread the above, it does not seem to me that we have yet learned our lesson. We have taken steps towards arming pilots, but not passengers. Tiger-team probes of airport security have shown that the rate at which weapons can be smuggled through remains 30% — unchanged since before 9/11. A year later, therefore, the frisk searches of little old ladies and the no-sharp-edges prohibitions have bought us no security at all.

The scorecard is not entirely bleak. Al-Qaeda has not been able to mount another successful mass murder. Post-9/11 legal changes through the Patriot Act and related legislation have been troubling, but not disastrous. And the war against the Taliban was a rather less complicated success than one might have expected — civilian casualties minimal, no uprising of the mythical “Arab Street”, and Al-Qaeda’s infrastructure smashed. Osama bin Laden is probably dead.

Still, the war is far from over. Islamic terrorism has not been repudiated by the ulema, the college of elders who prescribe the interpretation of the Koran and the Hadith. The call to violent jihad wired into the foundations of Islam has not yet been broken or tamed into a form civilization can coexist with. Accomplishing that is the true challenge that faces us, one greater and more subtle than merely military victory.

(Yes, I wrote the above in 2002. Some glitch in the blog software gives it a 2003 date. I don’t know why.)

Blogspot comments

Aug 22

An Open Letter To Darl McBride

Mr. McBride:

Late yesterday, I learned that you have charged
that your company is the victim of an insidious conspiracy
masterminded by IBM. You have urged the press and public to believe
that the Open Source Initiative and the Free Software Foundation and
Red Hat and Novell and various Linux enthusiasts are up in arms not
because of beliefs or interests of their own, but because little gray
men from Armonk have put them up to it. Bwahahaha! Fire up the
orbital mind-control lasers!

Very few things could possibly illustrate the brain-boggling
disconnect between SCO and reality with more clarity than hearing you
complain about how persecuted your company is. You opened this
ball
on 6 March by accusing the open-source community of
criminality and incompetence as a way to set up a lawsuit against IBM.
You have since tried to seize control of our volunteer work for your
company’s exclusive gain, and your lawyers have announced
the intention
to destroy not just the GPL but all the open-source
licenses on which our community is built. It’s beyond me how you can have
the gall to talk as though we need funding or marching orders from IBM
to mobilize against you. IBM couldn’t stop us from
mobilizing!

I’m not sure which possibility is more pathetic — that the
CEO of SCO is lying through his teeth for tactical reasons, or that
you genuinely aren’t capable of recognizing honest outrage when you
see it. To a manipulator, all behaviors are manipulation. To a
conspirator, all opposition is conspiracy. Is that you? Have you
truly forgotten that people might make common cause out of integrity,
ethical considerations, or simple self-defense? Has the reality you
inhabit truly become so cramped and ugly?

I’m in at least semi-regular communication with most of the people
and organizations who are causing you problems right now. The only
conspiracy among us is the common interest in preventing the
open-source community from being destroyed by SCO’s greed and
desperation. (And we think it’s a perfect sign of that desperation
that at SCOforum you ‘proved’ your relevance by
bragging about the amount of press coverage SCO generates. Last I checked,
companies demonstrated relevance by showing products, not
press clippings.)

Yes, one of the parties I talk with is, in fact, IBM. And you know
what? They’re smarter than you. One of the many things they
understand that you do not is that in the kind of confrontation SCO
and IBM are having, independent but willing allies are far better
value than lackeys and sock puppets. Allies, you see, have initiative
and flexibility. The time it takes a lackey to check with HQ for
orders is time an ally can spend thinking up ways to make your life
complicated that HQ would be too nervous to use. Go on, try to
imagine an IBM lawyer approving this letter.

The very best kind of ally is one who comes to one’s side for
powerful reasons of his or her own. For principle. For his or her
friends and people. For the future. IBM has a lot of allies of that
kind now. It’s an alliance you drove together with your
arrogance, your overreaching, your insults, and your threats.

And now you cap it all with this paranoid ranting. It’s classic,
truly classic. Was this what you wanted out of life, to end up
imitating the doomed villain in a cheesy B movie? Tell me, does that
dark helmet fit comfortably? Are all the minions cringing in proper form?
“No, Mr. Torvalds, I expect you to die!” I’d ask if you’d
found the right sort of isolated wasteland for your citadel of dread yet, but
that would be a silly question; you’re in Utah, after all.

It doesn’t have to be this way. Sanity can still prevail. Here’s
the message that Jeff Gerhardt read at SCOforum again:

In recent months, the company formerly known as Caldera and now
trading as SCO has alleged that the 2.4 Linux kernel contains code
misappropriated from it. We in the open-source community are
respectful of intellectual-property rights, and take pride in our
ability to solve our own problems with our own code. If there is
infringing code in the Linux kernel, our community wants no part of it
and will remove it.

We challenge SCO to specify exactly which code it believes to be
infringing, by file and line number, and on what grounds it is
infringing. Only with disclosure can we begin the process of
remedying any breach that may exist. If SCO is truly concerned about
protecting its property, rather than simply using the mere accusations
as a pretext to pump its stock price and collect payoffs from
Microsoft for making trouble, then it will welcome the opportunity to
have its concerns resolved as quickly and with as little disruption as
possible. We are willing to cooperate with that.

The open-source community is not, however, willing to sit idly by
while SCO asserts proprietary control, and the right to collect
license fees, over the entirety of Linux. That is an unacceptable
attempt to hijack the work thousands of volunteer programmers
contributed in good faith, and must end.

If SCO is willing to take the honest, cooperative path forward, so are
we. If it is not, let the record show that we tried before resorting
to more confrontational means of defending our community against
predation.

Linus Torvalds is backing me on this, and our other chieftains and
philosopher-princes will as well. Show us the overlaps. If your code
has been inserted in our work, we’ll remove it — not because
you’ve threatened us but because that’s the right thing to do, whether
the patches came from IBM or anywhere else. Then you can call off
your lawyers and everyone will get to go home happy.

Take that offer while you still can, Mr. McBride. So far your
so-called ‘evidence’ is crap;
you’d better climb down off your high horse before we shoot that
sucker entirely out from under you. How you finish the contract fight
you picked with IBM is your problem. As the president of OSI,
defending the community of open-source hackers against predators and
carpetbaggers is mine — and if you don’t stop trying to destroy
Linux and everything else we’ve worked for I guarantee you
won’t like what our alliance is cooking up next.

And in case it’s not pellucidly clear by now, not one single
solitary damn thing I have said or published since 6 March (or at any
time previously for that matter) has been at IBM’s behest. I’m very
much afraid it’s all been me, acting to serve my people the best way I
know how. IBM doesn’t have what it would take to buy me away from
that job and neither do you. I’m not saying I don’t have a price
— but it ain’t counted in money, so I won’t even bother being
insulted by your suggestion.

You have a choice. Peel off that dark helmet and deal with us like
a reasonable human being, or continue down a path that could be bad
trouble for us but will be utter ruin — quite possibly
including jail time on fraud, intellectual-property theft, barratry,
and stock-manipulation charges — for you and the rest of SCO’s
top management. You have my email, you can have my phone if you want
it, and you have my word of honor that you’ll get a fair hearing for
any truths you have to offer.

Eric S. Raymond

esr@thyrsus.com

President, Open Source Initiative

Friday, 20 August 2003

Blogspot comments

Jul 28

Brother, Can You Paradigm?

I just read an interview with my friend Tim O’Reilly in which he approvingly cited Thomas Kuhn’s “The Structure of Scientific Revolutions”. There are some books so bad, but so plausible and influential, that periodically trashing them in public is almost an obligation. The really classic stinkeroos of this kind, like Karl Marx’s Das Kapital, exert a weird kind of seduction on otherwise intelligent people long after their factual basis has been completely exploded.

Yes, Kuhn’s magnum opus is one of these. When I was a bright and naive young sprat, full of zeal to correct my fuddy-duddy elders, I loved Kuhn’s book. Then I reread it, and did some thinking and fact-checking, and discovered that it is both (a) deeply wrong, and (b) dishonest.

First, deeply wrong. Kuhn’s basic model that paradigm changes are generational — you have to wait for the old dinosaurs to die — is dramatically falsified by the history of early 20th-century physics. Despite well-publicized exceptions like Einstein’s refusal to accept “spooky action at a distance”, the record shows us that a generation of physicists handled not one but two major paradigm shifts in their lifetimes — relativity and quantum mechanics — quite smoothly indeed.

Later in the 20th century, the paradigm shift produced by the discovery of DNA and the neo-Darwinian synthesis of evolutionary theory didn’t require the old guard to die off before it was accepted, either. More recently, the discoveries of reverse transcriptase and “jumping genes”, which broke two of the central dogmas of genetics, were absorbed with barely a ripple.

I found many other examples once I started looking. It turns out that the kind of story Kuhn wants to tell is quite rare in the hard sciences. There are a few examples of paradigm shifts that fit his model — my personal favorite is Wegener and the continental-drift hypothesis — but they are the exception rather than the rule. Most theoretical upheavals, even most very radical ones, happen rather smoothly.

The soft sciences are a somewhat different story, and the reasons for this are revealing. Look at the post-Freudian upheaval in psychology or the clashes between social constructivism and the evolutionary-psych crowd and you will see something much more like a Kuhnian shock going on (I suspect we’ve got another one coming in linguistics when Noam Chomsky kicks off). But these fields are vulnerable largely to the extent that they are not science — that is, when the dominant model is poorly confirmed or untestable, and holds largely for reasons of politics and/or the influence of a single charismatic personality.

One of the most pointed criticisms of Kuhn is that his book is a sort of soft-science imperialism, an attempt to project onto the hard sciences the kind of incoherence, confusion, and political ax-grinding we see in (say) sociology or “political science”. In doing so, it does real science a profound disservice.

The dishonesty in the book is that Kuhn evades the question of whether paradigm shifts are an emic or etic phenomenon. In fact, he does this so neatly that it’s possible to read the whole thing and not notice that the largest central question about the nature of paradigm shifts is being dodged. Do they change the world or just our description of it? Kuhn hints at a radical sort of subjectivism without ever acknowledging what that would actually mean.

Kuhn got me interested in the cultural history of science when I read this book around 1971. But the more I studied it, the more I became convinced that Kuhn’s thesis is simple, appealing, and wrong. Among many other flaws, he erects a binary distinction between “normal” science and paradigm-shattering earthquakes that is not really sustainable except through a kind of selective hindsight. It plays to our human tendency to want to make heroic narratives out of history, but it misrepresents science as it is actually practised and perceived by the people who do it.

(For a demolition of Kuhn that focuses less on the factual holes in his thesis and more on the historical and logical flaws, see this New Criterion article.)

Blogspot comments

Jul 15

The Myth of Man the Killer

(An updated version of this essay lives here.)

One of the most dangerous errors of our time is the belief that human beings are uniquely violent animals, barely restrained from committing atrocities on each other by the constraints of ethics, religion, and the state.

It may seem odd to some to dispute this, given the apparently ceaseless flow of atrocity reports from Bosnia, Somalia, Lebanon and Los Angeles that we suffer every day. But, in fact, a very little study of animal ethology (and some application of ethological methods to human behavior) suffices to show the unbiased mind that human beings are not especially violent animals.

Desmond Morris, in his fascinating book Manwatching, for example, shows that the instinctive fighting style of human beings seems to be rather carefully optimized to keep us from injuring one another. Films of street scuffles show that “instinctive” fighting consists largely of shoving and overhand blows to the head/shoulders/ribcage area.

It is remarkably difficult to seriously injure a human being this way; the preferred target areas are mostly bone, and the instinctive striking style delivers rather little force for given effort. It is enlightening to compare this fumbling behavior to the focussed soft-tissue strike of a martial artist, who (having learned to override instinct) can easily kill with one blow.

It is also a fact, well-known to military planners, that somewhere around 70% of troops in their first combat-fire situation find themselves frozen, unable to trigger lethal weapons at a live enemy. It takes training and intense re-socialization to make soldiers out of raw recruits. And it is a notable point, to which we shall return later, that said socialization has to concentrate on getting a trainee to obey orders and identify with the group. (Major David Pierson of the U.S. Army wrote an illuminating essay on this topic in the June 1999 Military Review).

Criminal violence is strongly correlated with overcrowding and stress, conditions that any biologist knows can make even a laboratory mouse crazy. To see the contrast clearly, compare an urban riot with post-hurricane or -flood responses in rural areas. Faced with common disaster, it is more typical of humans to pull together than pull apart.

Individual human beings, outside of a tiny minority of sociopaths and psychopaths, are simply not natural killers. Why, then, is the belief in innate human viciousness so pervasive in our culture? And what is this belief costing us?


The historical roots of this belief are not hard to trace. The Judeo-Christian creation story claims that human beings exist in a fallen, sinful state; and Genesis narrates two great acts of revolt against God, the second of which is the first murder. Cain kills Abel, and we inherit the “mark of Cain”, and the myth of Cain — the belief that we are all somehow murderers at bottom.

Until the twentieth century, Judeo-Christianity tended to focus on the first one: the Serpent’s apple, popularly if not theologically equated with the discovery of sexuality. But as sexual taboos have lost their old forbidding force, the “mark of Cain” has become relatively more important in the Judeo-Christian idea of “original sin”. The same churches and synagogues that blessed “just wars” in former centuries have become strongholds
of ideological pacifism.

But there is a second, possibly more important source of the man-as-killer myth in the philosophy of the Enlightenment — Thomas Hobbes’s depiction of the state of nature as a “warre of all against all”, and the reactionary naturism of Rousseau and the post-Enlightenment Romantics. Today these originally opposing worldviews have become fused into a view of nature and humanity that combines the worst (and least factual) of both.

Hobbes, writing a rationalization of the system of absolute monarchy under the Stuart kings of England, constructed an argument that in a state of nature without government the conflicting desires of human beings would pit every man against his neighbor in a bloodbath without end. Hobbes referred to and assumed “wild violence” as the normal state of humans in what anthropologists now call “pre-state” societies; that very term, in fact, reflects the Hobbesian myth.

The obvious flaw in Hobbes’s argument is that he mistook a sufficient condition for suppressing the “warre” (the existence of a strong central state) for a necessary one. He underestimated the innate sociability of human beings. The anthropological and historical record affords numerous examples of “pre-state” societies (even quite large multiethnic/multilingual populations) which, while violent against outsiders, successfully maintained internal peace.

If Hobbes underestimated the sociability of man, Rousseau and his followers overestimated it; or, at least, they overestimated the sociability of primitive man. By contrasting the nobility and tranquility they claimed to see in rural nature and the Noble Savage with the all-too-evident filth, poverty and crowding in the booming cities of the Industrial Revolution, they secularized the Fall of Man. As their spiritual descendants today
still do, they overlooked the fact that the urban poor had unanimously voted with their feet to escape an even nastier rural poverty.

The Rousseauian myth of technological Man as an ugly scab on the face of pristine Nature has become so pervasive in Western culture as to largely drive out the older opposing image of “Nature, red in tooth and claw” from the popular mind. Perhaps this was inevitable as humans achieved more and more control over their environment; protection from famine, plague, foul weather, predators, and other inconveniences of nature encouraged the fond delusion that only human nastiness makes the world a hard place.

Until the late nineteenth to early twentieth century, the Rousseauian view of man and nature was a luxury confined to intellectuals and the idle rich. Only as increases in urbanization and average wealth isolated most of society from nature did it become an unarticulated and unexamined basic of popular and academic belief. (In his book “War Before Civilization”, Lawrence Keeley has given us a trenchant analysis of the way in which the Rousseauian myth reduced large swathes of cultural anthropology to uttering blinkered nonsense.)

In reality, Nature is a violent arena of intra- and inter-species competition in which murder for gain is an everyday event and ecological fluctuations commonly lead to mass death. Human societies, outside of wartime, are almost miraculously stable and nonviolent by contrast. But the unconscious prejudice of even educated Westerners today is likely to be that the opposite is true. The Hobbesian view of the “warre of all against all” has survived only as a description of human behavior, not of the wider state of nature. Pop ecology has replaced pop theology; the new myth is of man the killer ape.

Another, darker kind of romanticism is at work as well. To a person who feels fundamentally powerless, the belief that one is somehow intrinsically deadly can be a cherished illusion. Its marketers know full well that violence fantasy sells not to the accomplished, the wealthy and the wise, but rather to working stiffs trapped in dead-end jobs, to frustrated adolescents, to retirees — the marginalized, the lonely and the lost.

To these people, the killer-ape myth is consolation. If all else fails, it offers the dark promise of a final berserkergang, unleashing the mythic murderer inside to express all those aggravations in a gory and vengeful catharsis. But if seven out of ten humans can’t pull the trigger on an enemy they have every reason to believe is trying to kill them, it seems unlikely that ninety-seven out of a hundred could make themselves murder.

And, in fact, less than one half of one percent of the present human population ever kills in peacetime; murders are more than an order of magnitude less common than fatal household accidents. Furthermore, all but a vanishingly small number of murders are performed by males between the ages of 15 and 25, and the overwhelming majority of those by unmarried males. One’s odds of being killed by a human outside that demographic bracket are comparable to one’s chances of being killed by a lightning strike.


War is the great exception, the great legitimizer of murder, the one arena in which ordinary humans routinely become killers. The special prevalence of the killer-ape myth in our time doubtless owes something to the horror and visibility of 20th-century war.

Campaigns of genocide and repression such as the Nazi Holocaust, Stalin’s engineered famines, the Angka massacres in Cambodia, and “ethnic cleansing” in Yugoslavia loom even larger in the popular mind than war as support for the myth of man the killer. But they should not; such atrocities are invariably conceived and planned by tiny minorities, far fewer than .5% of the population.

We have seen that in normal circumstances, human beings are not killers; and, in fact, most have instincts which make it extremely difficult for them to engage in lethal violence. How do we reconcile this with the continuing pattern of human violence in war? And, to restate one of our original questions, what is belief in the myth of man the killer doing to us?

We shall soon see that the answers to these two questions are intimately related — because there is a crucial commonality between war and genocide, one not shared with the comparatively negligible lethalities of criminals and the individually insane. Both war and genocide depend, critically, on the habit of killing on orders. Pierson observes, tellingly, that atrocities “are generally initiated by overcontrolled personality types in second-in-command positions, not by undercontrolled personality types.” Terrorism, too, depends on the habit of obedience; it was not Osama bin Laden who died in the 9/11 attack but his minions.

This is part of what Hannah Arendt was describing when, after the Eichmann trial, she penned her unforgettable phrase “the banality of evil”. The instinct that facilitated the atrocities at Bergen-Belsen and Treblinka and Dachau was not a red-handed delight in murder, but rather uncritical submission to the orders of alpha males — even when those orders were for horror and death.

Human beings are social primates with social instincts. One of those instincts is docility, a predisposition to obey the tribe leader and other dominant males. This was originally adaptive; fewer status fights meant more able bodies in the tribe or hunting band. It was especially important that bachelor males, unmarried 15-to-25 year-old men, obey orders even when those orders involved risk and killing. These bachelors were the tribe’s hunters, warriors, scouts, and risk-takers; a band would flourish best if they were both aggressive towards outsiders and amenable to social control.

Over most of human evolutionary history, the multiplier effect of docility was limited by the small size (250 or less, usually much less) of human social units. But when a single alpha male or cooperating group of alpha males could command the aggressive bachelor males of a large city or entire nation, the rules changed. Warfare and genocide became possible.

Actually, neither war nor genocide needs more than a comparative handful of murderers — not much larger a cohort than the half-percent to percent that commits lethal violence in peacetime. Both, however, require the obedience of a large supporting population. Factories must work overtime. Ammunition trucks must be driven where the bullets are needed. People must agree not to see, not to hear, not to notice certain things. Orders must be obeyed.

The experiments described in Stanley Milgram’s 1974 book Obedience to Authority demonstrated how otherwise ethical people could be induced to actively torture another person by the presence of an authority figure commanding and legitimizing the violence. They remain among the most powerful and disturbing results in experimental psychology.

Human beings are not natural killers; very, very few ever learn to enjoy murder or torture. Human beings, however, are sufficiently docile that many can eventually be taught to kill, to support killing, or to consent to killing on the command of an alpha male, entirely dissociating themselves from responsibility for the act. Our original sin is not murderousness — it is obedience.


And this brings us to the final reason for the prevalence of the myth of man the killer; that it encourages obedience and legitimizes social control of the individual. The man who fears Hobbes’s “warre”, who sees every one of his neighbors as a potential murderer, will surrender nearly anything to be protected from them. He will call for a strong hand from above; he will become a willing instrument in the oppression of his fellows. He may even allow himself to be turned into a killer in fact. Society will be atomized into millions of fearful fragments, each reacting to the fear of fantasied individual violence by sponsoring the political conditions for real violence on a large scale.

Even when the fear of violence is less acute, the myth of man the killer well serves power elites of all kinds. To define the central problem of society as the repression of a universal individual tendency to violence is to imply an authoritarian solution; it is to deny without examination the proposition that individual self-interest and voluntary cooperation are sufficient for civil order. (To cite one current example, the myth of man the killer is a major unexamined premise behind the drive for gun control.)

In sum, the myth of man the killer degrades and ultimately disempowers the individual, and unhelpfully deflects attention from the social mechanisms and social instincts that actually underlie virtually all violence. If we are all innately killers, no one is responsible; the sporadic violence of crime and terrorism and the more systematic violence of governments (whether in “state” or “pre-state” societies, and in wartime or otherwise) are as inevitable as sex.

On the other hand, if we recognize that most violence (and all large-scale violence) arises from obedience, and especially from the commission of aggressive violence by bachelor males at the command of alpha male pack leaders, then we can begin to ask more fruitful questions. Like: what can we do, culturally, to disrupt this causal chain?

First, we must recognize the primary locus and scope of the problem. By any measure, the pre-eminent form of aggressive pack violence is violence by governments, in either its explicit form as warfare and genocide or in more or less disguised peacetime versions. Take as one indicator the most pessimistic estimate of the 20th-century death toll from private aggression and set it against the low-end figures for deaths by government-sponsored violence (that is, count only war casualties, deliberate genocides, and extra-legal violence by organs of government; do not count the deaths incurred in the enforcement of even the most dubious and oppressive laws). Even with these assumptions biasing the ratio to the low side, the ratio is clearly 1000:1 or worse.

Readers skeptical of this ratio should reflect that government-directed genocides alone (excluding warfare entirely) are estimated to have accounted for more than 250,000,000 deaths between the massacre of the Armenians in 1915 and the “ethnic cleansings” of Bosnia and Rwanda-Burundi in the 1990s. Even the 9/11 atrocity and other acts of terrorism, grim as they have been, are mere droplets beside the oceans of blood spilled by state action.

In fact, the domination of total pack violence by government aggression reaches even further than that 1000:1 ratio would indicate. Pack violence by governments serves as a model and a legitimizing excuse not merely for government violence, but for private violence as well. The one thing all tyrants have in common is their belief that in their special cause, aggression is justified; private criminals learn and profit by that example. The contagion of mass violence is spread by the very institutions which ground their legitimacy in the mission of suppressing it — even as they perpetrate most of it.

And that is ultimately why the myth of man the killer ape is most dangerous. Because when we tremble in fear before the specter of individual violence, we excuse or encourage social violence; we feed the authoritarian myths and self-justifications that built the Nazi death camps and the Soviet gulags.

There is no near-term hope that we can edit either aggression or docility out of the human genome. And the individual small-scale violence of criminals and the insane is a mere distraction from the horrific and vast reality that is government-sanctioned murder and the government-sanctioned threat of murder.

To address the real problem in an effective way, we must therefore change our cultures so that either alpha males calling themselves “government” cease giving orders to perform aggression, or our bachelor males cease following those orders. Neither Hobbes’s counsel of obedience to the state nor Rousseau’s idolization of the primitive can address the central violence of the modern era — state-sponsored mass death.

To end that scourge, we must get beyond the myth of man the killer and learn to trust and empower the individual conscience once again; to recognize and affirm the individual predisposition to make peaceful choices in the non-sociopathic 97% of the population; and to recognize what Stanley Milgram showed us: that our signpost on the path away from mass violence reads “I shall not obey!”

Blogspot comments

Jun 14

Hacking and Refactoring

In 2001, there was a history-making conference of software-engineering
thinkers in Snowbird, Colorado. The product of that meeting was a remarkable
document called the Agile Manifesto,
a call to overturn many of the assumptions of traditional software development.
I was invited to be at Snowbird, but couldn’t make it.

Ever since, though, I’ve been sensing a growing convergence between
agile programming and the open-source movement. I’ve seen agile
concepts and terminology being adopted rapidly and enthusiastically by
my colleagues in open-source-land—especially ideas like
refactoring, unit testing, and design from stories and personas. From
the other side, key agile-movement figures like Kent Beck and Martin
Fowler have expressed strong interest in open source both in published
works and to me personally. Fowler has gone so far as to include
open source on his list of agile-movement schools.

I agree that we belong on that list. But I also agree with
Fowler’s description of open source as a style, rather than a
process. I think his reservations as to whether open source can be
described as just another agile school are well-founded. There is
something more complicated and interesting going on here, and I
realized when I read Fowler’s description of open source that at some
point I was going to have to do some hard thinking and writing in an
effort to sort it all out.

While doing research for my forthcoming book, The Art of Unix
Programming
, I read one particular passage in Fowler’s
Refactoring that finally brought it all home. He
writes:

One argument is that refactoring can be an alternative to up-front
design. In this scenario, you don’t do any design at all. You just
code the first approach that comes into your head, get it working, and
then refactor it into shape. Actually, this approach can work. I’ve
seen people do this and come out with a very well-defined piece of
software. Those who support Extreme Programming often are portrayed
as advocating this approach.

I read this, and had one of those moments where everything comes
together in your head with a great ringing crash and the world assumes
a new shape—a moment not unlike the one I had in late 1996
when I got the central insight that turned into The Cathedral
and the Bazaar
. In the remainder of this essay I’m going to
try to articulate what I now think I understand about open source,
agile programming, how they are related, and why the connection should
be interesting even to programmers with no stake in either movement.

Now I need to set a little background here, because I’m going
to need to talk about several different categories which are
contingently but not necessarily related.

First, there is Unix programmer. Unix is the operating
system with the longest living tradition of programming and design.
It has an unusually strong and mature technical culture around it, a
culture which originated or popularized many of the core ideas and
tools of modern software design. The Art of Unix
Programming
is a concerted attempt to capture the craft wisdom
of this culture, an effort in which I have successfully enlisted
quite a few of its founding elders.

Second, there is hacker. This is a very complex term, but
more than anything else, it describes an attitude—an
intentional stance that relates hackers to programming and other
disciplines in a particular way. I have described the hacker stance
and its cultural correlates in detail in How To Become A
Hacker
.

Third, there is open-source programmer. Open source is a
programming style with strong roots in the Unix tradition and the
hacker culture. I wrote the modern manifesto for it in 1997, The
Cathedral and the Bazaar
, building on earlier thinking by
Richard Stallman and others.

These three categories are historically closely related. It is
significant that a single person (accidentally, me) wrote touchstone
documents for the second and third and is attempting a summum
bonum
of the first. That personal coincidence reflects a larger
social reality that in 2003 these categories are becoming increasingly
merged — essentially, the hacker community has become the core
of the open-source community, which is rapidly re-assimilating the
parts of the Unix culture that got away from the hackers during
the ten bad years after the AT&T divestiture in 1984.

But the relationship is not logically entailed; we can imagine
a hacker culture speaking a common tongue other than Unix and C (in
the far past its common tongue was Lisp), and we can imagine an
explicit ideology of open source developing within a cultural and
technical context other than Unix (as indeed nearly happened several
different times).

With this scene-setting done, I can explain that my first take on
Fowler’s statement was to think “Dude, you’ve just described
hacking!”

I mean something specific and powerful by this. Throwing together
a prototype and refactoring it into shape is a rather precise
description of the normal working practice of hackers since that
culture began to self-define in the 1960s. Not a complete one, but it
captures the most salient feature of how hackers relate to code. The
open-source community has inherited and elaborated this practice,
building on similar tendencies within the Unix tradition.
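For readers who haven’t watched this loop in action, here is a minimal sketch of what “throw together a prototype, then refactor it into shape” can look like. The example is invented for illustration, not drawn from Fowler’s book; the point is that the second version is reached from the first by small behavior-preserving steps, not by up-front design.

```python
# First cut: the quickest thing that works. Tax rates are magic numbers,
# and the loop mixes iteration, branching, and arithmetic.
def total_v1(orders):
    total = 0.0
    for o in orders:
        if o["region"] == "US":
            total += o["amount"] * 1.07
        else:
            total += o["amount"] * 1.20
    return total

# After a couple of refactoring steps: the varying part has been extracted
# and named, so supporting a new region is a data change, not a code change.
TAX_RATES = {"US": 1.07, "EU": 1.20}

def taxed(order):
    """One order's amount with its regional tax rate applied."""
    return order["amount"] * TAX_RATES[order["region"]]

def total_v2(orders):
    return sum(taxed(o) for o in orders)

# Refactoring is behavior-preserving: both versions must agree.
orders = [{"region": "US", "amount": 100.0}, {"region": "EU", "amount": 50.0}]
assert abs(total_v1(orders) - total_v2(orders)) < 1e-9
```

The assertion at the end is the crucial discipline: each refactoring step is checked against the prototype’s observable behavior before the next one is attempted.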

The way Fowler writes about design-by-refactoring has two huge
implications for the relationship between open source and agile
programming:

First, Fowler writes as though he didn’t know he was describing
hacking. In the passage, he appears unaware that design by
repeated refactoring is not just a recent practice semi-accidentally
stumbled on by a handful of agile programmers, but one which hundreds
of thousands of hackers have accumulated experience with for over three
decades and have in their bones. There is a substantial folklore, an
entire craft practice, around this!

Second, in that passage Fowler described the practice of hacking
better than hackers themselves have done. Now, admittedly,
the hacker culture has simply not had that many theoreticians, and if
you list the ones that are strongly focused on development methodology
you lose Richard Stallman and are left with, basically, myself and
maybe Larry Wall (author of Perl and occasional funny and illuminating
ruminations on the art of hacking). But the fact that we don’t have a
lot of theoreticians is itself an important datum; we have always
tended to develop our most important wisdoms as unconscious and
unarticulated craft practice.

These two observations imply an enormous mutual potential, a gap
across which an arc of enlightenment may be beginning to blaze. It
implies two things:

First, people who are excited by agile-programming ideas can
look to open source and the Unix tradition and the hackers for the
lessons of experience. We’ve been doing a lot of the stuff the
agile movement is talking about for a long time. Doing it in a
clumsy, unconscious, learned-by-osmosis way, but doing it
nevertheless. I believe that we have learned things that you agile
guys need to know to give your methodologies groundedness. Things
like (as Fowler himself observes) how to manage communication and
hierarchy issues in distributed teams.

Second, open-source hackers can learn from agile programmers
how to wake up. The terminology and conceptual framework of
agile programming sharpens and articulates our instincts. Learning to
speak the language of open source, peer review, many eyeballs, and
rapid iterations gave us a tremendous unifying boost in the late
1990s; I think becoming similarly conscious about agile-movement ideas
like refactoring, unit testing, and story-centered design could be
just as important for us in the new century.
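To make the unit-testing point concrete: the hacker habit of poking at a function in an interactive session can be captured as a permanent, rerunnable test. This is an illustrative sketch using Python’s standard-library unittest; the rot13 function is merely a stand-in for whatever code is actually under test.

```python
import unittest

def rot13(s):
    """The classic Unix toy cipher -- a stand-in for code under test."""
    out = []
    for ch in s:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

class Rot13Test(unittest.TestCase):
    def test_round_trip(self):
        # Applying rot13 twice must return the original text.
        self.assertEqual(rot13(rot13("Hello, world!")), "Hello, world!")

    def test_known_value(self):
        self.assertEqual(rot13("abc"), "nop")

if __name__ == "__main__":
    unittest.main()
```

The difference from the informal smoke test is that this one survives: it reruns automatically after every refactoring, which is exactly what makes design-by-refactoring safe.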

I’ve already given an example of what the agile movement has to
teach the hackers, in pointing out that repeated redesign by
refactoring is a precise description of hacking. Another thing we can
stand to learn from agile-movement folks is how to behave so that we
can actually develop requirements and deliver on them when the
customer isn’t, ultimately, ourselves.

For the flip side, consider Fowler’s anecdote on pages 68-69, which
ends “Even if you know exactly what is going on in your system,
measure performance, don’t speculate. You’ll learn something, and
nine times out of ten it won’t be that you were right.” The Unix guy
in me wants to respond “Well, duh!”. In my tribe, profiling
before you speculate is DNA; we have a strong tradition of
this that goes back to the 1970s. From the point of view of any old
Unix hand, the fact that Fowler thought he had to write this down is a
sign of severe naivete in either Fowler or his readership or both.
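The “measure, don’t speculate” reflex Fowler is restating takes only a few lines to practice. This is a toy benchmark with invented functions, and it deliberately makes no claim about which version wins on any particular interpreter; the whole point is that the printed numbers, not intuition, settle the question.

```python
# Time two candidate implementations instead of arguing about them.
import timeit

def join_concat(words):
    # The version intuition often suspects is "fine".
    s = ""
    for w in words:
        s += w + " "
    return s

def join_builtin(words):
    return " ".join(words) + " "

words = ["word"] * 10_000

# Sanity check first: both produce identical output.
assert join_concat(words) == join_builtin(words)

t_concat = timeit.timeit(lambda: join_concat(words), number=50)
t_join = timeit.timeit(lambda: join_builtin(words), number=50)

print(f"concat: {t_concat:.4f}s   join: {t_join:.4f}s")
```

For anything bigger than a micro-benchmark, the same reflex reaches for a real profiler (cProfile in Python, gprof in the Unix/C world) rather than a guess about where the time goes.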

In reading Refactoring, I several times had the
experience of thinking “What!?! That’s obvious!” closely followed
by “But Fowler explains it better than Unix traditions do…” This may
be because he relies less on the very rich shared explanatory context
that Unix provides.

How deep do the similarities run? Let’s take a look at what the
Agile Manifesto says:

Individuals and interactions over processes and tools. Yeah,
that sounds like us, all right. Open-source developers will toss out
a process that isn’t working in a nanosecond, and frequently do, and take
gleeful delight in doing so. In fact, the reaction against heavyweight
process has been a key part of our self-identification as hackers for
at least the last quarter century, if not longer.

Working software over comprehensive documentation. That’s
us, too. In fact, the radical hacker position is that source code of
a working system is its documentation. We, more than any
other culture of software engineering, emphasize program source code as
human-to-human communication that is expected to bind together
communities of cooperation and understanding distributed through time
and space. In this, too, we build on and amplify Unix tradition.

Customer collaboration over contract negotiation. In the
open-source world, the line between “developer” and “customer” blurs
and often disappears. Non-technical end users are represented by
developers who are proxies for their interests—as when, for
example, companies that run large websites second developers to
work on Apache Software Foundation projects.

Responding to change over following a plan. Absolutely.
Our whole development style encourages this. It’s fairly unusual for
any of our projects to have any plan more elaborate than “fix
the current bugs and chase the next shiny thing we see”.

With these as main points, it’s hardly surprising that so many of
the Principles
behind the Agile Manifesto
read like Unix-tradition and hacker
gospel. “Deliver working software frequently, from a couple of weeks
to a couple of months, with a preference to the shorter timescale.”
Well, yeah—we pioneered this. Or “Simplicity—the art of
maximizing the amount of work not done—is essential.” That’s
Unix-tradition holy writ, there. Or “The best architectures,
requirements, and designs emerge from self-organizing teams.”

This is stone-obvious stuff to any hacker, and exactly the sort of
subversive thinking that most panics managers attached to big plans,
big budgets, big up-front design, and big rigid command-and-control
structures. Which may, in fact, be a key part of its appeal to
hackers and agile developers—because at least one thing that points
agile-movement and open-source people in the same direction is a drive
to take control of our art back from the suits and get out from under
big dumb management.

The most important difference I see between the hackers and the
agile-movement crowd is this: the hackers are the people who never
surrendered to big dumb management — they either bailed out of the
system or forted up in academia or industrial R&D labs or
technical-specialty areas where pointy-haired bosses weren’t permitted
to do as much damage. The agile crowd, on the other hand, seems to be
composed largely of people who were swallowed into the belly of the
beast (waterfall-model projects, Windows, the entire conventional
corporate-development hell so vividly described in Edward Yourdon’s
books) and have been smart enough not just to claw their way out but
to formulate an ideology to justify not getting sucked back in.

Both groups are in revolt against the same set of organizational
assumptions. And both are winning because those assumptions are
obsolete, yesterday’s adaptations to a world of expensive machines and
expensive communications. But software development doesn’t need big
concentrations of capital and resources anymore, and doesn’t need the
control structures and hierarchies and secrecy and elaborate rituals
that go with managing big capital concentrations either. In fact, in
a world of rapid change, these things are nothing but a drag. Thus
agile techniques. Thus, open source. Converging paths to the same
destination, which is not just software that doesn’t suck but a
software-development process that doesn’t suck.

When I think about how the tribal wisdom of the hackers and the
sharp cut-the-bullshit insights of the agile movement seem to be
coming together, my mind keeps circling back to Phil Greenspun’s brief
but trenchant essay Redefining
Professionalism for Software Engineers
. Greenspun proposes,
provocatively but I think correctly, that the shift towards
open-source development is a key part of the transformation of
software engineering into a mature profession, with the dedication to
excellence and ethos of service that accompanies professionalism. I
have elsewhere suggested that we are seeing a close historical analog
of the transition from alchemy to chemistry. Secrets leak out, but
skill sustains; the necessity to stop relying on craft secrecy is one
of the crises that occupational groups normally face as they attain
professional standing.

I’m beginning to think that from the wreckage of the software
industry big dumb management made, I can see the outline of a mature,
humane discipline of software engineering emerging — and
that it will be in large part a fusion of the responsiveness and
customer focus of the agile movement with the wisdom and groundedness
of the Unix tradition, expressed in open source.

Blogspot comments

May 13

A Taxonomy of Cognitive Stress

I have been thinking about UI design lately. With some help from my
friend Rob Landley, I’ve come up with a classification schema for the
levels at which users are willing to invest effort to build
competence.

The base assumption is that for any given user there is a maximum
cognitive load he or she is willing to accept to use an
interface. I think that there are levels, analogous to Piagetian
developmental thresholds and possibly related to them, in the
trajectory of learning to use software interfaces.

Level 0: I’ll only push one button.

Level 1: I’ll push a sequence of buttons, as long as they’re all visible
and I don’t have to remember anything between presses. These people
can do checklists.

Level 2: I’m willing to push a sequence of buttons in which later ones may
not be visible until earlier ones have been pressed. These people
will follow pull-down menus; it’s OK for the display to change as long
as they can memorize the steps.

Level 3: I’m willing to use folders if they never change while I’m not looking.
There can be hidden unchanging state, but nothing must ever
happen out of sight. These people can handle an incremental replace
with confirmation. They can use macros, but have no capability to
cope with surprises other than by yelling for help.

Level 4: I’m willing to use metaphors to describe magic actions. A folder
can be described by “These are all my local machines” or “these
are all my print jobs” and is allowed to change out of sight in an
unsurprising way. These people can handle global replace, but must
examine the result to maintain confidence. These people will begin
customizing their environment.

Level 5: I’m willing to use categories (generalize about nouns). I’m
willing
to recognize that all .doc files are alike, or all .jpg files are
alike, and I have confidence there are sets of actions I can apply
to a file I have never seen that will work because I know its type.
(Late in this level knowledge begins to become articulate; these
people are willing to give simple instructions over the phone or
by email.)

Level 6: I’m willing to unpack metaphors into procedural steps. People at
this level begin to be able to cope with surprises when the
metaphor breaks, because they have a representation of process.
People at this level are ready to cope with the fact that HTML
documents are made up of tags, and more generally with
simple document markup.

Level 7: I’m willing to move between different representations of
a document or piece of data. People at this level know that
any one view of the data is not the same as the data, and lossless
transformations no longer scare them. Multiple representations
become more useful than confusing. At this level the idea of
structural rather than presentation markup begins to make sense.

Level 8: I’m willing to package simple procedures I already understand.
These people are willing to record a sequence of actions which
they understand into a macro, as long as no decisions (conditionals)
are involved. They begin to get comfortable with report generators.
At advanced level 8 they may start to be willing to deal with
simple SQL.

Level 9: I am willing to package procedures that make decisions, as long
as I already understand them. At this level, people begin to cope
with conditionals and loops, and also to deal with the idea of
programming languages.

Level 10: I am willing to problem-solve at the procedural level, writing
programs for tasks I don’t completely understand before
developing them.

I’m thinking this scale might be useful in classifying interfaces and
developing guidelines for not exceeding the pain threshold of an
audience if we have some model of what their notion of acceptable
cognitive load is.

(This is a spinoff from my book-in-progress, “The Art of Unix
Programming”, but I don’t plan to put it in the book.)

Comments, reactions, and refinements welcome.

Blogspot comments

May 05

The Delusion of Expertise

I learned something this weekend about the high cost of the subtle delusion that creative technical problem-solving is the preserve of a priesthood of experts, using powers and perceptions beyond the ken of ordinary human beings.

Terry Pratchett is the author of the Discworld series of satirical fantasies. He is — and I don’t say this lightly, or without having given the matter thought and study — quite probably the most consistently excellent writer of intelligent humor in the last century in English. One has to go back as far as P.G. Wodehouse or Mark Twain to find an obvious equal in consistent quality, volume, and sly wisdom.

I’ve been a fan of Terry’s since before his first Discworld novel; I’m one of the few people who remembers Strata, his 1981 first experiment with the disc-world concept. The man has been something like a long-term acquaintance of mine for ten years — one of those people you’d like to call a friend, and who you think would like to call you a friend, if the two of you ever arranged enough concentrated hang time to get that close. But we’re both damn busy people, and live five thousand miles apart.

This weekend, Terry and I were both guests of honor at a hybrid SF convention and Linux conference called Penguicon held in Warren, Michigan. We finally got our hang time. Among other things, I taught Terry how to shoot pistols. He loves shooter games, but as a British resident his opportunities to play with real firearms are strictly limited. (I can report that Terry handled my .45 semi with remarkable competence and steadiness for a first-timer. I can also report that this surprised me not at all.)

During Terry’s Guest-of-Honor speech, he revealed his past as (he thought) a failed hacker. It turns out that back in the 1970s Terry used to wire up elaborate computerized gadgets from Timex Sinclair computers. One of his projects used a primitive memory chip that had light-sensitive gates to build a sort of perceptron that could actually see the difference between a circle and a cross. His magnum opus was a weather station that would log readings of temperature and barometric pressure overnight and deliver weather reports through a voice synthesizer.

But the most astonishing part of the speech was the followup in which Terry told us that despite his keen interest and elaborate homebrewing, he didn’t become a programmer or a hardware tech because he thought techies had to know mathematics, which he thought he had no talent for. He then revealed that he thought of his projects as a sort of bad imitation of programming, because his hardware and software designs were total lash-ups and he never really knew what he was doing.

I couldn’t stand it. “And you think it was any different for us?” I called out. The audience laughed and Terry passed off the remark with a quip. But I was just boggled. Because I know that almost all really bright techies start out that way, as compulsive tinkerers who blundered around learning by experience before they acquired systematic knowledge. “Oh ye gods and little fishes”, I thought to myself, “Terry is a hacker!”

Yes, I thought ‘is’ — even if Terry hasn’t actually tinkered with any computer software or hardware in a quarter-century. Being a hacker is expressed through skills and projects, but it’s really a kind of attitude or mental stance that, once acquired, is never really lost. It’s a kind of intense, omnivorous playfulness that tends to color everything a person does.

So it burst upon me that Terry Pratchett has the hacker nature. Which, actually, explains something that has mildly puzzled me for years. Terry has a huge following in the hacker community — knowing his books is something close to basic cultural literacy for Internet geeks. One is actually hard-put to think of any other writer for whom this is as true. The question this has always raised for me is: why Terry, rather than some hard-SF writer whose work explicitly celebrates the technologies we play with?

The answer now seems clear. Terry’s hackerness has leaked into his writing somehow, modulating the quality of the humor. Behind the drollery, I and my peers worldwide have accurately scented a mind like our own.

I said some of this the following day, when I ran into Terry surrounded by about fifty eager fans in a hallway. The nature of the conference was such that about three-quarters of them were hackers, many faces I recognized. I brought up the topic again, emphasizing that the sort of playful improvisation he’d been describing was very normal for us, and that I thought it was kind of sad he’d been blocked by the belief that hackers need to know mathematics, because about all we ever use is some pieces of set theory, graph theory, combinatorics, and Boolean algebra. No calculus at all.

Terry then admitted that he had at one point independently re-invented Boolean algebra. I didn’t find this surprising — I did that myself when I was about fifteen; I didn’t mention this, though, because the moment was about Terry’s mind and not mine. I think reinventing Boolean algebra is probably something a lot of bright proto-hackers do.

“Terry,” I said, fully conscious of the peculiar authority I wield on this point as the custodian of the Jargon File, the how-to on How To Become A Hacker and several other related documents, “you are a hacker!”

The crowd agreed enthusiastically. Somebody handed Terry one of the “Geek” badge ribbons the convention had made for attendees who wanted to identify themselves as coming from the Linux/programming side. Much laughter ensued when it was discovered that the stickum on the ribbon had lost its virtue, and a nearby hacker had to ceremonially affix the thing to Terry’s badge holder with a piece of duct tape.

Terry actually choked up a little while this was going on, and I don’t think there was anyone there who didn’t understand why. To the kind of teenager and young man he must have been — bright, curious, creative, proud of his own ability — it must have been very painful to conclude that he would never cut it as the techie he so obviously wanted to be. He ended up doing public-relations work for the British nuclear-power industry instead.

The whole sequence of events left me feeling delighted that I and my friends could deliver the affirmation Terry had deserved so long ago. But also — and here we come to the real point of this essay — I felt very angry at the system that had fed the young Terry such a huge load of cobblers about the nature of what programmers and hardware designers do.

I’m not referring to the obvious garbage about needing a brain-bending amount of mathematics. No; they fed Terry something much subtler and more crippling, a belief that real techies actually know what they’re doing. The delusion of expertise.

The truth is that programmers only know what they’re doing when the job is not very interesting. When you’re breaking new ground in any technical field, exploration and improvisation are the nature of the game. Your designs are going to be lash-ups because you don’t yet know any better and neither does anyone else. Systematization comes later, with the second system, during the re-write and the re-think. Einstein had it right; imagination is more valuable than knowledge, and people like Terry with a demonstrated ability to creatively wing it make far better hackers than analytically smart but unimaginative people who can only follow procedures.

The thought that Terry may have spent thirty years of working days grinding out press releases for the Central Electricity Generating Board because he didn’t know this, rather than following his dreams into astronomy or programming or hardware design, bothers the crap out of me. If Terry was bright enough to invent Boolean algebra, he was bright enough to cut it in any of these fields. The educational system failed him by putting artificial requirements in his way and making him believe they were natural ones. It failed him even more fundamentally by teaching him a falsehood about the nature of expertise.

In doing this, it failed all of us. How many bright kids with first-class minds, I wonder, end up under-employed because of crap like this? How much creative potential are we losing?

OK, some might answer, so we got the Discworld fantasies instead…that ain’t exactly chopped liver. The thing is, I’m not sure that was actually a trade-off. I’m enough of a writer myself to believe that you can’t block a writing talent like Terry’s merely by dropping him into a more demanding day job. It will come out.

On the other hand, one thing I am sure of is that you don’t need intelligence or talents like Terry’s just to do PR. One way or another, this man was going to do something with more lasting effects than soothing British farmers about radiation leaks. Inventing one of the funniest alternate worlds of the last hundred years during your free time is nice, and I devoutly hope he will get to keep doing it for decades to come — but in a society that valued and nurtured genius properly, I think Terry might have helped re-imagine the real world just as radically during his day job.

But he didn’t. Chalk it up to the cost of taking expertise too seriously, of undervaluing improvisation and play and imagination. And wonder how much else that error has cost us.

Blogspot comments

Apr 22

Fascism Is Not Dead

Fascism is not dead. The revelations now coming out of Iraq about Baathist atrocities lend this observation particular point; Saddam Hussein was able to successfully imitate Hitler for three decades. Baathists using similar methods still run Syria, and elsewhere in the Islamic world there are militarist/authoritarian tendencies that run uncomfortably close to fascism.

Recent events — including the fall of Saddam Hussein’s regime and Glenn Reynolds blogging on Pio Moa’s The Myths of the Civil War — have inspired me to dust off some research and writing I did a while back on the history of fascism. Some of the following essay is about the Spanish Civil War and Francisco Franco, but much of it is about the history and structure of fascism.

Pio Moa’s thesis is that the Spanish Civil War was not a usurping revolt against a functioning government, but a belated attempt to restore order to a country that had already collapsed into violent chaos five years before the Fascists landed in 1936.

I’ve studied the history of the Spanish Civil War enough to know that Moa’s contrarian interpretation is not obviously crazy. I had an unusual angle; I’m an anarchist, and wanted to grasp the ideas and role of the Spanish anarchist communes. My conclusions were not pleasant. In short, there were no good guys in the Spanish Civil War.

First, the non-anarchist Left in Spain really was pretty completely Stalin’s creature. The volunteers of the International Brigade were (in Lenin’s timeless phrase) useful idiots, an exact analogue of the foreign Arabs who fought on in Baghdad after Iraqi resistance collapsed (and were despised for it by the Iraqis). They deserve neither our pity nor our respect. Insofar as Moa’s thesis is that most scholarship about the war is severely distorted by a desire to make heroes out of these idiots, he is correct.

Second, the Spanish anarchists were by and large an exceedingly nasty bunch, all resentment and nihilism with no idea how to rebuild after destroying. Wiping them out (via his Communist proxies) may have been one of Stalin’s few good deeds.

Third, the Fascists were a pretty nasty bunch too. But, on the whole, probably not as nasty as their opponents. Perceptions of them tend to be distorted by the casual equation of Fascist with Nazi — but this is not appropriate. Spanish Fascism was unlike Communism or Italian and German Fascism in that it was genuinely a conservative movement, rather than an attempt to reinvent society in the image of a revolutionary doctrine about the perfected State.

Historians and political scientists use the terms “fascist” and “fascism” quite precisely, for a group of political movements that were active between about 1890 and about 1975. The original and prototypical example was Italian fascism, the best-known and most virulent strain was Naziism, and the longest-lasting was the Spanish nationalist fascism of Francisco Franco. The militarist nationalism of Japan is often also described as “fascist”.

The shared label reflects the fact that these four ideologies influenced each other; Naziism began as a German imitation of Italian fascism, only to remake Italian (and to some extent Spanish) fascism in its own image during WWII. The militarist Japanese fascists took their cues from European fascists as well as an indigenous tradition of absolutism with very similar structural and psychological features.

The shared label also reflects substantially similar theories of political economics, power, governance, and national purpose. Also similar histories and symbolisms. Here are some of the commonalities especially relevant to the all too common abuse of the term.

Fascist political economics is a corrupt form of Leninist socialism. In fascist theory (as in Communism) the State owns all; in practice, fascists are willing to co-opt and use big capitalists rather than immediately killing them.

Fascism mythologizes the professional military, but never trusts it. (And rightly so; consider the Von Stauffenberg plot…) One of the signatures of the fascist state is the formation of elite units (the SA and SS in Germany, the Guardia Civil in Spain, the Republican Guard and Fedayeen in Iraq) loyal to the fascist party and outside the military chain of command.

Fascism is not (as the example of Franco’s Spain shows) necessarily aggressive or expansionist per se. In all but one case, fascist wars were triggered not by ideologically-motivated aggression but by revanchist nationalism (that is, the nation’s claims on areas lost to the victors of previous wars, or inhabited by members of the nationality agitating for annexation). No, the one exception was not Nazi Germany; it was Japan (the rape of Manchuria). The Nazi wars of aggression and Hussein’s grab at Kuwait were both revanchist in origin.

Fascism is generally born by revolution out of the collapse of monarchism. Fascism’s theory of power is organized around the `Fuehrerprinzip’, the absolute leader regarded as the incarnation of the national will.

But…and this is a big but…there were important differences between revolutionary Fascism (the Italo/German/Baathist variety) and the more reactionary sort native to Spain and Japan.

The Italo/German/Baathist varieties were radical, modernist ideologies and not (as commonly assumed) conservative or traditionalist ones; in fact, all three of these examples faced serious early threats from cultural-conservative monarchists (or in Baathism’s case, from theocrats).

But Japanese and Spanish Fascism were a bit different; they were actually pro-monarchist, conservative in essence, aimed at reasserting the power relationships of premodern Spain and Japan. In fact, Spanish Fascism was mostly about Francisco Franco’s reactionary instincts.

After the monarchy fell and the Second Republic was proclaimed in 1931, Francisco Franco had rather better reason than Hitler ever did to regard the Communist-inspired left as a mortal threat to his country; what followed (unlike the opera-bouffe capitulation of the Italian monarchy or the relatively bloodless collapse of Germany’s Weimar Republic) was a wave of `revolutionary’ expropriations, massacres, and chaos. Obedient to what remained of central authority, Franco sat out the undeclared civil war for five years before invading from Morocco with Italian and German help. His belief that he was acting to restore a pre-1931 order of which he was the last legitimate representative appears to have been genuine — perhaps even justified.

The declared portion of the Spanish Civil War lasted from 1936 to 1939. It has passed into legend among Western leftists as a heroic struggle between the Communist-backed Republican government and Nazi-backed Franco, one that the good guys lost. The truth seems rather darker; the war was fought by two collections of squabbling, atrocity-prone factions, each backed by one of the two most evil totalitarianisms in human history. They intrigued, massacred, wrecked, and looted fairly indiscriminately until one side collapsed from exhaustion. Franco was the last man left standing.

Franco had no aspirations to conquer or reinvent the world, or to found a dynasty. His greatest achievements were the things that didn’t happen. He prevented the Stalinist coup that would certainly have followed a Republican victory. He then kept Spain out of World War II against heavy German pressure to join the Axis.

Domestically, Spain could have suffered worse. Spanish Fascism was quite brutal against its direct political enemies, but never developed the expansionism or racist doctrines of the Italian or German model. In fact it had almost no ideology beyond freezing the power relationships of pre-Republican Spain in place. Thus, there were no massacres even remotely comparable to Hussein’s nerve-gassing of Kurds and Shi’as, Hitler’s Final Solution or Stalin’s far bloodier though less-known liquidation of the kulaks.

Francisco Franco remained a monarchist all his life, and named the heir to the Spanish throne as his successor. The later `fascist’ regimes of South and Central America resembled the Francoite, conservative model more than they did the Italo/German/Baathist revolutionary variety.

One historian put it well. “Hitler was a fascist pretending to be a conservative. Franco was a conservative pretending to be a fascist.” (One might add that Hussein was not really pretending to be about anything but the raw will to power; perhaps this is progress, of a sort.) On those terms Franco was rather successful. If he had died shortly after WWII, rather than lingering for thirty years while presiding over an increasingly stultified and backward Spain, he might even have been remembered as a hero of his country.

As it is, the best that can be said is that (unlike the truly major tyrants of his day, or Saddam Hussein in ours) Franco was not a particularly evil man, and was probably less bad for his country than his opponents would have been.

Blogspot comments

Dec 17

Some Christmas cheer

Some deeply warped Christmas humor here. Now,
this Santa might get me the presents I really want. Like,
say, a custom-tuned Baer .45 semiauto. Or Liv Tyler, fetchingly
attired in nothing but a pair of Arwen ears.

I actually did get a really peculiar Christmas present from a
stranger this morning. It was a gourmet frying pan with a
Tux-the-Linux-Penguin on it. And
an earnest cover letter explaining that it is #8 of a special limited
edition of 1024. Made by a German cookware company that has
gotten good service out of Linux and decided to commemorate
the fact.

Odd…

Dec 04

Sneering at Courage

One of the overdue lessons of 9/11 is that we can’t afford to sneer
at physical courage any more. The willingness of New York firemen,
Special Forces troops in Afghanistan, and the passengers of Flight 93
to put their lives on the line has given us most of the bright spots
we’ve had in the war against terror. We are learning, once again,
that all that stands between us and the night of barbarism is the
willingness of men to both risk their lives and take the awful
responsibility of using lethal force in our defense.

(And, usually, it is men who do the risking. I mean no disrespect
to our sisters; the kind of courage I am talking about is not a
male monopoly. But it has been predominantly the job of
men in every human culture since Olduvai Gorge, and still is today.
I’ll return to this point later in the essay.)

The rediscovery of courage visibly upsets a large class of bien
pensants
in our culture. Many of the elite molders of opinion in
the U.S and Europe do not like or trust physical courage in men. They
have spent decades training us to consider it regressive, consigning
it to fantasy, sneering at it — trying to persuade us all that
it’s at best an adolescent or brute virtue, perhaps even a vice.

If this seems too strong an indictment, consider carefully all the
connotations of the phrase “testosterone poisoning”. Ask yourself
when you first heard it, and where, and from whom. Then ask yourself
if you have slid into the habit of writing off as bluster any man’s
declaration that he is willing to risk his life, willing to fight for
what he believes in. When some ordinary man says he is willing to
take on the likes of the 9/11 hijackers or the D.C. sniper — or
even ordinary criminals — then, do you praise his determination
or consign him, too, to the category of blowhard or barbarian?

Like all virtues, courage thrives on social support. If we mock
our would-be warriors, writing them off as brutes or rednecks or
simpletons, we’ll find courage in short supply when we need it. If we
make the more subtle error of sponsoring courage only in uniformed men
— cops, soldiers, firemen — we’ll find that we have
trouble growing the quantity or quality we need in a crisis. Worse:
our brave men could come to see themselves apart from us, distrusted
and despised by the very people for whom they risk their lives, and
entitled to take their due when it is not freely given. More than one
culture that made that mistake has fallen to its own guardians.

Before 9/11, we were in serious danger of forgetting that courage
is a functional virtue in ordinary men. But Todd Beamer reminded us of
that — and now, awkwardly, we are rediscovering some of the
forms that humans have always used to nurture and reward male courage.
Remember that rash of news stories from New York about Upper-East-Side
socialites cruising firemen’s bars? Biology tells; medals and
tickertape parades and bounties have their place, but the hero’s most
natural and strongest reward is willing women.

Manifestations like this absolutely appall and disgust the sort of
people who think that the destruction of the World Trade Center was a
judgment on American sins — the multiculturalists, the
postmodernists, the transnational progressives, radical feminists, the
academic political-correctness brigades, the Bush-is-a-moron elitists,
and the plain old-fashioned loony left. By and large these people
never liked or trusted physical courage, and it’s worth taking a hard
look at why that is.

Feminists distrust physical courage because it’s a male virtue.
Women can and do have it, but it is gender-linked to masculinity just
as surely as nurturance is to femininity. This has always been
understood even in cultures like the Scythians, Teutons, Japanese, and
modern Israelis that successfully made places for women warriors. If
one’s world-view is organized around distrusting or despising men and
maleness, male courage is threatening and social support for it is
regressive.

For multi-culti and po-mo types, male physical courage is suspect
because it’s psychologically linked to moral certitude — and
moral certitude is a bad thing, nigh-indistinguishable from
intolerance and bigotry. Men who believe in anything enough to fight
for it are automatically suspect of would-be imperialism —
unless, of course, they’re tribesmen or Third Worlders, in which
case fanaticism is a praiseworthy sign of authenticity.

Elite opinions about male physical courage have also had more
than a touch of class warfare about them. Every upper crust
that is not directly a military caste — including our own
— tends to dismiss physical courage as a trait of peasants
and proles and the lesser orders, acceptable only when they
know their place is to be guided by their betters.

For transnational progressives and the left in general, male
physical courage is a problem in the lesser orders because it’s an
individualizing virtue, one that leads to wrong-think about
autonomy and the proper limits of social power. A man who develops in
himself the grit that it takes to face death and stare it down is less
likely to behave meekly towards bureaucrats, meddlers, and taxmen who
have not passed that same test. Brave men who have learned to fight
for their own concept of virtue — independently of
social approval or the party line — are especially threatening
to any sort of collectivist.

The multiculturalist’s and the collectivist’s suspicions are
backhanded tributes to an important fact. There is a continuity among
self-respect, physical courage and ethical/moral courage. These virtues are
the soil of individualism, and are found at their strongest only in
individualists. They do not flourish in isolation from one another.
They reinforce each other, and the social measures we take to reward
any of them tend to increase all of them.

After 1945 we tried to separate these virtues. We tried to teach
boys moral steadfastness while also telling them that civilized men
are expected to avoid confrontation and leave coping with danger to
specialists. We preached the virtue of `self-esteem’ to adolescents
while gradually abolishing almost all the challenges and ordeals that
might have enabled them to acquire genuine self-respect. Meanwhile,
our entertainments increasingly turned on anti-heroes or celebrated
physical bravery of a completely mindless and morally vacuous kind.
We taught individualism without responsibility, denying the unpleasant
truth that freedom has to be earned and kept with struggle and blood.
And we denied the legitimacy of self-defense.

Rudyard Kipling would have known better, and Robert Heinlein did.
But they were written off as reactionaries — and many of us were
foolish enough to be surprised when the new thinking produced a bumper
crop of brutes, narcissists, overgrown boys, and bewildered hollow men
apt to fold under pressure. We became, in Jeffrey Snyder’s famous
diagnosis, a nation of cowards;
the cost could be measured in the explosion in crime
rates after 1960, a phenomenon primarily of males between 15 and 35.

But this was a cost which, during the long chill of the Cold War,
we could afford. Such conflicts as there were stayed far away from
the home country, warfare was a game between nations, and nuclear
weapons seemed to make individual bravery irrelevant. So it remained
until al-Qaeda and the men of Flight 93 reminded us otherwise.

Now we have need of courage. Al-Qaeda’s war has come to us. There
is a geopolitical aspect to it, and one of the fronts we must pursue
is to smash state sponsors of terrorism. But this war is not
primarily a chess-game between nations — it’s a street-level
brawl in which the attackers are individuals and small terrorist cells
often having no connection to the leadership of groups like al-Qaeda
other than by sympathy of ideas.

Defense against this kind of war will have to be decentralized and
citizen-centered, because the military and police simply cannot be
everywhere that terrorists might strike. John F. Kennedy said this during
the Cold War, but it is far truer now:

“Today, we need a nation of Minutemen, citizens who are not only prepared to
take arms, but citizens who regard the preservation of freedom as the basic
purpose of their daily life and who are willing to consciously work and
sacrifice for that freedom.”

The linked virtues of physical courage, moral courage, and
self-respect are even more essential to a Minuteman’s readiness than
his weapons. So the next time you see a man claim the role
of defender, don’t sneer — cheer. Don’t write him off with some
pseudo-profound crack about macho idiocy, support him. He’s trying to
tool up for the job two million years of evolution designed him for,
fighting off predators so the women and children can sleep safe.

Whether he’s in uniform or not, young or old, fit or flabby
— we need that courage now.

Blogspot comments

Dec 04

Social Security and the Demography Bomb

A friend of mine, Russ Cage aka Engineer-Poet, comments on my essay
Demographics and the Dustbin of History:

People used to have children to take care of them in their old age.
Social Security took care of this by socializing the benefits, but all
of the costs still fell to individuals; worse, taking time out of the
workforce to raise kids reduces your Social Security benefits.
Rational actors will stop having kids to have a good retirement.

He’s right, and this applies to all public pension schemes.
It’s a very simple, very powerful mechanism. When you subsidize old
age, you depress birthrates. The more you subsidize old age, the more
you depress birthrates. Eventually…crash!
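The arithmetic behind that "crash" is easy to sketch. Here is a toy Python model — an illustration of the compounding only, not a demographic forecast; the fertility figures are round assumptions for the example:

```python
# Toy cohort model: below-replacement fertility shrinks each successive
# working-age cohort relative to the retirees it must support.
# Illustrative arithmetic only, not a demographic forecast.

REPLACEMENT = 2.1  # children per woman needed to hold cohort size steady

def cohort_fraction(fertility, generations):
    """Fraction of the original working-age cohort remaining after a
    given number of generations at a constant fertility rate."""
    return (fertility / REPLACEMENT) ** generations

def workers_per_retiree(fertility):
    """In a simple two-cohort overlap, each retiree cohort is supported
    by a worker cohort scaled down by fertility / REPLACEMENT."""
    return fertility / REPLACEMENT

# At an assumed Western-European-style fertility of ~1.4 children per woman:
print(f"workers per retiree: {workers_per_retiree(1.4):.2f}")
for g in (1, 2, 3):
    print(f"workforce after {g} generation(s): "
          f"{cohort_fraction(1.4, g):.0%} of original")
```

At fertility 1.4 each retiree is supported by only about two-thirds of a worker, and the tax base shrinks by a third every generation — which is the pension-scheme death spiral in miniature.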

It’s not just Euro-socialism that’s going to get trashed by
demographics, it’s the U.S.’s own welfare state. It might take longer
here because our population is still rising, but it will happen.

Now that the effects of income transfer on demography are no longer
masked by the Long Boom, this is going to become one of the principal
constraints on public policy.

Blogspot comments

Dec 04

Demographics and the Dustbin of History

Karl Zinsmeister’s essay Old and In The Way presents a startling — but all too plausible — forecast of Europe’s future. To the now-familiar evidence of European insularity, reflexive anti-Americanism, muddle, and geopolitical impotence, Zinsmeister adds a hard look at European demographic trends.

What Zinsmeister sees coming is not pretty. European populations are not having children at replacement levels. The population of Europe is headed for collapse, and for an age profile heavily skewed towards older people and retirees. Europe’s Gross Domestic Product per capita (roughly, the amount of wealth the average person produces) is already only two-thirds of America’s, and the ratio is going to fall, not rise.

Meanwhile, the U.S. population continues to rise — and the U.S. economy is growing three times as fast as Europe’s even though the U.S. is in the middle of a bust! Since 1970 the U.S. has been more than ten times as successful at creating new jobs. But most importantly, the U.S.’s population is still growing even as Europe’s is shrinking — which means the gap in population, productivity, and economic output is going to increase. By 2030, the U.S. will have a larger population than all of Europe — and the median age in the U.S. will be 30, but the median age in Europe will be over 50.

Steven den Beste is probably correct to diagnose the steady weakening of Europe as the underlying cause of the increasing rift between U.S. and European elites noted in Robert Kagan’s essay Power and Weakness (also recommended reading). But Kagan (focusing on diplomacy and geopolitics), Zinsmeister (focusing on demographic and economic decline) and den Beste (focusing on the lassitude of Europe’s technology sector and the resulting brain drain to the U.S.) all miss something more fundamental.

Zinsmeister comes near it when he writes “Europe’s disinterest in childbearing is a crisis of confidence and optimism.” Europeans are demonstrating in their behavior that they don’t believe the future will be good for children.

Back to that in a bit, but first a look at what the demographic collapse will mean for European domestic politics. Zinsmeister makes the following pertinent observations:

  1. Percentage of GDP represented by government spending is also diverging. In the U.S. it is roughly 19% and falling. In the EU countries it is 30-40% and rising.
  2. The ratio of state clients to wealth-generating workers is also rising. By 2030, Zinsmeister notes, every single worker in the EU will have his own elderly person 65 or older to provide for through the public pension system.
  3. Chronic unemployment is at 9-10% (twice the U.S.’s) and rising.
  4. Long-term unemployment and drone status are far more common in Europe than here. In Europe, 40% of the unemployed have been out of work for over a year. In the U.S. the corresponding figure is 6%.

Zinsmeister doesn’t state the obvious conclusion; Euro-socialism is unsustainable. It’s headed for the dustbin of history.

Forget ideological collapse; the numbers don’t work. The statistics above actually understate the magnitude of the problem, because as more and more of the population become wards of the state, a larger percentage of the able will be occupied simply with running the income-redistribution system. The rules they make will depress per-capita productivity further (for a recent example see France’s mandated 35-hour workweek).

Unless several of the key trends undergo a rapid and extreme reversal, rather soon (as in 20 years at the outside) there won’t be enough productive people left to keep the gears of the income-redistribution machine turning. Economic strains sufficient to destroy the political system will become apparent much sooner. We may be seeing the beginnings of the destruction now as Chancellor Schröder’s legitimacy evaporates in Germany, burned away by the dismal economic news.

We know what this future will probably look like, because we’ve seen the same dismal combination of economic/demographic collapse play out in Russia in the 1980s and 1990s. Progressively more impotent governments losing their popular legitimacy, increasing corruption, redistributionism sliding into gangsterism. Slow-motion collapse.

But there are worse possibilities that are quite plausible. The EU has two major advantages the Soviets did not have — a better tech and infrastructure base, and a functioning civil society (i.e., one in which wealth and information flow through a lot of legal grassroots connections and voluntary organizations). But they have one major disadvantage — large, angry, totally unassimilated immigrant populations that are reproducing faster than the natives. This is an especially severe problem in France, where housing developments in the ring zones around all the major cities have become places the police dare not go without heavy weapons.

We’ve already gotten a foretaste of what that might mean for European domestic politics. At its most benign, we get Pim Fortuyn in Holland. But Jörg Haider in Austria is a more ominous indicator, and Jean-Marie Le Pen’s startling success in the last French presidential elections was downright frightening. Far-right populism with a racialist/nativist/anti-Semitic tinge is on the rise, an inevitable consequence of the demographic collapse of native populations.

As if that isn’t bad enough, al-Qaeda and other Islamist organizations are suspected on strong evidence to be recruiting heavily among the North African, Turkish, and Levantine populations that now predominate in European immigrant quarters. The legions of rootless, causeless, unemployed and angry young men among Muslim immigrants may in fact actually be on their way to reifying the worst nightmares of native-European racists.

One way or another, the cozy Euro-socialist welfare state is doomed by the demographic collapse. Best case: it will grind to a shambolic halt as the ratio of worker bees to drones goes below critical. Worst case: it will blow itself apart in a welter of sectarian, ethnic, and class violence. Watch the frequency trend curve of synagogue-trashings and anti-Jewish hate crimes; that’s bound to be a leading indicator.

The only possible way for Europe to avoid one of these fates would be for it to reverse either the decline in per-capita productivity or its population decline. And reversing the per-capita productivity decline would only be a temporary fix unless it could be made to rise faster than the drone-to-worker ratio — forever.
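That "forever" clause can be made concrete with a little arithmetic. The sketch below (all figures are round numbers assumed for illustration, not Zinsmeister's data) computes the annual per-worker productivity growth needed merely to hold output per person constant while the workers' share of the population falls:

```python
def required_productivity_growth(worker_share_now, worker_share_later, years):
    """Annual per-worker productivity growth rate needed to keep output
    per person constant while the workers' share of population falls.

    Output per person = (output per worker) * (workers / population),
    so per-worker productivity must grow by exactly the inverse of the
    decline in the workers' population share."""
    return (worker_share_now / worker_share_later) ** (1.0 / years) - 1.0

# Illustrative assumption: workers fall from 60% to 45% of the
# population over 25 years.
rate = required_productivity_growth(0.60, 0.45, 25)
print(f"required annual productivity growth: {rate:.2%}")
```

Roughly one percent a year of extra productivity growth, compounding indefinitely, just to stand still — and the required rate climbs as the dependency ratio keeps worsening, which is why a one-time productivity gain can only postpone the reckoning.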

Was this foredoomed? Can it be that all national populations lose their will to have children when they get sufficiently comfortable? Do economies inevitably grow old and sclerotic? Is Europe simply aging into the end stages of a natural civilizational senescence?

That theory would be appealing to a lot of big-picture historians, and to religious anti-materialists like al-Qaeda. And if we didn’t have the U.S.’s counterexample to look at, we might be tempted to conclude that this trap is bound to claim any industrial society past a certain stage of development.

But that won’t wash. The U.S. is wealthier, both in aggregate and per-capita, than Europe. A pro-market political party in Sweden recently pointed out that by American standards of purchasing power, most Swedes now live in what U.S. citizens would consider poverty. If wealth caused decline, the U.S. would be further down the tubes than the EU right now. But we’re still growing.

A clue to the real problem lies in the differing degrees to which social stability depends on income transfer. In the U.S., redistributionism is on the decline; we abolished federal welfare nearly a decade ago, national health insurance was defeated, and new entitlements are an increasingly tough political sell to a population that has broadly bought into conservative arguments against them. In fact, one of the major disputes everyone knows won’t be avoidable much longer is over privatizing Social Security — and opponents are on the defensive.

In Europe, on the other hand, merely failing to raise state pensions on schedule can cause nationwide riots. The dependent population there is much larger, much longer-term, and has much stronger claims on the other players in the political system. The 5%/10% difference in structural unemployment — and, even more, the 6%/40% difference in permanent unemployment — tells the story.

So what happened?

Essentially, Euro-socialism told the people that the State would buy as much poverty and dependency as they cared to produce. Then it made wealth creation difficult by keeping capital expensive, business formation difficult, and labor markets rigid and regulated. Finally, it taxed the bejesus out of the people who stayed off the dole and made it through the redistributionist rat-maze, and used the proceeds to buy more poverty and alienation.

Europeans responded to this set of incentives by not having children. This isn’t surprising. The same thing happened in Soviet Russia, much sooner. There’s a reason Stalin handed out medals to women who raised big families.

Human birth rates rise under two circumstances. One is when people think they need to have a lot of kids for any of them to survive. The other is when human beings think their children will have it better than they do. (The reasons for this pattern should be obvious; if they aren’t, go read about evolutionary biology until you get it.)

Europe’s experiment with redistributionism has been running for about a hundred and fifty years now (the beginnings of the modern welfare state date to Prussian state-pension schemes in the 1840s). Until recently, it was sustained by the long-term population and productivity boom that followed the Industrial Revolution. There were always more employed young people than old people and unemployed people and sick people and indigents, so subsidizing the latter was economically possible.

Until fairly recently, Euro-socialist governments couldn’t suck wealth out of the productive economy and into the redistribution network fast enough to counter the effects of the long boom. People’s estimates of their children’s prospects kept improving, and they kept breeding. In France they now call the late end of that period les trente glorieuses, the thirty glorious years from 1945 to 1975. But as the productivity gains from industrialization tailed off, the demographic collapse began, not just in France but Europe-wide.

Meanwhile, the U.S. was not only rejecting socialism, but domestic politics actually moved away from redistributionism and economic intervention after Nixon’s 1971 wage/price control experiment failed. The U.S., famously, had its period of “malaise” in the 1970s after the oil-price shock ended our trente glorieuses — but while in Europe the socialists consolidated their grip on public thinking during those years, our “democratic socialists” didn’t — and never recovered from Ronald Reagan’s two-term presidency after 1980.

The fall of the Soviet Union happened fifteen years after the critical branch point. Until then, Westerners had no way to know that the Soviets, too, had been in demographic decline for some time. Communist myth successfully portrayed the Soviet Union as an industrial and military powerhouse, but the reality was a hollow shell with a failing population — a third-world pesthole with a space program. Had that been clearer thirty years sooner, perhaps Europe might have avoided the trap.

Now the millennium has turned and it looks like the experiment will finally have to end. It won’t be philosophy or rhetoric or the march of armies that kills it, but rather the accumulated poisons of redistributionism necrotizing not just the economy but the demographics of Europe. Euro-socialism, in a quite Marxian turn of events, will have been destroyed by its own internal contradictions.

Blogspot comments

Nov 28

Today’s treason of the intellectuals

The longest-term stakes in the war against terror are not just human lives, but whether Western civilization will surrender to fundamentalist Islam and shari’a law. More generally, the overt confrontation between Western civilization and Islamist barbarism that began on September 11th of 2001 has also made overt a fault line in Western civilization itself — a fault line that divides the intellectual defenders of our civilization from intellectuals whose desire is to surrender it to political or religious absolutism.

This fault line was clearly limned in Julien Benda’s 1927 essay La trahison des clercs: English “The treason of the intellectuals”. I couldn’t find a copy of Benda’s essay on the Web, but there is an excellent commentary on it that repays reading. Ignore the reflexive endorsement of religious faith at the end; the source was a conservative Catholic magazine in which such gestures are obligatory. Benda’s message, untainted by Catholic or Christian partisanship, is even more resonant today than it was in 1927.

The first of the totalitarian genocides (the Soviet-engineered Ukrainian famine of 1921-1922, which killed around two million people) had already taken place. Hitler’s “Final Solution” was about fifteen years in the future. Neither atrocity became general knowledge until later, but Benda in 1927 would not have been surprised; he foresaw the horrors that would result when intellectuals abetted the rise of the vast tyrannizing ideologies of the 20th century.

Changes in the transport, communications, and weapons technologies of the 20th century made the death camps and the gulags possible. But it was currents in human thought that made them fact — ideas that both motivated and rationalized the thuggery of the Hitlers and Stalins of the world.

Benda indicted the intellectuals of his time for abandoning the program of the Enlightenment — abdicating the search for disinterested truth and universal human values. Benda charged that in
abandoning universalism in favor of racism, classism, and political particularism, intellectuals were committing treason against the humanity that looked to them for guidance — prostituting themselves to creeds that would do great ill.

And what are the sequelae of this treason? Most diagnostically, mass murder and genocide. Its lesser consequences are subject to debate, equivocation, interpretation — but when we contemplate the atrocities at the Katyn Forest or the Sari nightclub there can be no doubt that we confront radical evils. Nor can we disregard the report of the perpetrators that those evils were motivated by ideologies, nor that the ideologies were shaped and enabled and apologized for by identifiable factions among intellectuals in the West.

An intellectual commits treason against humanity when he or she propagandizes for ideas which lend themselves to the use of tyrants and terrorists.

In Benda’s time, the principal problem was what I shall call “treason of the first kind” or revolutionary absolutism: intellectuals signing on to a transformative revolutionary ideology in the belief that if the right people just got enough political power, they could fix everything that was wrong with the world. The “right people”, of course, would be the intellectuals themselves — or, at any rate, politicians who would consent to be guided by the intellectuals. If a few kulaks or Jews had to die for the revolution, well, the greater good and all that…the important thing was that violence wielded by Smart People with the Correct Ideas would eventually make things right.

The Nazi version of this disease was essentially wiped out by WWII. But the most deadly and persistent form of treason of the first kind, which both gave birth to intellectual Nazism and long outlived it, was intellectual Marxism. (It bears remembering that ‘Nazi’ stood for “National Socialist”, and that before the 1934 purge of the Strasserites the Nazi party was explicitly socialist in ideology.)

The fall of the Soviet Union in 1991 broke the back of intellectual Marxism. It may be that the great slaughters of the 20th century have had at least one good effect, in teaching the West a lesson about the perils of revolutionary absolutism written in letters of human blood too large for even the most naive intellectual idealist to ignore. Treason of the first kind is no longer common.

But Benda also indicted what I shall call “treason of the second kind”, or revolutionary relativism — the position that there are no moral claims or universal values that can trump the particularisms of particular ethnicities, political movements, or religions. In particular, relativists maintain that the ideas of reason and human rights that emerged from the Enlightenment have no stronger claim on us than tribal prejudices.

Today, the leading form of treason of the second kind is postmodernism — the ideology that all value systems are equivalent, merely the instrumental creations of people who seek power and other unworthy ends. Thus, according to the postmodernists, when fanatical Islamists murder 3,000 people and the West makes war against the murderers and their accomplices, there is nothing to choose between these actions. There is only struggle between contending agendas. The very idea that there might be a universal ethical standard by which one is “better” than the other is pooh-poohed as retrogressive, as evidence that one is a paid-up member of the Party of Dead White Males (a hegemonic conspiracy more malign than any terrorist organization).

Treason of the first kind wants everyone to sign up for the violence of redemption (everyone, that is, other than the Jews and capitalists and individualists that have been declared un-persons in advance). Treason of the second kind is subtler; it denounces our will to fight terrorists and tyrants, telling us we are no better than they, and even that the atrocities they commit against us are no more than requital for our past sins.

Marxism may be dead, but revolutionary absolutism is not; it flourishes in the Third World. Since 9/11, the West has faced an Islamo-fascist axis formed by al-Qaeda, Palestinian groups including the Palestinian Authority and Hamas, the rogue state of Iraq, and the theocratic government of Iran. These groups do not have unitary leadership, and their objectives are not identical; notably, the PA
and Iraq are secularist, while al-Qaeda and Hamas and the Iranians and the Taliban are theocrats. Iran is Shi’a Islamic; the other theocratic groups are Sunni. But all these groups exchange intelligence and weapons, and they sometimes loan each other personnel. They hate America and the West, and they have used terror against us in an undeclared war that goes back to the early 1970s. The objectives of these groups, whether they are secular Arab nationalism or Jihad, require killing a lot of people. Especially a lot of Westerners.

Today’s treason of the intellectuals consists of equating suicide bombings deliberately targeting Israeli women and children with Israeli military operations so restrained that Palestinian children throw rocks at Israeli soldiers without fearing their guns. Today’s treason of the intellectuals tells us that because the U.S. occasionally propped up allied but corrupt governments during the
Cold War, we have no right to object to airliners being flown into the World Trade Center. Today’s treason of the intellectuals consists of telling us we should do nothing but stand by, wringing our hands, while at least one of the groups in the Islamo-fascist axis acquires nuclear weapons with which terrorists could repeat their mass murders in New York City and Bali on an immensely larger scale.

Behind both kinds of treason there lurks an ugly fact: second-rate intellectuals, feeling themselves powerless, tend to worship power. The Marxist intellectuals who shilled for Stalin and the postmodernists who shill for Osama bin Laden are one of a kind — they identify with a tyrant’s or terrorist’s vision of transforming the world through violence because they know they are incapable of making any difference themselves. This is why you find academic apologists disproportionately in the humanities departments and the soft sciences; physicists and engineers and the like have more constructive ways of engaging the world.

It may be that 9/11 will discredit revolutionary relativism as thoroughly as the history of the Nazis and Soviets discredited revolutionary absolutism. There are hopeful signs; the postmodernists and multiculturalists have a lot more trouble justifying their treason to non-intellectuals when its consequences include the agonizing deaths of thousands caught on videotape.

It’s not a game anymore. Ideas have consequences; postmodernism and multiculturalism are no longer just instruments in the West’s intramural games of one-upmanship. They have become an apologetic for barbarians who, quite literally, want to kill or enslave us all. Those ideas — and the people who promulgate them — should be judged accordingly.

Nov 26

When to shoot a policeman

A policeman was
premeditatedly shot dead today.

Now, I don’t regard shooting a policeman as the worst possible
crime — indeed, I can easily imagine circumstances under which I
would do it myself. If he were committing illegal violence — or
even officially legal violence during the enforcement of an unjust
law. Supposing a policeman were criminally threatening someone’s
life, say. Or suppose that he had been ordered under an act of
government to round up all the Jews in the neighborhood, or confiscate
all the pornography or computers or guns. Under those circumstances,
it would be not merely my right but my duty to shoot the
policeman.

But this policeman was harming nobody. He was shot down in
cold blood as he was refueling his cruiser. His murderer subsequently
announced the act on a public website.

The murderer said he was “protesting police-state tactics”. If
that were his goal, however, then the correct and appropriate
expression of it would have been to kill a BATF thug in the process of
invading his home, or an airport security screener, or some other
person who was actively and at the time of the protest implementing
police-state tactics.

Killings of policemen in those circumstances are a defensible
social good, pour encourager les autres. It is right and proper
that the police and military should fear for their lives when they
trespass on the liberty of honest citizens; that is part of the
balance of power that maintains a free society, and the very reason
our Constitution has a Second Amendment.

But this policeman was refueling his car. Nothing in the
shooter’s justification carried any suggestion that the shooter’s
civil rights had ever been violated by the victim, or that the
murderer had standing to act for any other individual person whose
rights had been violated by the victim. This killing was not
self-defense.

There are circumstances under which general warfare against the
police would be justified. In his indymedia post The
Declaration of a Renewed American Independence,
the shooter utters a scathing and (it must be said) largely
justified indictment of
police abuses. If the political system had broken down sufficiently
that there were no reasonable hope of rectifying those abuses, then I
would be among the first to cry havoc.

Under those circumstances, it would be my duty as a free human
being under the U.S. Constitution not merely to shoot individual
policemen, but to make revolutionary war on the police. As Abraham Lincoln
said, “This country, with its institutions, belongs to the people
who inhabit it. Whenever they shall grow weary of the existing
government, they can exercise their constitutional right of amending
it or their revolutionary right to dismember it or overthrow
it.”

But the United States of America has not yet reached the point at
which the political mechanisms for the defense of freedom have broken
down. This judgment is not a matter of theory but one of practice.
There are not yet police at our door with legal orders to round up the
Jews, or confiscate pornography or computers or guns.

Civil society has not yet been fatally vitiated by tyranny. Under
these circumstances, the only possible reaction is to condemn. This
was a crime. This was murder. And I would cheerfully shoot not the
policeman but the murderer dead. (There would be no question
of guilt or due process, since the murderer publicly boasted of his
crime.)

But that this shooter was wrong does not mean that
everyone who shoots a policeman in the future will also be wrong. A
single Andrew McCrae, at this time, is a criminal and should be
condemned as a criminal. But his case against the police and the
system behind them is not without merit. Therefore let him be a
warning as well.

Blogspot comments

Nov 21

What a responsible American Left would look like

The congressional Democrats have made Nancy Pelosi their leader.
Whether or not this is conscious strategy, it means they’re going to
run to the left. And very likely get slaughtered in 2004.

It’s truly odd how self-destructive the American Left has become.
They’re like that famous line about the Palestinians, never missing an
opportunity to miss an opportunity. And there are so many
opportunities! So many good things Republican conservatives can
never do because they’re captive to their voter base.

Herewith, then, my humble offering of a program for the American
Left. This is not sarcasm and I’m not trying to score points here,
these are issues where the Left could take a stand and gain back some
of the moral capital it has squandered so recklessly since the
days of the civil rights movement.

  • Support war on Iraq, but insist on nation-building
    afterwards.
    Saddam Hussein is a genocidal fascist tyrant, exactly the
    sort of monster the Left ought to be against. Support deposing him
    — then be the conscience of the U.S., insisting on our duty to
    help rebuild Iraq as a free country afterwards. Push us to win the
    peace, not just the war.
  • Derail the Homeland Security Act and other intrusions on
    civil liberties.
    The Left hates John Ashcroft. So why don’t
    we see more Left opposition to the law-enforcement power grab that’s
    going on right now, or to the gutting of the Freedom of Information
    Act? Many Americans would respond well to this.
  • Stop the War on (Some) Drugs. This is a civil-rights
    issue. Blacks and other minorities are disproportionately victims
    both of drug prosecution and of the criminal violence created by drug
    laws. It’s a civil-liberties issue for many reasons too obvious to
    need listing — how can any self-respecting liberal countenance
    no-knock warrants and asset forfeiture? For too long the Left has
    gone along with conservative anti-drug hysteria out of a craven fear
    of being dismissed as a bunch of dope-loving ex-hippies. Time to
    stand up and be counted.
  • Support school vouchers. Another civil-rights issue
    — it’s precisely minorities and the poor who most need to escape
    the trap that the public-school system has become, and black parents
    know this. Yes, it will be hard to take on the teachers’ unions
    — but you’re in serious danger of losing the black vote over
    this issue, so switching would be not just the right thing but a
    way to shore up your base as well.
  • Speak up for science. Religious conservatives are up to a
    lot of anti-scientific mischief — banning stem-cell research,
    excising evolutionary theory from textbooks. Make a principled stand
    for science, secularism, and the Establishment Clause. Remind
    the world that the U.S. is not a Christian nation, and seek to have
    the tax exemption for religious organizations ended because it puts the
    U.S. government in the position of deciding what’s a religion and
    what is not.
  • Stop the RIAA/MPAA from trashing consumers’ fair-use rights.
    The Left claims to be on the side of consumers and against corporate
    power elites. So where was the Left when the DMCA passed? If the
    RIAA and MPAA have their way, personal computers will be crippled
    and consumers will go to jail for the `crime’ of copying DVDs they
    have bought for their personal use. Young people, who are trending
    conservative these days, care deeply about the RIAA attack on
    file sharing. Wouldn’t you like to have them back?

Blogspot comments

Nov 14

Conspiracy and prospiracy

One of the problems we face in the war against terror is that al-Qaeda is not quite a conspiracy in the traditional sense. It’s something else that is more difficult to characterize and target.

(I wrote what follows three years before 9/11.)

Political and occult conspiracy theories can make for good propaganda and excellent satire (vide Illuminatus! or any of half a dozen other examples). As guides to action, however, they are generally dangerously misleading.

Misleading, because they assume more capacity for large groups to keep secrets and maintain absolutely unitary conscious policies than human beings in groups actually seem to possess. The history of documented “conspiracies” and failed attempts at same is very revealing in this regard — above a certain fairly small size, somebody always blows the gaff. This is why successful terrorist organizations are invariably quite small.

Dangerously misleading because conspiracy theories, offering the easy drama of a small group of conscious villains, distract our attention from a subtler but much more pervasive phenomenon — one I shall label the “prospiracy”.

What distinguishes prospiracies from conspiracies is that the members don’t necessarily know they are members, nor are they fully conscious of what binds them together. Prospiracies are not created through oaths sworn by guttering torchlight, but by shared ideology or institutional culture. In many cases, members accept the prospiracy’s goals and values without thinking through their consequences as fully as they might if the process of joining were formal and initiatory.

What makes a prospiracy like a conspiracy and distinguishes it from a mere subcultural group? The presence of a “secret doctrine” or shared goals which its core members admit among themselves but not to perceived outsiders; commonly, a goal which is stronger than the publicly declared purpose of the group, or irrelevant to that declared purpose but associated with it in some contingent (usually historical) way.

On the other hand, a prospiracy is unlike a conspiracy in that it lacks well-defined lines of authority. Its leaders wield influence over the other members, but seldom actual power. It also lacks a clear-cut distinction between “ins” and “outs”.

Prospiracy scales better than conspiracy, and thus can be far more dangerous. Because anyone can join simply by buying the “secret” doctrine, people frequently recruit themselves. Because the “secret” isn’t written on stone tablets in an inner sanctum, it’s totally deniable. In fact, members sometimes deny it to themselves (not that that ultimately matters). What keeps a prospiracy together is not conscious commitment but the memetic logic of its positions.

As an exercise (and to avoid any appearance of axe-grinding), I’ll leave the reader to apply this model for himself or herself. There are plenty of juicy examples out there. I’m a “member” of at least two of them myself.

Blogspot comments

Nov 13

The Charms and Terrors of Military SF

I took some heat recently for describing some of Jerry Pournelle’s
SF as “conservative/militarist power fantasies”. Pournelle uttered a
rather sniffy comment about this on his blog; the only substance I
could extract from it was that Pournelle thought his lifelong friend
Robert Heinlein was caught between a developing libertarian philosophy
and his patriotic instincts. I can hardly argue that point, since I
completely agree with it; that tension is a central issue in almost
everything Heinlein ever wrote.

The differences between Heinlein’s and Pournelle’s military SF are
not trivial — they are both esthetically and morally important.
More generally, the soldiers in military SF express a wide range
of different theories about the relationship between soldier,
society, and citizen. These theories reward some examination.

First, let’s consider representative examples: Jerry Pournelle’s
novels of Falkenberg’s Legion, on the one hand, and Heinlein’s
Starship Troopers on the other.

The difference between Heinlein and Pournelle starts with the fact
that Pournelle could write about a cold-blooded mass murder of human
beings by human beings, performed in the name of political order,
approvingly — and did.

But the massacre was only possible because Falkenberg’s Legion and
Heinlein’s Mobile Infantry have very different relationships with the
society around them. Heinlein’s troops are integrated with the society
in which they live. They study history and moral philosophy; they are
citizen-soldiers. Johnnie Rico has doubts, hesitations, humanity.
One can’t imagine giving him orders to open fire on a stadium-full of
civilians as does Falkenberg.

Pournelle’s soldiers, on the other hand, have no society but their
unit and no moral direction other than that of the men on horseback
who lead them. Falkenberg is a perfect embodiment of military
Führerprinzip, remote even from his own men, a creepy and
opaque character who is not successfully humanized by an implausible
romance near the end of the sequence. The Falkenberg books end with
his men elevating an emperor, Prince Lysander, whom we are all supposed
to trust because he is such a beau ideal. Two thousand years of
hard-won lessons about the maintenance of liberty are thrown away
like so much trash.

In fact, the underlying message here is pretty close to that of
classical fascism. It, too, responds to social decay with a cult of
the redeeming absolute leader. To be fair, the Falkenberg novels
probably do not depict Pournelle’s idea of an ideal society, but they
are hardly less damning if we consider them as a cautionary tale.
“Straighten up, kids, or the hero-soldiers in Nemourlon are going to
have to get medieval on your buttocks and install a Glorious Leader.”
Pournelle’s values are revealed by the way that he repeatedly posits
situations in which the truncheon of authority is the only solution.
All tyrants plead necessity.

Even so, Falkenberg’s men are paragons compared to the soldiers in
David Drake’s military fiction. In the Hammer’s Slammers
books and elsewhere we get violence with no politico-ethical nuances
attached to it at all. “Carnography” is the word for this stuff,
pure-quill violence porn that goes straight for the thalamus. There’s
boatloads of it out there, too; the Starfist sequence by
Sherman and Cragg is a recent example. Jim Baen sells a lot of it
(and, thankfully, uses the profits to subsidize reprinting the Golden
Age midlist).

The best-written military SF, on the other hand, tends to be more
like Heinlein’s — the fact that it addresses ethical questions
about organized violence (and tries to come up with answers one might
actually be more willing to live with than Pournelle’s quasi-fascism
or Drake’s brutal anomie) is part of its appeal. Often (as in
Heinlein’s Space Cadet or the early volumes in Lois
Bujold’s superb Miles Vorkosigan novels) such stories include elements
of bildungsroman.

The Sten sequence by Allan Cole and Chris Bunch was
both a loving tribute to and (in the end) a brutal deconstruction of
this kind of story. It’s full of the building-character-at-boot-camp
scenes that are a staple of the subgenre; Sten’s career is carefully
designed to rationalize as many of these as possible. But the Eternal
Emperor, originally a benevolent if quirky paternal figure who earns
Sten’s loyalty, goes genocidally mad. In the end, soldier Sten must
rebel against the system that made him what he is.

Cole & Bunch tip their hand in an afterword to the last book,
not that any reader with more perception than a brick could have
missed it. They wrote Sten to show where fascism leads
and as a protest against SF’s fascination with absolute power and the
simplifications of military life. Bujold winds up making the same
point in a subtler way; the temptations of power and arrogance are a
constant, soul-draining strain on Miles’s father Aral, and Miles
eventually destroys his own career through one of those
temptations.

Heinlein, a U.S. naval officer who loved the military and seems to
have always remembered his time at Annapolis as the best years of his
life, fully understood that the highest duty of a soldier may be not
merely to give his life but to reject all the claims of military
culture and loyalty. His elegiac The Long Watch makes
this point very clear. You’ll seek an equivalent in vain anywhere in
Pournelle or Drake or their many imitators — but consider
Bujold’s The Vor Game, in which Miles’s resistance to
General Metzov’s orders for a massacre is the pivotal moment at which
he becomes a man.

Bujold’s point is stronger because, unlike Ezra Dahlquist in
The Long Watch or the citizen-soldiers in Starship
Troopers, Miles is not a civilian serving a hitch. He is the
Emperor’s cousin, a member of a military caste; his place in
Barrayaran society is defined by the expectations of military
service. What gives his moment of decision its power is that in refusing
to commit an atrocity, he is not merely risking his life but giving up
his dreams.

Falkenberg and Admiral Lermontov have a dream, too. The difference
is that where Ezra Dahlquist and Miles Vorkosigan sacrifice themselves
for what they believe, Pournelle’s “heroes” sacrifice others. Miles’s
and Dahlquist’s futures are defined by refusal of an order to do evil,
Falkenberg’s by the slaughter of untermenschen.

This is a difference that makes a difference.

Blogspot comments