Cal Godot asked a good question in response to last weekend's post. When I use the terms "will" and "desire" in the context of ethical philosophy, am I using the terms interchangeably?
Yes, in a strict logical sense, I am using the terms interchangeably. Both "will" and "desire" point to the same thing, the same mysterious and omnipresent phenomenon of human (and animal) life. Yet there is a world of difference between will and desire.
The difference is not in the thing the words point to, but in the connotations captured along the way. The term "will" calls to mind three provocative philosophical texts that have become classics of the modern Western tradition: Arthur Schopenhauer's The World as Will and Presentation, Friedrich Nietzsche's The Will to Power and William James's essay collection The Will to Believe. Thus, "will" connotes European romanticism, existentialism and American pragmatism. It carries a muscular, vigorous, dramatic and conflict-ridden sense. It feels Napoleonic and Apollonian.
I was contemplating the mysteries of life while lying in a hammock in the relaxing Indiana backyard of my wife Caryn's family home when an answer I'd been seeking suddenly came to me.
As you know if you've been reading these Philosophy Weekend posts, I've been trying to put together a couple of puzzle pieces. First, a question I've asked repeatedly: what is the relationship of desire to our sense of self? Second, the most central question we've been debating since the inception of this blog series: to what extent does a group self, or a shared self, or a collective self, exist in everyday human life, and what does it mean to speak of a group self or collective self?
What suddenly occurred to me, as I lay on this hammock during the cooling early twilight hours of a stunningly hot summer day that kept us all indoors until the sun went down, is that these two separate puzzle pieces fit together. In fact, they fit together so well that I must have subconsciously understood the connection all along, even though I didn't realize it consciously.
What I now see is a formula (or, if you prefer, an argument) that answers my second question by answering the first. Here, in the simplest words I can manage, is the formula. Please note that I do not claim to have an absolute proof for either of the two premises numbered 1. and 2. below. It is up to each reader to weigh whether or not each premise is possible, plausible and compelling. I do believe that the two premises together form a ladder to a surprising conclusion (3.). So, here goes:
Okay, enough about what the US Supreme Court's historic ruling to uphold Obamacare means for the country. Let's talk about what our reaction told us about us. It sure was a strange reaction.
The decision was scheduled to be announced on Thursday morning, June 28, starting at 10 a.m. The first few sentences of the announcement appeared a few minutes later on the SCOTUSblog live stream, and as soon as those first sentences appeared, public hysteria ensued.
At least a full half hour of absolute hysteria followed, mostly caused by the fact that two cable news networks, CNN and Fox News, reported incorrectly that Obamacare had been overturned. The mistake was corrected quickly, but by then everybody was confused, and somehow the hysterical pitch of those first few minutes became the de facto tone of the news coverage for the entire day.
Even today, two days later, there is still an undertone of shock to all coverage and discussion of the Supreme Court verdict -- appreciative and relieved shock on the pro-Obamacare side, and indignant, infuriated shock on the anti-Obamacare side.
I wasn't shocked. I've been following the healthcare debate closely for years, and I know the bill had been carefully designed to make it through the Supreme Court (the Obama administration is not stupid, after all). I was amazed that so many allegedly knowledgeable people were predicting that the Supreme Court would find the ACA unconstitutional, because anybody who knows the history of the US Supreme Court knows how unusual a decision to overturn a law on such contestable grounds would have been. The Supreme Court (as Chief Justice John Roberts would finally explain in his preamble) doesn't have a history of challenging legislation at this level, and makes an effort to steer clear of partisan politics. The honor and reputation of the court would clearly have been at stake if it had made a dramatic decision to overturn such a major piece of legislation, and it was Chief Justice John Roberts's responsibility above all to defend the integrity of the Supreme Court by moving cautiously.
I don't think we really want to solve the puzzle of desire. What would we do afterwards? But the puzzle seems to be impossible to solve anyway, so we can enjoy pondering it forever. Here's a passage that caught my attention in "Variations on Desire", the opening piece in Siri Hustvedt's appealing new collection of essays, Living, Thinking, Looking.
There are three misconceptions about philosophy that I'd like to clear up today. The first is that it's an academic discipline, carried out by professors and graduate students in quarterlies and journals while the rest of us breathlessly await reports of their findings. Actually, many people like me who care about philosophy don't pay any attention to the back-and-forth of insular academic journals. If anything useful emerges from one of these journals, we figure, we'll eventually read about it on a blog. This doesn't happen, we notice, very often.
It is a fact that many professors call themselves philosophers, and that some top professors at top colleges consider themselves very important philosophers. But there is little evidence that any academic work is getting noticed in the real world, and philosophy is thoroughly concerned with the real world. It's a telling fact that the most popular American philosopher of the past hundred years, Ayn Rand (who I have been knocking myself out here to refute) was not an academic, and also that the most popular European philosopher of the past two hundred years, Friedrich Nietzsche (who I have been knocking myself out to promote) began his career as an academic, but only managed to find a reading audience after leaving the University of Basel and slowly going insane, at the same time writing the great non-academic works that made him a star.
I hope the philosophy professors in all the colleges of the world are doing a great job teaching their students (this is, after all, the primary responsibility of a college professor). But as for the original work they are doing, it's mostly fan-fiction as far as I can tell. Lots of words, very little impact.
Our search for a great living ethical philosopher has so far come up empty. We're only at the early stages of the search, having recently examined the work of Alain De Botton and Sam Harris, both of them young, trendy philosophers who swing with the TED set. But the preliminary results have been worrying.
We like the aesthetic approach of Alain De Botton, who has bold, fanciful ideas about many things. However, a close look shows that artistry may be all he has. De Botton has written books (mostly to polite applause) on moral philosophy, but he appears to be too much of a wonderer, and not enough of a fighter, to make his name in the muscular field of ethical debate. De Botton clearly likes to dress himself up in a philosopher's antique clothes, but one senses that it's all some kind of fetching show. A great philosopher? Not yet.
The young atheist firebrand Sam Harris is refreshingly pugnacious and argumentative, and he can turn a sharp phrase. But he's also unimaginative and unperceptive. He has lately specialized in "rational" Koran-bashing, with the upturned chin of a brave sophomore who isn't going to pussyfoot around this. Reading Sam Harris's angry diatribes about fundamentalist Islam, or about religion in general, one can't help feeling that one understands more about human nature than Sam Harris does, and that Sam Harris ought to be listening to all of us instead of the other way around. A great living philosopher? In his dreams.
After these bruising early results, I decided to get away from the hip young TED familiars and focus next on some heavier weights. I've been reading up on Richard Dawkins, Daniel Dennett, Derek Parfit, Slavoj Zizek and Sarah Sawyer, and hope to cover them all soon. However, two separate links to the work of a Virginia author named Jonathan Haidt appeared in two of my favorite blogs, Andrew Sullivan's Daily Dish and the Maverick Philosopher, and caught my attention. As far as appearances go, Haidt is another trendy young TED-ish ethics guy. However, he is showing signs of a wider mind. Even though he wears the same clothes:
I'm only two chapters into Haidt's new book, The Righteous Mind: Why Good People Are Divided by Politics and Religion, so I won't try to say much about him in my own words today. But many others have recently noticed Jonathan Haidt too, and I'd like to share a few pullquotes.
I can never guess which of my Philosophy Weekend blog posts will turn out to have legs.
Nine months ago, researching the origin of the word 'altruism', I learned that the term had been coined by Auguste Comte, a 19th Century French philosopher I had heard of but knew little about. Comte had developed a humane and optimistic system of political, ethical, scientific and metaphysical philosophy called Positivism, and during his lifetime Positivism was a gigantic sensation around the world. Intrigued, I wrote a blog post to wonder what it signified about our own culture that a major 19th Century philosopher with an ambitious platform of international peace, respect for human diversity and freethinking scientific rigor had fallen completely off the radar immediately after the disaster of the First World War.
What I didn't expect was that my blog post would start getting lots of hits from Google, and would become one of my more popular Philosophy Weekend posts (I do watch my traffic statistics, not to feed my ego but to discern trends in reader interest). Then, a mysterious late comment appeared on my Comte post that brought a big smile to my face. In response to my statement that Positivism was defunct today, this commenter posted a single-sentence reply:
Well, we are not quite that dead, are we?
This was accompanied by a link to Positivists.org, a well-designed website with an active Facebook page and a lively blog. The new web presence is apparently the work of an eager German philosopher named Olaf Simons who appears to have some clue how to use social media to spread a message. Positivism lives!
Last weekend's blog post "A Dollar's Worth of Morals" may turn out to be the most unpopular thing I've ever written on this site. Several typically friendly Litkicks commenters posted in no uncertain terms that they hated the piece ... including my own beloved wife.
Ironically, I didn't expect this reaction at all when I wrote the piece. I was only trying to tell an amusing story that had, I thought, a positive and good-natured moral.
Clearly, my writing skills failed me. As they say, "If three people tell you you're drunk, sit down." I now see what went wrong with this piece, and I understand why it left so many of my faithful readers cold. I'd like to explain where I went wrong, and maybe salvage some part of my original message, which got completely lost in the disaster.
The story I told is a simple one: as I was leaving work one day, a co-worker named John T. raced down the building lobby after me, causing a lot of public commotion, so he could give me back the dollar he'd borrowed earlier that day. He evidently lived in moral horror of ever forgetting a debt, and the point of my telling this story was that I found his priorities ridiculous, especially since he had recently disappointed me by failing to speak up to our boss about a workplace problem we were both concerned about.
I was trying to make a subtle and esoteric point, in a non-judgmental way, that we often put too much emphasis on petty issues involving small amounts of money or insignificant possessions, while failing to emphasize the things that really matter in our lives. I'm very interested in the psychology of wealth and possessiveness, and I meant this piece to reflect upon the same questions I'd brought up in earlier Philosophy Weekend posts like this one or this one.
But a strange thing happened between my conception of the story and my telling of it. I thought I was writing in an amused and jokey voice, but somehow a vein of hidden anger became exposed, and the tone of my story became shrill. I began accusing John T. of following a shallow and legalistic code of ethics, and went off on a strange half-paragraph rant about how he had betrayed our friendship. This harsh stuff did not match the intended warm tone of my blog post at all, and I ended up making readers feel sorry for poor John T., who I was beating up mercilessly for the very minor crime of paying me back a dollar.
Years ago, when I was working for a small litigation software company in New York City, I was leaving the office one day when I thought I heard someone shout my name from far away. I stopped in the building lobby and looked around, but I didn't see anyone and couldn't imagine why somebody would be calling for me. So I continued on my way and was just about to reach the building's front door when I heard the muffled shout again, coming from the mezzanine above the escalator I'd just taken, along with the sound of pounding footsteps. A figure finally came into view, running down the escalator. It was my co-worker John T. "Did the servers crash?" I asked when he finally reached me.
"No," he said, breathless, grabbing his knees. He regained his composure and began digging around in his pocket, finally pulling out his wallet and handing me a single dollar bill. "The soda machine before," he sputtered.
The soda machine. Several hours earlier, he'd asked me for a dollar to buy a soda, and I had handed one over. I'd completely forgotten about it, and he could have too for all I cared. After all, we're both software developers, allegedly well paid -- what's a dollar to either of us? But I guess he takes great pride in being the kind of person who pays back every single dollar he ever borrows. I could see the pride on his face. "Thanks," I said, shrugging and turning away, shoving the dollar bill in my pocket.
Exactly sixty years ago, in May 1952, the 81-year-old Zen Buddhist scholar D. T. Suzuki began teaching a regular course at Columbia University. The 39-year-old modernist composer John Cage attended a few of his lectures, and this is the electric point of contact that starts everything buzzing in Nothing and Everything: The Influence of Buddhism on the American Avant Garde, 1942-1962, a new book by Ellen Pearlman.
Both men were trailblazers. Suzuki is remembered today as a premier ambassador for Eastern religion in the West, and as the author of the influential books Introduction to Zen Buddhism and Essays in Zen Buddhism. But, Ellen Pearlman reveals in the first chapter of Nothing and Everything, Suzuki had not been considered a very "successful" Buddhist as a young Zen student in Japan. He found a far greater calling as a highly visible foreigner in the West than he could have ever found if he'd stayed in Japan, since his idiosyncratic personality rubbed many Zen masters the wrong way. It was Suzuki's ability to translate key Asian texts into English that gave him a foothold in the United States of America, and he eagerly grabbed the opportunity to pursue his own unique vision of a global Buddhist awakening.
John Cage had already earned a reputation as a rule-breaker in the field of avant-garde music by the time he attended the elderly Suzuki's lectures at Columbia, but it wasn't until after he was exposed to Zen Buddhism (through Suzuki and several other sources) that he was able to conceive of his signature work, 4'33", which thrilled and outraged the world of classical music with its unspeakable simplicity. The composition indicated that the performer should sit at a piano (or any other instrument) and maintain four minutes and thirty-three seconds of silence.
It's impossible to encapsulate modern, avant-garde and experimental arts within any formula, but Nothing and Everything's purpose is to follow a single thread of excitement among several 20th century innovators in the American art, music, theater and literary scenes that was caused by a rising awareness of traditional Buddhist religion and philosophy. The first to follow John Cage were the Dada-inspired innovators of the Fluxus movement in the early 1960s: Alison Knowles, Jackson Mac Low, Nam June Paik, Toshi Ichiyanagi and Yoko Ono (who, beyond the scope of this book, would eventually collaborate with John Lennon to present crystalline expressions of Fluxus ideas to the entire world, and become the movement's most famous practitioner).