Bumper Stickers and Zero-Sum Thinking

For the last year I’ve been developing an idea for a documentary film tentatively titled “Sticker Movie” about bumper stickers and how they express tribal identity and territory. Someone inquired about the status of the project and this was my response. Consider it a glimpse into the creative thought process.

“Sticker Movie is on hiatus for the moment, mostly while I seek funding and further develop the concept. However, the more I’ve meditated on the idea, the more I have come to believe that stickers indicate something very fundamental about the human psyche, specifically the amygdala and its function in society.

Humans behave differently when they feel threatened; brain activity shifts from the cerebral cortex to the amygdala, which mediates the fight-or-flight response. I believe stickers are tied to the territorial instinct that originates in the amygdala, and that much of what we mean by “liberal” and “conservative” rests on how people balance their amygdala-driven and cortex-driven responses. I believe this is partly genetic and partly environmental. Early experiences (bullying, racism, teasing, aggression, prison, certain sports) can definitely activate a “zero-sum” mentality dominated by the amygdala.

It’s been observed that siblings are more competitive but less successful than only children. Siblings fight amongst themselves for resources and attention, while only children may have more opportunity to be creative. Zero-sum vs. positive-sum thinking, IMHO.”

I’m an only child. And I believe that if we can deactivate the amygdala in our political discourse, we can positively affect the world. What do you think about stickers, the amygdala, and zero-sum thinking?

iPad and the Brain

The iPad promises to be a very big deal: not just because it’s the next big over-hyped thing from Apple, but because it fundamentally shifts the way that humans will interact with computing.

Let’s call this the “fourth turning” of the computing paradigm.

Calculators

Early “computers” were electromechanical, then electric, and finally all-electronic. But the metaphor was constant: you pushed buttons to enter values or operators, and you had to adhere to a fixed notation to get the desired results. This model was a “technology” in the truest sense of the word, replacing “how” a pre-existing task got done. It didn’t fundamentally change the user; it just made a hard task easier.

8-Bit Computers: Keyboards

The early days of personal computing were characterized by character-based business machines (CP/M and DOS) and by low-end “graphics and sound” computers like the Atari 800, Apple II, and Commodore 64.

The promise here was “productivity” and “fun”: a more orderly typewriting experience, or the chance to touch the edges of the future with games and online services. But the QWERTY keyboard (and its derivatives) dates back to at least 1905. And the first commercial typewriters were made by Remington, the arms manufacturer.

The keyboard input model enforces a verbal, semantic view of the world. The command line interface scared the hell out of so many people because they didn’t know what they might “say” to a computer, and they were often convinced they’d “mess it up.” During this era, computing was definitely still not a mainstream activity.

Much of the population had grown up well before computing and had no experience with its concepts.

The Mouse, GUI, and the Web

Since the introduction of the Macintosh, and later Windows, the metaphors of the mouse, GUI, and the web have become so pervasive we don’t even think about them anymore.

But the reality is that the mouse is a 1970s implementation of a 1950s idea, stolen by Apple from Xerox PARC for the Lisa. Windows, in turn, is a copy of the Macintosh.

The graphical computing metaphor, combined with the web, has opened the power of the Internet to untold millions, but it’s not hard to argue that we’re all running around with Rube Goldberg-like contraptions cobbled together from parts dating to 1905 (the keyboard), 1950 (the mouse), and 1984 (the GUI). Even so, the mouse alone has probably done more to open up computing than anything else so far.

The mouse enforces certain modes of use: it is an analog proxy for the movement of one hand. Most people are right-handed, and the right hand is controlled by the left hemisphere of the brain, which science has long argued is responsible for logic and reason. While a good percentage of the population is left-handed, the fact remains that our interactions with mice are dominated by one half of the brain. Imagine how different your driving is when you use only one hand.

While we obviously use two hands to interact with a keyboard (though many people never learn to do it well), typing still confines us to a semantic, verbal mode of interaction.

iPad

The iPad will offer the first significant paradigm shift since the introduction of the mouse. And let me be clear: it doesn’t matter whether hardcore geeks like it now, or think it lacks features, or agree with Apple’s App Store policies.

The iPad will open up new parts of the human brain.

By allowing a tactile experience, by allowing people to interact with the world using two hands, by promoting and enabling ubiquitous network connections, the iPad will extend the range and the reach of computing to places we haven’t yet conceived.

Seriously. Our sense of the world around us is shaped by how we interact with it. We create based on what we can perceive, and we perceive what we can sense. The fact that you can use two hands with this thing, and that it appears to be quick and responsive, is a really big deal. It will light up whole new parts of the brain, especially the right hemisphere, potentially making our computing more artistic and visual.

Just as the mouse ushered in 25 years of a new computing paradigm, pushing computing technology out over a much larger portion of the market, the iPad marks the beginning of the next 25 years of computing.

And before you get worried about how people will type their papers and design houses and edit video without traditional “computers,” let me answer: no one knows. We’ll use whatever’s available until something better comes along.

But computing platforms are created and shaped by raw numbers, and the iPad has every opportunity to reach people in numbers as yet unimagined. That will make traditional software seem obsolete nearly overnight.

When the Macintosh was released, it was widely derided as a “toy” by the “business computing” crowd. We see how well that turned out.

This time, expect a bright-line shift: BIP and AIP (before iPad and after iPad). It’s the first time that an entirely new design has been brought to market, answering the question, “Knowing everything you know now, what would you design as the ultimate computer for people to use with the global network?”

It’s 2010, and we don’t need to be tied down to paradigms from 1950 or 1905. Everything is different now, and it’s time our tools evolved to match the potential of our brains and bodies.

Nerds, Dreamers: Unite!

For too long, the educated class has held an unspoken compact: nerds, you worry about computers and gadgets and Battlestar Galactica; dreamers, you worry about art and experimental thought and the environment and plants and music.  And generally speaking, the less these two crowds had to see each other, the happier they tended to be.

This was OK in an era like the ’60s, when, for the most part, computing was reserved for invoices and fine art had little to do with math. The computer guys were needed to figure out hard implementation problems: how to store all those invoices and be sure the numbers were right, or the math behind making sure a rocket flew straight. Good, tough problems of the era, to be sure, but almost entirely orthogonal to the guys dreaming up the tailfins on the cars and the ads that sold them. Think about the role of geeks in period pieces like Mad Men and The Right Stuff and you get an idea of how oil-and-water these crowds were.

Fast forward to today, when the computer is a creative instrument capable of fine-art-quality interaction in multiple media: video, still photography, sound, music, animation, visualization, and even the creation of physical interactions and physical objects. 3D printing, computer-controlled robots and art machines, physical art installations of awesome complexity, and autonomous digital art objects are not only possible, but accessible to average people who simply want to create. We have truly entered an era in which the walls between technical and creative have been razed; if we fail to realize it and move past them, however, we may find ourselves constrained by an older notion of what’s possible.

As an example, I’ll take last night’s Ignite Baltimore #1, at which I was proud, honored (and a tad nervous) to be speaker #1. The topics covered were vast and varied, and I’d argue they were just the kind of fuel that Baltimore’s creative class needs as input as we set off to solve the challenges of the next 50 years. The topics, in no particular order: public transportation, urban gardening, public spaces, the Bible, web apps, agile development, 100 mistakes, cognitive bias, east coast industrial landscapes as art, radio stories, writing vs. speaking, entrepreneurial experience, and much more.

I’d argue that this is the kind of wide-ranging liberal-arts discussion that most nerds would have opted out of in the past, and that nerds would not have been the preferred audience of the dreamers, artists, and poets. The magic of today, however (the true genius of the moment here in 2008), is that this cross-fertilization is finally starting to happen. And freely and with passion. Why? Because the walls between creativity, art, science, and math have finally started to wear down, not just in some university’s interdisciplinary studies department but in popular culture and conceptions. The mashup is now considered not just a valid art form, but a standard process for solving today’s toughest problems.

Creative thought has achieved primacy. It is now the idea that matters, because when the idea is properly and fully conceived, the design, presentation, and implementation are necessarily correct as well. What do I mean by this? If there is total integration between the processes of ideation and implementation, there is simply no separation between an idea, the thought models that underlie it, and its implementation in digital form: they are one.

It used to be that there was a wall between a digital implementation and an idea; a digital implementation would involve “hacks” — making stuff work in spite of memory or display or other limitations — and the computer-enabling “portion” of a solution would be some subset (usually a rather compromised subset) of an overall idea.

Today, object-oriented programming and database technology make it possible to model a solution end to end with few compromises; digital implementers thus become full partners in the design conversation, greatly reducing waste and empowering programmers creatively. Agile development practices (iterating rather than designing top-down) and story-based development (giving non-programmers a “narrative” to follow about the “story” of their solution) leave very little distinction between design, programming, and ideation. They are now effectively the same discipline.
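To make that concrete, here is a minimal, purely hypothetical sketch (the story, names, and classes are my own illustration, not taken from any particular project or tool) of how a plain-language user story can map almost one-to-one onto a small object model:

```python
# Hypothetical illustration: the user story below and the objects that
# implement it read almost identically, which is the point.
#
# Story: "As a gardener, I can reserve an open plot in a community garden
#         so that I know where to plant this season."

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Plot:
    number: int
    reserved_by: Optional[str] = None  # None means the plot is still open

    @property
    def is_open(self) -> bool:
        return self.reserved_by is None


@dataclass
class CommunityGarden:
    name: str
    plots: List[Plot] = field(default_factory=list)

    def reserve_open_plot(self, gardener: str) -> Plot:
        """Reserve the first open plot for a gardener, exactly as the story says."""
        for plot in self.plots:
            if plot.is_open:
                plot.reserved_by = gardener
                return plot
        raise ValueError("no open plots this season")


garden = CommunityGarden("Hypothetical Garden", [Plot(1), Plot(2), Plot(3)])
print(garden.reserve_open_plot("Ada").number)  # -> 1
```

The story is the design, the model is the story, and the code is the model; little of the original idea is lost in translation to its digital form.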

And this explains why so many have argued that we are entering a new era of the right brain and of the “rise of the creative class.” The fact is, if any of this had been possible sooner, it would have happened sooner. Generally speaking, people don’t like being pigeonholed into some tiny specialty or having their thinking constrained. We are human; all of our brains have two halves. But for too long, we have all likely underutilized one side or the other.

So, now we are all free; now, united with better tools and better processes, it is time to turn our attention to the hard, human problems of our age: energy, hunger, the environment (built and natural), and meaning, to name a few. And the topics at last night’s Ignite Baltimore were just the right fuel for getting us started thinking about these hard problems.

Kennedy famously said that we choose to go to the moon not because it is easy, but because it is hard. Our generation needs to start figuring out how to apply this massive wealth of talent (and newfound technical+creative skills) to the truly hard problems of our age.

It’s not going to happen overnight, and we don’t all need to go out and start wind power companies. But we must all make ourselves open to BOTH sides of our brains. We must realize that it is poetry and art that will provide the insight we need to make technical breakthroughs. We must listen to each other and be open to diverse viewpoints. We must become spiritual beings; it doesn’t matter whether your spirituality comes more from The Force than from the Bible or the Koran, but to deny yourself any of the channels of thought that inform our basic human nature is to cut yourself off from the great insights and genius of your own humanity.

Be open. Listen to people. Look at diverse kinds of art. Listen to diverse kinds of music. If you want to take part in the next great wave of innovation, these are the kinds of fuels you’ll need to do it.  And I hope to see you at the next Ignite Baltimore in February 2009, where we can continue this conversation!