November 23rd, 2008 — art, design, software, trends
I was 5 years old in 1977, and all in all, I’d say the aesthetics of the day made a big impression on me. Here are some of the things that, looking back 31 years later, seem to share a common visual language, and that were most influential on the next 10 years in movies, computing, games, and package design.
The rich colors and ground-breaking special effects of Spielberg’s 1977 Close Encounters of the Third Kind marked the beginning of a new era in filmmaking and ultimately set a goal for computer graphics and video games. The nascent digital graphics industry was barely capable of producing color “high-res” graphics, but folks knew that when they could, these were the kinds of graphic effects they wanted to make.
Maybe it’s just me, but Close Encounters, Atari, Space Invaders, and Star Wars all seem linked by a common visual sense. I think it’s pretty obvious that Atari ripped off Close Encounters for the Space Invaders packaging.
Likewise, the colorful “light organ” used to communicate with the aliens in Close Encounters is a close cousin, visually, to the famous Atari game Breakout. Steve Jobs was one of the designers of the arcade version of Breakout. Note the similarity to the original “rainbow” Apple logo.
Computer-generated music and sound were still in their very earliest stages, but the simple John Williams melody put to such brilliant use in Close Encounters was the sort of musical motif that aspiring game designers and programmers could latch onto and reproduce. John Williams of course scored hit after hit in movie soundtracks, but the Close Encounters and Star Wars themes of 1977 were hugely influential.
Spielberg used the Rockwell International logo (center) to clever aesthetic effect in Close Encounters; contractors at the secret military base at Devil’s Tower sported it, visually quoting the Devil’s Tower landscape. Of course, it’s interesting to note how similar the logos are for Atari, Rockwell, and Motorola – all major corporations of the day.
Disney got into the act in 1977 with the opening of Space Mountain. While they may not have been directly influenced by imagery from Close Encounters, Atari, or Star Wars, it’s clear that the popular imagination was drawing from common influences like Kubrick’s 2001: A Space Odyssey from 1968.
Of course the biggest influence of 1977 was George Lucas’ seminal work, Star Wars, which interestingly was not initially marketed using its iconic title graphics in its movie poster. Only after the film settled into its status as an international blockbuster did its marketing adopt the visual language that would become familiar in the release of the subsequent films in the series.
Arguably, the later sans-serif Star Wars bubble letters were more in line with the iconography of Close Encounters, Atari, and the other major visual influencers of 1977. I’d bet the previous, blockier Star Wars graphic was designed in 1975 or 1976, before the film and its title graphics were completed. And the very earliest Star Wars art, from the 1973-1974 timeframe, used a hand-drawn serifed font — a different look altogether.
The dirty, realistic “used universe” designed for Star Wars was also influential. Unlike previous science fiction and space films, Lucas gave his universe a lived-in, beat-up look that added a romantic touch of decay to an imagined future — or past.
The Apple ][ was a direct result of Jobs’ (and Wozniak’s) work on Breakout, and its color graphics circuitry owes much to that project. And I don’t think it’s any stretch to say that the generation of Silicon Valley idealists who designed the Apple ][ and Atari 800 were hugely influenced by the blockbuster science fiction films of the day. While the early Apple designs lacked the economies of scale or budget for a very “designed” aesthetic, the Apple ][ does look like something straight out of the Star Wars universe. And the ugly Disk ][ and portable monitor are things that simply hadn’t gotten design attention yet. Maybe they’re dirty, lived-in artifacts of a galaxy far, far away?
Atari, on the other hand, with the success of the 2600 VCS and its computers, had fully embraced the 1977 aesthetic, and by 1980 had full-color graphic packaging and a line of “Star Wars”-compliant peripherals. And the packaging for the programs borrowed from movie poster designs.
Quite clearly Star Raiders (1979) borrowed directly from Star Wars. In fact, looking at this graphic, I’m now surprised that Atari didn’t get a phone call from Lucas. I guess this was back in the day before TIE fighters were TIE Fighters™.
Media critics have argued that Star Wars and Close Encounters of the Third Kind marked the start of the era of blockbuster films, and a general shift in popular culture away from smaller, more thoughtful cinema and towards a populist, anti-intellectual approach in art and film in particular.
Whether that’s true or not, I think it is fair to say that 1977 did mark the year of a seismic shift in aesthetics that has been felt all the way through today in computing, gaming, film, and product packaging. Perhaps 1977 is a kind of bright-line marker for popular art — before and after seem to be from entirely different eras.
The fact that I’ve spent most of my life selling products or working in technologies directly influenced by this powerful aesthetic sense is likely no coincidence: to be young in 1977 was to be indelibly marked by the look and feel of a new era.
November 21st, 2008 — art, business, design, economics, iPhone, mobile, programming, software, trends, visualization
At Xerox PARC in the 1970s, Alan Kay fostered the innovations that form the foundation of modern computing. Windowing, mice, object-oriented languages, laser printing, WYSIWYG, and lots of other stuff we take for granted today either had its start or was fleshed out at Xerox PARC.
The mouse, which enabled direct manipulation of content on the screen, was just one of several innovations screen-tested as a possible heir to the venerable cursor-and-text-terminal metaphor that had predominated since the dawn of computing.
Mice, trackballs, light pens, tablets, and Victorian-looking headgear tracking everything from brainwaves to head and eye movements were all considered as potential input devices of the future. No doubt there were other metaphors besides windows considered as well. HyperCard, anyone?
Steve Jobs, by selecting the mouse as the metaphor of choice for the Lisa and subsequent Macintosh computers, sealed the deal. Within a year, Bill Gates, by stealing the same design metaphor for use in Windows 1.0, finished the deed. By 1986, the mouse was a fait accompli.
Since the dawn of the Mac and Windows 1.0, we’ve taken for granted the notion that the mouse is and will be the primary user interface for most personal computing and for most software.
However, computing is embedded in every part of our lives today, from our cell phones to our cars to games and zillions of other devices around the house, and those devices have myriad different user interfaces. In fact, creating new user experiences is central to the identity of these technologies. What would an iPhone be without a touch screen? What would the Wii be without its Wiimotes? What, indeed, is an Xbox 360 but a PC with, uh, lipstick and a different user interface metaphor?
(An aside: How awesome would it be if the iPhone, Wii, and Xbox 360 all required the use of a mouse? People fidgeting on a cold day, taking out their iPhone, holding it in their left hand, plugging in their mouse, working it around on their pants to make a call. Kids splayed out on the rumpus room floor, mousing around their Mario Karts. Killer, souped up force-feedback mice made just for killing people in Halo. Mice everywhere, for the win.)
So, what’s with the rant? Simply that the web has taken a bad problem — our over-reliance on mice — and made it even more ubiquitous than it was in the worst days of windowing UIs.
“And then if you click here…”
No, here — not over there. Click here first. Scroll down, ok, then click submit. Now click save.
See the problem? The web’s reliance on the mouse metaphor carries two hazards.
- Mice require users to become collaborators in your design.
- Each user only brings so much “click capital” to the party.
Catch My Disease
We’ve all had the experience of using a site or app that requires a great deal of either time or advance knowledge to fully utilize.
You know the ones — the ones with lots of buttons and knobs and select boxes and forms just waiting for you to simply click here, enter the desired date, choose the category, then get the subcategory, choose three friends to share it with, then scroll down and enter your birthdate and a captcha (dude) and then simply press “check” to see if your selection is available for the desired date; if it is, you’ll have an opportunity to click “confirm” and your choice will be emailed to you, at which point you will need to click the link in the email to confirm your identity, and you’ll be redirected back to the main site at which point you’ll have complete and total admin control over your new site. Click here to read the section on “Getting Started”, and you can click on “Chat with Support” at any time if you have any questions.
What the hell do these sites want from you?
If these sites are trying to provide a service, why do they need you to do so much to make them work? Sure, some things are complex and require information, processes, and steps to get done, but when you ask users to participate too much as key elements in your design, you create frustration, resentment, and ultimately rage. That’s cool if that’s your goal, but if you’re trying to get happy users, you’ve done nothing to advance that cause. So it shouldn’t be about “all you have to do is click here and here.” Ask less of your users. Do more for them. Isn’t that what service is all about?
Limited Click Capital
Sometimes, people just want to be served — even entertained or enchanted. They don’t want to become the slavish backend to a maniacal computer program that requires 6 inputs before it can continue. Is the user in service of the computer, or is the computer serving the user? I always thought it was the latter.
I’ll never cease to be instructed by the lessons learned from developing my sites Twittervision and Flickrvision. Both sites do something uncommon — they provide passive entertainment, enchantment, and insight in a world where people are asked to click, select, participate, scroll, sign up, and activate. It’s sit back, relax, and contemplate, rather than decipher, decide, and interact. Surely there are roles for both, but people are so completely tired of deciphering that a chance to simply watch passively is a joyful respite in a world mostly full of badly designed sites and interactions. This alone explains their continued appeal.
People come to sites with only so much “click capital,” or willingness to click on and through a site or a “proposed interaction.” This is why site bounce rates are usually so high. People simply run out of steam before they can be put through your entire Rube Goldberg machine. Make things easier for them by demanding fewer clicks and interactions.
Make Computing Power Work For Your Users
Truism alert: we live in an age of unprecedented access to computing power. What are you going to do with it? How are you going to use it to enchant, delight, and free your users? Most designs imprison users, shackling them to the design and turning them into nothing more than steps 3, 6, 8, 9, and 11 of a 12-part process. How are you going to unshackle your users by making them — and their unfettered curiosity — the first step in a beautiful, infinitely progressive algorithm?
Predict and Refine
Forms and environments that rely on excessive interaction typically make one fatal assumption: that the user knows what they want. Most users don’t know what they want, or they can’t express it the way you need it expressed, or they click the wrong thing. Remove that choice.
Do your best to help your users along by taking a good guess at what they want, and then allow them to refine or steer the process.
Remember, you’re the one with the big database and the computers and the web at your disposal: how are you going to help the user rather than asking the user to help you? You’re advantaged over the user; make it count for something.
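To make that concrete, here’s a minimal sketch in TypeScript of what “predict and refine” might look like for a hypothetical booking form. Everything here (the BookingGuess shape, the weekend heuristic, the suggestDefault and refine helpers) is invented for illustration, not taken from any real site.

```typescript
// A sketch of "predict and refine": guess a sensible default first,
// then let the user steer. All names here are hypothetical.

interface BookingGuess {
  date: Date;
  category: string;
}

// Guess first: assume (for illustration) that most people booking midweek
// want the coming weekend, and that the most popular category is a better
// starting point than an empty form.
function suggestDefault(now: Date = new Date()): BookingGuess {
  const daysUntilSaturday = (6 - now.getDay() + 7) % 7 || 7;
  const date = new Date(now);
  date.setDate(now.getDate() + daysUntilSaturday);
  return { date, category: "most-popular" };
}

// Refine second: the user corrects the guess rather than assembling
// a request from scratch.
function refine(guess: BookingGuess, changes: Partial<BookingGuess>): BookingGuess {
  return { ...guess, ...changes };
}

// One correction instead of six form fields.
const booking = refine(suggestDefault(), { category: "outdoor" });
console.log(booking);
```

The point isn’t the heuristic itself; it’s that the computer does the first draft of the decision, and the user merely edits it.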
Don’t Think About Mice
Mice lead to widgets. Widgets lead to controls. Controls lead to forms. Forms lead to hate. How are you going to break free from this cycle and give your users something compelling and useful with the minimum (and most appropriate) interaction? What is appropriate interaction?
It depends. What if you rely on gestures, or mouseovers, or three yes-or-no questions in big, bold colors? That’s minimal and simple. It may be just what you need to empower your idea and serve your users.
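As a toy example, here’s what the three-questions approach might reduce to in TypeScript; the questions and labels are made up, but notice that three binary answers fully specify what a six-field form would have asked for.

```typescript
// A sketch of the "three yes-or-no questions" idea: each binary answer
// narrows the request, so three big colorful buttons can replace a form.
// The questions and outcome labels are invented for illustration.

type Question = { prompt: string; ifYes: string; ifNo: string };

const questions: Question[] = [
  { prompt: "Is this for work?", ifYes: "work", ifNo: "personal" },
  { prompt: "Do you need it today?", ifYes: "rush", ifNo: "standard" },
  { prompt: "Should we email a receipt?", ifYes: "receipt", ifNo: "none" },
];

// Three taps produce a complete, unambiguous request.
function classify(answers: readonly boolean[]): string[] {
  return questions.map((q, i) => (answers[i] ? q.ifYes : q.ifNo));
}

console.log(classify([false, true, true])); // ["personal", "rush", "receipt"]
```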
I’ve been working with the Wiimote and the iPhone a lot lately, trying to use touch screens, accelerometers, and the Wii’s pitch and roll sensors to create new kinds of interaction. Maybe this is right for your work.
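Here’s a rough browser-side sketch of the same idea, using the standard DeviceOrientationEvent as a stand-in for the Wii’s pitch and roll sensors: the tilt of the device, not a mouse, drives the interaction. The scroll mapping and the 0.5 gain are arbitrary choices for illustration.

```typescript
// Tilt-driven scrolling: device orientation, not a mouse, steers the page.
// In the browser's DeviceOrientationEvent, beta is front-back tilt and
// gamma is left-right tilt, both in degrees (null when unavailable).

window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
  const pitch = e.beta ?? 0; // -180..180
  const roll = e.gamma ?? 0; // -90..90

  // Map tilt to scroll velocity; the 0.5 gain is an arbitrary choice.
  window.scrollBy({ top: pitch * 0.5, left: roll * 0.5, behavior: "smooth" });
});
```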
Think about it and don’t assume traditional mouse/web/form interactions. Sure, sometimes they are the right and only tool for the job, but if you want to stand out and create compelling experiences, they surely can no longer be the central experience of your design.
Long Live the Cursor
Back in the early days of GUIs, there were lots of people who contended that no serious work would ever get done in a window and that the staple of computing and business would be the DOS metaphor and terminal interactions. There have been dead-enders as long as there have been new technologies to loathe. I’m sure somewhere there was a vehement anti-steel crowd.
The mouse, the window, and HTML controls and forms are the wooden cudgels of our era — useful enough for pounding grain, but still enslaving us in the end. How will you use the abundance of computing power and new user interface metaphors to free people to derive meaning and value?
October 17th, 2008 — art, baltimore, business, design, economics, programming, social media, socialdevcamp, software, trends
For too long, the educated class has held an unspoken compact: nerds, you worry about computers and gadgets and Battlestar Galactica; dreamers, you worry about art and experimental thought and the environment and plants and music. And generally speaking, the less these two crowds had to see each other, the happier they tended to be.
This was OK in an era like the ’60s where, for the most part, computing was best reserved for invoices, and fine art had little to do with math. The computer guys were needed to figure out hard implementation problems: how to store all those invoices and be sure the numbers were right, or the math behind making sure a rocket flew straight. Good, tough problems of the era, to be sure, but almost entirely orthogonal to the guys dreaming up the tailfins on the cars and the ads that sold them. Think about the role of geeks in period pieces like Mad Men and The Right Stuff and you get an idea of how oil-and-water these crowds were.
Fast forward to today, where computers are a creative instrument capable of fine-art-quality interaction in multiple media: video, still photography, sound, music, animation, visualization, and even the creation of physical interactions and physical objects. 3D printing, computer-controlled robots and art machines, physical art installations of awesome complexity, and autonomous digital art objects are not only possible, but accessible to average people who simply want to create. We have truly entered an era where the walls between technical and creative have been razed; however, if we fail to realize it and move past them, we may find ourselves constrained by an older notion of what’s possible.
As an example, I’ll take last night’s Ignite Baltimore #1, at which I was proud, honored (and a tad nervous) to be speaker #1. The topics covered were vast and varied, and I’d argue were just the kind of fuel that Baltimore’s creative class needs as input as we set off to solve the challenges of the next 50 years. The topics, in no particular order: public transportation, urban gardening, public spaces, the bible, web apps, agile development, 100 mistakes, cognitive bias, east coast industrial landscapes as art, radio stories, writing vs. speaking, entrepreneurial experience, and much more.
I’d argue that this is the kind of wide-ranging liberal arts discussion that most nerds would have opted out of in the past, and that nerds would not be the preferred audience of the dreamers, artists, and poets. The magic of today, however — the true genius of the moment here in 2008 — is that this cross-fertilization is finally starting to happen. And freely and with passion. Why? Because these walls between creativity, art, science, and math have finally started to wear down — and not just in some university’s interdisciplinary studies department, but in popular culture and conceptions. The mashup is now considered not just a valid art form, but a standard process for solving today’s toughest problems.
Creative thought has achieved primacy. It is now the idea that matters, because when the idea is properly and fully conceived, the design, presentation, and implementation are necessarily correct as well. What do I mean by this? If there is total integration between the processes of ideation and implementation, there is simply no separation between an idea, the thought models that underlie it, and its implementation in digital form: they are one.
It used to be that there was a wall between a digital implementation and an idea; a digital implementation would involve “hacks” — making stuff work in spite of memory or display or other limitations — and the computer-enabling “portion” of a solution would be some subset (usually a rather compromised subset) of an overall idea.
Today, object-oriented programming and database technology make it possible to model a solution end to end with few compromises; digital implementers thus become full partners in the design conversation, greatly reducing waste and empowering programmers creatively. Agile development practices (involving iteration rather than top-down design) and story-based development (giving non-programmers a “narrative” to follow about the “story” of their solution) leave very little distinction between design, programming, and ideation. They are now effectively the same discipline.
And this explains why so many have argued that we are entering a new era of the right brain and of the “rise of the creative class.” The fact is, if any of this had been possible sooner, it would have happened sooner. Generally speaking, people don’t like being pigeonholed into some tiny specialty, or having their thinking constrained. We are human; all of our brains have two halves. But for too long, we have all likely underutilized one side or the other.
So, now we are all free; now, united with better tools and better processes, it is time to turn our attention to the hard, human problems of our age: energy, hunger, the environment (built and natural), and meaning, to name a few. And the topics at last night’s Ignite Baltimore were just the right fuel for getting us started thinking about these hard problems.
Kennedy famously said that “we choose to go to the moon… not because it is easy, but because it is hard.” Our generation needs to start to figure out how to apply the massive wealth of talent (and newfound technical+creative skills) to the truly hard problems of our age.
It’s not going to happen overnight, and we don’t all need to go out and start wind power companies. But we all must make ourselves open to BOTH sides of our brains. We must realize that it is poetry and art that will provide the insight we need to make technical breakthroughs. We must listen to each other and be open to diverse viewpoints. We must become spiritual beings — it doesn’t matter whether your spirituality comes more from The Force or the Bible or the Koran — to deny yourself any of the channels of thought that inform our basic human nature is to cut yourself off from the great insights and genius of your own humanity.
Be open. Listen to people. Look at diverse kinds of art. Listen to diverse kinds of music. If you want to take part in the next great wave of innovation, these are the kinds of fuels you’ll need to do it. And I hope to see you at the next Ignite Baltimore in February 2009, where we can continue this conversation!
October 3rd, 2008 — design, economics, mobile, politics, travel, trends
With the price of gas where it is, along with my own desire to get more exercise, I’ve adopted a set of rules regarding bicycle usage, and I encourage everyone to do the same. I think they represent a distinctly different attitude toward bicycling than we’re used to. See what you think.
- Ride a bike for a reason, not just for recreation; while riding a bike for recreation is fine, the idea is to promote replacement of cars with bikes where possible. Make a point of choosing trips where you actually are replacing a car trip.
- Don’t wear funny sports clothes. They keep you from taking part in normal society. If you’re going to a lunch meeting, no one wants to see bikerman in spandex. Furthermore, wearing sports clothing promotes the image that bikes are for ‘cyclists’ and not normal people. Do wear a helmet, and lock it to your bike when you need to go in someplace.
- Go where you need to go, including busier roads, if that’s what’s necessary to reach your destination. Bikes will never be used as replacements for cars unless they can truly substitute. By making yourself visible on major roads, you increase the visibility of bikes as a whole and help raise awareness of problem spots. Obviously use common sense and avoid limited access roads and unsafe situations. But DO go where you need to go to complete your trip.
- Obey traffic laws and signals. Being on a bike doesn’t give you a free pass to act like a maniac. Be courteous, intelligent, and follow traffic signals and laws. This puts cars on notice that bikers (even slow, non-athletic ones) deserve their fair share of the road, but you need to reciprocate by acting in a predictable, lawful, and measured way.
- Replace time at the gym (or other exercise efforts) with time on a bike as part of your daily routine. Isn’t it nonsensical to use a car to rush through your day so you can get home at 5 and then go to the gym (or bike or run) for an hour? If you slow down and use a bike for some tasks during the day, you won’t need to spend as much time doing mindless exercise. And you’ll save on gas (and carbon emissions), and get better connected to your community.
This week, I used my bike for three lunch meetings, a doctor appointment, and two grocery runs. I put in over 60 miles just between Monday and Thursday, and it took only a few minutes more than driving would have. I’m sure I’ve lost weight doing this, though I don’t care how much. I feel better, and that alone is worth it.
And two of the best perks of biking: first, you’re never stuck in traffic; second, you always get a top-notch parking spot. Plus, you’re not circling around trying to find a place to park. More gas and time savings. Being on a bike is in many ways faster and more efficient than being in a car, especially for trips under 30 minutes of bike time (8-10 miles).
Anyone who lives in the Annapolis, Maryland area knows it’s a congested, frustrating experience to try to get ANYWHERE on a weekday afternoon by car. Why not try it on a bike and see how much quicker it can be?