
iPad and the Brain


The iPad promises to be a very big deal: not just because it’s the next big over-hyped thing from Apple, but because it fundamentally shifts the way that humans will interact with computing.

Let’s call this the “fourth turning” of the computing paradigm.

Calculators

Early “computers” were electro-mechanical, then electric, and later all-electronic. But the metaphor was constant: you pushed buttons to enter either values or operators, and you had to adhere to a fixed notation to obtain the desired results. This model was a “technology” in the truest sense of the word, replacing “how” a pre-existing task got done. It didn’t fundamentally change the user; it just made a hard task easier.

8-Bit Computers: Keyboards

The early days of computing were characterized by business machines (CP/M, DOS, and character-based paradigms) and by low-end “graphics and sound” computers like the Atari 800, Apple II, and Commodore 64.

The promise here was “productivity” and “fun,” offering someone a more orderly typewriting experience or the opportunity to touch the edges of the future with some games and online services. But the QWERTY keyboard (and its derivatives) dates back to at least 1905. And the first typewriters were made by Remington, the arms manufacturer.

The keyboard input model enforces a verbal, semantic view of the world. The command line interface scared the hell out of so many people because they didn’t know what they might “say” to a computer, and they were often convinced they’d “mess it up.” During this era, computing was definitely still not a mainstream activity.

Much of the population had come of age before computing and had no experience with its concepts.

The Mouse, GUI, and the Web

Since the introduction of the Macintosh, and later Windows, the metaphors of the mouse, GUI, and the web have become so pervasive we don’t even think about them anymore.

But the reality is that the mouse is a 1970’s implementation of a 1950’s idea, stolen from Xerox PARC by Apple for the Lisa. Windows is a copy of the Macintosh.

The graphical computing metaphor, combined with the web, has opened the power of the Internet to untold millions, but it’s not hard to argue that we’re all running around with Rube Goldberg-like contraptions, cobbled together from parts dating to 1905, 1950, and 1984. Even so, the mouse alone has probably done more to open up computing than anything else so far.

The mouse enforces certain modes of use. The mouse is an analog proxy for the movement of our hands. Most people are right handed, and the right hand is controlled by the left hemisphere of the brain, which science has long argued is responsible for logic and reason. While a good percentage of the population is left handed, the fact remains that our interactions with mice are dominated by one half of the brain. Imagine how different your driving is when you only use one hand.

While we obviously use two hands to interact with a keyboard, many people never do it well, and the keyboard perpetuates a semantic, verbal mode of interaction.

iPad

The iPad will offer the first significant paradigm shift since the introduction of the mouse. And let me be clear: it doesn’t matter whether hardcore geeks like it now, or think it lacks features, or agree with Apple’s App Store policies.

The iPad will open up new parts of the human brain.

By allowing a tactile experience, by allowing people to interact with the world using two hands, by promoting and enabling ubiquitous network connections, the iPad will extend the range and the reach of computing to places we haven’t yet conceived.

Seriously. The world around us reflects how we interact with it. We create based on what we can perceive, and we perceive what we can sense. The fact that you can use two hands with this thing, and that it appears to be quick and responsive, is a really big deal. It will light up whole new parts of the brain, especially the right hemisphere — potentially making our computing more artistic and visual.

Just as the mouse ushered in 25 years of a new computing paradigm, pushing computing technology out over a much larger portion of the market, the iPad marks the beginning of the next 25 years of computing.

And before you get worried about how people will type their papers and design houses and edit video without traditional “computers,” let me answer: no one knows. We’ll use whatever’s available until something better comes along.

But computing platforms are created and shaped by raw numbers, and the iPad has every opportunity to reach people in numbers as yet unimagined. That will have the effect of making traditional software seem obsolete nearly overnight.

When the Macintosh was released, it was widely derided as a “toy” by the “business computing” crowd. We see how well that turned out.

This time, expect a bright line shift: BIP and AIP (before iPad and after iPad). It’s the first time that an entirely new design has been brought to market, answering the question, “Knowing everything you know now, what would you design as the ultimate computer for people to use with the global network?”

It’s 2010, and we don’t need to be tied down to paradigms from 1950 or 1905. Everything is different now, and it’s time our tools evolved to match the potential of our brains and bodies.

Fiber Economics

With the release of the FCC’s National Broadband Plan, Google’s announced intention to build gigabit fiber-to-the-home networks, and Verizon’s indications that they are not likely to be expanding their FIOS service to new areas, it’s a good time to review where we really stand with fiber.

The Real Reasons You Don’t Have Fiber

What are the real economics of broadband infrastructure? It’s not as simple as market opportunity, investment, and subscribers; Verizon and Comcast have different regulatory histories and see the world differently. Google, as a potential new entrant, has completely different motivations.

Let’s take a look at the regulatory background, and then get a sense of what’s really motivating Verizon, Comcast, and Google.

Regulatory Background

We have gradually come to think of Verizon and Comcast as equals: big, for-profit telecom companies — competitors for TV, Internet, and telephone service. But they got to their current positions through very different routes. Here’s a brief (and rather incomplete) history.

In 1984, the former AT&T was busted up into seven Baby Bells: Ameritech, Bell Atlantic, BellSouth, NYNEX, Pacific Telesis, Southwestern Bell, and US West. Terminator-like, these companies have been spending the last 26 years reconstituting themselves, merging into very large firms. Bell Atlantic changed its name to Verizon in 2000 after acquiring GTE.

Telecommunications regulation in the United States has a long history and reflects theory originally applied to railroads and other public utilities. The idea was that communications was a public good and that, because the network had to be large and interoperable to be effective, it was best served by a natural monopoly. So, assets like public rights-of-way were made available for the monopoly to use, in exchange for an agreement to provide Universal Service, covering the entire population.

To keep the monopoly from charging unreasonable prices, regulators mandated that its services be priced at cost, plus a reasonable and sustaining profit margin. This means there is no incentive to keep costs down; in fact, the higher the monopoly’s costs, the more raw dollars it makes.

Verizon today operates under this kind of regulatory background, which was outlined initially in the Communications Act of 1934, and then amended by the Telecommunications Act of 1996 — which has subsequently been eroded and modified by case law and other FCC actions.

The FCC, under Bush-appointed Chairmen Michael Powell and Kevin Martin, tended towards the opinion that the best way to foster competition and innovation would be to empower a small number of well-capitalized firms and let them compete together in the marketplace.

Comcast, for its part, came together very differently. Cable TV franchises were primarily granted by local municipalities, starting in the 1950’s. Comcast acquired dozens of these small firms, each with their own regulatory agreements with cities and counties. By 2000 or so, this aggregation started to resemble the sort of “large firm” that the FCC thought could be an effective competitor to the telephone companies.

So that’s how we got here. Verizon is heir to the top-down, cost-based monopoly regulation subsidized by the Universal Service Fund, which requires that it provide telephone service even in rural areas. Comcast is the product of the roll-up of dozens (if not hundreds) of small cable TV firms. Now let’s take a look at their interests in the current landscape.

Verizon

Verizon, in many ways, is just the current-day incarnation of a big chunk of the original AT&T. It’s still the primary telephone infrastructure provider and the bulk of its physical wiring plant is copper. It operates the same switching facilities that AT&T did back in 1984. In many important ways, nothing has changed.

What about FIOS? Isn’t Verizon innovating there? Aren’t they making this investment to “make money?” It’s complicated.

Verizon made the decision to install FIOS primarily to block competition. The Telecom Act of 1996 required that telcos make their copper wireline infrastructure available to competitors to run alternative services. This is where alternative telcos like Cavalier, Covad, Adelphia, and many others came into the market. You’ll notice that almost all of those companies are now defunct or severely hamstrung.

This is in part because Verizon (and its peers) set out on a strategy to make a competitive business model all but impossible. FIOS was part of that strategy.

When Verizon installs FIOS, they almost always remove the copper wires that competitors could otherwise have used, effectively shutting those competitors out.

Verizon has spent over $20 billion to build out FIOS in its service area. To observers, this might look like “investment in innovation.” But in fact, this spending was mostly done to block competitors and to destroy the pro-competition provisions of the Telecom Act of ’96.

It should thus come as no surprise that Verizon has recently announced that they are unlikely to expand their FIOS network further. This isn’t because they can’t win new subscribers in new areas (like downtown Boston, which is still not served); rather, it’s because they have calculated that the cost of further expansion exceeds the profits they would risk losing to competitors in the areas that remain.

They have put FIOS in all the places where it was either easy to do so or where the competition was too strong to ignore. Now that the competitors are mostly defunct or severely weakened, the threat is just not there to justify expansion.

Like feudal warlords, they invested just enough in FIOS to block out competitors, rejigger the regulations, and maintain a status quo of mediocrity. And we’re supposed to think this is innovation?

Comcast

Comcast has different problems. Because all of their regulatory agreements are negotiated with individual municipalities, it’s more difficult for them to make investments across their entire footprint. This is why Comcast often pilots new products and services in trial communities, then rolls them out to new areas one at a time.

Comcast does have a very large television service footprint, and their acquisition of NBC and other content providers over the years, like HTS, is an attempt to establish themselves as a vertically integrated entertainment provider. They control the entire stack, from the physical cable to the content itself, all the way to the viewer. This means that they are protected from threats from content providers who might try to command high rates for popular content. Disney (which controls ESPN and ABC) often finds itself in rate battles with cable providers. Acquiring NBC/Universal means one less potential threat of rate hikes for Comcast, and higher overall profits.

But Comcast’s physical plant is dominated by aging coaxial cable infrastructure. While local head-ends are fed by fiber-optic backbones, local distribution to the home runs over coaxial cable, which can degrade in performance when it rains, is subject to lightning damage, and can only go so fast. Fiber-to-the-home is vastly superior, but it would cost Comcast billions to upgrade its plant. In the absence of competitive pressure (such as Verizon faced), they have no incentive to do so. Instead they are happy to push their existing plant as hard as it can go, using standards such as DOCSIS 3, and to invest in fiber-to-the-home infrastructure only as necessary or convenient.

Google

Google has recently announced that it would like to spawn innovation, and potentially build out gigabit fiber-optic infrastructure, in one or more communities in the United States. I helped organize Baltimore’s municipal response to Google’s Request For Information for this project.

Google’s proposing something very different from what Verizon and Comcast offer: an open-access network, over which new entrants could provide Internet or other services. This is exactly the paradigm that Verizon has fought to destroy with FIOS.

Comcast, too, has fought open access repeatedly. Before Verizon settled on FIOS as its primary anti-competitive strategy, it tried to force cable companies to become subject to the same kind of infrastructure-sharing it faced under the Telecom Act of 1996.

And Comcast fought this effort mightily; in 2002, I testified before the Maryland House of Delegates in support of a bill that would force Comcast to open its network, and Comcast’s lobbyists managed to defeat it.

Also in 2002, working alongside Verizon-supplied lobbyists, I testified before the FCC with TCP/IP co-inventor Vint Cerf (now a VP at Google), arguing that cable companies should be forced to provide “open access” to their networks because it would promote competition and entrepreneurship. At that hearing, the FCC’s Robert Pepper made it clear that the FCC believed Verizon and Comcast could provide all the competition we would ever need. We see how that’s turned out.

To date, there has not been any significant open access network deployment in the United States. And with the decline of competitive telco-based services, telecom innovation has now stalled entirely. It’s time for something new.

Net Neutrality

Google has a different potential problem on its radar. In the US, Comcast and Verizon control access to a large percentage of Google’s users. Currently, the Internet operates under a doctrine of “Net Neutrality,” which is to say that customers and Google alike simply pay for access to the network, and each can communicate freely with anyone else on it.

Various telecom executives (most notably former AT&T CEO Ed Whitacre, now CEO of GM) have argued that companies like Google should no longer get free access to their customers. Folks like Whitacre believe that the natural role of a company like Verizon or Comcast is to act as a toll-gate, charging both content providers (Google) as well as customers for access to each other.

As you might imagine, Google heartily opposes this idea: it could dramatically increase their costs and would destroy the “level playing field” which has dominated the Internet from the beginning. Startups could be stifled because they might need to negotiate an agreement with broadband providers to get access to customers. This is a war that Verizon and Comcast appear ready to start, and people like News Corp’s Rupert Murdoch are fanning the flames.

Google’s Fiber Plan

Google’s announcement that it intends to build ultra-fast open-access fiber networks is its declaration of war against the threatened end of net neutrality. Further, this is a productive use of Google’s vast stockpile of cash; it’s something tangible it can do to ensure its market position.

And it’s a move that’s ideologically compatible with its mission. Google genuinely believes that the expansion of a fast, net-neutral Internet has positive effects on society, and it’s also good for Google’s bottom line. More people online means more ad views, which means more advertising dollars for Google. There’s no downside for them; it’s an expensive proposition to be sure, but it’s less expensive than paying for access to customers in a world without net neutrality.

By promoting itself as a good citizen, wrapped in the banner of open access, innovation, and net neutrality, Google is likely to win favorable treatment from ideologically sympathetic regulators such as the FCC’s new Obama-appointed Chairman, Julius Genachowski. This would allow Google to establish a vertically integrated, long-term market position which would be hard for Verizon or Comcast to disrupt.

And the kicker? The open-access network Google’s proposing really would promote innovation and entrepreneurship. The United States is ranked 15th in broadband penetration worldwide today. This is a chance to change that.

Don’t believe that Verizon or Comcast will make these investments unless forced to do so. And while Google may also feel it has no choice but to build its own network, Google at least has a vision that goes far beyond sustaining a mediocre status quo; it truly believes in the level playing field that has given birth to so much innovation.

It’s time for America’s bandwidth to finally match our ambitions and our talent. Let’s go Google!

The Case Against Newspaper Companies

Here in Baltimore there is a great deal of uncertainty about the future of journalism, as there is everywhere. I have been involved in organizing some efforts by local new media publishers to study options for the future; my interest in this topic is purely personal.

Yesterday I attended a two-hour symposium arranged by the University of Maryland’s Merrill School of Journalism. In attendance on this panel were Monty Cook (Editor, Baltimore Sun), Tim Franklin (Former editor, Baltimore Sun), Jayne Miller (WBAL Television), Jake Oliver (Afro American Newspapers), Mark Potts (founder, WashingtonPost.com). It was moderated by Kevin Klose (former president, NPR) and sponsored by Abell Professor Sandy Banisky.

The discussion was mostly a paean to times long gone: to well-staffed newsrooms rich with sources, and benefit plans to match. It was an apologia from television to print, explaining how cable-subscriber-funded news operations have managed to survive on subsidies that the press could never extract. It was a cursory overview of the myriad efforts to invent new modes of journalism online. And it was a predictable declaration of heresy: “these so-called wanna-be websites” (Jake Oliver) “will never hold a candle to traditional journalism.” (Jayne Miller)

I quote directly.

And herein lies the problem. As observers, these trained journalists accurately state that a small, unfunded website run by “these kids” (many of whom are 20-year veterans of the press) cannot effectively compete with some imagined newsroom of the past. However, these “small unfunded websites” are just starting out. They will grow. Meanwhile, those imagined newsrooms of the past no longer exist, and the operations that remain are shrinking. The old and the new are on a collision course.

While the traditional media sticks its head in the sand and belittles the startup efforts of entrepreneurs and journalists, the world is shifting beneath its feet. All the time spent on internal infighting, in denial, in testimony before Congress, and in bankruptcy courts is time not spent reinventing the future of journalism. Every dollar spent on legacy costs, on health plans and labor unions and real estate and “right-sizing,” is a dollar not spent solving the market need.

What are the odds that the existing companies (the ones with the problem) will be the ones who come up with the solution? Vanishingly small. That’s almost never how things play out in markets.

A new, reasonably-funded journalistic startup today has access to all kinds of assets: a large pool of trained, laid-off journalists; incredibly inexpensive distribution technology in the form of web, mobile, and Kindle; a motivated pool of citizen journalists; and most importantly, a startup mindset that is focused on being lean, nimble, and experimental.

If I had to bet on whether a bloated 172-year-old company that’s in bankruptcy will find the model, or whether it will be one of a field of startups, I’d bet on the field of startups every time. Why wouldn’t you?

The only coherent argument against new startups is really one of mass and heft, both in terms of startup capital and in terms of depth of connections. However, it is reasonable to expect that a reasonably-funded startup staffed with experienced businesspeople and journalists will be every bit as rich with contacts as a comparably-sized post-bankruptcy old-media concern. The difference? Less legacy DNA, fewer legacy expenses, and a lean, nimble, humble mindset that’s focused on finding the answers in an open market.

Failure of Imagination

Just as the failure to prevent the September 11 attacks was attributed to a “failure of imagination,” we see a comparable failure of imagination in journalism today.

The traditional media companies fail to imagine what the confluence of web, mobile, and citizen journalism might ultimately be able to deliver, and that it might be better than anything journalism has delivered to date.

Potential funders see all options as risky and want to bet first on “traditional” outlets. They see these brands not only as less risky, but as a restoration of a prior order.

“Restorations” are not how markets work. Things don’t get restored. They are creatively torn apart and reassembled.

The first investors to imagine the possibilities present in new journalistic startups will ultimately reap the rewards; rewards which will never be seen again in newspaper companies.

The companies that bring you local news today will most likely not be around in 10 years. A host of new companies will take their place.

The only question for those in the industry today is whether they want to be part of those solutions.

I Hate Mice

At Xerox PARC in the 1970’s, Alan Kay fostered the innovations that form the foundation of modern computing. Windowing, mice, object-oriented languages, laser printing, WYSIWYG, and lots of other stuff we take for granted today either had its start or was fleshed out at Xerox PARC.

The mouse, which enabled direct manipulation of content on the screen, was just one of a few innovations that were screen-tested as a possible heir to the venerable cursor-and-text-terminal metaphor which had predominated since the dawn of computing.

Mice, trackballs, light pens, tablets, and Victorian-looking headgear tracking everything from brainwaves to head and eye movements were all considered as the potential input devices of the future. No doubt there were other metaphors besides windows considered as well. HyperCard, anyone?

Steve Jobs, by selecting the mouse as the metaphor of choice for the Lisa and subsequent Macintosh computers, sealed the deal.  Within a year, Bill Gates, by stealing the same design metaphor for use in Windows 1.0, finished the deed.  By 1986, the mouse was a fait accompli.

Since the dawn of the Mac and Windows 1.0, we’ve taken for granted the notion that the mouse is and will be the primary user interface for most personal computing and for most software.

However, computing is embedded in every part of our lives today, from our cell phones to our cars to games and zillions of other devices around the house, and those devices have myriad different user interfaces.  In fact, creating new user experiences is central to the identity of these technologies.  What would an iPhone be without a touch screen?  What would the Wii be without its Wiimotes?  What, indeed, is an Xbox 360 but a PC with, uh, lipstick and a different user interface metaphor?

(An aside: How awesome would it be if the iPhone, Wii, and Xbox 360 all required the use of a mouse?  People fidgeting on a cold day, taking out their iPhone, holding it in their left hand, plugging in their mouse, working it around on their pants to make a call.  Kids splayed out on the rumpus room floor, mousing around their Mario Karts. Killer, souped up force-feedback mice made just for killing people in Halo.  Mice everywhere, for the win.)

So, what’s with the rant?  Simply that the web has taken a bad problem — our over-reliance on mice — and made it even more ubiquitous than it was in the worst days of windowing UIs.

“And then if you click here…”

No, here — not over there.  Click here first.  Scroll down, ok, then click submit.  Now click save.

See the problem?  The reliance on the mouse metaphor on the web is fraught with two hazards.

  1. Mice require users to become collaborators in your design.
  2. Each user only brings so much “click capital” to the party.

Catch My Disease

We’ve all had the experience of using a site or app that requires a great deal of either time or advance knowledge to fully utilize.

You know the ones — the ones with lots of buttons and knobs and select boxes and forms just waiting for you to simply click here, enter the desired date, choose the category, then get the subcategory, choose three friends to share it with, then scroll down and enter your birthdate and a captcha (dude) and then simply press “check” to see if your selection is available for the desired date; if it is, you’ll have an opportunity to click “confirm” and your choice will be emailed to you, at which point you will need to click the link in the email to confirm your identity, and you’ll be redirected back to the main site at which point you’ll have complete and total admin control over your new site.  Click here to read the section on “Getting Started”, and you can click on “Chat with Support” at any time if you have any questions.

What the hell do these sites want from you?

If these sites are trying to provide a service, why do they need you to do so much to make them work?  Sure, some tasks are complex and genuinely require information, processes, and steps, but when you ask users to participate too much as key elements in your design, you create frustration, resentment, and ultimately rage.  That’s cool if that’s your goal, but if you’re trying to get happy users, you’ve done nothing to advance that cause.  So, it shouldn’t be about “all you have to do is click here and here.” Ask less of your users.  Do more for them.  Isn’t that what service is all about?
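To make that concrete, here’s a minimal sketch in TypeScript (every name here is hypothetical, not taken from any real site) contrasting a design that demands six inputs up front with one that guesses sensible defaults and lets the user override only what matters:

```ts
// A sketch, not a prescription. All names are hypothetical.

interface Booking {
  date: Date;
  category: string;
}

// The "ask everything" design: six required inputs before anything happens.
function bookTheHardWay(form: {
  date: string;
  category: string;
  subcategory: string;
  friends: string[];
  birthdate: string;
  captcha: string;
}): Booking {
  // ...validate all six fields, email a confirmation link, and so on.
  return { date: new Date(form.date), category: form.subcategory };
}

// The "do more for them" design: guess well, let the user refine.
function bookTheEasyWay(overrides: Partial<Booking> = {}): Booking {
  const defaults: Booking = {
    date: new Date(),         // assume "now" until told otherwise
    category: "most-popular", // assume the common case
  };
  return { ...defaults, ...overrides }; // clicks become optional refinements
}

const oneClick = bookTheEasyWay();                       // zero forms
const refined = bookTheEasyWay({ category: "weekend" }); // one field changed
```

The specific fields don’t matter; the point is that the defaults do the work, and the user’s clicks become optional refinements rather than required labor.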

Limited Click Capital

Sometimes, people just want to be served — even entertained or enchanted. They don’t want to become the slavish backend to a maniacal computer program that requires six inputs before it can continue.  Is the user in service of the computer, or is the computer serving the user?  I always thought it was the latter.

I’ll never cease to be instructed by the lessons learned from developing my sites Twittervision and Flickrvision. Both sites do something uncommon — they provide passive entertainment, enchantment, and insight in a world where people are asked to click, select, participate, scroll, sign up, and activate. It’s sit back, relax, and contemplate, rather than decipher, decide, and interact.  Surely there are roles for both, but people are so completely tired of deciphering that having a chance to simply watch passively is a joyful respite in a world mostly full of badly designed sites and interactions. This alone explains their continued appeal.

People come to sites with only so much “click capital,” or willingness to click on and through a site or a “proposed interaction.”  This is why site bounce rates are usually so high.  People simply run out of steam before they have a chance to be put through your entire Rube Goldberg machine.  Make things easier for them by demanding fewer clicks and interactions.

Make Computing Power Work For Your Users

Truism alert: we live in an age with unprecedented access to computing power.  What are you going to do with it?  How are you going to use it to enchant, delight, and free your users?  Most designs imprison their users by shackling them to the design, turning them into nothing more than steps 3, 6, 8, 9, and 11 of a 12-part process.  How are you going to unshackle your users by making them — and their unfettered curiosity — the first step in a beautiful, infinitely progressive algorithm?

Predict and Refine

Forms and environments that rely on excessive interaction typically make one fatal assumption: that the user knows what they want. Most users don’t know what they want, or they can’t express it the way you need to know it, or they click the wrong thing.  Remove that choice.

Do your best to help your users along by taking a good guess at what they want, and then allow them to refine or steer the process.

Remember, you’re the one with the big database and the computers and the web at your disposal: how are you going to help the user rather than asking the user to help you?  You’re advantaged over the user; make it count for something.
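As a sketch of what that might look like, here’s a toy predict-and-refine function in TypeScript; the scoring rule is a stand-in for whatever your real database can actually do, and every name is hypothetical:

```ts
// A toy "predict and refine" step: rank likely intents against a fragment
// of input instead of waiting for a complete, correctly filled-out form.

interface Guess {
  label: string;
  score: number;
}

function predict(fragment: string, corpus: string[]): Guess[] {
  const f = fragment.toLowerCase();
  return corpus
    .map((label): Guess => {
      const l = label.toLowerCase();
      // Crude scoring: prefix matches beat substring matches.
      const score = l.startsWith(f) ? 1 : l.includes(f) ? 0.5 : 0;
      return { label, score };
    })
    .filter((g) => g.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, 3); // offer a short list to steer, not a blank page
}

// "bal" is already enough to act on; the user refines by picking.
console.log(predict("bal", ["Baltimore Sun", "Ballard", "Albany", "Balkans"]));
```

Three good guesses from three typed letters beats a form with six required fields.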

Don’t Think About Mice

Mice lead to widgets. Widgets lead to controls. Controls lead to forms. Forms lead to hate. How are you going to break free from this cycle and give your users something compelling and useful with the minimum (and most appropriate) interaction? What is appropriate interaction?

It depends.  What if you rely on gestures, or mouseovers, or three yes-or-no questions in big bold colors?  That’s minimal and simple.  It may be just what you need to empower your idea and serve your users.

I’ve been working with the Wiimote and the iPhone a lot lately, trying to use touch screens, accelerometers, and the Wii’s pitch and roll sensors to create new kinds of interaction.  Maybe this is right for your work.
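If you want to experiment in the same direction, here’s a rough browser-based sketch in TypeScript using the standard touch and device-motion DOM events; the thresholds and the actions are invented, and the Wii and iPhone of course have their own native APIs:

```ts
// A rough sketch of mouse-free input in the browser. Assumptions: this
// runs in a page on a touch- and motion-capable device; the thresholds
// (80 px, 5 m/s^2) and the logged actions are made up for illustration.

let startX = 0;

window.addEventListener("touchstart", (e: TouchEvent) => {
  startX = e.touches[0].clientX;
});

window.addEventListener("touchend", (e: TouchEvent) => {
  const dx = e.changedTouches[0].clientX - startX;
  if (Math.abs(dx) > 80) {
    // A single swipe replaces a row of navigation buttons.
    console.log(dx > 0 ? "swipe right: next item" : "swipe left: previous item");
  }
});

// Tilt as input, roughly what the Wii's pitch and roll sensors offer.
window.addEventListener("devicemotion", (e: DeviceMotionEvent) => {
  const g = e.accelerationIncludingGravity;
  if (g && g.x !== null && Math.abs(g.x) > 5) {
    console.log(g.x > 0 ? "tilted right" : "tilted left");
  }
});
```

None of this requires a single widget, control, or form.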

Think about it and don’t assume traditional mouse/web/form interactions. Sure, sometimes they are the right and only tool for the job, but if you want to stand out and create compelling experiences, they surely can no longer be the central experience of your design.

Long Live the Cursor

Back in the early days of GUIs, there were lots of people who contended that no serious work would ever get done in a window and that the staple of computing and business would be the DOS metaphor and terminal interactions.  There have been dead-enders as long as there have been new technologies to loathe.  I’m sure somewhere there was a vehement anti-steel crowd.

The mouse, the window, and HTML controls and forms are the wooden cudgels of our era — useful enough for pounding grain, but still enslaving us in the end.  How will you use the abundance of computing power, and new user interface metaphors to free people to derive meaning and value?