
The Math Behind Peoplemaps

For the last few years I’ve been working on mapping out social relationships in cities with my project peoplemaps.org. I had the chance to speak about this work recently at the TED Global conference, and the talk was featured this week on the ted.com website. This has led to many, many inquiries about the project, how it works, and the limits of its applicability.

So I wanted to take an opportunity to explain in high-level terms how I’m doing this work, what it tells us, and what it does not. I’ll attempt here to cover the approach in enough detail so others can reproduce it, without going into deep implementation details which are frankly not important to understanding the concepts. However, I’m happy to collaborate with anyone who would like to discuss implementations.

Theoretical Foundation

In the real world, each of us has personal relationships: our family, friends, co-workers, people we interact with in social settings, children and their parents and friends, and the like. The conventional wisdom is that we can maintain roughly 150 real-world relationships — this is called the “Dunbar Number,” named after the anthropologist who identified this phenomenon. Some people may have more, some people less, but 150 is probably average.

In an online setting, people may have many more relationships — perhaps a few thousand that they interact with in some meaningful way. However, the offline, real-world relationships that people have will likely be an overlapping subset of the relationships they have in an online setting. Online relationships are often more flexible (they can be global and operate at all times of day), though not always as meaningful. Still, if you are able to look at people’s online relationships in a specific geography, you have at least a proxy for understanding their offline, real-world relationships as well.

The ability to infer things about both offline and online interactions is derived from the principle of homophily, which sociologists identified as the powerful tendency for people to cluster into groups of people who are similar to themselves. Colloquially we know this through the saying, “birds of a feather flock together,” and it is powerfully demonstrated in network data.

So if we accept the notion that people do, in fact, have relationships that both shape and are shaped by their interactions, then it follows that there may be some ways to measure these relationships with at least some level of fidelity. Social network data appears to offer at least a window into these real-world relationships, though each dataset has biases which are not yet well understood. However, by comparing results from different networks, we can start to get a sense for what those biases might be.

This is where the state of the art for this research is right now: trying to understand how these different data sets are biased, the nature of those biases, and whether the biases are material in terms of distorting the “real world” network we are trying to understand. However, this lack of perfect understanding isn’t preventing people from using this data to do all kinds of things: from recommending movies to helping you find “people you may know” to identifying terror cells using cell-phone “metadata.” All of these activities use essentially the same approach.

These Maps Are NOT Geographic

A quick caveat: while we typically think of city maps as geographic, these maps are explicitly NOT geographic in nature. Rather, they show communities and their relationships to each other. The position of communities relative to the page is always arbitrary; their position relative to each other is determined by the presence or absence of relationships between them. I bring this up now to dissuade any notion you might have that these maps are geographic, despite whatever resemblance (real or imagined) they might have to geography. That said, there are ways to tie these maps back to geography and use it as an additional investigative tool, but you should assume that all discussion of maps here is non-geographic, unless otherwise noted.

Gathering the Data — and Avoiding Bieber Holes

Depending on the dataset you are looking to explore, the exact details of how you gather data will vary somewhat; I have used data from Twitter, Facebook, LinkedIn, AngelList, email, and other sources. In all cases, you will want to gather a set of “nodes” (people, users, or companies, depending on the data source) and “edges” (relationships between them — typically “friend” or “follow” relationships.)

For the case of the city network maps, this is the approach I have used with Twitter data (a code sketch follows the list):

  1. Define a target geography using a geographic bounding box.
  2. Determine a set of keywords and location names that users inside the target geography may use to identify their location/geography.
  3. Identify a set of “seed” users within the bounding box, or which otherwise appear to verifiably be within the target geography.
  4. Determine which user identifiers are followed by a given user, and record that in a list that maintains the number of followers for a given user identifier.
  5. When a given user identifier is followed by a number of people exceeding a threshold, request the full user information for that user to see if it appears to be within the target geography.
  6. If that user is within the target geography, then feed it into step 4, requesting the user identifiers followed by that user.
  7. Repeat steps 4-6 until there are few new additions to the dataset.
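
To make the loop concrete, here is a minimal Python sketch of the crawl. It is not the production crawler: fetch_followed_ids, fetch_user, and looks_local are hypothetical stand-ins for your Twitter API client and location test, and the follow threshold is an arbitrary choice.

```python
# A minimal sketch of the snowball-sampling crawl described above.
# fetch_followed_ids() and fetch_user() are hypothetical wrappers around
# whatever Twitter API client you have; looks_local() stands in for the
# keyword/bounding-box location test from steps 2-3.
from collections import Counter, deque

FOLLOW_THRESHOLD = 5          # step 5: how many locals must follow a user
                              # before we bother looking that user up

def crawl(seed_user_ids, looks_local, fetch_followed_ids, fetch_user):
    confirmed = set(seed_user_ids)        # users verified inside the geography
    follow_counts = Counter()             # step 4: tally of who locals follow
    queue = deque(seed_user_ids)
    edges = []                            # (follower, followed) pairs

    while queue:
        uid = queue.popleft()
        for followed_id in fetch_followed_ids(uid):
            edges.append((uid, followed_id))
            follow_counts[followed_id] += 1
            # step 5: once enough locals follow this account, check it
            if (followed_id not in confirmed
                    and follow_counts[followed_id] == FOLLOW_THRESHOLD):
                profile = fetch_user(followed_id)
                if looks_local(profile):          # step 6
                    confirmed.add(followed_id)
                    queue.append(followed_id)     # feed it back into step 4
    # keep only edges between confirmed local users
    return confirmed, [(a, b) for a, b in edges
                       if a in confirmed and b in confirmed]
```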

Doing this process once produces a “first draft” data set, which can then be visualized for inspection — to look for improper inclusions, obvious exclusions, and any particular data artifacts or pathologies.

At this stage, various problems may appear. As an example, if you are trying to visualize Birmingham UK, you will likely end up with some data for Birmingham, Alabama, due to legitimate confusion which may exist between the two communities. At this point, you can modify the test used to determine whether something is inside the target or not, and regenerate the dataset, as well as perform additional data gathering iterations to get more of the “right” data. This process typically takes a few iterations to really drill down into the data you’re looking for.

One persistent problem I’ve come across is a phenomenon called Bieber Holes, which are essentially regions of the network occupied by Justin Bieber fans. They are so virulent, and their networks so dense, that only aggressive exclusion filters can prevent the algorithm from diving down into these holes and unearthing millions of Beliebers — only a fraction of which may pass location tests. Anyway, I’ve developed good techniques to avoid Bieber holes (and similar phenomena) but it’s a reminder that when working with data from the public with algorithmic approaches, editorial discretion is required.
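
As an illustration, the location test handed to the crawler (the looks_local check in the sketch above) can be as simple as an include/exclude keyword filter over the free-text profile fields; the terms below are illustrative only, not the filters used for any published map, and the profile is assumed to be a plain dict of profile fields.

```python
# A sketch of the kind of location test and exclusion filter described above,
# for a hypothetical Birmingham (UK) map. The keyword lists are illustrative.
INCLUDE = ["birmingham", "brum", "west midlands", "solihull"]
EXCLUDE = ["alabama", ", al", "justin bieber", "belieber"]

def looks_local(profile):
    """Crude include/exclude test over the free-text profile fields."""
    text = " ".join([profile.get("location", ""),
                     profile.get("description", "")]).lower()
    if any(term in text for term in EXCLUDE):
        return False
    return any(term in text for term in INCLUDE)
```

In practice the include and exclude lists grow with each review iteration, which is exactly the editorial discretion described above.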

Laying out the Network Graph

There are dozens of algorithms for laying out network graph data, each optimized to illustrate different properties of the graph. Since I’m primarily interested in homophily and clustering, I’m looking for layouts that express communities of relationships. A good way to do that is to use a force-directed graph layout algorithm; with this approach, relationships act like springs (expressing Hooke’s law), and each user or node repels nearby nodes (expressing Coulomb’s law). By iteratively drawing the graph based on these forces, the graph will eventually reach a steady state which exhibits the following properties:

  1. People with many relationships between them will be arranged into tight clusters.
  2. People with the fewest relationships between them will appear at opposite edges of the graph.
  3. People who have many relationships at both ends of the graph will appear in the middle.
  4. Clusters with few or no relationships between them will appear very far apart on the graph.

You can think of this in very simple terms. If you have a room full of 10 people, there can be a total of 45 relationships between them (n * (n-1)/2, since someone is not friends with themselves). If every person is friends with every other person, this network will appear as a perfect and symmetric “ball” under the rules of a force directed graph layout.

Likewise, let’s suppose that same room of 10 people was grouped into two groups of 5 people, and that those two groups hated each other and refused to speak, but each member of each group knew every other member of their own group. You would see two perfect “ball” layouts (each with 10 relationships expressed), but with no connection between them.
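
To make the spring/repulsion idea concrete, here is a toy Python sketch that lays out exactly this two-group example using Hooke-style springs and Coulomb-style repulsion. This is not the code behind the published maps; the constants and step counts are arbitrary.

```python
# A toy force-directed layout for the two-group example above: two groups of
# five, fully connected internally, with no edges between the groups.
import math
import random

def two_cliques():
    nodes = list(range(10))
    edges = [(i, j) for group in (range(0, 5), range(5, 10))
             for i in group for j in group if i < j]   # 10 edges per group
    return nodes, edges

def force_layout(nodes, edges, steps=500, k_spring=0.05, k_repel=0.02):
    random.seed(1)
    pos = {n: [random.random(), random.random()] for n in nodes}
    for _ in range(steps):
        force = {n: [0.0, 0.0] for n in nodes}
        # Coulomb-style repulsion between every pair of nodes
        for i in nodes:
            for j in nodes:
                if i == j:
                    continue
                dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
                d2 = dx * dx + dy * dy + 1e-6
                force[i][0] += k_repel * dx / d2
                force[i][1] += k_repel * dy / d2
        # Hooke-style springs pulling connected nodes together
        for i, j in edges:
            dx, dy = pos[j][0] - pos[i][0], pos[j][1] - pos[i][1]
            force[i][0] += k_spring * dx
            force[i][1] += k_spring * dy
            force[j][0] -= k_spring * dx
            force[j][1] -= k_spring * dy
        # Move each node a clamped step along its net force
        for n in nodes:
            fx, fy = force[n]
            mag = math.hypot(fx, fy)
            if mag > 0.1:
                fx, fy = fx * 0.1 / mag, fy * 0.1 / mag
            pos[n][0] += fx
            pos[n][1] += fy
    return pos

positions = force_layout(*two_cliques())
# After iterating, the two groups sit as two tight, well-separated clusters.
```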

When we visualize data from a city in this way, we are essentially measuring the separateness of communities — whether we see several of these separate groups or whether we see one unified community.

Note that in the example with 10 people in one group, we have a total of 45 relationships, while in the example with two groups of 5, there are only 20 (2 x 10). Scaling that up to a city of 500,000 people, if the city was fully meshed, there would be 124,999,750,000 total relationships, while if that city is segregated into two groups of 250,000, there can only be 31,249,875,000 relationships in each group, or 62,499,750,000 total relationships across both groups, which is a little less than half of the number of relationships possible if the two groups merged.
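
These counts come straight from the pairs formula n(n-1)/2; a few lines of Python make the comparison explicit.

```python
# Check the relationship counts above using the pairs formula n(n-1)/2.
def max_relationships(n):
    return n * (n - 1) // 2

print(max_relationships(10))          # 45: one room of 10
print(2 * max_relationships(5))       # 20: two groups of 5
print(max_relationships(500_000))     # 124,999,750,000: fully meshed city
print(2 * max_relationships(250_000)) # 62,499,750,000: two halves, just under half
```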

Detecting Communities and Adding Color

Within the network, we can use algorithms to detect distinct communities. Communities are defined by the relative density of shared relationships within a given subgroup. We use the Louvain community detection algorithm, which iteratively determines communities of interest within the larger network and assigns a community membership to each user accordingly.
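
As a sketch of this step: recent releases of networkx ship a Louvain implementation, and assuming the edge list produced by a crawl like the one above, community assignment might look like this (not necessarily the exact tooling used for the published maps).

```python
# Community detection on the follow graph using networkx's Louvain
# implementation. `edges` is assumed to be the (follower, followed) pairs
# produced by the crawl sketch above.
import networkx as nx
from networkx.algorithms.community import louvain_communities

def detect_communities(edges, seed=42):
    G = nx.Graph()                    # treat follows as undirected ties
    G.add_edges_from(edges)
    communities = louvain_communities(G, seed=seed)
    # Map each user id to the index of the community it was assigned to
    membership = {node: idx
                  for idx, group in enumerate(communities)
                  for node in group}
    return membership
```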

We can then assign each of these communities a color. For my work so far, I have assigned these colors arbitrarily, with the only goal of visually differentiating one community from another. This helps to generate an aesthetically pleasing visual representation.

While communities often correlate to the clustering exhibited by the layout algorithm, for nodes that are not clearly members of only one cluster, color can be used to indicate their primary community affiliation. This is helpful in the visualization because it generates blended color fields that can give a sense of the boundary between two communities.

For example, a group primarily concerned with politics (blue) and a group primarily concerned with music (yellow) may have a mix of both blue and yellow nodes in the space between those distinct communities, and you can get a sense that those people are interested in both communities, as well as a sense of which one is their primary affiliation.

Assigning Node Size with In-Degree

In a graph of Twitter users, it can be helpful to indicate how many people are following a given user within our specific graph (note that this is calculated for our subgraph, not taken from a user’s “global” follower count as displayed by Twitter.)

We can do that by making the “dot” associated with each user bigger or smaller based on the number of followers within the subgraph. This has no real effect on the shape of the network, but it can be helpful in determining what kinds of users are where, and how people have organized themselves into communities of interest.
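
A minimal sketch of that sizing step, again assuming the crawled edge list where each (a, b) pair means “a follows b”; the size range is an arbitrary display choice.

```python
# Sizing nodes by in-degree within the subgraph (not by Twitter's global
# follower counts). Sizes are arbitrary display units.
import networkx as nx

def node_sizes(edges, min_size=10, max_size=400):
    D = nx.DiGraph()
    D.add_edges_from(edges)              # edge (a, b) means "a follows b"
    indeg = dict(D.in_degree())
    top = max(indeg.values()) or 1       # most-followed node sets the scale
    return {node: min_size + (max_size - min_size) * (count / top)
            for node, count in indeg.items()}
```

A square-root or logarithmic scale is also a reasonable choice if a handful of very popular accounts would otherwise dwarf everything else.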

Determining Community Interests

After you have a colored graph with communities and clusters, it’s time to figure out what these clusters are organized around. The first step is to manually inspect nodes to see who they are, often starting with the biggest nodes. Typically you’ll find that people organize themselves into groups like these: sports, music, mainstream media, politics, food, technology, arts, books, culture, and the like. These clusterings vary somewhat from city to city, but you’ll see some common patterns between cities.

After communities are detected, we can start to monitor traffic coming out of these communities to look for topics of conversation and other characteristics which may be helpful in explicating the observed clustering. For instance, we can gather a corpus of Twitter traffic for each community, recording:

  1. hashtags
  2. commonly shared links
  3. languages in use
  4. operating systems in use (desktop vs. mobile, etc.)
  5. client software in use
  6. geographic coordinates (as provided by GPS)
  7. age of user accounts
  8. user mentions

With this kind of data recorded as a histogram for each community, we can start to get a pretty good sense that a given group is mostly concerned with sports, music, politics, and the like. By working with a collaborator who is well-versed with the culture of a given place, we can also get a sense of local subtleties that might not be immediately obvious to an outside observer. These insights can be used to generate legends for a final graph product, and other editorial content.
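
As a sketch, assuming each recorded tweet is a dict keyed by user id with whatever fields were captured, the per-community histograms can be built with a couple of Counters; the field names here are illustrative.

```python
# Building per-community histograms of the recorded traffic characteristics.
# `tweets` is assumed to be an iterable of dicts with a user id plus whatever
# fields were recorded (hashtags, links, language, client, and so on).
from collections import Counter, defaultdict

def community_histograms(tweets, membership,
                         fields=("hashtags", "links", "lang", "client")):
    hist = defaultdict(lambda: {f: Counter() for f in fields})
    for tweet in tweets:
        community = membership.get(tweet["user_id"])
        if community is None:
            continue                      # user not in the mapped graph
        for field in fields:
            value = tweet.get(field)
            if isinstance(value, list):
                hist[community][field].update(value)
            elif value is not None:
                hist[community][field][value] += 1
    return hist

# hist[c]["hashtags"].most_common(20) then gives a quick read on what
# community c talks about.
```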

Investigating Race

Another phenomenon is very apparent in American cities: people separate by race. American cities like Baltimore and St. Louis are polarized into black and white communities, with some people bridging in the middle. These cities present roughly as eccentric polygons, with differences in mesh density at each end of a racially polarized spectrum. Relatively prosperous European cities, like Munich or Barcelona, present more like “balls,” with no clear majority/minority tension displayed. Istanbul, by contrast, shows a strong divide between the rich establishment and a large emergent cohort of frustrated young men.

In many American cities, we do observe strong racial homophily in the data. For example, in the data for a city like Baltimore, people at opposite ends of the spectrum are generally strongly identifiable as “very white” or “very black.” This is a touchy subject, and it’s difficult to discuss without offending people; however, what we are aiming to do here is to understand what the data is showing us, not to make generalizations or prescriptions about race.

One way to ground this discussion about race is to look at the profile photos associated with user accounts. We can gather these photos in bulk, and when displaying them together, it becomes quickly clear that people have often organized themselves around skin color. When they have not organized themselves around skin color, they are organized around other cultural signifiers like fashion or style. This can give us a concrete sense that certain communities consist primarily of one race or another. However, this is not to suggest that outliers do not exist, or to make any statement about any given individual — and it certainly does not suggest an inverse relationship between race and the ability or proclivity to participate in any given cluster or clusters.

The Final Product

Once we have refined the data, identified communities, investigated trends and topics within communities, and potentially looked at photos, profiles, and other cultural signifiers, we are in a position to annotate the network map with editorial legends. This is a fundamentally human process. I generally use a tool like Photoshop or Keynote to annotate an image, but this could be done in a number of ways. Once this step is done, a final product can be exported.

This entire process of taking data from one or more biased sources, refining that data, and then using a human editor who also has a bias produces a final product comparable to an “op-ed” in a newspaper: it’s an expression of one possible mental model of the world, informed by a combination of facts, errors, and pre-existing opinions. It’s up to the reader to determine its utility, but to the extent it offers a novel view and the reader deems the data and editorial biases acceptable, such renderings can be an informative lens through which to see a city.

A Note About Bias, Inclusions, and Exclusions

A common criticism of large-scale social network analysis is that it is not representative because it is biased in some way. A typical statement is, “I don’t use Twitter, so the results don’t include me, therefore I question the validity of the approach.” Likewise, people may say the same thing about LinkedIn or Facebook, or start into a long explanation of how their online habits are very different from their offline reality. These are important facts to consider, but the question is really whether network analysis is representative enough to start to deliver information that we didn’t have until now.

To answer this, it’s helpful to think about this in terms of recent history. In 2004 or 2007, it probably wasn’t helpful to say anything in particular about data from social networks, because none of them had enough penetration to deliver insights beyond a very biased community of early adopters: whether it was geeks in the case of Twitter, or young music fans in the case of MySpace, or college students in the case of Facebook.

As these networks have continued to evolve, however, their penetration has increased rapidly. They are also accreting a great deal of historical data about people and their positions in social networks, which gives us clues as to their offline interactions. This accretion of data will only continue and begin to paint a more complete and multi-dimensional picture of our culture — especially if you correlate the data from multiple social networks.

I believe we are now at a point where these data sets are large and detailed enough to offer important insights about our “real world” culture. This belief is based on two facts. First, it’s possible to get a good working image of a community by gathering data over just a few hours. While we make every effort to gather as much data as is realistically possible before making statements about a community, the effect of adding more data is additive: we add members to the community, but we do not fundamentally alter its shape.

To understand this effect, it is helpful to consider an analogy from astronomy. A more powerful telescope can yield a better, sharper image of a star formation, but it changes neither the shape of the star formation nor our basic understanding of its structure. Better tools simply yield more detailed data. I believe we are at a point where we have enough data to begin to understand structures, and that more data will yield more detailed understanding — but not alter the fundamental shapes we are beginning to uncover. This line of thinking is helpful in comprehending “exclusions.”

On the subject of “false inclusions,” these are generally sussed out in the layout and review process, but all of the graphs I have produced have contained a limited number of false inclusions as a practically inevitable artifact of the process. As a result, any statements one may make about a specific individual based on this kind of analysis may or may not be valid: they should be viewed through the lens of the biases disclosed alongside the visualization. However, repeated experience has shown that removing the small number of false inclusions does not have a material effect on the community structure.

On the subject of “network bias” (I don’t use X, therefore conclusions are not valid): when comparing data from Facebook, Twitter, LinkedIn, AngelList, and other sources, the same patterns of homophily are exhibited. While each network has its own bias (LinkedIn towards professionals, Twitter towards both youths and professionals, Facebook towards grandparents), if we limit analysis to a given geography, we will inevitably see homophilic tendencies which are correlated across the networks.

While it’s difficult to speak about this in detail yet, as obtaining fully comparable data sets from each source is currently quite challenging, early investigations indicate that the same patterns are exhibited across all networks. This squares with the notion that examining these networks is really just a proxy for examining real-world relationships. Ultimately, these networks must converge into something that approximates this abstract, Euclidean reality, and as we get more data and correlate it, it’s likely that a unified data set will closely reflect the actual geometry of our communities.

Future Research Directions

This kind of analysis is forming the basis for the emergent field of “computational sociology,” which is currently being explored by a variety of researchers around the world. This work has a number of important implications, and poses questions like:

  • What do we mean by diversity? If we should be looking to bridge networks, is race a helpful proxy for that function? Or should we be looking to develop new better measures of diversity?
  • What is the nature of segregation? Is physical segregation a product of our social networks? Or is it a manifestation of them?
  • What is the role of urban planning? Is our social fabric something that’s shaped by urban planning, or are our cities simply a manifestation of our social fabric?
  • What kind of interventions might we undertake to improve a city’s health? Should they be based first on creating and improving relationships?
  • What is creative capital and how can we maximize it? If creative capital is a byproduct of relationships between people with diverse backgrounds, then we should be able to increase creative capital by orders of magnitude by bridging networks together and increasing the number of relationships overall.

The level of detail we can extract from network data in cities exceeds that of any data source we have ever had, and it can supplement many other indicators currently used to measure community health and make decisions about resource allocation. For example, the census may tell us someone’s race, family name, and physical address, but it tells us very little about their participation in the social fabric. And if we believe that the city is primarily manifested as the sum total of social relationships, then clearly data surrounding social interactions is more useful than other attributes we may harvest.

Discerning Stable Structures vs. Topical Discussions

Likewise, many researchers have been using social network data like Tweets to characterize conversations around a topic, and it is true that by harvesting massive quantities of Tweets, one can discern trends, conversation leaders, and other insights. However, this type of analysis tends to be topical, shifts a great deal depending on who happens to be active online, and carries biases around why people are active online in the first place. Regardless, this kind of research is well worth understanding, and may ultimately lead to a better understanding of network structure formation. However, I want to differentiate it from the kind of inquiry I am pursuing.

So, rather than “you are what you tweet,” I believe the truth is something more like “you tweet what you are.” Network follow structures tend to be very stable; while they do change and evolve over time, one interesting feature of the process I am using is that it tends to be very stable over time and the results of analysis are repeatable. That is to say, if I analyze Baltimore one day, and then repeat the analysis a month later, I will obtain a comparable result. This is not likely to be the case with semantic analysis as topics and chatter may change over the course of a few weeks.

Tracking (and Animating) Changes Over Time

Because network structure analysis is fundamentally labor-intensive and somewhat difficult to compute, it’s difficult to automate and to perform on an ongoing basis. For example, it would be nice to know whether things in Baltimore are getting “better” or “worse.” Right now, only by taking periodic snapshots and comparing them can we get a sense of this. It would be helpful to be able to animate changes over time, and while this is theoretically possible, tools to do this have to be built by hand right now. My hope is to apply such tools to the process and dramatically increase our ability to monitor network structure in real time and spot trends.

Ultimately, this will give us some ability to see our social structure change in near real-time, developing a sense as to whether interventions are having a positive effect, or any effect at all.

Healthy vs. Unhealthy Network Patterns – and Brain Development

Dr. Sandy Pentland, an MIT researcher who is perhaps the leading figure in the field of computational sociology, has suggested that there are certain patterns that characterize “healthy” networks, as well as patterns that characterize “unhealthy” networks.

Healthy networks are characterized by:

  • Frequent, short interactions
  • Broad participation by all nodes and meshing
  • Acknowledgement of contributions
  • A tendency to explore other parts of the network

By contrast, unhealthy networks exhibit the opposite patterns: broadcast vs. peer-to-peer interactions; fewer network connections; isolated subnetworks; a lack of exploration of other parts of the network.

Perhaps the most telling finding is that these patterns are not limited to just human networks, but also appear in other colony-based organisms, like bees. It appears that there are universal patterns of network health that apply to life in general.

The other important finding is that lack of network exploration affects brain development in young people. Specifically, young people who grow up in an environment where network exploration is not valued tend to exhibit structural changes in the brain which appear to dampen their desire to explore networks as adults. This produces a multi-generational effect, where children who grow up in isolated networks tend to persist in isolated networks, and to pass that on to their children.

This suggests that one possible intervention is to promote network exploration at an early age across the entire population. How we might do this is certainly open to discussion, but it seems to be a very powerful tool in breaking down divisions in our social networks.

What’s next?

As this project advances, we are gathering ever-larger datasets. This requires more and more computational power and distributed algorithms for visualization. If this is something you’re also working on, I would like to talk to you — please contact me. There are some interesting challenges in scaling this up but there are interesting opportunities emerging to apply these approaches!

Mapping Your City

We have a long list of places that we’re looking at mapping, and trying to prioritize opportunities. If you have a mapping project you would like us to consider, please contact us. We will try to get back to you as quickly as possible, but in general, we’re looking for projects that can make a serious social impact. It takes a serious amount of effort to generate these maps, so be thinking about possible partners who could potentially fund this work and help advance this science.

Stop Talking and Build Your Business

While I’m deeply involved in entrepreneurship, breathing it day in and day out, and actively follow many discussions around it, I tend to shy away from writing about it most of the time. Why? Because I’m generally too busy doing the entrepreneurship to talk about it. In the end, it’s the facts of our accomplishments that will speak far louder than some hollow words.

But every once in a while it can be helpful to take note of where one stands and what one has learned. This is one of those moments, and I’d like to take a moment to share a few thoughts on what it takes to start a product company. (Most recently, I’ve been hard at work building and launching Mailstrom through my company 410 Labs.)

  • Building a sustainable business is the only thing that matters. Entrepreneurs get caught up worrying about all kinds of shiny objects: tech trends, investors, hot markets, compensation schemes, founder personalities, community — you name it. Those things are interesting, but there’s only one thing that matters: building a sustainable business. That means recurring revenue and controllable costs. Watch those two items and then maybe you’ll have something to talk about.
  • Know your CAC/CPA and LTV. If you’re building a recurring revenue business, you need to know your customer acquisition cost and the lifetime value of a customer, and you need to understand them really, really well. To understand customer acquisition cost, you need to know all of the costs that go into acquiring a new, paying customer. To understand lifetime value, you need to understand your customer retention rates really well. If this makes your eyes glaze over, you’re not an entrepreneur. (A back-of-the-envelope sketch of this math follows this list.)
  • The hard part comes after the funding. It may seem like securing funding is a great milestone, and you’ll be tempted to pat yourself on the back. Do that for a minute, but get the hell back to work. You have real work to do now. If you don’t understand how to deploy that funding, and more importantly don’t have a plan for what happens when (not if) you run out, you’ll be out of options. Be humble. This stuff is hard.
  • The Series A crunch is real. The funding environment today is quite brutal and you can expect to spend several months of your time working just on securing a “Series A” size funding round (for the sake of simplicity let’s define that as a post-seed round of $1M or more). Investors are looking for not only traction, but real revenues, social proof, and growth. They want to know why you are the entrepreneurial team that’s going to survive and thrive. There are thousands of other teams out there who won’t, so the odds are against you.
  • My dog can raise a seed round in this climate. With the myriad startup funds, accelerators, and crowdfunding options available today, it’s easier than ever to raise a seed round. While raising a seed or small angel round is definitely a validation that you may not be insane, it’s no validation that you’re sane either. Startups are hot these days, and everyone’s an “angel investor” (even me.) But just because you raise that seed round doesn’t mean you will have a clear path forward.
  • You are responsible for your success — not your investors or advisors. It’s tempting to think that having a rockstar list of investors and advisors is going to catapult you to success. In fact, that’s just not the case. While having “name brand” people on your team can accelerate the growth and create additional options for your successful business, the onus is on you to deliver that successful business. They are not going to make it happen for you. It’s ALL on you.
  • Building products is hard and requires tireless analysis and iteration. Building products is ridiculously hard work. Every day you need to devise new experiments and analyze the results. You need to continuously iterate and improve every aspect of your product. You have to make clear-headed choices about what is important and what’s not, frequently leaving even good ideas unfinished. In short, if you’re not comfortable with the scientific method, numeric and statistical analysis, and ambiguity (yes, ambiguity ALL THE TIME), then you have no business being a founder.
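
The post above doesn’t spell the formulas out, but a common back-of-the-envelope version of the CAC/LTV math looks something like this; the function names and every number below are invented for illustration.

```python
# A back-of-the-envelope sketch of the CAC/LTV math; the numbers are made up.
def cac(total_acquisition_spend, new_paying_customers):
    # everything spent to land a paying customer: ads, content, sales time...
    return total_acquisition_spend / new_paying_customers

def ltv(monthly_revenue_per_customer, gross_margin, monthly_churn):
    # expected customer lifetime is roughly 1 / churn months
    return monthly_revenue_per_customer * gross_margin / monthly_churn

print(cac(20_000, 400))        # $50 to acquire a paying customer
print(ltv(15, 0.80, 0.05))     # $240 of lifetime gross margin per customer
# A healthy recurring-revenue business wants LTV comfortably above CAC.
```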

I hear new entrepreneurs talk all the time about how everything is going so great — they had a great pitch meeting or secured a funding commitment, just hired a new person, or set up some shiny new piece of technology. And those things are all great. But remember, until you have a product that is generating real recurring revenue and you fully understand the dynamics, you don’t have a business, you have a conjecture.

And don’t get me wrong, as a fellow entrepreneur, I love a good conjecture. But what is really impressive is when you can start to make it sing and scale. That requires tons of hard work — hard work that’s not flashy, doesn’t let you write upbeat self-congratulatory status updates, or put out self-serving press releases.

All that matters is your customers, the value you’re creating for them, and the dollars and goodwill that value generates. Let’s talk about that, because the rest is noise.

Design, Affordances, Emergence, Appeal: An Innovator’s Primer

A lot of people talk about innovation in terms of fulfilling an unmet market need. Specifically, there’s a lot of emphasis on “solving problems.” (I’m looking at you, Dave McClure.) The theory is that entrepreneurs should work on solving a problem that lots of people have, and not get too focused on some technology. That’s fair advice.

However, when entrepreneurs hear this, their first instinct is often to go ask people about their problems and then try to solve them. Or they look for markets where there is a lot of money being spent.

“The best innovations are those that solve a problem that people didn’t even know that they had,” says Paola Antonelli, curator of design and architecture at MoMA. Twitter certainly falls in this category. In fact most people were sure they didn’t need Twitter, but now it’s a central part of our media landscape.

This class of innovation is the sort you have to shove down people’s throats at first, but it then changes the world forever. These innovations are tricky to find because no one will tell you they need them, and there’s no market study that outlines the opportunity.

Thinking about this, and stealing some good ideas from design thinking pioneers like Don Norman, Tim Brown, and Daniel Pink, I’ve settled on four key elements that entrepreneurs can use to think about innovation: design, affordances, emergence, and appeal.

Design

Steve Jobs is famously quoted as saying, “design is how it works,” and he’s right. How it works is determined by the design specifications and constraints. If it is software, the major design elements include aspects like synchronous vs. asynchronous, private vs. public, one-to-one vs. one-to-many vs. many-to-many, market size, viral reach, and mode of access. There are many other elements that determine the nature of a product’s design.

The outward aspects – how it looks and feels – are important insofar as they impose an additional set of operational constraints: what’s possible, what’s most likely, how the “happy path” feels, and how brittle the experience is.

When most people think about design, they think about “how it looks.” We’ll get to that in a minute. When you think about design, you really are determining “how it works,” and it’s the most critical part of creating an innovative product.

Affordances

Affordances are the possibilities that a particular design allows. If your product allows for a particular use, then its design affords that possibility. Sometimes there are negative affordances (a part allows for a hinge to open too widely, possibly damaging the product), as well as positive affordances (an iPod Touch can display streaming video, so it afforded the possibility for HBO to make a mobile subscription TV app.)

Every design offers a wide range of affordances, and you should think critically about what they are.

Emergence

Sometimes a design enables new behaviors that its creators did not predict. Users of the product start behaving in a new way that was not anticipated, though it is allowed by the original affordances (say hashtags on Twitter).

Sometimes the emergent behavior is incorporated back into the original design (such as when Twitter adopted hashtags and @ replies, and tracked their trends).

Emergence is usually a happy accident. Biz Stone, co-founder of Twitter, says, “always allow a seat at the table for the unknown.” That is an excellent design goal. By leaving a few doors open, one allows for this kind of emergent behavior to occur, and to capitalize on it.

Designers almost never consider all of the emergent possibilities that their designs afford. Being open to emergence, and incorporating it into later designs, is key to innovation.

Appeal

This is really a subset of design, but it’s worth discussing all by itself. Your product should have curb appeal and create an emotional connection with people that causes them to return to it again and again.

The finest Swiss clockwork will not go anywhere if it is packaged in an ugly shell. While design is “how it works,” your product’s human appeal has everything to do with “how it works with people.” Because without ongoing engagement from people, most products cannot survive.

So, how it “looks” certainly matters, but only insofar as it affects its ongoing appeal, and “how it works with people.” We know the best products are those that create that emotional, nearly-religious connection, and this can’t be overlooked.

Utility Is Difficult to Predict

I think asking about utility is often the worst way to evaluate a design in its early phases. “Why would I use this? What’s it good for? Who needs this?” are questions that are worth contemplating, but it’s also OK if the answer is “I don’t know yet.”

If a design affords a range of emergent behaviors, if it can be distributed to a large group of users, and if it can be made appealing and inspire devotion, odds are it’s something worth experimenting with. The odds that the ultimate utility of an interesting design will exceed early predictions are very high.

I love engineers, and do some engineering, but engineers are particularly prone to evaluate concepts in the frame of “how is it different from XYZ that already exists,” or “what technology does it employ?”

The success of the Wii is one of the wins that stymied many engineers. “The graphics sucked, the games were primitive, and there were better technologies on the market.” And those things were not the point. The Wii won because of its design, its affordances, its appeal, and the emergent behaviors (and user communities) it enabled and reached.

So be playful in your designs. Give things a chance. See what happens. Learn from emergent behaviors. And always leave a seat at the table for the unknown.

Real Innovation Takes Time

Combinatorial Innovation

There are so many new technologies today: tablets, geolocation, video chat, great app frameworks. It is easy to cherry-pick off “combinatorial” innovations that seem compelling, and can maybe even be monetized readily.

But all those innovations are inevitable. If our technologies afford a certain possibility, it will occur. “That’s not a company, that’s a feature,” is one criticism I’ve heard of many “startups.”

These combinatorial, feature-oriented “X for Y” endeavors are often attractive because they can often be built quickly.

Startup Weekend events send an implicit message that a meaningful business can be fleshed out in just a couple of days. And I argue that is not true. That might be a good forum to get practice with building a quick combinatorial technology and working with others, but a real innovation, much less a meaningful business, takes real time.

I think people are often looking in the wrong places for innovation, often because they don’t really take the time to do the homework, observation, and deep reflection necessary to arrive at a true insight. We want things to be quick and easy.

Changing Minds, and Behaviors

The biggest innovations require asking people to change their beliefs, habits, and behaviors.

iPhone: “why would I want a smartphone without a physical keyboard? It’s too expensive. I can’t install apps.”

Twitter: “what is this for? Why would anyone do this? Who cares what I had for breakfast?”

iPad: “an expensive toy. Could never replace a real laptop. Can’t run real office applications. The enterprise will never adopt it.”

Foursquare: “only hipsters and bar hoppers would ever do this. They are letting people know when to rob them. I don’t want people to know where I am.”

And these innovations have taken years of constant attention to bring to their current state. And they are not done.

One Innovator’s Story

Dennis Crowley, founder of Foursquare, was in the room at Wherecamp in 2007 where I was giving a talk about location check-in habits via Twitter (a subject I knew well because of my Twittervision service, which allowed this.)

Dennis, of course, also founded the precursor to Foursquare, Dodgeball, which he sold to Google in 2005 (they promptly killed it.)

But Dennis wanted to see his vision come to pass, and he knew it would someday be possible — though at that point the iPhone had not been released and it would be nearly two years before it supported GPS location technology.

But there Dennis was, doing his homework in 2007, studying user behavior to figure out exactly what behaviors he would have to encourage to make Foursquare work.

He asked me, “so, people are really putting their home and work locations formatted inside tweets in order to update their location?”

“Yep, a few thousand times a day,” I replied.

“That’s cool. That’s really cool stuff,” he said. And from that, and years of similar evidence-gathering and study, Foursquare would be born.

So, creating Foursquare took about five years. (I could have “stolen” the idea and built Foursquare myself. But I didn’t execute on that; it was his vision to pursue.) Dennis did his homework. He was prepared. And his vision preceded the technology that enabled it.

Why, not How

Real innovation doesn’t come from a weekend. It comes from passion, years of study, understanding deep insights and the “why,” and persistence in seeing something new through to market, along with the marketing and cheerleading that will make it successful.

The iPad owes much to Steve Jobs’ love of calligraphy. He cultivated a sense of aesthetics because of that initial interest. He didn’t set out to “make money” but rather dedicated himself to changing the world for the better using the entirety of his humanity. Time studying art wasn’t “lost,” it was R&D for the Mac, iPhone, and iPad.

Many of today’s entrepreneurs could stand to do less “hustling” and more reading, exploring, reflecting, and gathering input — and when it is time to make stuff, set their sights as high as possible.

There is more to this world than money, and there are countless opportunities to make it a vastly better place. Rather than using our CPU cycles just playing with combinatorial innovations, let’s devote ourselves to making the world as amazing as possible. Try to take time to reflect on how you can make the world better, and not just on what current technology affords.