Can Obama’s Get Out the Vote Success be Replicated Outside the Context of a Multi-million Dollar Campaign?

The 2008 and 2012 US presidential elections heralded a new era of using technology to activate voters, donors and volunteers.

Part of technology’s role in this success story was data: Campaigners were able to collect more data than ever before, update it more frequently than ever, and strategize on how to use it in new ways. Data acquisition and management had improved greatly since past elections, drawing from voter registration records, consumer data warehouses, and past campaign contacts.[1] Campaigners could poll voters and test campaign messages faster, cheaper and with greater ease. And savvy analysts used complex statistical models to accurately predict voter preferences and even election outcomes.[2] As Sasha Issenberg wrote in 2012, “a new political currency predicted the behavior of individual humans. The campaign didn’t just know who you were; it knew exactly how it could turn you into the type of person it wanted you to be.”
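
The Technology Review article doesn’t publish the campaign’s actual models, but a minimal sketch of the kind of individual-level support scoring it describes might look like the following, using Python and scikit-learn. The voter file and every column name here are hypothetical stand-ins, not the campaign’s real data.

```python
# A minimal sketch of an individual-level support model, assuming a
# hypothetical voter file; all column names here are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

voters = pd.read_csv("voter_file.csv")  # hypothetical input file
features = voters[["age", "past_votes", "donation_count", "contact_attempts"]]
labels = voters["supported_candidate"]  # 1 if a past contact found support

# Hold out a test set so the model's accuracy can be checked.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))

# Score every voter with a probability of support, then rank the list so
# canvassers and phone banks reach the most promising households first.
voters["support_score"] = model.predict_proba(features)[:, 1]
print(voters.sort_values("support_score", ascending=False).head())
```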

Technology also enabled fun and meaningful engagement for volunteers, often replicating the social media experience. MyBarackObama.com enabled a personalized campaigning experience, and its available digital tools let regular people easily plug into volunteer activities that would help the campaign, like door-to-door canvassing or calling voters. Computational management let dispersed teams thrive all over the country, thanks to technological-enabled benchmarking and reporting.

With success stories like this one, it’s easy to see why so many people believe technology is a panacea for civic and political engagement. But as someone who works on engagement via technology, I can tell you, this isn’t the case.

Sure, the Obama people overcame apathy by tapping into motivations and social networks and lowering barriers to certain forms of participation. But even they admitted that they often left certain people out, in favor of targeting the easy wins. And beyond this, the Obama campaign had a few key things that are hard for others to replicate.

  1. Money. The Obama campaigns in 2008 and 2012 spent millions of dollars on data analysts, coders, and strategists, not to mention media buying and ads. Other public sector entities like governments and non-profits don’t have Obama’s bankroll to churn up interest or action. Causes like public education, racial equality and community development don’t draw the same kind of cash, and thus don’t have the same resources at their disposal to develop sophisticated tools, hire talented analysts and strategists, or purchase consumer information.
  2. Urgency. With the fate of the free nation hanging in the balance and a firm deadline looming in November, presidential campaigns benefit from a sense of urgency that motivates people to act now rather than later. Ongoing causes may be just as important, but they don’t have the psychological benefit of a heightened sense of urgency driving people to immediate action.
  3. Simplicity. Presidential elections involve, at most, a few clear choices. For or against. Him or her. This party or that one. Once the choice is made, all efforts can be channeled toward a simple, shared goal: win the election. Few causes—whether legislative initiatives or activism agendas—have this kind of simplicity. Disagreement pulls people apart. Complexity slows things down and causes discouragement. And the overhead of managing all that information distances representatives from their constituents.

While there are lessons to be gleaned from the Obama campaign’s get out the vote success, not all of those lessons are universal. Public sector organizations should copy the campaign’s personalization of websites, activation of social networks, and focus on individual motivations. But they cannot be expected to run sophisticated big data operations on the limited resources available to them.

[1] http://www.technologyreview.com/featuredstory/509026/how-obamas-team-used-big-data-to-rally-voters/

[2] http://www.technologyreview.com/featuredstory/509026/how-obamas-team-used-big-data-to-rally-voters/


The Future of Data Sharing

Thanks to technology, the extent to which everyday citizens can be monitored and tracked is rapidly increasing. Anyone with enough interest can learn to GPS-track a cell phone. Facial recognition software now identifies faces correctly 97% of the time, and commercial versions—such as the one used by Facebook—are as powerful as those used by the FBI. The ability of computers to store and process massive amounts of data increases both the supply of and the demand for data on everyday citizens in countries all over the world.

I do not say this to be alarmist, but rather to remind us of the current technological landscape. These facts are neither good nor bad; they are simply facts.

And as technology keeps rapidly advancing, the possibilities for surveillance are also expanding.

Meanwhile, Silicon Valley hypocritically markets itself as the new ground zero of freedom, while also cooperating heavily with the NSA and international agencies. As James Risen and Nick Wingfield wrote in their 2013 article for the New York Times, both Silicon Valley and the National Security Agency “hunt for ways to collect, analyze and exploit large pools of data about millions of Americans. The only difference is that the N.S.A. does it for intelligence, and Silicon Valley does it to make money.”

But beyond their shared interests in data mining, sites like Facebook actually end up sharing a large amount of the information they collect: In addition to cooperating with the NSA when legally compelled, “current and former industry officials say the companies sometimes secretly put together teams of in-house experts to find ways to cooperate more completely with the N.S.A. and to make their customers’ information more accessible to the agency.”

It is particularly troubling to know that the technology itself is being built in a way that makes spying and data sharing easier, faster and more efficient! Companies could choose to make this harder if they wanted to; their position gives them the power to erect real obstacles to spying.

This power highlights the complex and problematic situation discussed by Rebecca MacKinnon in her book, Consent of the Networked: Internet companies, often operating without significant regulation—and, importantly, without the democratic legitimacy of a governmental institution—control the back end of a system that shapes our very rights and privacies.

What’s fascinating to me is that, while some vocal critics are outraged about data sharing, digital spying, and its implications, many Americans simply acquiesce.

I myself am guilty of this. While it bothers me to know how much information is gathered on me online, and I cringe slightly each time I enter my bank information, social security number, or email address into a secure web form, I still do it. It feels inevitable, and I feel powerless to change it.

And I am not alone. A 2013 survey conducted by the USC Annenberg Center for the Digital Future and Bovitz Inc. shows that Millennials accept that online privacy is dead, indicating a major shift in online behavior.

At the same time, a 2013 Pew Research Report found that “most internet users would like to be anonymous online at least occasionally,” and “some 68% of internet users believe current laws are not good enough in protecting people’s privacy online.”

Given our disapproval of the status quo, why are Americans so willing to acquiesce? Have we become so addicted to and dependent upon Facebook that we are unwilling to go without it? This is not just a social matter of feeling “in the loop” or having a lot of friends: for many, myself included, Facebook is an essential part of one’s profession. More and more services require Facebook as a way to sign in or sign up. And Google is the best search engine out there; are we really going to forgo using it?

We seem to have fewer and fewer options for removing ourselves from online data collection, short of going off the grid altogether. And we know that Americans are uneasy about what’s going on. But why aren’t we outraged? And why aren’t we doing anything?

What fascinates me is that people in the developing world are even more skeptical of putting their data online—repression is not so long ago in their collective memory, nor has it been as well-hidden as the insidious NSA spying. In other countries, that information has already been acted upon, whereas in the US we are still just setting the stage for potential future action.


It’s 1500 A.D., and I Just Bought a Printing Press

Over the last 40 years, the Internet has turned journalism and the newspaper industry upside down. Long-standing forms of financing the news and organizing media outlets have crumbled. Theorists like Clay Shirky have argued that “there is no general model for newspapers to replace the one the Internet just broke.”

Like many scholars, Shirky has likened the current media revolution to the one created by the printing press, which is visible in the dramatic contrast between the before and after phases: In the early 1400s, before Gutenberg’s invention and the widespread popularization of moveable type, literacy was limited, books were almost exclusively Bibles, and the Catholic Church had a firm grip on the European continent.[1] By the late 1500s, after Gutenberg’s invention had started to spread, literacy was on the rise, science was challenging religion, and the spread of information was destabilizing political power.[2]

But during this transition, around 1500, when rapid changes were taking place and various actors experimented with combining old and new methods and technologies, it was essentially chaos. As Shirky puts it, “The old stuff gets broken faster than the new stuff is put in its place.”[3]

That is precisely the time we are living in now. Experiments are taking place every day that will define how information gets created, verified, and distributed. As Dave Winer describes in his blog post “Readings from News Execs,” volunteer bloggers and amateur writers pose a serious challenge to reporters and newspapers, because of their willingness (and I would argue, eagerness) to do traditionally compensated work for no pay.

Just a few years ago, I was one of those aspiring amateur media-makers Winer describes, eager to find my place in the changing media landscape, and willing to sacrifice pay and even my own safety to carve out my niche.

It was 2006, and I had just graduated from college. I had a few years of student journalism under my belt, and had recently taken an ethnographic filmmaking class that had changed my life. I wanted to make documentary videos for a living, and in addition to taking a side job as a wedding videographer, I started surveying the lay of the land for how this “industry” worked—how did people get paid? How did their work get published? What steps could I follow to set up a stable career for myself?

The answers I found were both disappointing and constantly changing. I remember that for a short time, Current TV, a now-nonexistent TV channel, would pay $500 for user-generated video segments. (A fellow student and I had a serious discussion about earning our livings this way.) I remember when YouTube was acquired by Google, and so many of us wondered what the fate and impact would be of such a seemingly audacious video sharing website. And I remember countless conversations with other documentary filmmakers and videomakers about the realities of this industry.

My six years in the industry can be summarized by the words of wisdom I received from one award-winning filmmaker with an advanced degree from Harvard: “Jessie, every documentary filmmaker has some combination of the following: a trust fund, a patron or a day job.”

In my experience, this rings true. People were constantly trying to get my colleagues and me to work for free. It became such a joke that a graphic circulated around my workplace with a flowchart on “whether to work for free.”

[Image: “Should I Work for Free?” flowchart]

After six years in the industry, I realized what many in journalism have realized: that in order to achieve stability, I needed to attach myself to an institution. Unfortunately, there are very few institutions to choose from. As a result, I’ve been moving further and further from film work and into other sectors. In the world of documentary film, the stakes are not as high as in hard news. Documentary film isn’t charged with the responsibility of being the watchdog of politics. But when considering the parallel path that so many journalist friends have gone down, I begin to worry about how we will make it to the other side of this chaotic moment, and what knowledge will be lost in the meantime.


[1] http://www.shirky.com/weblog/2009/03/newspapers-and-thinking-the-unthinkable/

[2] http://www.shirky.com/weblog/2009/03/newspapers-and-thinking-the-unthinkable/

[3] http://www.shirky.com/weblog/2009/03/newspapers-and-thinking-the-unthinkable/


No Need to Reinvent the Wheel: Applying Social Network Analysis to Digital Literacy in the Developing World

There’s a strong temptation in today’s digital age to believe that we are living through “unique, revolutionary times, in which the previous truths no longer hold.”[1] This phenomenon has been described by many technology theorists, and was termed “Internet-centrism” by the writer Evgeny Morozov.

As Morozov and other scholars have pointed out, many of today’s technology-related problems (such as inbox overload and censorship battles) have historical parallels.[2] Clay Shirky has famously explained that the sense of information overload we feel toward our email inboxes and the Internet in general was also created by the printing press, which generated so much information at such a fast pace that no human could ever consume all of the knowledge in all of the books in a lifetime. Sounds familiar, right?

One of the main disadvantages of framing today’s social problems as entirely new is that it leads us to think we need solutions that are entirely new, and to ignore existing scholarship that could provide insight or even solutions to today’s problems.

Take, for example, the international movement towards using computer-based technology for international development. Soon after the Internet’s invention, the “digital revolution” it enabled was celebrated as a potential miracle cure for the developing world. World leaders like former UN Secretary General Kofi Annan touted new technology’s revolutionary potential to fix economic, social and political problems, and agencies like the UN and the World Bank invested millions in this hope.[3]

One of the first lessons learned in the technology-for-development movement was that in addition to access to a computer, people needed digital literacy—the know-how and social capital to utilize digital technology. This requires not only technical training in how to use technology, but also motivation, interest, and awareness of the possibilities technology can offer.

Since this revelation, international organizations and governments worldwide have invested millions in campaigns, programs and curricula to promote digital literacy. But, like the tech theorists who seek to understand social media networks and why things “go viral,” digital literacy advocates could learn a great deal from pre-Internet scholarship about social networks.

As Howard Rheingold explains in chapter 5 of “Net Smart: How to Thrive Online,” both online and offline networks are made up of individuals connected through strong and weak ties, and through direct and indirect channels.[4] Some people have a lot of influence and a lot of connections; Rheingold terms these people “supernodes,” and argues that they can direct a great deal of attention towards certain information.

In the online world, supernodes could be popular bloggers, or people on Twitter with thousands of followers. In the offline world, they’re simply influential individuals in our workplaces, neighborhoods and communities. Just as online supernodes can make Internet memes or videos go viral, offline supernodes hold tremendous potential for promoting education and adoption of technology in the developing world.

In Colombia, for example, data shows that citizens gain digital literacy skills more often from contacts in their own offline social networks than they do from official digital educators. According to national research surveys, only 41% of Colombians learned to use the Internet at an educational institution, and 32% learned from a friend or family member.[5] In addition, 63% of Colombians reported having taught someone else to use the Internet or applications.

Unfortunately, there haven’t been large-scale digital literacy programs to date that leverage these networks. The Colombian Ministry of Information and Communications Technology’s Redvolución campaign showed traces of potential: it provided limited training for high school students and average citizens to help introduce other Colombians to the Internet for the first time.

But Redvolución didn’t go so far as to examine strong and weak ties, seek to identify supernodes, map the hubs and bridges in community networks, or generate feedback loops about who was reaching people and at what pace. Based on Rheingold’s synthesis of social network analysis, these would be key steps in utilizing existing social networks to advance digital literacy.
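
To make this concrete, here is a minimal sketch of how such a program might begin to identify supernodes and bridges, written in Python with the networkx library. It assumes survey responses have already been converted into an edge list of “who taught whom” ties; the file name and the top-ten cutoff are hypothetical choices, and the centrality measures are standard social network analysis tools rather than anything specific to Redvolución.

```python
# A minimal sketch, assuming a hypothetical edge list of "who taught whom"
# ties, one "teacher_id,learner_id" pair per line.
import networkx as nx

G = nx.read_edgelist("teaching_ties.csv", delimiter=",")

# Supernodes: people with unusually many direct connections.
degree = nx.degree_centrality(G)
supernodes = sorted(degree, key=degree.get, reverse=True)[:10]

# Bridges: people who connect otherwise-separate clusters, surfaced by
# betweenness centrality (how often a node sits on shortest paths).
betweenness = nx.betweenness_centrality(G)
bridges = sorted(betweenness, key=betweenness.get, reverse=True)[:10]

print("Candidate supernodes:", supernodes)
print("Candidate bridges between communities:", bridges)
```

A program could then target training and outreach at these candidates first, and re-run the analysis as new survey data arrives to close the feedback loop described above.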

We don’t have to reinvent the wheel to understand online social networks—there are plenty of theories from the pre-Internet age that help us understand them and how to use them. But we do need to recognize the importance of offline social networks, the ones that predate the Internet, if we want to be successful at advancing digital literacy. Things like hairstyles have been “going viral” since before viral was even a term—it simply means an idea spreading like wildfire through social networks. And digital literacy could be the next big thing.

[1] See “Evgeny vs. the Internet.” http://www.cjr.org/cover_story/evgeny_vs_the_internet.php?page=all

[2] See Clay Shirky on Information Overload vs. Filter Failure. https://www.youtube.com/watch?v=LabqeJEOQyI.

[3] See United Nations press release on launch of ICT Task Force, http://www.un.org/News/Press/docs/2001/dev2353.doc.htm

[4] Howard Rheingold, “Net Smart: How to Thrive Online.”

[5] Colombian Digital Culture Survey, 2013.


The Evolution of Trust in the Digital Age

Trust is the axis around which the Internet revolves. A deep level of trust prompted the invention of the Internet. A base level of trust is what popularized the Internet. Shifts, trends and developments around trust shape online tools and behaviors. And trust is at the center of today’s crises over the Internet.

In the 1960s, when engineers at the US government’s Advanced Research Projects Agency (ARPA) laid the groundwork for what would later become the Internet, all of the major players trusted each other deeply: computer scientists at the Pentagon needed to share high-level information with colleagues spread all over the US, and merged early computers into a single network so they could interact in real time.[i]

Fast forward 40 years, and the Internet has gone from a government-academic project to a ubiquitous resource that a large portion of the global population has in their home or even in their pocket.

As Clay Shirky examines in his book, Here Comes Everybody, the Internet became a broadcast tool for the average human to share even the most trivial details of their personal lives. The simple act of globally publishing unfiltered personal information is an act of trust, because it means total strangers can learn your name, home town, habits, and perhaps even the names of your family and friends.

In 2008, when Shirky published Here Comes Everybody, personal blogging tools like LiveJournal were exploding in popularity, and new sites like Wikipedia were capitalizing on “radical trust,” as some experts call it,[ii] to spur collaborative production.

These two examples teach us about the two sides of trust when it comes to online activity. The Wikipedia example illustrates how humans can flourish when trust and freedom are imparted: Wikipedia’s success relies on a “spontaneous division of labor”[iii] and on giving “as much freedom as possible to the average user.”[iv] Shirky also argues that Wikipedia’s success reveals our basic human desire to do good, which accounts for the unpaid time so many users put into their contributions to Wikipedia.[v]

The evolution of LiveJournal teaches us a slightly more serious, but equally important lesson about online trust. In the years since the publication of Shirky’s book, the nature of the material people publish online has changed dramatically. Much “amateurism,” as Shirky calls it, has been professionalized. And while it is still possible to publish first and filter later, users are much more cognizant (and cautious) of the ramifications of publishing things online. Compared with 2008, 2014 shows a rise in privacy concerns and a decline in the popularity of personal broadcast tools like LiveJournal.[vi] Today’s Internet users are extremely concerned about online identity theft, the protection of financial information online, and other aspects of online privacy.[vii]

Shirky tells us that “Communications tools don’t get socially integrated until they get technologically boring…It’s when a technology becomes normal, then ubiquitous, and finally so pervasive as to be invisible, that the really profound changes happen…”[viii] According to Shirky, we are quickly approaching that tipping point. And as a result, battles are heating up over control of the Internet, and who can be trusted to control what.

In May 2012, Vanity Fair published “World War 3.0,” analyzing the “four fronts” of the Internet crisis: piracy, privacy, security and sovereignty.[ix] All of these issues center on trust: between nations, corporations, users, and citizens. It remains to be seen how these controversies will shake out, but we can be sure that trust will continue to be a guiding factor in the future of the Internet.

[i] Keenan Mayo and Peter Newcomb, “How the Web Was Won,” Vanity Fair, 2008.

[ii] Tim O’Reilly, “What is Web 2.0?,” 2005.

[iii] Clay Shirky, Here Comes Everybody, 2008. Page 118.

[iv] Clay Shirky, Here Comes Everybody, 2008. Page 122.

[v] Clay Shirky, Here Comes Everybody, 2008. Page 133.

[vi] Aja Romano, “The Demise of a Social Media Platform: Tracking LiveJournal’s Decline.”

[vii] Consumers of All Ages More Concerned About Online Data Privacy, 2014. http://www.emarketer.com/Article/Consumers-of-All-Ages-More-Concerned-About-Online-Data-Privacy/1010815

[viii] Clay Shirky, Here Comes Everybody, 2008. Page 105.

[ix] Michael Joseph Gross, “World War 3.0,” Vanity Fair, 2012.
