“[…] Priority 6: Big Data. The explosive growth in the sources and quantity of data available to firms is leading them to employ new methods of analysis and reporting, such as machine learning and data visualization. Unless the skill sets of professional marketers evolve, it is likely that some of the activities historically associated with marketing and customer service will migrate to other functional areas of the organization, such as information technology or engineering. Academic work, too, in its assumptions, approaches, theories, models, and methodologies, will increasingly be found inadequate to deal with this change.” […] (extract and image from: Marketing Science, Vol. 31, No. 6, November–December 2012, pp. 873–877)
(selection from Figure 1, “A Mapping of Priority Topics Over the Past Quarter Century”; image from Marketing Science, Vol. 31, No. 6, November–December 2012, pp. 873–877)
“Priority 1: Insight into People in Their Roles as Consumers. The 2012–2014 Research Priorities call for research in any of the following three distinct subtopics: new methods, new data sources, and new theories. Many MSI members are questioning traditional methods of insight generation, such as surveys and focus groups, and traditional frameworks for thinking about consumption. Long-form surveys are hard to reconcile with today’s modes of communication. The climate is ripe for innovation in the gathering and construction of insights into why people buy and use products and services. With respect to methods, our members want to see evidence of the validity of the application of advanced technologies to generate consumer insights, such as mobile devices used for geopolling, social media monitoring, online or in-store tracking of behavior, and technologies as yet unexplored (for example, augmented reality). With respect to data sources, we are particularly interested to see research on the rapid generation of consumer and business insights from large, relatively unstructured data. With respect to theory, we would like to see applications to consumption, at scale and with evidence of validity, of frontier theories in the social sciences, for example, those from psychology, sociology, and anthropology, but also from less frequently applied disciplines such as linguistics and neuroscience. However, we caution researchers to avoid fragmentary laboratory results, unless there is reason to think that the insights will hold up in the marketplace” (from Marketing Science).
“In this essay, I develop an understanding of a technicity of attention in social networking sites. I argue that these sites treat attention not as a property of human cognition exclusively, but rather as a sociotechnical construct that emerges out of the governmental power of software. I take the Facebook platform as a case in point, and analyse key components of the Facebook infrastructure, including its Open Graph protocol, and its ranking and aggregation algorithms, as specific implementations of an attention economy. Here I understand an attention economy in the sense of organising and managing attention within a localised context. My aim is to take a step back from the prolific, anxiety-ridden discourses of attention and the media which have emerged as part of the so-called ‘neurological turn’ (see Carr, 2012; Wolf, 2007). In contrast, this essay focuses on the specific algorithmic and ‘protocological’ mechanisms of Facebook as a proactive means of enabling, shaping and inducing attention, in conjunction with users” (Taina Bucher, Culture Machine, 13, 2012)
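The ranking algorithms Bucher analyses were publicly described at the time as the “EdgeRank” heuristic: each story accumulates a score from its interactions, as a product of user affinity, edge-type weight, and time decay. A minimal sketch of that idea follows; the constants, field names, and half-life decay are illustrative assumptions, not Facebook’s actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Edge:
    affinity: float   # closeness of viewer to creator, 0..1 (illustrative)
    weight: float     # edge-type weight, e.g. comment > like (illustrative)
    age_hours: float  # time elapsed since the interaction

def edgerank_score(edges, half_life=24.0):
    """Sum affinity * weight * time-decay over a story's edges.
    EdgeRank-style heuristic; all constants are assumptions."""
    return sum(
        e.affinity * e.weight * 0.5 ** (e.age_hours / half_life)
        for e in edges
    )

# Two hypothetical stories competing for a slot in the news feed:
feed = {
    "photo": [Edge(0.9, 2.0, 2.0), Edge(0.4, 1.0, 5.0)],
    "status": [Edge(0.2, 1.0, 1.0)],
}
ranked = sorted(feed, key=lambda k: edgerank_score(feed[k]), reverse=True)
print(ranked)  # the photo, with closer and heavier edges, ranks first
```

The point of the sketch is Bucher’s: visibility is computed, not given, so attention is organised by the scoring function rather than by the viewer alone.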
by Noortje Marres, Goldsmiths, University of London and Esther Weltevrede, University of Amsterdam (2012)
from the abstract
“What makes scraping methodologically interesting for social and cultural research? This paper seeks to contribute to debates about digital social research by exploring how a ‘medium-specific’ technique for online data capture may be rendered analytically productive for social research. As a device that is currently being imported into social research, scraping has the capacity to re-structure social research, and this in at least two ways. Firstly, as a technique that is not native to social research, scraping risks to introduce ‘alien’ methodological assumptions into social research (such as a pre-occupation with freshness). Secondly, to scrape is to risk importing into our inquiry categories that are prevalent in the social practices enabled by the media: scraping makes available already formatted data for social research. Scraped data, and online social data more generally, tend to come with ‘external’ analytics already built-in”
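The authors’ point that scraped data arrives “already formatted”, with the medium’s own analytics built in, is visible in even the simplest scraper: what it captures is the platform’s pre-structured markup, timestamps included. A minimal sketch with Python’s standard-library `html.parser`, run on invented sample markup (the `post` class and `data-timestamp` attribute are hypothetical):

```python
from html.parser import HTMLParser

class PostScraper(HTMLParser):
    """Collect post text together with the platform's own timestamp
    attribute: the pre-formatted, freshness-oriented metadata that
    comes 'built-in' with scraped data."""
    def __init__(self):
        super().__init__()
        self.posts = []
        self._stamp = None
        self._grab = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "div" and a.get("class") == "post":
            self._stamp = a.get("data-timestamp")
            self._grab = True

    def handle_data(self, data):
        if self._grab and data.strip():
            self.posts.append({"text": data.strip(), "timestamp": self._stamp})
            self._grab = False

# Invented sample markup standing in for a scraped page:
html = """
<div class="post" data-timestamp="2013-01-05T10:00Z">Fresh item</div>
<div class="post" data-timestamp="2012-12-01T09:00Z">Older item</div>
"""
scraper = PostScraper()
scraper.feed(html)
print(scraper.posts)
```

Note that the scraper never decides what a “post” or its time of publication is; it inherits both categories from the medium, which is exactly the methodological risk the abstract names.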
“From the beginning, data was a rhetorical concept. “Data” means that which is given prior to argument. As a consequence, its sense always shifts with argumentative strategy and context—and with the history of both. The rise of modern natural and social science beginning in the eighteenth century created new conditions of argument and new assumptions about facts and evidence. But the pre-existing semantic structure of the term “data” gave it important flexibility in these changing conditions. It is tempting to want to give data an essence, to define what exact kind of fact it is. But this misses important things about why the concept has proven so useful over these past several centuries and why it has emerged as a culturally central category in our own time. When we speak of “data,” we make no assumptions about veracity. It may be that the electronic data we collect and transmit has no relation to truth beyond the reality that it constructs. This fact is essential to our current usage. It was no less so in the early modern period; but in our age of communication, it is this rhetorical aspect of the term that has made it indispensable” (“Data before the Fact”, by D. Rosenberg, 2012).
“The Origins of ‘Big Data’: An Etymological Detective Story”
(from the Bits blog, The New York Times)
“The paper approaches Twitter through the lens of “platform politics” (Gillespie, 2010), focusing in particular on controversies around user data access, ownership, and control. We characterise different actors in the Twitter data ecosystem: private and institutional end users of Twitter, commercial data resellers such as Gnip and DataSift, data scientists, and finally Twitter, Inc. itself; and describe their conflicting interests. We furthermore study Twitter’s Terms of Service and application programming interface (API) as material instantiations of regulatory instruments used by the platform provider and argue for greater promotion of data rights and literacy to strengthen the position of end users” (Puschmann and Burgess, 2013)
Note 1: from a logic of “set” to a logic of “emergence”
“So-called #Bigdata are not relevant because they support a “micro-segmentation” of customers. This is the trivial discourse about “data-intensive” marketing. This outdated perspective depends on the persistent vision of a marketing approach based on the logic of “set” (micro or macro, it does not matter). My viewpoint is that we are currently (but not consciously) shifting to a new logic: the logic of “emergence”. Ontologically speaking, a “customer” does not belong to a micro-segment (to be/not to be an element of a set); instead, the probability of being a “customer” emerges and varies (microsecond by microsecond) according to the computational and scoring capabilities of customer databases and devices to produce a modulated, temporally and spatially situated and embodied, “dividuality”” (Cosimo Accoto, 2013)
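The contrast Accoto draws can be sketched in code: under the logic of “set”, a customer either is or is not in a segment; under the logic of “emergence”, a probability is recomputed at each moment of evaluation and varies with it. The following is a minimal illustration, assuming an invented logistic score whose features and weights are purely hypothetical:

```python
import math

def customer_probability(recency_s, engagement, context_weight=1.0):
    """Probability of being a 'customer', re-derived at query time.
    Not set membership but a score that varies with the moment of
    evaluation. Features and weights are illustrative assumptions."""
    z = 2.0 * engagement + context_weight - recency_s / 3600.0
    return 1.0 / (1.0 + math.exp(-z))

# The same individual, scored at two different moments, yields two
# different probabilities: the 'customer' emerges and decays over time.
p_now = customer_probability(recency_s=60, engagement=0.8)
p_later = customer_probability(recency_s=7200, engagement=0.8)
print(p_now, p_later)
```

The design choice mirrors the quote: there is no stored segment label to look up, only a computation whose output is situated in time.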
“The article uses an established three-dimensional conceptual framework to systematically review literature and empirical evidence related to the prerequisites, opportunities, and threats of Big Data Analysis for international development. On the one hand, the advent of Big Data delivers the cost-effective prospect to improve decision-making in critical development areas such as health care, employment, economic productivity, crime and security, and natural disaster and resource management. This provides a wealth of opportunities for developing countries. On the other hand, all the well-known caveats of the Big Data debate, such as privacy concerns, interoperability challenges, and the almighty power of imperfect algorithms, are aggravated in developing countries by long-standing development challenges like lacking technological infrastructure and economic and human resource scarcity. This has the potential to result in a new kind of digital divide: a divide in data-based knowledge to inform intelligent decision-making. This shows that the exploration of data-based knowledge to improve development is not automatic and requires tailor-made policy choices that help to foster this emerging paradigm”