Quotes from my next book … ;-) #bigdata #beyondbigdata

“Quantified selves (Ostherr 2013), social machines (Semmelhack 2013), and ambient commons (McCullough 2013) are data actants” | from my next book |

“Data Ontologies: Totality, Immediacy, and Premediation are the ontological vectors reshaping businesses and organizations” | from my next book |

“In a data-deictic perspective, a quantified, networked, and anticipated self is emerging as a new marketing platform” | from my next book |

“In a data-intensive age, ‘real-time’ is an ontological continuum spanning from subperceptuality to embedded temporalities” | from my next book |

“Data deixis changes the logic of customer segmentation. It is no longer a logic of sets but a logic of emergence” | from my next book | (sketched in code below)

“The ‘data continuum’ paradigm is reshaping customer information markets and systems, as well as industry boundaries” | from my next book |

“Looking at data as new personal and participatory market devices is a way to deeply understand our data-intensive age” | from my next book |

“Market, marketing, or market-things intelligence? In a ubiquitous data age, situated analytics performs operations” | from my next book |
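The segmentation quote above turns on a contrast that is easy to state in code. The sketch below is a toy illustration, not anything from the book: a “logic of sets”, where a segment is a fixed predicate over a-priori attributes, versus a “logic of emergence”, where groupings crystallize bottom-up from the data. Field names, thresholds, and the one-dimensional clustering routine are all illustrative assumptions.

```python
customers = [
    {"id": 1, "age": 34, "monthly_spend": 220.0},
    {"id": 2, "age": 61, "monthly_spend": 35.0},
    {"id": 3, "age": 29, "monthly_spend": 240.0},
    {"id": 4, "age": 45, "monthly_spend": 90.0},
]

# Logic of sets: a segment is a fixed predicate; membership is decided
# once, top-down, from the analyst's a-priori categories.
young_high_spenders = [
    c for c in customers if c["age"] < 40 and c["monthly_spend"] > 150
]

# Logic of emergence: no segment is pre-defined; groupings crystallize
# from the data itself (here, a crude 1-D clustering on spend, standing
# in for richer behavioural signals).
def emergent_clusters(points, gap=60.0):
    """Group values whose consecutive distance stays below `gap`."""
    clusters, current = [], []
    for value in sorted(points):
        if current and value - current[-1] > gap:
            clusters.append(current)
            current = []
        current.append(value)
    clusters.append(current)
    return clusters

print([c["id"] for c in young_high_spenders])                       # set logic
print(emergent_clusters([c["monthly_spend"] for c in customers]))   # emergence
```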


Customer segmentation as ideology and practice #bigdata #beyondsegmentation

“The use of consumer segmentations is also pervasive. Consumer segments are models, whether we consider them as an example of virtualism (Miller, 2002), or as manufactured (Zwick and Knott, 2009), or in the terms of dividuals (Deleuze, 1992). Segments are managerial models of “consumers” and “consumers” are not (living, breathing) people, yet segments quite literally are brought to life by virtue of naming and psychological profiling (in this case in the “marriage” of transaction-based data with attitudinal research and the assumptions attached to the segment names). As quasi-people, target segment characterizations afford intimacy; they offer a façade of verisimilitude for actual consumption. From these personalized parameters comes deep attachment. We would contend that such abstractions also adhere because they mesh with reigning ideas of personhood, individual control and personal choice, and corresponding notions that consumption is best understood as a single individual making a choice. Such intuitive models of behavior and consumption are thus also likely to persist because such models offer fewer surprises and therefore accrue greater buy-in from managers (Zaltman and Deshpandé, 2001). Of course these individually oriented models of consumption grate in the context of our own allegiance to anthropological modes of analysis. If consumer segmentation annoys us as researchers, it is not only because of the recruiting dilemmas; it is also because of the analytic frame in which they tend to push our work. They do not facilitate an examination of how consumption actually happens in everyday life. They do not allow for an analysis of consumption that would consider processes that involve more than an individual; they close down the possibilities of examining consumption in other terms, for instance in terms of market-things (see Cochoy, 2007, this volume)” (from “Consumer Segmentation in Practice: An Ethnographic Account of Slippage” by Patricia L. Sunderland and Rita M. Denny, in Inside Marketing: Practices, Ideologies, Devices)


“Gurus and Oracles”, or the marketing of information #bigdata #data #informationmarkets

“What is common to these three companies beyond their lasting success? They actually belong to the same industry, the information industry. Like Reuters and Google, McKinsey & Co. is essentially an information or knowledge provider. This book is about the universe of similar companies, organizations, or individuals whose core business is to “sell” information to decision makers. A few prominent examples are listed in table 0.1. The information industry is larger and broader than it seems. In 2010, “business information” alone accounted for about $358 billion worth of sales with over two hundred providers. Some of these companies’ business consists of collecting and selling data (this is the case of Reuters or credit rating agencies), while others sell market analysis (e.g., market research firms, financial analysts, or macroeconomic forecasters). There are companies that use their complex expertise to generate customized business strategies for their clients (e.g., management consultants). Part of the media also belongs to the information industry: newspapers and news programs on television are clearly in the business of selling information, as are many Internet services that provide online information to the public (e.g., online newspapers, weather forecasting sites, some blogs, or search engines). Even large social media sites such as Facebook, LinkedIn or Twitter can be considered information vendors as user-generated content becomes a genuine information source for their members. Besides thousands of large corporations, the information industry also includes the millions of small companies and individual experts who make a living selling advice in various domains including finance, accounting, law, engineering, and medicine. Even some doctors who specialize in providing medical diagnoses belong to the information industry. But why lump these diverse businesses together? A key argument of this book is that they have more in common than it seems. Indeed, information is such a special product that it requires special business practices. But what is so special about information? Before providing an answer, it is important to define exactly what an “information product” is” (Sarvary, M. (2012) Gurus and Oracles: The Marketing of Information, The MIT Press)


Data itself, from a critical perspective, is a problematic concept… #socialdata #bigdata #socialmedia #digitalresearch

“[…] It is important, furthermore, to understand that this contextual paradox of research between transparent communication and platform obfuscation is not just limited to what kind of data is accessible. Data itself, from a critical perspective, is a problematic concept: should it be seen as a faithful representation of human behaviour or as a dehumanized recording that artificially parcels out existence into quantifiable bits? As we said above, corporate social media do not simply transmit communication among users, they transform it and impose a specific logic on it. To borrow from Lawrence Lessig (2006), the platform’s code imposes specific regulations, or laws, on social acts. The consequence of this is that corporate social media give the impression that they merely render social acts visible, whereas in fact they are in the process of constructing a specific techno-social world. For instance, while I can ‘like’ something on Facebook and have ‘friends’, I cannot dislike, or hate or be bored by something and have enemies or people that are very vague acquaintances. The seeming social transparency that is the promise of corporate social media is a construct: the platform imposes its own logic, and in the case of Facebook, this logic is one of constant connectivity. The promise that social media data is in the first place a transparent trace of human behaviour is thus false: what data reveals is the articulation of participatory and corporate logics. As such, any claim to examine a pre-existing social through social media is flawed. Thus, in studying modes of participatory culture on corporate social media platforms we encounter two main challenges: one concerning access to data and the ethics of data research, the other data itself and what it claims to stand for” (Langlois and Elmer, ‘The Research Politics of Social Media Platforms’, Culture Machine, vol. 14, 2013)
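Lessig’s “code as law” point, as Langlois and Elmer apply it to Facebook, can be made concrete with a toy schema. The sketch below is an invented data model, not Facebook’s actual code: the point is that dislike, boredom, and enmity are unrepresentable by construction.

```python
from enum import Enum

class Reaction(Enum):
    LIKE = "like"      # the only affect this schema can record

class Tie(Enum):
    FRIEND = "friend"  # no "enemy", no "vague acquaintance"

def record_reaction(user_id: int, post_id: int, reaction: Reaction) -> dict:
    """A social act only becomes data once modulated into the
    platform's categories; anything else simply cannot be recorded."""
    return {"user": user_id, "post": post_id, "reaction": reaction.value}

print(record_reaction(1, 42, Reaction.LIKE))
# There is no Reaction.DISLIKE to pass in: the constraint is not a
# moderation policy but the code itself.
```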

 


The Ontology of Digital Data #bigdata #digitaldata #code

[…] The Ontology of Digital Data. Digital data is formless, plastic and leveling. Stored as binary bits, it has no form as such. As Justin Clemens and I have written (2010), ‘Data is data. Data is absolutely not a phenomenological thing. It cannot be experienced as such, like Aristotelian prime matter. Unlike Aristotelian prime matter, however, we can manipulate data with ease.’ The fundamentally plastic nature of digital data is what allows us to manipulate it, but until we do manipulate it – until we modulate it into some kind of display register – any set of digital data is indistinguishable from any other set of digital data. This is the leveling nature of digital data: all information is reduced to an indistinguishable set of binary bits. To illustrate this, consider a digital image, such as may have been taken by a digital camera of a material scene. Once this visual information is stored as digital data, it can then be opened in, for example, a sound editing program and played as sound. It could equally be used as input to determine a height-map in a real-time 3D environment. The point is that once it is stored as digital data, it loses any determining connection with its semantic source. Therefore, as I said above, parameters must be rigorously established that govern how any given digital data is de- and re-modulated. The notions of protocols and standardised processes that abound in the contemporary technical sphere (such as those governing the internet, image compression, audio reproduction and so on) are expressions of this codification of parameters – both sides of a modulation exchange agree to adhere to a set of parameters in order that the intended result is achieved. Naturally, once protocols are required, questions of intentionality, ideology and cultural convention arise. […] (from “Affect and the Medium of Digital Data” by Adam Nash, The Fibreculture Journal, 2012)
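Nash’s point about the leveling, plastic nature of digital data is easy to demonstrate: the same bytes can be modulated into an image, a sound, or a height-map, and nothing in the data itself privileges any of these registers. A minimal sketch (using NumPy; the array shapes and scalings are arbitrary choices of display register, not a standard):

```python
import numpy as np

# 12,288 arbitrary bytes: nothing in them is intrinsically image,
# sound, or terrain.
raw = np.random.randint(0, 256, size=64 * 64 * 3, dtype=np.uint8)

# Display register 1: a 64x64 RGB image.
image = raw.reshape(64, 64, 3)

# Display register 2: the very same bytes as audio samples in [-1, 1].
audio = (raw.astype(np.float32) - 128.0) / 128.0

# Display register 3: the first 4,096 bytes as a height-map for a
# real-time 3D terrain.
heightmap = raw[: 64 * 64].reshape(64, 64).astype(np.float32)

print(image.shape, audio.min(), heightmap.max())
```

Only the agreed parameters (shape, bit depth, sample rate) distinguish image from sound from terrain; that agreement is exactly the protocol Nash describes.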



“… instead of focusing on bodies in space, the new forms of observation focus on detecting and predicting the emergence of specific patterns of code” #bigdata #code #database #datapolitics

[…] “One of the key characteristics of the new forms of observation, as pointed out above, is that they are pre-emptive (Massumi 2009, 167), that is, they are aimed at anticipating actions before they actually occur. In short, the new forms of observation are characterized by the fact that they aim to recognize patterns of code generated on the machine level. This code is produced whenever we do something, or are observed doing something, by way of a digital machine, whether this be our action as the action of an individual, or our action as part of a population, or, indeed, both. It is significant that this form of observation does not operate in perspectival space, which is in direct contrast to how observation functions in the disciplinary machine. As I briefly touched on above, discipline organizes spaces so as to produce specific forms of conduct. One of the key elements in making spaces work – making them productive – is precisely the use of the instrument of hierarchical observation, as Foucault’s example of the panopticon demonstrates so well. However, rather than looking through or behind something, the new forms of observation always project onto a screen (Bogard 1996, 21), and, indeed, when no humans are involved screens themselves are superfluous. In short, instead of focusing on bodies in space, the new forms of observation focus on detecting and predicting the emergence of specific patterns of code. Since they are neither spatial nor necessarily aimed at modifying an individual’s behavior, this suggests that they form part of a very different mechanism of power. For this reason, this first mechanism can usefully be termed the recognition of patterns, and it is a key mechanism in the modulatory mode of power” (Savat, Uncoding the Digital, Palgrave, 2013)
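Savat’s “recognition of patterns” can be sketched in a few lines: an observer that never looks at a body in space, only matches machine-generated event sequences, and flags a pattern before the action it stands for has fully occurred. The event names and the trigger threshold below are invented for illustration:

```python
# A toy pre-emptive pattern matcher over a stream of machine-level events.
PATTERN = ["search:flights", "search:hotels", "view:visa-requirements"]

def preemptive_match(events, pattern=PATTERN):
    """Flag the pattern before the behaviour it stands for completes."""
    progress = 0
    for event in events:
        if progress < len(pattern) and event == pattern[progress]:
            progress += 1
        # Pre-emption: act once most of the pattern has unfolded,
        # i.e. before the final step ever occurs.
        if progress >= len(pattern) - 1:
            return f"predicted travel intent after {progress}/{len(pattern)} steps"
    return None

stream = ["view:homepage", "search:flights", "search:hotels", "logout"]
print(preemptive_match(stream))
# -> predicted travel intent after 2/3 steps: the prediction fires even
#    though the user never reached the final step.
```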


“Entangling mobility and interactions in social media” #socialmobile #locationintelligence #bigdata #analytics

“Daily interactions naturally define social circles. Individuals tend to be friends with the people they spend time with and they choose to spend time with their friends, inextricably entangling physical location and social relationships. As a result, it is possible to predict not only someone’s location from their friends’ locations but also friendship from spatial and temporal co-occurrence. While several models have been developed to separately describe mobility and the evolution of social networks, there is a lack of studies coupling social interactions and mobility. In this work, we introduce a new model that bridges this gap by explicitly considering the feedback of mobility on the formation of social ties. Data coming from three online social networks (Twitter, Gowalla and Brightkite) is used for validation. Our model reproduces various topological and physical properties of these networks such as: i) the size of the connected components, ii) the distance distribution between connected users, iii) the dependence of the reciprocity on the distance, iv) the variation of the social overlap and the clustering with the distance. Besides numerical simulations, a mean-field approach is also used to study analytically the main statistical features of the networks generated by the model. The robustness of the results to changes in the model parameters is explored, finding that a balance between friend visits and long-range random connections is essential to reproduce the geographical features of the empirical networks” (from “Entangling mobility and interactions in social media”, 2013)

arXiv:1307.5304
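For intuition, here is a deliberately crude toy version of the feedback loop the abstract describes; it is not the authors’ actual model. Agents either visit a friend’s location or make a long-range random jump, and co-location creates new ties; the parameter P_VISIT stands in for the “balance between friend visits and long-range random connections”. All names and values are assumptions:

```python
import random

N_AGENTS, N_STEPS, P_VISIT = 50, 200, 0.7
random.seed(42)
position = {a: (random.random(), random.random()) for a in range(N_AGENTS)}
friends = {a: set() for a in range(N_AGENTS)}

for _ in range(N_STEPS):
    for a in range(N_AGENTS):
        if friends[a] and random.random() < P_VISIT:
            # Sociality drives mobility: move to a random friend's location.
            position[a] = position[random.choice(sorted(friends[a]))]
        else:
            # Long-range random jump anywhere in the unit square.
            position[a] = (random.random(), random.random())
    # Mobility drives sociality: co-located agents may become friends.
    for a in range(N_AGENTS):
        for b in range(a + 1, N_AGENTS):
            dx = position[a][0] - position[b][0]
            dy = position[a][1] - position[b][1]
            if dx * dx + dy * dy < 0.0005:  # "same place" threshold
                friends[a].add(b)
                friends[b].add(a)

print(sum(len(f) for f in friends.values()) / N_AGENTS, "ties per agent")
```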


The Value of #BigData in Digital Media Research #socialdata #digitalresearch #analytics

“[…] While we do not argue that deriving measurement concepts from data rather than theory is problematic, per se, researchers should be aware that the most easily available measure may not be the most valid one, and they should discuss to what degree its validity converges with that of established instruments. For example, both communication research and linguistics have a long tradition of content-analytic techniques that are, at least in principle, easily applicable to digital media content. Of course, it is not possible to manually annotate millions of comments, tweets, or blog posts. However, any scholar who analyzes digital media can and should provide evidence for the validity of measures used, especially if they rely on previously unavailable or untested methods. The use of shallow, “available” measures often coincides with an implicit preference for automatic coding instruments over human judgment. There are several explanations for this phenomenon: First, many Big Data analyses are conducted by scholars who have a computer science or engineering background and may simply be unfamiliar with standard social science methods such as content analysis (but some are discussing the benefits of more qualitative manual analyses; Parker et al., 2011). Moreover, these researchers often have easier access to advanced computing machinery than trained research assistants who are traditionally employed as coders or raters […]” (from “The Value of Big Data in Digital Media Research” by Merja Mahrt & Michael Scharkow, 2013)

http://www.tandfonline.com/toc/hbem20/57/1#.UfLSd2S9-IU
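The validation step Mahrt and Scharkow call for, checking a shallow “available” measure against human judgment on a sample, can be sketched in a few lines. The keyword coder, the labeled sample, and the hand-rolled Cohen’s kappa below are all invented for illustration; in practice one would validate against an established annotated corpus:

```python
def naive_sentiment(text):
    """A shallow, 'available' measure: keyword spotting."""
    return "pos" if any(w in text.lower() for w in ("great", "love", "good")) else "neg"

sample = [
    ("I love this phone", "pos"),
    ("great service", "pos"),
    ("not good at all", "neg"),      # the keyword coder gets this wrong
    ("terrible battery life", "neg"),
]

auto = [naive_sentiment(text) for text, _ in sample]
human = [label for _, label in sample]

# Cohen's kappa: agreement between coder and human, corrected for chance.
n = len(sample)
observed = sum(a == h for a, h in zip(auto, human)) / n
expected = sum((auto.count(c) / n) * (human.count(c) / n) for c in ("pos", "neg"))
kappa = (observed - expected) / (1 - expected)
print(f"observed agreement={observed:.2f}, kappa={kappa:.2f}")
```

Raw agreement of 0.75 shrinks to a kappa of 0.50 once chance agreement is removed, which is exactly the kind of validity evidence the authors argue should accompany any “available” measure.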
