“Abstract: The number of devices on the Internet exceeded the number of people on the Internet in 2008, and is estimated to reach 50 billion in 2020. A wide-ranging Internet of Things (IOT) ecosystem is emerging to support the process of connecting real-world objects like buildings, roads, household appliances, and human bodies to the Internet via sensors and microprocessor chips that record and transmit data such as sound waves, temperature, movement, and other variables. The explosion in Internet-connected sensors means that new classes of technical capability and application are being created. More granular 24/7 quantified monitoring is leading to a deeper understanding of the internal and external worlds encountered by humans. New data literacy behaviors such as correlation assessment, anomaly detection, and high-frequency data processing are developing as humans adapt to the different kinds of data flows enabled by the IOT. The IOT ecosystem has four critical functional steps: data creation, information generation, meaning-making, and action-taking. This paper provides a comprehensive review of the current and rapidly emerging ecosystem of the Internet of Things (IOT).”
Author: Cosimo Accoto
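The abstract names anomaly detection as one of the new data literacy behaviors the IoT enables. A minimal sketch of that idea, assuming a simple z-score rule over a batch of sensor readings (the threshold and the temperature trace below are illustrative, not from the paper):

```python
# Flag sensor readings that deviate sharply from the rest of the sample.
# Threshold and data are illustrative assumptions, not from the paper.

def detect_anomalies(readings, z_threshold=2.5):
    """Return readings whose z-score against the sample exceeds the threshold."""
    n = len(readings)
    mean = sum(readings) / n
    variance = sum((x - mean) ** 2 for x in readings) / n
    std = variance ** 0.5
    if std == 0:
        return []  # a flat signal has no outliers
    return [x for x in readings if abs(x - mean) / std > z_threshold]

# A temperature trace with one obvious spike:
temps = [21.0, 21.2, 20.9, 21.1, 45.0, 21.0, 20.8, 21.1, 21.0, 20.9]
print(detect_anomalies(temps))  # the 45.0 spike stands out
```

In a real deployment this rule would run over a sliding window of the stream rather than a fixed batch, which is where the "high-frequency data processing" behavior the abstract mentions comes in.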
Collisions and Collaboration. The Organization of Learning in the ATLAS Experiment at the LHC
“High-energy physics, for example, ‘observes’ many phenomena that cannot be readily seen in nature. Some particles exist for only a tiny fraction of a second and can therefore be observed only under strictly defined laboratory conditions. But what, exactly, does it mean to say that phenomena generate data? When, for example, we look at the moon, are we seeing the moon or the data of the moon? As we go about our daily business, the distinction hardly matters, but when looking, say, for subatomic particles, the link between the data we observe—a track on some substrate in a detector—and the phenomena that we infer from the data—a given type of particle—may be quite indirect. The issue is a subtle one but quite relevant to our analysis”.
(Images and text from Boisot (2012), Collisions and Collaboration, Oxford University Press)
The Concept of Information Overload: A Review of Literature from Organization Science, Accounting, Marketing, MIS, and Related Disciplines
“In this article, we present a review of the literature on information overload in management-related academic publications. The main elements of our approach are literature synopsis, analysis, and discussion (Webster & Watson, 2002). These three elements serve, in our view, the three main purposes of a literature review, namely, to provide an overview of a discourse domain (e.g., compiling the main terms, elements, constructs, approaches and authors), to analyze and compare the various contributions (as well as their impact), and to highlight current research deficits and future research directions. These three objectives should be met, with regard to the topic of information overload, as a clear overview, an analysis of the major contributions, and an identification of future research needs still missing for this topic”
http://www.scribd.com/doc/42457026/Eppler-Information-Overload-1
Information Overload: the #bigdata challenge for engineers and businesses
“[…] If we assume that most consumers of information, for instance, as engineers or business and technical managers, have to find and use information within the real constraints of time to make decisions of all kinds, information overload is reduced to a matter of the management of data (facts without any interpretation), information (data interpreted meaningfully in a communicative chain of writers and readers), and knowledge (information that refers to a learning cycle) [11]. Knowledge comes in two basic forms relevant to engineering and technical communication: declarative, which addresses what, and procedural, which addresses how [12]. When it comes to decision making within the constraints of time, frustration may arise not only from information overload but also from its opposite—information underload—which occurs when there is not enough information available to make the right decision. Information overload is closely linked to high cognitive load”.
(from: Information Overload, 2012)
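The excerpt's data/information distinction — "facts without any interpretation" versus "data interpreted meaningfully" — can be sketched in a few lines. The readings and threshold below are hypothetical, chosen only to show raw data being interpreted into decision-relevant statements:

```python
# Raw data are "facts without interpretation"; information is data
# interpreted for a decision. Threshold and readings are illustrative.

RAW_DATA = [72.1, 73.0, 71.8, 95.4, 72.5]  # e.g. CPU temperatures, just numbers

def to_information(readings, limit=90.0):
    """Interpret raw readings into decision-relevant statements."""
    return [f"reading {r}: {'OVER LIMIT' if r > limit else 'ok'}" for r in readings]

for line in to_information(RAW_DATA):
    print(line)
```

The interpretation step is also where overload can be reduced: a decision-maker under time constraints reads five labeled statements (or just the flagged one) rather than an unbounded stream of bare numbers.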
http://media.wiley.com/product_data/excerpt/32/11182301/1118230132-118.pdf
“When marketers develop segmentations as a market device, they do so not as something that ‘unveils segments already there’. Rather this is an ontological process – an attempt to produce segments”
“These market devices, from measurements to architecture to pricing to analysts, and so on, encompass the “material and discursive assemblages that intervene in the construction [and performance] of markets”(Muniesa et al. 2007, p. 2). When marketers develop segmentations as a market device, they do so not as something that “unveils segments already there” (Kjellberg and Helgesson 2007, p. 144). Rather this is an ontological process – an attempt to produce segments. Performance-oriented understanding of markets and ‘market making’ are about particular enactments of the social that serve to produce both consumption and production through sets of material and discursive practices (see Law and Urry 2004). This demonstrates how the notion of “consumers” is fully susceptible to change, and how the process of assembling and reassembling consumers is continual and indicative of a dynamic and iterative form of surveillance. As corporations attempt to meet the needs of consumers that they “know” and define, they co-construct consumers through the ascription of needs, desires, socio-economic status..”
Database Ethnographies Using Social Science Methodologies to Enhance Data Analysis and Interpretation
“Data are the basis for many decisions ranging from assessing credit applications, to determining societal risk of criminals to adjudicating grant applications. Data collection and use constitute social practices, yet once data are placed in tables, their social lineage is forgotten. Database ethnographies are a unique means of using insights from science and technology studies and practices from the social sciences to enhance data analysis. The goal of this methodology is to elicit information from data stewards about the data in multiple-use databases in order to provide an archive that describes the context and meaning of the data at a particular point in time. This article provides a review of a composite literature that contributed to the concept and implementation of database ethnographies. In addition, it illustrates how database ethnographies contribute to more nuanced metadata and act as the basis for informed decision-making involving data from multiple sources”
"This modulation effect of the database represents the gist of Deleuze???s (1992) notion of the dividual, which encapsulates the process of soliciting dispersed consumer information and reorganizing it according to a speci???c code on a different plane of
“[…] Clearly then, computerized information networks that continuously integrate dispersed sites of information solicitation with simulational feedback loops do not produce stable and enclosed repositories of meaning such as ‘individuals’, ‘individuality’ and ‘identities’, but dynamic and functional modulations of these, or what Deleuze (1992) calls ‘dividuals’. For Deleuze, control (e.g. of risk, as in the case of the bank) happens inside digital networks and it is measured and administered not through the use of static media and fixed architectures but by codes. Codes are flexible systems of capture in ways that fixed enclosures are not (Bogard, 2007). They are easily reconfigured to re-evaluate value, reassess risk, and regulate access to space, information and resources. […] This modulation effect of the database represents the gist of Deleuze’s (1992) notion of the dividual, which encapsulates the process of soliciting dispersed consumer information and reorganizing it according to a specific code on a different plane of reality.
Hence, dividuation is, according to Bogard (2007), fundamental to the logic of capitalist accumulation that breaks down life into measures of information. Unlike technologies of differentiation that aim at disciplining, dividuating technologies aim at modular control. Market information as constituted by the database can hence be understood as over-layering the established social reality of individuals and their actions with another plane made up of measures of information mapping associations, intensities, flows and values toward which recoding and production efforts of the database are directed. This productive act, then, does not so much produce identities imposed on concrete bodies in the way disciplinary power effects such individuation as much as it produces modulation points on which marketers can anchor their efforts to structure flows of money and attention”
#Bigdata does not simply allow us to do new things. Rather, it forces us to change the way we think about and interact with the world.
“The term “big data” has recently taken hold in the technology industry. It is bandied about at conferences, in advertisements and is de rigueur at presentations to venture capitalists. But no one really knows what it means. The amount of data to process has always seemed to outstrip the tools available; data has always seemed “big” to someone. After all, the guidance control computer on the Apollo 11 mission to the moon in 1969 had all of 64 kilobytes of memory.
In Big Data: A Revolution That Will Transform How We Live, Work, and Think, Viktor Mayer-Schönberger and Kenneth N. Cukier define what is new and why it matters. The media and business analysts have so far only skirted the surface of the issue, focusing on the size of the data deluge and the fancy new tricks that data-crunching can do. Both are interesting, but they miss a more important point. Big Data tells of a much more significant transformation. Big data does not simply allow us to do new things. Rather, it forces us to change the way we think about and interact with the world.
With big data, things that could never be measured, stored, analyzed or shared are becoming data-ized: quantified in digital form. Harnessing all the data rather than a sample, and privileging more data of less exactitude, opens the door to new ways to understand the world. It enables society to give up its time-honored preference for causality, and in many instances tap the benefits of correlation. The search to know why something happens is no longer the be-all and end-all; big data overturns it. The certainties we believed in are changing–replaced, ironically, by better evidence. Big Data explains where we are, how we got here, and provides a roadmap for what lies ahead.”
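"Tapping the benefits of correlation" without asking why is, at bottom, a computation over the whole dataset rather than a sample. A sketch with a hand-rolled Pearson coefficient; the two series below (search queries versus clinic visits, in the spirit of the flu-tracking examples the book popularized) are invented for illustration:

```python
# Pearson correlation over an entire (hypothetical) dataset, no sampling.
# The figures are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# e.g. daily searches for "flu remedy" vs. clinic visits, whole log:
searches = [120, 150, 200, 450, 800, 780]
visits   = [30, 35, 48, 100, 180, 175]
print(round(pearson(searches, visits), 3))
```

The coefficient says the two series move together; it says nothing about why, which is precisely the trade the excerpt describes.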
By interacting with these interfaces, we are also mapped: data-driven machine learning algorithms process our collective data traces
[… ] Computers as future depend on computers as memory machines, on digital data as archives that are always there. This future depends on programmable visions that extrapolate the future—or, more precisely, a future—based on the past. […] Computers embody a certain logic of governing or steering through the increasingly complex world around us. By individuating us and also integrating us into a totality, their interfaces offer us a form of mapping, of storing files central to our seemingly sovereign—empowered—subjectivity. By interacting with these interfaces, we are also mapped: data-driven machine learning algorithms process our collective data traces in order to discover underlying patterns (this process reveals that our computers are now more profound programmers than their human counterparts)
(from: http://mitpress.mit.edu/books/programmed-visions)
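The idea of algorithms "processing our collective data traces in order to discover underlying patterns" can be made concrete with the simplest kind of pattern mining: counting which items co-occur across many users' traces. The users and items below are hypothetical:

```python
# Mine the most frequent co-occurring items across many users' traces.
# Traces and items are hypothetical.
from collections import Counter
from itertools import combinations

def frequent_pairs(traces, top=3):
    """Count item pairs that co-occur within a user's trace; return the top ones."""
    counts = Counter()
    for trace in traces:
        # sorted(set(...)) deduplicates within a trace and normalizes pair order.
        for pair in combinations(sorted(set(trace)), 2):
            counts[pair] += 1
    return counts.most_common(top)

traces = [
    ["news", "weather", "maps"],
    ["news", "maps", "shopping"],
    ["maps", "weather", "news"],
    ["shopping", "weather"],
]
print(frequent_pairs(traces, top=2))
```

No single user supplied the pattern; it exists only at the level of the collective traces — which is the sense in which, per the excerpt, we are "mapped" by interacting with the interface.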

