Information Overload: the #bigdata challenge for engineers and businesses

“[…] If we assume that most consumers of information, for instance, as engineers or business and technical managers, have to find and use information within the real constraints of time to make decisions of all kinds, information overload is reduced to a matter of the management of data (facts without any interpretation), information (data interpreted meaningfully in a communicative chain of writers and readers), and knowledge (information that refers to a learning cycle) [11]. Knowledge comes in two basic forms relevant to engineering and technical communication: declarative, which addresses what, and procedural, which addresses how [12]. When it comes to decision making within the constraints of time, frustration may arise not only from information overload but also from its opposite—information underload—which occurs when there is not enough information available to make the right decision. Information overload is closely linked to high cognitive load.”

(from: Information Overload, 2012)

http://media.wiley.com/product_data/excerpt/32/11182301/1118230132-118.pdf 
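For readers who think in code, the data / information / knowledge ladder and the declarative vs. procedural split can be sketched roughly as follows. This is my own illustration of the distinction, with invented values and names, not something taken from the excerpt.

```python
from dataclasses import dataclass

# Data: a fact without interpretation.
raw_reading = 87.4

# Information: the same fact interpreted meaningfully for a reader.
@dataclass
class Information:
    value: float
    meaning: str

temperature = Information(value=raw_reading,
                          meaning="CPU temperature in Celsius, sampled at 12:00")

# Declarative knowledge ("what"): an interpreted fact folded into a learning cycle.
declarative = "Sustained CPU temperatures above 85 C shorten component lifetime."

# Procedural knowledge ("how"): an executable rule for acting on the information
# within a time constraint.
def decide(info: Information, threshold: float = 85.0) -> str:
    return "throttle workload" if info.value > threshold else "no action"

print(declarative)
print(decide(temperature))
```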

"When marketers develop segmentations as a market device, they do so not as something that ???unveils segments already there". Rather this is an ontological process ??? an attempt to produce segments"

“These market devices, from measurements to architecture to pricing to analysts, and so on, encompass the material and discursive assemblages that intervene in the construction [and performance] of markets” (Muniesa et al. 2007, p. 2). When marketers develop segmentations as a market device, they do so not as something that “unveils segments already there” (Kjellberg and Helgesson 2007, p. 144). Rather this is an ontological process – an attempt to produce segments. Performance-oriented understandings of markets and market making are about particular enactments of the social that serve to produce both consumption and production through sets of material and discursive practices (see Law and Urry 2004). This demonstrates how the notion of consumers is fully susceptible to change, and how the process of assembling and reassembling consumers is continual and indicative of a dynamic and iterative form of surveillance. As corporations attempt to meet the needs of consumers that they know and define, they co-construct consumers through the ascription of needs, desires, socio-economic status.”
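To make the point concrete, here is a minimal sketch (my own illustration, not drawn from the cited authors) of segmentation as something produced rather than discovered: the features, the clustering algorithm, and above all the number of segments k are analyst choices, and changing them yields different "segments" from the same population. The data and feature names are invented.

```python
# Illustrative only: the analyst's choices (features, algorithm, k) produce the
# segments; they are not waiting in the data to be unveiled.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 1,000 synthetic "consumers" described by two arbitrary features:
# annual spend and visits per year.
consumers = np.column_stack([
    rng.gamma(shape=2.0, scale=500.0, size=1000),   # spend
    rng.poisson(lam=12, size=1000).astype(float),   # visit frequency
])

# The same population yields different segmentations depending on the chosen k.
for k in (3, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(consumers)
    print(f"k={k}: segment sizes {np.bincount(labels).tolist()}")
```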

Database Ethnographies: Using Social Science Methodologies to Enhance Data Analysis and Interpretation

“Data are the basis for many decisions ranging from assessing credit applications, to determining societal risk of criminals to adjudicating grant applications. Data collection and use constitute social practices, yet once data are placed in tables, their social lineage is forgotten. Database ethnographies are a unique means of using insights from science and technology studies and practices from the social sciences to enhance data analysis. The goal of this methodology is to elicit information from data stewards about the data in multiple-use databases in order to provide an archive that describes the context and meaning of the data at a particular point in time. This article provides a review of a composite literature that contributed to the concept and implementation of database ethnographies. In addition, it illustrates how database ethnographies contribute to more nuanced metadata and act as the basis for informed decision-making involving data from multiple sources”
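As a rough sketch of what such an archive entry could look like in practice (my own construction, not the article's implementation), the record below attaches a data steward's account of one column's collection practice and known caveats to the data itself. All field names, roles and values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class FieldEthnography:
    """One 'nuanced metadata' record elicited from a data steward."""
    column: str
    steward: str               # who answered for this field (hypothetical role)
    collection_practice: str   # how values actually get entered
    known_caveats: list[str] = field(default_factory=list)
    recorded_on: str = ""      # when this account was captured

credit_score_note = FieldEthnography(
    column="applicant_risk_score",
    steward="intake operations analyst",
    collection_practice="copied by hand from a vendor report at application time",
    known_caveats=[
        "scores older than 90 days are not refreshed",
        "missing values are coded as 0, not NULL",
    ],
    recorded_on="2012-06-01",
)

print(credit_score_note)
```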

"This modulation effect of the database represents the gist of Deleuze???s (1992) notion of the dividual, which encapsulates the process of soliciting dispersed consumer information and reorganizing it according to a speci???c code on a different plane of

“[…] Clearly then, computerized information networks that continuously integrate dispersed sites of information solicitation with simulational feedback loops do not produce stable and enclosed repositories of meaning such as ‘individuals’, ‘individuality’ and ‘identities’, but dynamic and functional modulations of these, or what Deleuze (1992) calls ‘dividuals’. For Deleuze, control (e.g. of risk, as in the case of the bank) happens inside digital networks and it is measured and administered not through the use of static media and fixed architectures but by codes. Codes are flexible systems of capture in ways that fixed enclosures are not (Bogard, 2007). They are easily reconfigured to re-evaluate value, reassess risk, and regulate access to space, information and resources. […] This modulation effect of the database represents the gist of Deleuze’s (1992) notion of the dividual, which encapsulates the process of soliciting dispersed consumer information and reorganizing it according to a specific code on a different plane of reality.

Hence, dividuation is, according to Bogard (2007), fundamental to the logic of capitalist accumulation that breaks down life into measures of information. Unlike technologies of differentiation that aim at disciplining, dividuating technologies aim at modular control. Market information as constituted by the database can hence be understood as over-layering the established social reality of individuals and their actions with another plane made up of measures of information mapping associations, intensities, flows and values toward which recoding and production efforts of the database are directed. This productive act, then, does not so much produce identities imposed on concrete bodies in the way disciplinary power effects such individuation as much as it produces modulation points on which marketers can anchor their efforts to structure flows of money and attention”
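A schematic, invented illustration of this "dividuating" recoding: the same person's dispersed traces are weighted and recombined by different codes into separate, recombinable measures (credit risk here, ad targeting there), none of which amounts to a stable identity. The field names and weights below are made up purely for the sketch.

```python
# One person's dispersed traces, as a flat bag of measures (all values invented).
traces = {
    "card_transactions_30d": 42,
    "late_payments_12m": 1,
    "location_pings_per_day": 310,
    "loyalty_app_opens_7d": 9,
}

# Each "code" selects and weights a different slice of the same person,
# and can be reconfigured at will.
codes = {
    "credit_risk":  {"late_payments_12m": 0.8, "card_transactions_30d": -0.01},
    "ad_targeting": {"loyalty_app_opens_7d": 0.5, "location_pings_per_day": 0.002},
}

def modulate(person: dict[str, float], code: dict[str, float]) -> float:
    """Recombine dispersed measures according to one code: a modulation point."""
    return sum(weight * person.get(feature, 0.0) for feature, weight in code.items())

for name, code in codes.items():
    print(name, round(modulate(traces, code), 3))
```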

#Bigdata does not simply allow us to do new things. Rather, it forces us to change the way we think about and interact with the world.

“The term “big data” has recently taken hold in the technology industry. It is bandied about at conferences, in advertisements and is de rigueur at presentations to venture capitalists. But no one really knows what it means. The amount of data to process has always seemed to outstrip the tools available; data has always seemed “big” to someone. After all, the guidance control computer on the Apollo 11 mission to the moon in 1969 had all of 64 kilobytes of memory.

In Big Data: A Revolution That Will Transform How We Live, Work, and Think, Viktor Mayer-Schönberger and Kenneth N. Cukier define what is new and why it matters. The media and business analysts have so far only skirted the surface of the issue, focusing on the size of the data deluge and the fancy new tricks that data-crunching can do. Both are interesting, but they miss a more important point. Big Data tells of a much more significant transformation. Big data does not simply allow us to do new things. Rather, it forces us to change the way we think about and interact with the world.

With big data, things that could never be measured, stored, analyzed or shared are becoming data-ized: quantified in digital form. Harnessing all the data rather than a sample, and privileging more data of less exactitude, opens the door to new ways to understand the world. It enables society to give up its time-honored preference for causality, and in many instances tap the benefits of correlation. The search to know why something happens is no longer the be-all and end-all; big data overturns it. The certainties we believed in are changing–replaced, ironically, by better evidence. Big Data explains where we are, how we got here, and provides a roadmap for what lies ahead.”

(from http://www.garamondagency.com/index.php?id=421)
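The book's claim about "privileging more data of less exactitude" can be illustrated numerically. The toy below is my example, not the authors': it compares a small set of precise measurements with a huge set of sloppy ones when estimating the same quantity; all numbers are invented.

```python
# Toy comparison: a small, precise sample vs. a huge, noisy one estimating the
# same true value. Only the statistical point matters, not the specific numbers.
import numpy as np

rng = np.random.default_rng(1)
true_value = 10.0

exact_small = true_value + rng.normal(0.0, 0.2, size=20)        # 20 careful measurements
noisy_big   = true_value + rng.normal(0.0, 2.0, size=1_000_000) # a million rough ones

for label, sample in [("small & exact", exact_small), ("big & noisy", noisy_big)]:
    estimate = sample.mean()
    print(f"{label:14s} estimate = {estimate:.4f}  |error| = {abs(estimate - true_value):.4f}")
# The big, noisy mean typically lands closer to the true value than the small, exact one.
```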

By interacting with these interfaces, we are also mapped: data-driven machine learning algorithms process our collective data traces

“[…] Computers as future depend on computers as memory machines, on digital data as archives that are always there. This future depends on programmable visions that extrapolate the future—or, more precisely, a future—based on the past. […] Computers embody a certain logic of governing or steering through the increasingly complex world around us. By individuating us and also integrating us into a totality, their interfaces offer us a form of mapping, of storing files central to our seemingly sovereign—empowered—subjectivity. By interacting with these interfaces, we are also mapped: data-driven machine learning algorithms process our collective data traces in order to discover underlying patterns (this process reveals that our computers are now more profound programmers than their human counterparts)”

(from: http://mitpress.mit.edu/books/programmed-visions)
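A compact sketch of the mapping-back described above (my construction, not Chun's): interaction traces from many users are pooled, a co-occurrence pattern is mined from the collective, and that pattern is then projected back onto an individual as a prediction. The traces and page names are synthetic.

```python
from collections import Counter
from itertools import combinations

# Pooled "data traces": which pages each (synthetic) visitor touched.
traces = [
    {"home", "pricing", "signup"},
    {"home", "blog"},
    {"home", "pricing", "docs"},
    {"pricing", "signup"},
    {"home", "pricing", "signup", "docs"},
]

# Discover a collective pattern: which pages co-occur most often?
co_occurrence = Counter()
for visit in traces:
    co_occurrence.update(combinations(sorted(visit), 2))

print("most common pairs:", co_occurrence.most_common(3))

def predict_next(page: str) -> str:
    """Map one visitor back onto the collective pattern: the page most often seen with theirs."""
    scores = Counter()
    for (a, b), n in co_occurrence.items():
        if page == a:
            scores[b] += n
        elif page == b:
            scores[a] += n
    return scores.most_common(1)[0][0]

print("a visitor on 'pricing' is mapped toward:", predict_next("pricing"))
```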

That visceral act is actually an interaction: you have just participated in a data-mining operation. Your input feeds a marketing analysis apparatus, and that feeds a product development machine

“[…] For example, you buy things with your credit card, presumably to satisfy needs or desires in your life. Needs, desires: you purchase at your soft points. That visceral act is actually an interaction: you have just participated in a data-mining operation. Your input feeds a marketing analysis apparatus, and that feeds a product development machine. The system eventually gets back to you with new products responding to the input, and with new ways to reach you, massage your rhythms, air out your viscera, induce you to spend. New needs and desires are created. Even whole new modes of experience, which your life begins to revolve around. You have become, you have changed, in interaction with the system. You have literally shopped yourself into being. At the same time, the system has adapted itself. It’s a kind of double capture of mutual responsiveness in a reciprocal becoming” (B. Massumi, 48).

(from: Brian Massumi, “Semblance and Event”, The MIT Press, 2012)
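The "double capture" Massumi describes can be caricatured in a few lines of code (my paraphrase, not his): each purchase updates the system's model of the shopper, the model shapes the next offer, and taking the offer sharpens the model again. Everything here is invented for illustration.

```python
from collections import Counter

profile = Counter()   # the system's running model of one shopper
catalogue = ["coffee", "running shoes", "headphones", "protein bars"]

def record_purchase(item: str) -> None:
    """The data-mining step: every purchase feeds the profile."""
    profile[item] += 1

def next_offer() -> str:
    """The marketing/product-development step: push a variant of what already sells."""
    if not profile:
        return catalogue[0]
    favourite, _ = profile.most_common(1)[0]
    return f"premium {favourite}"

# Both sides adapt in each round of the loop.
record_purchase("coffee")
offer = next_offer()
print("offer:", offer)
record_purchase(offer)          # the shopper takes the offer; the model sharpens
print("offer:", next_offer())   # which in turn narrows the next offer
```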

Perspectives on #BigData and Big Data Analytics

“Nowadays companies are starting to realize the importance of using more data in order to support decisions for their strategies. It has been said, and shown through case studies, that “More data usually beats better algorithms”. With this statement companies started to realize that they can choose to invest more in processing larger sets of data rather than investing in expensive algorithms. A large quantity of data is better used as a whole because of possible correlations across the larger amount, correlations that can never be found if the data is analyzed in separate sets or in a smaller set. A larger amount of data gives better output, but working with it can also become a challenge due to processing limitations. This article intends to define the concept of Big Data and stress the importance of Big Data Analytics”

http://www.dbjournal.ro/archive/10/10.pdf
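The abstract's claim that some correlations "can never be found if the data is analyzed on separate sets or on a smaller set" has a simple statistical reading, sketched below with synthetic data (my example, not the paper's): a weak but real association is visible on the full dataset but drowned in sampling noise on small partitions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
x = rng.normal(size=n)
y = 0.03 * x + rng.normal(size=n)   # true, weak correlation of roughly 0.03

full_r = np.corrcoef(x, y)[0, 1]
print(f"full data (n={n}): r = {full_r:+.3f}")

# The same data split into many small separate sets: estimates scatter around zero,
# so the weak association is effectively invisible in any single partition.
for i, (xs, ys) in enumerate(zip(np.array_split(x, 1000), np.array_split(y, 1000))):
    if i >= 5:
        break
    print(f"subset {i} (n={len(xs)}): r = {np.corrcoef(xs, ys)[0, 1]:+.3f}")
```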

 

From Punched Cards to #BigData: A Social History of Database Populism

“Since the diffusion of the punched card tabulator following the 1890 U.S. Census, mass-scale information processing has been alternately a site of opportunity, ambivalence and fear in the American imagination. While large bureaucracies have tended to deploy database technology toward purposes of surveillance and control, the rise of personal computing made databases accessible to individuals and small businesses for the first time. Today, the massive collection of trace communication data by public and private institutions has renewed popular anxiety about the role of the database in society. This essay traces the social history of database technology across three periods that represent significant changes in the accessibility and infrastructure of information processing systems. Although many proposed uses of “big data” seem to threaten individual privacy, a largely forgotten database populism from the 1970s and 1980s suggests that a reclamation of small-scale data processing might lead to sharper popular critique in the future”