“[…] After months of data mining on makes and models of cars and which orders correlate to each type of vehicle, the system reliably estimated what the short-order cooks should deliver as customers drove up. Bob extends the Internet landing page strategy to the parking lot. Even privacy advocates have trouble finding fault with Bob. The computer system is only recognizing a car and making a guess about what the car’s occupants will order. If the company does not sell that information and does not associate the purchaser’s identity with the car’s details, then the invasion of privacy can seem minor. The company can even expunge details about when the car visits, removing information that otherwise could have legal value in a court case, for instance, establishing the validity of an alibi. Recognizing cars well enough—perception—and making the right decisions about what to cook—cognition—were unthinkable at this level ten years ago. Today this is nearly standard practice” (by Illah Reza Nourbakhsh, Robot Futures, The MIT Press, 2013, p.11)
Author: Cosimo Accoto
This is not just data visualization but also data formation… #bigdata
“[…] This design has reduced cognitive load by assuming the form of a physical object. This is not just data visualization but also data formation. It is an interface you do not operate, and as a part of the scene it is ambient. Such developments in ambient interface may as yet be a sideshow in comparison to how disembodied information media blanket urban space with their screens, but it’s a start. As yet, the capacity to tag, to project, or even to inhabit one’s own contributions or one’s group’s curations of augmented urban space is at a very early stage. The challenge is to find the right contexts, scale, texture, timescale, and spatial resolution, and then, as this inquiry attempts, to combine insights on attention with insights on the history of the built environment. For all of this prospect, it seems wise to note that information can take form” (from McCullough, “Ambient Commons: Attention in the Age of Embodied Information”, The MIT Press, 2013, p.88).
#BigData come into existence through any of several different mechanisms (cit)
[from the introduction to “Principles of Big Data” (by Jules J. Berman, Elsevier, 2013, p.xxiii)]
“Generally, Big Data come into existence through any of several different mechanisms.
1. An entity has collected a lot of data, in the course of its normal activities, and seeks to organize the data so that materials can be retrieved, as needed. The Big Data effort is intended to streamline the regular activities of the entity. In this case, the data is just waiting to be used. The entity is not looking to discover anything or to do anything new. It simply wants to use the data to do what it has always been doing—only better. The typical medical center is a good example of an “accidental” Big Data resource. The day-to-day activities of caring for patients and recording data into hospital information systems results in terabytes of collected data in forms such as laboratory reports, pharmacy orders, clinical encounters, and billing data. Most of this information is generated for a one-time specific use (e.g., supporting a clinical decision, collecting payment for a procedure). It occurs to the administrative staff that the collected data can be used, in its totality, to achieve mandated goals: improving quality of service, increasing staff efficiency, and reducing operational costs.
2. An entity has collected a lot of data in the course of its normal activities and decides that there are many new activities that could be supported by their data. Consider modern corporations—these entities do not restrict themselves to one manufacturing process or one target audience. They are constantly looking for new opportunities. Their collected data may enable them to develop new products based on the preferences of their loyal customers, to reach new markets, or to market and distribute items via the Web. These entities will become hybrid Big Data/manufacturing enterprises.
3. An entity plans a business model based on a Big Data resource. Unlike the previous entities, this entity starts with Big Data and adds a physical component secondarily. Amazon and FedEx may fall into this category, as they began with a plan for providing a data-intense service (e.g., the Amazon Web catalog and the FedEx package-tracking system). The traditional tasks of warehousing, inventory, pickup, and delivery had been available all along, but lacked the novelty and efficiency afforded by Big Data.
4. An entity is part of a group of entities that have large data resources, all of whom understand that it would be to their mutual advantage to federate their data resources. An example of a federated Big Data resource would be hospital databases that share electronic medical health records.
5. An entity with skills and vision develops a project wherein large amounts of data are collected and organized to the benefit of themselves and their user-clients. Google, and its many services, is an example (see Glossary items, Page rank, Object rank).
6. An entity has no data and has no particular expertise in Big Data technologies, but it has money and vision. The entity seeks to fund and coordinate a group of data creators and data holders who will build a Big Data resource that can be used by others. Government agencies have been the major benefactors. These Big Data projects are justified if they lead to important discoveries that could not be attained at a lesser cost, with smaller data resources”
(source: http://www.sciencedirect.com/science/book/9780124045767)
#bigdata … crucial challenges that ubiquitous and pervasive computing pose for cultural theory and criticism
[abstract] Ubiquitous computing and our cultural life promise to become completely interwoven: technical currents feed into our screen culture of digital television, video, home computers, movies, and high-resolution advertising displays. Technology has become at once larger and smaller, mobile and ambient. In Throughout, leading writers on new media–including Jay David Bolter, Mark Hansen, N. Katherine Hayles, and Lev Manovich–take on the crucial challenges that ubiquitous and pervasive computing pose for cultural theory and criticism. The thirty-four contributing researchers consider the visual sense and sensations of living with a ubicomp culture; electronic sounds from the uncanny to the unremarkable; the effects of ubicomp on communication, including mobility, transmateriality, and infinite availability; general trends and concrete specificities of interaction designs; the affectivity in ubicomp experiences, including performances; context awareness; and claims on the “real” in the use of such terms as “augmented reality” and “mixed reality” (Ulrik Ekman, ed., Throughout: Art and Culture Emerging with Ubiquitous Computing, Cambridge, Mass.: MIT Press, 2012).
…to separate analytical from operational system #bigdata
“[…] In the early days of BI, running queries was only possible for IT experts. The tremendous increase in available computational power and main memory has allowed us to think about a totally different approach: the design of systems that empower business users to define and run queries on their own. This is sometimes called self-service BI. An important behavioral aspect is the shift from push to pull: people should get information whenever they want and on any device [142]. For example, a sales person can retrieve real-time data about a customer, including BI data, instantly on his smart phone. Another example could be the access of dunning functionality from a mobile device. This enables a salesperson to run a dunning report while on the road and to visit a customer with outstanding payments if she or he is in the area. These examples emphasize the importance of sub-second response time applications driven by in-memory database technology. The amount of data transferred to mobile devices and the computational requirements of the applications for the mobile devices have to be optimized, given limited processing power and connection bandwidths. As explained previously, [analytical workloads were historically moved off the operational systems for technical reasons]. An exception was the need to consolidate complex, heterogeneous system landscapes. As a result of the technological developments in recent years, many technical problems have been solved. We propose that BI using operational data could be once again performed on the operational system. In-memory databases using column-oriented and row-oriented storage allow both operational and analytical workloads to be processed at the same time in the same system” (from “In-Memory Data Management”, Plattner and Zeier, Springer, p.183).
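To make the closing point concrete, here is a minimal, self-contained Python sketch of why a column-oriented layout favors analytical scans while a row-oriented layout favors operational lookups. The table, field names, and figures are invented for illustration and are not taken from Plattner and Zeier.

```python
# Toy sketch: the same "sales" table stored two ways.
# Row store: one tuple per transaction -- natural for operational access
# (fetch or update a single order).
row_store = [
    {"order_id": 1, "customer": "ACME", "amount": 120.0, "paid": False},
    {"order_id": 2, "customer": "Globex", "amount": 75.5, "paid": True},
    {"order_id": 3, "customer": "ACME", "amount": 300.0, "paid": False},
]

# Column store: one array per attribute -- natural for analytical scans
# (aggregate one column without touching the others).
column_store = {
    "order_id": [1, 2, 3],
    "customer": ["ACME", "Globex", "ACME"],
    "amount": [120.0, 75.5, 300.0],
    "paid": [False, True, False],
}

# Operational query: look up a single order by key.
order_2 = next(r for r in row_store if r["order_id"] == 2)

# Analytical query: total outstanding payments per customer,
# scanning only the columns it actually needs.
outstanding = {}
for customer, amount, paid in zip(
    column_store["customer"], column_store["amount"], column_store["paid"]
):
    if not paid:
        outstanding[customer] = outstanding.get(customer, 0.0) + amount

print(order_2)      # the single-record, operational answer
print(outstanding)  # {'ACME': 420.0} -- a dunning-style aggregate
```

In an in-memory system the data would of course be held once; the sketch only shows why the two access patterns pull toward different physical layouts, and why holding both layouts in one engine lets operational and analytical work share the same system.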
Social Interactions in Databases #bigdata #socialdata
“[…] we argue that database technology should and can be adapted to provide the needed capabilities to support user interaction, user communities, and the social dynamics that arise from them: 1) Database technology should be used to support user interaction because databases tend to have communities of users (i.e., not a single or small group), so they are a perfect environment to enable social interaction. Furthermore, there are several ways in which database systems can benefit from user-created content: it can help interpret the data in the database, enrich it, and fill in any gaps in (very needed, but hardly present) metadata. Also, by allowing users to store their own data in the database, we make them more likely to explore the data and, in general, use the database for their tasks; 2) Database technology can be used to support user interaction because the relational data model can be seen as a general platform on top of which flexible schemas can be developed so that almost arbitrary content can be captured and stored” (from Antonio Badia, “Social Interaction in Databases”, in Community-Built Databases, Springer, p. 160)
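Badia’s second point, that the relational model can serve as a platform for flexible, user-created content, can be sketched with a tiny attribute/value table. The schema and names below are my own invention for illustration and are not taken from the chapter.

```python
import sqlite3

# A minimal "flexible schema": core data plus free-form user contributions
# stored as attribute/value pairs, all inside one relational database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE user_annotations (
        item_id   INTEGER REFERENCES items(id),
        username  TEXT,
        attribute TEXT,   -- chosen by the user, not fixed by the schema
        value     TEXT
    );
""")

conn.execute("INSERT INTO items VALUES (1, 'survey_2013.csv')")

# Users enrich the data and fill metadata gaps with arbitrary attributes.
conn.executemany(
    "INSERT INTO user_annotations VALUES (?, ?, ?, ?)",
    [
        (1, "alice", "quality", "column 4 has missing values"),
        (1, "bob", "tag", "household income"),
        (1, "bob", "source", "phone interviews"),
    ],
)

# Community-created metadata is queried like any other relational data.
for row in conn.execute(
    "SELECT username, attribute, value FROM user_annotations WHERE item_id = 1"
):
    print(row)
```

The familiar trade-off of such attribute/value designs applies: users gain flexibility, while the system gives up some typing and query-optimization guarantees.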
“Post-demographics machines” #bigdata
“Post-demographics? Leading research into social networking sites considers such issues as presenting oneself and managing one’s status online, the different ‘social classes’ of users of MySpace and Facebook and the relationship between real-life friends and ‘friended’ friends (Boyd & Ellison, 2007). Another set of work, often from software-making arenas, concerns how to make use of the copious amounts of data contained in online profiles, especially interests and tastes. I would like to dub this latter work ‘postdemographics’. Post-demographics could be thought of as the study of the data in social networking platforms, and, in particular, how profiling is, or may be, performed. Of particular interest here are the potential outcomes of building tools on top of profiling platforms, including two described below. What kinds of findings may be made from mashing up the data, or what may be termed meta-profiling?” [from Rogers, “Post-demographics Machines”]
A chapter on “Social Media and Post-demographics Machines” in (Rogers, Digital Methods, The MIT Press, 2013).
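As a rough, invented illustration of what “meta-profiling” could look like when built on top of profile data, the following Python sketch aggregates the declared interests of a handful of fabricated profiles into a composite picture, ignoring demographic categories entirely.

```python
from collections import Counter

# Fabricated sample of interests declared on public profiles.
profiles = {
    "user_a": ["indie rock", "cycling", "documentaries"],
    "user_b": ["indie rock", "veganism", "cycling"],
    "user_c": ["cycling", "photography", "indie rock"],
}

# Post-demographic "meta-profile": what this group likes in aggregate,
# with no reference to age, gender, or other demographic categories.
meta_profile = Counter(
    interest for interests in profiles.values() for interest in interests
)

for interest, count in meta_profile.most_common(3):
    print(f"{interest}: shared by {count} of {len(profiles)} profiles")
```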
“a tenth of a second is … constitutive of modernity” #bigdata
“At first glance, it may seem that we can ignore the history of a tenth of a second. After all, most events occurring within this short period of time cannot be perceived. Most persons take more than a tenth of a second to react. But in looking more carefully at this moment, it appears strangely constitutive of modernity. The tenth of a second was repeatedly referenced in debates about the nature of time, causality, free will, and the difference between humans and nonhumans. Understanding this short, “invisible” period of time is as important as understanding other equally small and invisible things. When in the seventeenth century Robert Hooke used a newly invented microscope to reveal the shocking wealth of the micro-world, he claimed that the “shadow of things” no longer needed to be taken for their “substance.” Microscopy led him away from “uncertainty,” “mistakes,” “dogmatizing,” and forms of knowledge based largely on “discourse and disputation.” This new technology appeared to him as important as a series of other revolutionary inventions. Hooke listed it among gun powder, the seaman’s compass, printing, etching, and engraving, which together saved man from misguided attempts to advance on knowledge through wasteful “talking,” “arguing,” and “opining.” (from “A Tenth of a Second: A History”, Jimena Canales, University of Chicago Press)
#bigdata […] any visualization of data must invent an artificial set of translation rules that convert abstract number to semiotic sign
“[…] Data, reduced to their purest form of mathematical values, exist first and foremost as number, and, as number, data’s primary mode of existence is not a visual one. Thus to say “no necessary” means that any visualization of data requires a contingent leap from the mode of the mathematical to the mode of the visual. This does not mean that aestheticization cannot be achieved. And it does not mean that such acts of aestheticization are unmotivated, nugatory, arbitrary, or otherwise unimportant. It simply means that any visualization of data must invent an artificial set of translation rules that convert abstract number to semiotic sign. Hence it is not too juvenile to point out that any data visualization is first and foremost a visualization of the conversion rules themselves, and only secondarily a visualization of the raw data. Visualization wears its own artifice on its sleeve. And because of this, any data visualization will be first and foremost a theater for the logic of necessity that has been superimposed on the vast sea of contingent relations. So with the word “form” already present in the predicate of the first thesis, and if the reader will allow a sloppy syllogism, it is possible to rejigger the first thesis so that both data and information may be united in something of an algebraic relationship. Hence now it goes, data have no necessary information” (A.R. Galloway, The Interface Effect, Polity, 2012, p. 83)
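Galloway’s point is visible in even the most trivial plotting code: the mapping from number to sign has to be invented before anything can be seen. A minimal Python sketch, where both the data and the binning rule are arbitrary choices made for the example:

```python
# The "translation rules": an invented mapping from abstract number
# to visual sign. Nothing in the data itself dictates this choice.
def to_sign(value, scale=10.0):
    """Convert a number into a row of block characters."""
    return "█" * round(value / scale)

data = [3.2, 47.0, 12.5, 88.1, 60.4]   # raw, non-visual numbers

# The resulting "chart" displays the conversion rule at least as much
# as it displays the data.
for i, value in enumerate(data):
    print(f"x{i}: {to_sign(value):<10} ({value})")
```

Change the scale or the glyph and the picture changes while the numbers stay the same; what the chart shows first is the rule of conversion, and only secondarily the data.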









