“Big Data, data mining, and analytics at this point are lightning rods for both the promise of digital technologies and the uncertainty surrounding their implications for the future. (…) In analytic operations, algorithms (statistical computations with clearly defined steps) are central, having attained new significance in searches for meaning in digital depositories. They are involved in sense-making (pattern detection) as well as in meaning-making (pattern building) through existing recognition algorithms and by actively constructing algorithms that will lead to a desired outcome. Typical data mining requires multiple, conjoined sets of algorithms and multiple iterations during which the correct series of steps is determined. Algorithms are a crucial feature of the digital transformation. But it is important to remember that they are not neutral; they have a language and a politics. They incorporate a certain worldview. In analytics, we are dealing with a concatenation of different algorithms whose relationships and assumptions interact and quickly become untraceable. Ethnographers need to understand what kinds of algorithms affect their research and what interests, technical knowledge, and resources drive their construction. Significantly, we often do not know
(from Jordan, ed. “Advancing Ethnography in Corporate Environments: Challenges and Emerging Opportunities”, 2013)
“Interest in Big Data analytics (BDA) has certainly skyrocketed in the past few years to reach a fever pitch, with the market for this technology projected to reach a 58% compounded annual growth rate over the next five years. Indeed, when I walked the vendor exhibit halls at several TDWI World Conferences during the past year, it seemed that nearly all the application vendors had introduced a new package offering a “Big Data” solution. At every booth, plenty of curious attendees lined up to hear about these new features. The vendors were certainly happy for the attention, but they also confided to me that they had grown tired of answering the same question day after day, namely “What is Big Data?”
[…] According to Gray, we are seeing the evolution of two branches in every discipline: a computational branch and a data-processing branch. For example, in ecology there is now “both computational ecology, which is to do with simulating ecologies, and eco-informatics, which is to do with collecting and analyzing ecological information” (xix). How will the social sciences be affected by these developments? This chapter aims to contribute to a better understanding of the implications of data-intensive and computational research methodologies for the social sciences by focusing on two social science fields: sociology and economics. We address the implications of this debate for sociology and economics by uncovering what is at stake here. Although different kinds of “new data” are collected by both disciplines (transactional versus brain data), they serve as good examples to demonstrate how disciplines are responding to the availability of new data sources”
(from “Sloppy Data Floods or Precise Social Science Methodologies? Dilemmas in the Transition to Data-Intensive Research in Sociology and Economics”,
in Virtual Knowledge, The MIT Press, 2013)
“When working in the field of community-built databases (CBD), we are dealing with several kinds of data which can be divided according to their nature into the following categories (see Fig. 3.1):
1) Database-specific data (e.g., movie information in movie databases);
2) User-specific data (information about user accounts);
3) Mixed data combining users with database-specific data (user/movie rating, user discussion, etc.)
From this point of view, one may consider a CBD as the direct product of the social network formed around the database. Technically speaking, the database should be considered in a broader sense, such as a set of Web pages containing knowledge or information, not only as the technological base for saving data. It is natural to employ techniques of social network analysis (also referred to as SNA) in the field of CBD. SNA has proven to be useful in understanding complex relations among subjects with hidden implications. Uncovering these relations and implications can give us further insight into information contained in the database, the quantity and quality”
(from Pardede, “Community-Built Databases”, Springer)
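As a toy illustration of the SNA idea described in the excerpt above (the data, names, and the co-rating projection are my own invention, not from Pardede's book): the "mixed data" category, such as user/movie ratings, can be projected onto a user-to-user network, after which standard SNA measures like degree centrality apply.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical "mixed data": which movies each user has rated.
ratings = {
    "alice": {"Alien", "Blade Runner", "Arrival"},
    "bob":   {"Alien", "Arrival"},
    "carol": {"Blade Runner"},
    "dave":  {"Arrival", "Dune"},
}

# Project the bipartite user-movie data onto users: two users are
# linked if they rated at least one movie in common; the edge weight
# counts the shared movies.
edges = defaultdict(int)
for u, v in combinations(sorted(ratings), 2):
    shared = len(ratings[u] & ratings[v])
    if shared:
        edges[(u, v)] = shared

# Degree centrality: the fraction of other users each user is linked to.
degree = defaultdict(int)
for (u, v) in edges:
    degree[u] += 1
    degree[v] += 1
n = len(ratings)
centrality = {u: degree[u] / (n - 1) for u in ratings}
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
```

On this toy data, "alice" comes out as the most central user because she shares at least one rated movie with everyone else; uncovering such hubs is exactly the kind of "hidden implication" the excerpt attributes to SNA.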
“Discover powerful hidden social “levers” and networks within your company. Then use that knowledge to make slight “tweaks” that dramatically improve both business performance and employee fulfillment! In People Analytics, MIT Media Lab innovator Ben Waber shows how sensors and analytics can give you an unprecedented understanding of how your people work and collaborate, and actionable insights for building a more effective, productive, and positive organization. Through cutting-edge case studies, Waber shows how: Changing the way call center employees spent their breaks increased performance by 25% while significantly reducing stress; Quantifying the failure of marketing and customer service to communicate led to a more cohesive and profitable organization; Tweaking the balance of in-person and electronic communication can enhance the value of both; Sensor data can help you discover who your internal experts really are; Identifying employees involved in “creative” behaviors can help you promote innovation throughout your business; Sensors and simulations can help you optimize your sick-day policies; Measuring informal interactions can improve the chances that a merger, acquisition, or “mega-project” will succeed.
Drawing on his cutting-edge work at MIT and Harvard, Waber addresses crucial issues ranging from technology to privacy, revealing what will be possible in a few years, and what you can achieve right now. In bringing the power of analytics to organizational development, he offers immense new opportunities to everyone with responsibility for workplace performance”
“Abstract: The number of devices on the Internet exceeded the number of people on the Internet in 2008, and is estimated to reach 50 billion in 2020. A wide-ranging Internet of Things (IOT) ecosystem is emerging to support the process of connecting real-world objects like buildings, roads, household appliances, and human bodies to the Internet via sensors and microprocessor chips that record and transmit data such as sound waves, temperature, movement, and other variables. The explosion in Internet-connected sensors means that new classes of technical capability and application are being created. More granular 24/7 quantified monitoring is leading to a deeper understanding of the internal and external worlds encountered by humans. New data literacy behaviors such as correlation assessment, anomaly detection, and high-frequency data processing are developing as humans adapt to the different kinds of data flows enabled by the IOT. The IOT ecosystem has four critical functional steps: data creation, information generation, meaning-making, and action-taking. This paper provides a comprehensive review of the current and rapidly emerging ecosystem of the Internet of Things (IOT).”
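The abstract above names four functional steps (data creation, information generation, meaning-making, action-taking) and lists anomaly detection as a new data literacy behavior. A minimal sketch of that pipeline, with invented sensor readings and a simple two-standard-deviation anomaly rule of my own choosing, might look like this:

```python
import statistics

# Data creation: hypothetical temperature readings from one IoT sensor.
readings = [21.0, 21.3, 20.8, 21.1, 35.2, 21.2]

# Information generation: summarize the raw stream.
mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Meaning-making: flag readings more than two standard deviations
# from the mean as anomalies.
anomalies = [r for r in readings if abs(r - mean) > 2 * stdev]

# Action-taking: a stand-in for triggering a downstream alert.
if anomalies:
    print(f"alert: anomalous readings {anomalies}")
```

Here the 35.2 reading is flagged while the ordinary fluctuations around 21 are not; in a real IoT deployment each step would be a separate service operating on a continuous stream rather than a fixed list.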
“High-energy physics, for example, ‘observes’ many phenomena that cannot be readily seen in nature. Some particles exist for only a tiny fraction of a second and can therefore be observed only under strictly defined laboratory conditions. But what, exactly, does it mean to say that phenomena generate data? When, for example, we look at the moon, are we seeing the moon or the data of the moon? As we go about our daily business, the distinction hardly matters, but when looking, say, for subatomic particles, the link between the data we observe—a track on some substrate in a detector—and the phenomena that we infer from the data—a given type of particle—may be quite indirect. The issue is a subtle one but quite relevant to our analysis”.
(Images and text from Boisot (2012), Collision and Collaboration, Oxford University Press)
“In this article, we present a review of the literature on information overload in management-related academic publications. The main elements of our approach are literature synopsis, analysis, and discussion (Webster & Watson, 2002). These three elements serve, in our view, the three main purposes of a literature review, namely, to provide an overview of a discourse domain (e.g., compiling the main terms, elements, constructs, approaches and authors), to analyze and compare the various contributions (as well as their impact), and to highlight current research deficits and future research directions. These three objectives should be met, with regard to the topic of information overload, as a clear overview, an analysis of the major contributions, and an identification of future research needs still missing for this topic”
“[…] If we assume that most consumers of information, for instance, as engineers or business and technical managers, have to find and use information within the real constraints of time to make decisions of all kinds, information overload is reduced to a matter of the management of data (facts without any interpretation), information (data interpreted meaningfully in a communicative chain of writers and readers), and knowledge (information that refers to a learning cycle). Knowledge comes in two basic forms relevant to engineering and technical communication: declarative, which addresses what, and procedural, which addresses how. When it comes to decision making within the constraints of time, frustration may arise not only from information overload but also from its opposite—information underload—which occurs when there is not enough information available to make the right decision. Information overload is closely linked to high cognitive load”.
(from: Information Overload, 2012)