[…] As a leaky pipe for communication, Enterprise Social Media (ESM) create special opportunities for analyzing social relations and producing insights based on social analytics. The digital traces of communication can be processed with algorithms that can help employees make connections, and help managers understand the organization’s informal information economy. A study by Green, Contractor, and Yao (2006) showed how a social networking application with algorithms to make emergent associations between people and user-generated content spurred cross-boundary interactions and knowledge sharing in environmental engineering and hydrological science research. This increased collaboration occurred because once users learned that others were interested in similar topics to them, individuals were more willing to work to overcome disciplinary differences and understand one another, even if they did not share a common store of domain knowledge. The use of digital communication traces that have leaked out of secure channels and are available for mining with machine learning algorithms can also have disadvantages for organizational action. (from “Enterprise Social Media: Definition, History, and Prospects for the Study of Social Technologies in Organizations,” Paul M. Leonardi et al., 2013)
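To make the kind of association algorithm described above concrete, here is a minimal sketch. It assumes communication traces have already been distilled into per-user keyword counts; the profile data, function names, and the choice of cosine similarity are illustrative assumptions, not details from Leonardi et al. or the Green, Contractor, and Yao study.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse keyword-count vectors."""
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def suggest_connections(profiles: dict, user: str, top_n: int = 3) -> list:
    """Rank other employees by topical overlap with `user`'s communication traces."""
    scores = {other: cosine(profiles[user], p)
              for other, p in profiles.items() if other != user}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Toy keyword counts distilled from each employee's ESM posts.
profiles = {
    "alice": Counter(hydrology=5, sensors=2),
    "bob":   Counter(hydrology=3, modeling=4),
    "carol": Counter(marketing=6),
}
print(suggest_connections(profiles, "alice"))  # bob ranks above carol
```

The same leaky-pipe property that makes such recommendations possible is what raises the mining concerns noted at the end of the excerpt.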
Author: Cosimo Accoto
Data governance: cultural readiness is the ability to collaborate #socbiz
[…] A specific area of cultural readiness is the ability to collaborate. This activity measures the amount of collaboration or other cooperative behaviors in existence and, in some organizations, may include Facebook-type constructs or even Twitter. This assessment is important when content management, document management, and workflow are within the realm of the DG team. Additionally, the assessment is handy if the business has picked up on social networking as a possible enabler of business goals. This assessment is usually done via examination of the technology available, its extent of deployment, and its usage. Additionally, a brief survey, similar to the “Change Capacity” survey, can be used to see if the organization even wants to collaborate. This is not a trivial subject. As organizations become more sophisticated in their ability to reach across organizational boundaries, the need to leverage and manage the collaboration increases. There is also an opportunity to improve how an enterprise makes decisions by instituting and managing collaborative and social technologies. If anything like this is on the enterprise radar, then this assessment should be considered. Lastly, companies often have a situation where SharePoint or Lotus files are out of control. This assessment offers a chance to zero in on this issue. (from “Data Governance,” John Ladley, 2013, p. 78)
#BigData come into existence through any of several different mechanisms…
From “Principles of Big Data” (J.J. Berman, 2013, p. xxiii, Morgan Kaufmann)
“Generally, Big Data come into existence through any of several different mechanisms.
1. An entity has collected a lot of data, in the course of its normal activities, and seeks to organize the data so that materials can be retrieved, as needed. The Big Data effort is intended to streamline the regular activities of the entity. In this case, the data is just waiting to be used. The entity is not looking to discover anything or to do anything new. It simply wants to use the data to do what it has always been doing—only better. The typical medical center is a good example of an “accidental” Big Data resource. The day-to-day activities of caring for patients and recording data into hospital information systems result in terabytes of collected data in forms such as laboratory reports, pharmacy orders, clinical encounters, and billing data. Most of this information is generated for a one-time specific use (e.g., supporting a clinical decision, collecting payment for a procedure). It occurs to the administrative staff that the collected data can be used, in its totality, to achieve mandated goals: improving quality of service, increasing staff efficiency, and reducing operational costs.
2. An entity has collected a lot of data in the course of its normal activities and decides that there are many new activities that could be supported by their data. Consider modern corporations—these entities do not restrict themselves to one manufacturing process or one target audience. They are constantly looking for new opportunities. Their collected data may enable them to develop new products based on the preferences of their loyal customers, to reach new markets, or to market and distribute items via the Web. These entities will become hybrid Big Data/manufacturing enterprises.
3. An entity plans a business model based on a Big Data resource. Unlike the previous entities, this entity starts with Big Data and adds a physical component secondarily. Amazon and FedEx may fall into this category, as they began with a plan for providing a data-intense service (e.g., the Amazon Web catalog and the FedEx package-tracking system). The traditional tasks of warehousing, inventory, pickup, and delivery had been available all along, but lacked the novelty and efficiency afforded by Big Data.
4. An entity is part of a group of entities that have large data resources, all of whom understand that it would be to their mutual advantage to federate their data resources. An example of a federated Big Data resource would be hospital databases that share electronic medical health records.
5. An entity with skills and vision develops a project wherein large amounts of data are collected and organized to the benefit of themselves and their user-clients. Google, and its many services, is an example (see Glossary items, Page rank, Object rank).
6. An entity has no data and has no particular expertise in Big Data technologies, but it has money and vision. The entity seeks to fund and coordinate a group of data creators and data holders who will build a Big Data resource that can be used by others. Government agencies have been the major benefactors. These Big Data projects are justified if they lead to important discoveries that could not be attained at a lesser cost, with smaller data resources” (J.J. Berman)
Data Governance and Governance … #Bigdata #DataGovernance
DATA GOVERNANCE AND GOVERNANCE
[…] The concept of managing information assets in a formal manner has been established. Now we need a process to ensure that management actually takes place and is being done correctly. Unplug your technology thinking and turn on your accountant thinking. Accountants manage financial assets. Accountants are governed by a set of principles and policies and are checked by auditors. Auditing ensures the correct management practice of financial assets. This is what data governance (DG) accomplishes for data, information, and content assets. DG is defined in the DMBOK as “the exercise of authority, control, and shared decision making (planning, monitoring and enforcement) over the management of data assets.” In turn, governance is defined as “the exercise of authority and control over a process, organization or geopolitical area; the process of setting, controlling, administering, and monitoring conformance with policy.” This definition is, of course, roughly synonymous with government. Slightly different definitions are often stated with an emphasis on the policy and programmatic aspects of DG. The one we use in our consulting work is: “Data governance is the organization and implementation of policies, procedures, structure, roles, and responsibilities which outline and enforce rules of engagement, decision rights, and accountabilities for the effective management of information assets.” Regardless of the style of definition, the bottom line is that DG is the use of authority combined with policy to ensure the proper management of information assets. Make sure you do not confuse the management of data with ensuring data is managed […] (from “Data Governance: How to Design, Deploy, and Sustain an Effective Data Governance Program,” John Ladley, 2013, p. 11)
Social data is problematic … #bigdata #socialdata #digitalresearch
[…] It is important, furthermore, to understand that this contextual paradox of research between transparent communication and platform obfuscation is not just limited to what kind of data is accessible. Data itself, from a critical perspective, is a problematic concept: should it be seen as a faithful representation of human behaviour or as a dehumanized recording that artificially parcels out existence into quantifiable bits? As we said above, corporate social media do not simply transmit communication among users; they transform it and impose a specific logic on it. To borrow from Lawrence Lessig (2006), the platform’s code imposes specific regulations, or laws, on social acts. The consequence of this is that corporate social media give the impression that they merely render social acts visible, whereas in fact they are in the process of constructing a specific techno-social world. For instance, while I can ‘like’ something on Facebook and have ‘friends’, I cannot dislike, hate, or be bored by something, or have enemies or people that are very vague acquaintances. The seeming social transparency that is the promise of corporate social media is a construct: the platform imposes its own logic, and in the case of Facebook, this logic is one of constant connectivity. The promise that social media data is in the first place a transparent trace of human behaviour is thus false: what data reveals is the articulation of participatory and corporate logics. As such, any claim to examine a pre-existing social through social media is flawed. Thus, in studying modes of participatory culture on corporate social media platforms we encounter two main challenges: one concerning access to data and the ethics of data research, the other concerning data itself and what it claims to stand for […] (pp. 9-10)
from “The Research Politics of Social Media Platforms,” Ganaele Langlois and Greg Elmer, Culture Machine, Vol. 14, 2013
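Lessig’s “code is law” point lends itself to a toy illustration. The schema below is a deliberately hypothetical sketch, not Facebook’s actual data model: whatever the type system omits simply cannot be recorded, so the resulting “data” constructs the social world it claims merely to observe.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Union

class Reaction(Enum):
    """The only evaluative act the platform's code makes expressible."""
    LIKE = "like"  # no DISLIKE, HATE, or BORED member exists

class Tie(Enum):
    """The only social relation the schema admits."""
    FRIEND = "friend"  # no ENEMY, no 'vague acquaintance'

@dataclass(frozen=True)
class SocialAct:
    actor: str
    target: str
    kind: Union[Reaction, Tie]

# Whatever data the platform later exposes to researchers can only
# contain the categories this schema permitted in the first place.
print(SocialAct(actor="user_1", target="post_42", kind=Reaction.LIKE))
```

Any analytics built downstream of such a schema inherits its constraints, which is precisely the authors’ warning about treating platform data as a transparent trace of human behaviour.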
Handbook of Social Media Management (Springer, 2013) #socialmedia #management #research
[…] More and more we can see a continuous shift from the old Media Management to new strategic, operative, and normative management options. Social Media Management has become an issue for every media company, calling for a renewed “skill set”: specialized knowledge of digital products and production, and of marketing and target groups. This ongoing media shift increasingly requires active communication across the fundamental segments of these scholarly disciplines; the interface of media economics and media management provides the relevant arena. The handbook is organized around four areas:
– Management with Social Media.
– New Value Chain with Social Media.
– Forms and Content of Social Media.
– Social Media: Impact and Users.
Most of the chapters take interdisciplinary approaches. The main questions (among others) are:
– What are the specific effects of Social Media on Media Management Research?
– How is the value chain changing?
– What are the main changes to business models?
– Are there new theoretical approaches? Is it possible to combine “old” and “new” theories?
– Are there new empirical results?
– What are the proposals for the development in this research field?
Seventy-five Media Management researchers from around the world are involved in this book. With very different content and methodological approaches, the contributions also permit an interesting comparison of the developments in each country.
(from the Preface of Handbook of Social Media Management, 2013, Springer)
reading “#BigData, Big Analytics: Emerging Business Intelligence and Analytic Trends for Today’s Businesses” (2013) by Michael Minelli, Michele Chambers, and Ambiga Dhiraj.
[…] In fact, if you speak with most data industry veterans, Big Data has been around for decades for firms that have been handling tons of transactional data over the years—even dating back to the mainframe era. The reasons for this new age are varied and complex, so let’s reduce them to a handful that will be easy to remember in case someone corners you at a cocktail party and demands a quick explanation of what’s really going on. Here’s our standard answer in three parts:
1. Computing perfect storm. Big Data analytics are the natural result of four major global trends: Moore’s Law (which basically says that technology always gets cheaper) [a compounding sketch of this claim follows the excerpt], mobile computing (that smartphone or mobile tablet in your hand), social networking (Facebook, Foursquare, Pinterest, etc.), and cloud computing (you don’t even have to own hardware or software anymore; you can rent or lease someone else’s).
2. Data perfect storm. Volumes of transactional data have been around for decades for most big firms, but the floodgates have now opened with more volume, velocity, and variety—the three Vs—of data arriving in unprecedented ways. This perfect storm of the three Vs is extremely complex and cumbersome to handle with current data management and analytics technology and practices.
3. Convergence perfect storm. Another perfect storm is happening, too. Traditional data management and analytics software and hardware technologies, open-source technology, and commodity hardware are merging to create new alternatives for IT and business executives to address Big Data analytics” (from “Big Data, Big Analytics: Emerging Business Intelligence and Analytic Trends for Today’s Businesses,” by Michael Minelli, Michele Chambers, and Ambiga Dhiraj, 2013, pp. 1-2)
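As flagged in item 1 above, here is a back-of-the-envelope sketch of what “technology always gets cheaper” compounds to, assuming the commonly cited two-year doubling period (the real cadence has varied):

```python
# Moore's Law arithmetic: cost of a fixed amount of computation
# halves roughly every DOUBLING_PERIOD_YEARS (an assumption).
DOUBLING_PERIOD_YEARS = 2

def cost_factor(years: float) -> float:
    """Relative cost of the same computation after `years` have passed."""
    return 0.5 ** (years / DOUBLING_PERIOD_YEARS)

for y in (2, 10, 20):
    print(f"after {y:2d} years: ~{cost_factor(y):.4f}x the original cost")
# After 20 years the factor is ~0.001, a roughly thousandfold drop --
# one reason Big Data analytics only recently became economical.
```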
Quotes from my next book ;-) … Enjoy and spread them … #bigdata #beyondbigdata
“To be mediated by the immediacy, with N=all (totality) and T=-1 (premediation), is the service instantiation in a data-intensive age”
| from my next book |
“Quantified selves (Ostherr 2013), social machines (Semmelhack 2013), ambient commons (McCullough 2013) are data actants”
| from my next book |
“Data Ontologies: Totality, Immediacy, Premediation are the ontological vectors reshaping businesses and organizations”
| from my next book |
“In a data deictic perspective, a quantified, networked and anticipated self is emerging as a new marketing platform”
| from my next book |
“Data deixis changes the logic of customer segmentation. It’s no longer a logic of sets, but rather a logic of emergence”
| from my next book |
“In data-intensive age, customer centricity is useless unless you include the algorithmic mediation of secondary agency”
| from my next book |
“The ‘data continuum’ paradigm is reshaping customer information markets and systems as well as industry boundaries”
| from my next book |
“Looking at data as new personal and participatory market devices is a way to deeply understand our data-intensive age”
| from my next book |
“In a data-intensive age, ‘real-time’ is an ontological continuum spanning from subperceptuality to embedded temporalities”
| from my next book |
“Market, marketing or marke-things intelligence? In a ubiquitous data age, the situated analytics performs operations”
| from my next book |
“Technologies for market remote sensing are not monitoring practices, but modeling devices for new value propositions”
| from my next book |
“Big Data is about Transduction of Coded Spaces, Subperceptuality of Embedded Temporalities and Machinic Secondary Agencies”
| from my next book |
“In a subperceptual regime of temporality, the im-mediate is ontologically and conceptually linked to the un-mediated”
| from my next book |
“In digital age, we have performances not contents, performers not users, performables not channels”
| from my next book |