Research

The vast majority of people building analytics and data science processes have every intention of being good and ethical. As a result, most potentially unethical and evil processes arise in situations where that wasn't the intention. The problem is typically that proper focus and governance are not in place to keep analytics and data science processes on the side of good. On top of that, what is good and what is evil aren't nearly as clear-cut as we'd wish them to be.


Mapping an Information Economy

By Doug Mirsky, Aug 16, 2019

Available to Research & Advisory Network Clients Only

Information Economies in Organizations

The data warehouse revolution began in 1991 when Bill Inmon published Building the Data Warehouse. Inmon observed, early in that book, that every organization has a naturally occurring information economy, and that most naturally occurring information economies were inefficient, duplicative, and prone to producing suboptimal decisions.

This observation of Inmon's has not gotten anywhere near the credit, or attention, it deserves. A decade's worth of collective practice in advanced analytics should tell us that everything we know about real-world economies applies to our information economies. There is demand for information by people and functions in an organization, and there is a supply of (some of) that information. There is (some amount of) technical and procedural infrastructure, some kind of market, to bring demand and supply together in an organized way. That “market” infrastructure is often partial, fragile, and in some cases ineffective. There are competitive alternatives (such as cloud service providers and SaaS vendors), over- and under-regulation (various data governance models), excessive demand-side taxation (cost allocation strategies), underinvestment in infrastructure, and all the other elements of a real economy.

When organizations are planning strategy-driven large-scale advanced analytics programs, they should begin their planning by characterizing their as-is information economy.


Creating A Data Engineering Culture: What it is, why it’s important, and how, and how not, to build

By Jesse Anderson, Jul 31, 2019

Available to Research & Advisory Network Clients Only

Why do some analytics projects succeed while so many fail? According to Gartner analyst Nick Heudecker, as many as 85% of big data projects fail. However, the ROI from the 15% that succeed is incredibly promising. With such a high barrier to competency in executing big data strategies, significant first-mover advantage remains for enterprises that can crack the code and improve their outcomes.

So, what can organizations do to increase their chances of big data success? Part of the answer lies in creating a data engineering culture. This culture is the necessary foundation underpinning big data analytics proficiency, and it enables companies to outperform the competition.


Multi-Model Databases: A Primer

By Daniel Graham, Jun 05, 2019

Available to Research & Advisory Network Clients Only

Multi-model databases (MMDBMS) have been expanding the definition of a database for several years. A multi-model database combines several data stores in one database, and those storage services support distinct data models: relational, graph, document, key-value, time-series, and object. But simply storing different kinds of data is not enough to make a database multi-model. Specialized programming services must exist for each data model, and in the best MMDBMS, a single query can combine data from all data models.
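The idea of one query spanning several data models can be illustrated with a toy sketch. This is not the API of any particular MMDBMS; the stores and the `orders_of_followed` function below are invented purely for illustration, using plain Python structures to stand in for graph, document, and key-value engines:

```python
# Toy stand-ins for three data models inside one "database".

# Key-value store: user id -> display name
kv = {"u1": "Alice", "u2": "Bob"}

# Document store: order documents with nested fields
docs = [
    {"order_id": "o1", "user": "u1", "total": 120.0},
    {"order_id": "o2", "user": "u2", "total": 80.0},
]

# Graph store: "follows" edges between users
edges = [("u1", "u2")]

def orders_of_followed(user_id):
    """For a user, find orders placed by the people they follow:
    one logical query touching graph, document, and key-value models."""
    followed = {dst for src, dst in edges if src == user_id}   # graph hop
    result = []
    for d in docs:                                             # document scan
        if d["user"] in followed:
            result.append({"buyer": kv[d["user"]],             # key-value lookup
                           "order_id": d["order_id"],
                           "total": d["total"]})
    return result

print(orders_of_followed("u1"))
```

In a real MMDBMS, the graph traversal, document filter, and key-value lookup would be expressed in a single query statement rather than hand-written Python, but the shape of the work is the same.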


Portland 2019 Analytics Symposium Video: Mark Madsen

By Mark Madsen, Apr 17, 2019

Available to Research & Advisory Network Clients Only

The Black Box: Interpretability, Reproducibility, and Responsibility

Historically, a model produced a result that was interpreted by a person who made a decision. In recent years, as the amount of data and number of decisions have grown, agency has been taken from humans and given to machines, which make decisions in a black box. Black boxes raise issues around explainability (or interpretability), being able to explain how a decision was made, and reproducibility, being able to use the same data and model to make an identical decision.

The reality is that explaining complex decisions is extremely difficult, and may not be necessary. Reproducing decisions is also very challenging, as data, tools, software, models, and environments change, and any single change can have a ripple effect that changes everything. The real issues are trust, reliability, and repeatability, particularly in high-stakes decisions. Building trust starts with IT policies, governance, and infrastructure that preserve history and allow decisions to be understood and reproduced. This is the key to gaining trust and scaling analytics.
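The point about preserving history so that decisions can be reproduced later can be sketched in a few lines. The `log_decision` and `is_reproducible` helpers below are hypothetical illustrations of the idea, not a prescribed design: each logged entry records a model version, a fingerprint of the inputs, and the output, which is the minimum needed to replay a decision and check it.

```python
import hashlib
import json

def fingerprint(obj):
    """Stable hash of any JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

decision_log = []

def log_decision(model_version, inputs, decide):
    """Run a decision function and record everything needed to replay it."""
    output = decide(inputs)
    decision_log.append({
        "model_version": model_version,
        "input_hash": fingerprint(inputs),
        "output": output,
    })
    return output

def is_reproducible(entry, inputs, decide):
    """Replay a logged decision: same inputs and same model => same output?"""
    return (fingerprint(inputs) == entry["input_hash"]
            and decide(inputs) == entry["output"])

# Example: a trivial credit-approval rule standing in for a model
def approve(x):
    return x["score"] >= 600

applicant = {"score": 640}
log_decision("v1.0", applicant, approve)
print(is_reproducible(decision_log[0], applicant, approve))
```

If the data or the model changes between the original decision and the replay, the check fails, which is exactly the signal the governance infrastructure described above is meant to surface.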


Inquiry Response: Moving Toward Real-Time Responsiveness

By IIA Expert, Sep 24, 2018

Available to Research & Advisory Network Clients Only

Inquiry:

We’re in the digital entertainment industry, which requires high responsiveness to feedback so that we can continually improve the user experience. What investments can we make that will scale, particularly in terms of supporting real-time analytics?


Sarmila Basu’s team of data scientists is using machine learning and modeling to save Microsoft millions of dollars on heating, cooling, and other facilities maintenance costs.


Understanding Power in the Digital Economy

By Geoffrey Moore, May 09, 2017

We are all stakeholders in the economic systems within which we live and work, and the better we can understand their dynamics, the more likely we are to navigate them successfully. For the most developed economies of today, this means understanding the transition from an industrial to a digital economy, and specifically, how economic power is migrating from familiar to unfamiliar sites.


IIA 2017 Spring Symposium Event Summary

By Jack Phillips, Apr 13, 2017

Available to Research & Advisory Network Clients Only

IIA hosted its first client-only Symposium of 2017 on March 14 at the VMware campus in Palo Alto, CA. Over 100 of IIA's research clients gathered for the Symposium, which featured five keynotes and two panel discussions. Given the location in the heart of Silicon Valley, the theme of the Spring Symposium was innovation, disruption, and the growing role of technology in shaping how analytics and data management are executed inside enterprises today.


Video: Innovation, Disruption, and Enterprise Analytics

By IIA Expert, Apr 13, 2017

Available to Research & Advisory Network Clients Only

2017 Analytics Symposium - Silicon Valley

This presentation addresses how enterprises of all sizes can adopt a "start-up mentality" to transform their organizations and the industry. Featuring author and thought leader Geoffrey Moore.
