
Dow Chemical Co.: Big Data In Manufacturing



R. Chandrasekhar wrote this case under the supervision of Professors Mustapha Cheikh-Ammar, Nicole Haggerty, and Darren Meister solely to provide material for class discussion. The authors do not intend to illustrate either effective or ineffective handling of a managerial situation. The authors may have disguised certain names and other identifying information to protect confidentiality.

This publication may not be transmitted, photocopied, digitized, or otherwise reproduced in any form or by any means without the
permission of the copyright holder. Reproduction of this material is not covered under authorization by any reproduction rights
organization. To order copies or request permission to reproduce materials, contact Ivey Publishing, Ivey Business School, Western
University, London, Ontario, Canada, N6G 0N1; (t) XXXXXXXXXX; (e) XXXXXXXXXX.

Copyright © 2017, Richard Ivey School of Business Foundation Version: XXXXXXXXXX

It was September 2012. At his office in Texas City in the United States, Lloyd Colegrove, data services
director at the Dow Chemical Company (Dow), was reviewing the results of a pilot study initiated by his
team in one of the company’s U.S. manufacturing plants. The six-month study consisted of testing a data
analytics module being used at the plant’s research and development (R&D) lab for its applicability to the
shop floor. Attached to a laboratory information management system (LIMS), the module collected and
analyzed data, in real time, from instruments tracking the quality parameters of finished goods at the
plant, and acted upon them to maintain high-quality yields.

The objective of the pilot study was to test whether the basic structure of the LIMS could be
supplemented and extended, plant-wide, into a larger system known in the burgeoning analytics industry
as enterprise manufacturing intelligence (EMI). EMI was both a management practice and a software tool
for manufacturing. It contained more sophisticated analytics than the LIMS and provided easier and faster
aggregation of different data types. It also incorporated visualization tools to summarize key insights and
post them on different dashboards for plant engineers to monitor and act upon. The possibility existed to
scale up EMI company-wide, in all 197 of Dow’s manufacturing plants across 36 countries.
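As a rough illustration of the aggregation described above, the sketch below merges readings from two hypothetical source systems (a LIMS feed and shop-floor sensors) into per-plant, per-metric averages of the kind an EMI dashboard might display. All plant, metric, and field names are invented for illustration; they are not Dow's actual schema or data.

```python
from collections import defaultdict

# Hypothetical readings from two source systems (a LIMS feed and
# shop-floor sensors); plant and metric names are invented, not Dow's.
lims_records = [
    {"plant": "Texas City", "metric": "purity_pct", "value": 99.2},
    {"plant": "Texas City", "metric": "purity_pct", "value": 98.7},
]
sensor_records = [
    {"plant": "Texas City", "metric": "reactor_temp_c", "value": 212.4},
    {"plant": "Freeport", "metric": "reactor_temp_c", "value": 208.9},
]

def dashboard_summary(*sources):
    """Aggregate records from heterogeneous sources into per-plant,
    per-metric averages, the kind of rollup a dashboard would post."""
    totals = defaultdict(lambda: [0.0, 0])
    for source in sources:
        for record in source:
            key = (record["plant"], record["metric"])
            totals[key][0] += record["value"]
            totals[key][1] += 1
    return {key: round(total / count, 2)
            for key, (total, count) in totals.items()}

print(dashboard_summary(lims_records, sensor_records))
```

A real EMI deployment would add visualization and alerting on top of such rollups; the point here is only the easy aggregation of different data types into one view.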

However, Colegrove had several issues on his mind:

The pilot [study] shows that plant engineers are working for the data; the data is not working for
them. There is clearly an opportunity to reverse the trend through EMI. But the opportunity has
also opened at least three dilemmas for me. How do we access data at the points of origin, [in]
real time? Can we gain user acceptance of the proposed EMI and, if so, how? What are the
metrics with which we could measure the return on investment on EMI?


The chemical industry often served as a bellwether for the state of an economy, since its products were
consumed at early stages in the supply chains of user industries. Its business consisted of processing raw
materials to produce chemicals used, in turn, as raw materials by other industries. The business was
cyclical, relying on basic commodities such as oil and gas, and volatile.

For the exclusive use of P. Saunders, 2020.
This document is authorized for use only by Paula Saunders in ISM 6026 Summer 2020 OMBA Cases taught by JAHYUN GOO, Florida Atlantic University from May 2020 to Aug 2020.
Page 2 9B17E014

Of the 20 largest chemical companies operating 25 years ago, for example, only eight remained in operation in 2012. The rest did not survive for three reasons: they were not making incremental and necessary changes to stay competitive; they were not ensuring a regular pipeline of new products; and they were not investing in the regular generation of patents covering proprietary manufacturing processes.

The global chemical industry was witnessing a churn in business models. By 2012, three
models were evident worldwide. The first was characterized by ownership of resources such as feedstock
(as the raw materials serving as inputs for the chemical industry were known). The companies following
this model focused on securing low-cost positions through economies of scale since they used a large
asset base to process largely commodity-like chemical products such as petrochemicals. The second
model was characterized by niche positioning. The companies following this model were leaders in
specific technologies and sought to protect their intellectual property through quality, innovation, and
strong relationships with customers that purchased their niche products. The companies following the
third model, such as Dow, were characterized as solutions providers. They understood end-to-end value
streams in different user industries, developed strategic partnerships with customers to drive innovation,
and responded to market changes faster than their industry peers.1 This space was getting increasingly
competitive as oil producers, for example, sought to generate new revenue streams by moving into the
space that companies such as Dow traditionally held.

More recently, some American companies, including Dow, were creating new production capacities in the
United States because of the availability of low-cost shale gas on which to run their plants. Together, these
companies committed up to US$110 billion2 of direct investment in advanced manufacturing factories in the
United States.3 The North American chemical industry was also witness to the phenomenon of reshoring,
resulting from low energy costs in the United States. In a telling example of the cost advantage being
offered in that country, Methanex Corporation, a Canadian company and the world’s largest producer of
methanol, was planning to close a plant in Chile and relocate to Louisiana in 2012.4

Process variations were a major characteristic of the chemical industry. The amount of output per unit of
input, known as yield, also often varied for no immediately apparent reason. Both of these inconsistencies
had effects on product profitability. To ensure both quality and yield, it was common for a chemical plant
to use statistical process control systems. These systems collected data from sensors on equipment and
measurement instruments, such as meters and gauges embedded within the process, to regularly monitor
the statistical performance—and variations from “normal”—of equipment, processes, and materials.
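A minimal sketch of the statistical process control idea described above, assuming a simple individuals chart with control limits at the mean plus or minus three standard deviations; the sensor readings are invented for illustration:

```python
import statistics

def control_limits(samples, sigma=3):
    """Upper/lower control limits (mean +/- sigma standard deviations)
    for an individuals control chart, estimated from baseline samples."""
    mean = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    return mean - sigma * sd, mean + sigma * sd

def out_of_control(samples, reading, sigma=3):
    """Flag a new reading outside the limits, i.e., a variation from "normal"."""
    lcl, ucl = control_limits(samples, sigma)
    return reading < lcl or reading > ucl

# Invented baseline temperature readings from one process sensor
baseline = [101.2, 99.8, 100.5, 100.1, 99.6, 100.9, 100.3, 99.9]
print(out_of_control(baseline, 100.4))  # prints False: within limits
print(out_of_control(baseline, 104.0))  # prints True: out of control
```

Production SPC systems use more elaborate run rules than this single three-sigma test, but the principle of flagging deviations from a statistical baseline is the same.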

Depending on the product, over 200 measures and variances were being monitored during the chemical
transformation process by sensors, devices, and instruments on the shop floor. For example, gas
chromatography instruments measured the various chemical elements in a sample as it passed through a
process. The chemical inputs or outputs from a process had to be within certain parts per million to
conform to normal specifications. Temperature was another type of measure. To improve yields, chemical
companies applied management concepts such as lean manufacturing and Six Sigma. These companies
were always looking for granular approaches to diagnose and correct process flaws.
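The parts-per-million conformance check mentioned above can be sketched as follows; the analyte names and spec limits are hypothetical, not actual product specifications:

```python
# Hypothetical spec limits (ppm) for trace impurities in a product
# stream; names and values are illustrative only.
SPEC_LIMITS_PPM = {"benzene": 50.0, "toluene": 120.0, "water": 300.0}

def conformance(measured_ppm):
    """Return the analytes whose measured concentration exceeds the
    allowed parts-per-million limit for the product."""
    return {name: value
            for name, value in measured_ppm.items()
            if value > SPEC_LIMITS_PPM.get(name, float("inf"))}

# One chromatography-style sample: toluene is over its limit
sample = {"benzene": 42.1, "toluene": 131.7, "water": 250.0}
print(conformance(sample))  # prints {'toluene': 131.7}
```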

1 Deloitte, The Talent Imperative in the Global Chemical Industry, September 2015, accessed June 1, 2016
2 All currency amounts in the case are in U.S. dollars unless otherwise specified.
3 Andrew N. Liveris, “Keynote – A Revolution by Design: Building an Advanced Manufacturing Economy,” MIT Industrial
Liaison Program, September 20, 2013, accessed December 1, 2016, http:
4 Danielle Levy, “Is it Time to Play the ‘Nearshoring’ Boom?,” CityWire, August 21, 2013, accessed December 1, 2016,


The term “big data analytics” referred to the analysis of data originating from different sources to identify
patterns and trends, according to which managers could make informed, rather than intuitive, decisions.
Consulting firm McKinsey & Company defined big data as “data sets whose size is beyond the ability of
typical database software tools to capture, store, manage, and analyze.”5 Research company Gartner
defined it as “high-volume, high-velocity and high-variety information assets that demand cost-effective,
innovative forms of information processing for enhanced insight and decision making.”6

Big data analytics differed from conventional analytical approaches such as data warehousing and
business intelligence in four ways: volume, velocity, variety, and veracity. The volume of data processed
in typical data warehousing and business intelligence installations was measured in petabytes (i.e., 1,000
terabytes), whereas big data analytics processes dealt with volumes up to geobytes.7 By way of
comparison, one petabyte was the equivalent of text requiring a storage capacity of 20 million filing
cabinets of the kind used in a typical office. The velocity of big data installations supported real-time,
actionable processes. A retailer, for example, could use geolocation data from mobile phones to ascertain
how many customers were in the store’s parking lot on a given day, making it feasible to estimate the
sales for that day even before the retailer had recorded those sales. The variety of sources from which big
data could be mined included both structured data, such as database tables, and unstructured data
originating from diverse sources such as mobile phones, social networks, sensors, video archives, radio-
frequency identification (RFID) chips, and global positioning systems. Finally, the veracity of data
referred to the inherent discrepancies in the data collected, bringing the reliability and trustworthiness of
the data itself into question.

Any data that could be captured in a digital form had the potential to be analyzed using big data tools. The
key algorithm in big data was known as MapReduce. The algorithm was part of the back-end analytics
platform developed by Google in 2000. Having by then indexed 20 billion web pages amounting to 400
terabytes, Google faced three major challenges: the data were largely unstructured, the volume of data
was huge, and the overall size of the digital universe was doubling every two years.8 Google engineers
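The MapReduce pattern named above works in two steps: a map step emits key-value pairs from each input, and a reduce step aggregates the pairs that share a key. The word-count example below is the classic textbook illustration of the pattern, not Google's production implementation:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map step: emit a (word, 1) pair for every word in one document."""
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    """Reduce step: sum the counts of all pairs sharing the same key."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

documents = ["big data", "big volume big velocity"]
pairs = chain.from_iterable(map_phase(doc) for doc in documents)
print(reduce_phase(pairs))  # {'big': 3, 'data': 1, 'volume': 1, 'velocity': 1}
```

The appeal for unstructured data at scale is that both phases parallelize: map tasks run independently per document, and reduce tasks run independently per key.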
Moumita answered on Jul 14, 2021
Table of contents
Answer to Question 1
Answer to Question 2
Answer to Question 3
Answer to Question 4
Reference
Answer to Question 1
Lloyd Colegrove, the director of data services at the Dow Chemical Company, reviewed the results of the pilot study conducted by his team. In this case, a data analytics module was being tested to see how data from the research and development lab could be used on the shop floor in real time (Swan, M. 2018). The comment that the engineers are working for the data, and the data is not working for them in return, simply means that people are working to collect, store, and analyse the data, yet those data are not being of use to the personnel or the company. Colegrove also saw the option of optimising the EMI system so that data from the instruments tracking quality parameters could be analysed in real time and acted upon to maintain high quality.
Answer to Question 2
The EMI system is software which brings manufacturing-related information or data together from various sources. The main purpose of the system is to use those data for reporting, analysing, and producing visual summaries of the data represented...
