
I need a two-page executive summary for the attached case study.

Responsible A.I.: Tackling Tech's Largest Corporate Governance Challenges

Date: October 1, 2022
KELLIE MCELHANEY
GENEVIEVE SMITH
ISHITA RUSTAGI
OLAF GROTH

Responsible A.I.: Tackling Tech’s Largest Corporate Governance Challenges

AI has the potential to improve billions of lives… By ensuring it is developed responsibly in a way that benefits everyone, we can inspire future generations to believe in the power of technology.
—SUNDAR PICHAI
CEO, ALPHABET INC., GOOGLE1
The American dream for one Charlotte, North Carolina, family was a new four-bedroom home with a lawn, 2,700 square feet of living space, and a neighborhood pool for $375,000. Crystal Marie and Eskias McDaniels saved more than they needed for a down payment, had very good credit, and easily prequalified for a mortgage. However, on the August 2019 day when they were scheduled to sign the loan documents, their loan officer told them the deal wouldn’t close. He had submitted the application at least 15 times and noted that each one got “rejected by an algorithm.” Crystal Marie said that as a Black couple, “it would be really naive to not consider that race played a role in the process.” An investigation found that lenders in 2019—often using algorithms—were more likely to deny loans to people of color than to similar White applicants, even when controlling for the financial factors the mortgage industry uses to explain racial disparities in lending.2
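
The case does not reproduce the investigation's methodology or code, but as a rough, hypothetical sketch of what "controlling for financial factors" can mean in practice, the snippet below fits a logistic regression of loan denial on a race indicator alongside income, credit score, and loan-to-value covariates, all on synthetic data. Every variable name and number is invented for illustration.

# Hypothetical illustration only; not from the case or the cited investigation.
# Logistic regression of loan denial on an applicant-of-color indicator while
# conditioning on ("controlling for") financial covariates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicant data (all values invented for this sketch)
income = rng.normal(80_000, 25_000, n)      # annual income, USD
credit = rng.normal(700, 50, n)             # credit score
ltv = rng.uniform(0.5, 1.0, n)              # loan-to-value ratio
of_color = rng.integers(0, 2, n)            # 1 = applicant of color

# Simulated denial process that contains a disparity unexplained by finances
logit = (-2.0
         - 0.00001 * (income - 80_000)
         - 0.01 * (credit - 700)
         + 2.0 * (ltv - 0.75)
         + 0.6 * of_color)
denied = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Fit denial ~ financial covariates + race indicator; a positive, significant
# coefficient on "of_color" after conditioning on the financial covariates is
# the kind of residual disparity the investigation describes.
X = sm.add_constant(np.column_stack([income, credit, ltv, of_color]))
result = sm.Logit(denied, X).fit(disp=False)
print(result.summary(xname=["const", "income", "credit", "ltv", "of_color"]))

By construction, the race indicator in this toy dataset keeps a positive coefficient even after the financial controls are included, which is the kind of pattern the investigation describes for real 2019 lending data.
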
1 Pichai, S. XXXXXXXXXX. Why Google thinks we need to regulate AI. Financial Times. Retrieved from https://www.ft.com/content/3467659a-386d-11ea-ac3c-f68c10993b04.
2 Martinez, E., & Kirchner, L. (2021, August 25). The secret bias hiding in mortgage-approval algorithms. ABC News. Retrieved from https://abcnews.go.com/Business/wireStory/secret-bias-hidden-mortgage-approval-algorithms XXXXXXXXXX.
Associate Director of the Berkeley Haas Center for Equity, Gender & Leadership (EGAL), Genevieve Smith, and EGAL Analyst, Ishita Rustagi, prepared this case study with EGAL’s Founding Director, Kellie McElhaney, and Professor Olaf Groth. We would like to give special thanks to the following individuals who provided critical insights at Google: Melissa Davison, Madeleine Elish, Jen Gennai, and Reena Jana.
Copyright © 2022 by The Regents of the University of California. All rights reserved. No part of this publication may be reproduced, stored, or transmitted in any form or by any means without the express written permission of the Berkeley Haas Case Series.

As of 2022, AI technology using machine learning is being implemented across industries and business functions. Such stories about bias and discrimination being perpetuated by these tools—whether in finance, healthcare, policing, hiring, or elsewhere—are common. Yet development and adoption of AI systems have continued to increase due to rapid technological advancements, the promise of increased efficiency and productivity, and the immense profit potential of AI technologies.

In 2017, the CEO of Google, Sundar Pichai, announced the company’s key conceptual shift from “mobile first” to “AI first.”3 This marked a major inflection point for Google. As he said, AI “touches every single one of our main projects, ranging from Search to Photos to Ads… everything we do!”4 Pichai saw the challenges that companies faced in ensuring ethical use of AI technologies and recognized the importance of using AI in responsible ways. Thus, he set a new goal of defining an ethical AI charter for the company.

Shortly after, Google—alongside other leading tech companies—adopted responsible AI principles to guide its development and use of the technology, and invested millions in teams, resources, and tools to operationalize those principles. Implementation would be incredibly challenging. What do responsible AI innovation and corporate governance look like? How can business leaders at Google and elsewhere address the challenges and tradeoffs that exist to ensure AI technologies are trusted and responsible? Specifically: How can Google Cloud’s Responsible AI team assess applying time-to-market objectives, ethics safeguards, and multi-stakeholder processes to a new lending tool?
Background
Rapid Development & the Promise of AI
Companies around the world have increasingly developed AI technologies: in a 2021 survey of US companies, 86% of respondents said AI would be a “mainstream technology” at their company that year, with AI expected to contribute up to US$15.7 trillion to the global economy by XXXXXXXXXX. When companies deploy AI technologies, it is often machine learning. Machine learning systems—made up of a series of algorithms—take in and learn from massive amounts of data to find patterns and make predictions.7
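
The case keeps machine learning at this conceptual level. As a minimal, hedged sketch of the idea (a system inferring a pattern from historical data and applying it to new cases), the snippet below trains a small classifier on synthetic data; the model, features, and numbers are invented for illustration and do not reflect Google's actual systems.

# Minimal sketch of "learning patterns from data to make predictions".
# All data is synthetic; nothing here reflects Google's actual systems.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic "historical" records: two numeric features and a binary outcome
X = rng.normal(size=(1_000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1_000) > 0).astype(int)

# Learn a decision rule (a pattern) from past examples...
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# ...then apply the learned pattern to new, unseen cases
print("held-out accuracy:", model.score(X_test, y_test))
print("prediction for a new case:", model.predict([[0.2, -1.0]]))

The same learn-then-predict loop underlies the job-screening, advertising, lending, and triage examples in the next paragraph, only with far larger datasets and far higher-stakes outcomes.
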
In 2022, AI that uses machine learning impacts most aspects of many people’s work and personal lives. It can be used in daily tasks from travel navigation to weather forecasts. It can also be used to decide, for example, who receives an interview for a job; which products are advertised to which consumers; who receives a loan; which communities are designated as having high potential for crime; and which COVID-19 patients in hospitals receive life-saving resources.
3 Zerega, B. (2017, May 19). AI Weekly: Google shifts from mobile-first to AI-first world. VentureBeat. Retrieved from https://venturebeat.com/2017/05/18/ai-weekly-google-shifts-from-mobile-first-to-ai-first-world.
4 Chainey, R. XXXXXXXXXX. Google co-founder Sergey Brin: I didn’t see AI coming. World Economic Forum. Retrieved from https://www.weforum.org/agenda/2017/01/google-sergey-brin-i-didn-t-see-ai-coming/.
5 XXXXXXXXXX. AI predictions 2021. PwC. Retrieved from https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html.
6 XXXXXXXXXX. Sizing the prize: What’s the real value of AI for your business and how can you capitalise? PwC. Retrieved from https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf.
7 XXXXXXXXXX. Machine learning, explained. MIT Sloan Management Review. Retrieved from https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained.

AI can help people make decisions more efficiently and cost-effectively, while also promoting higher productivity and growth in the economy. Use of AI in predictions and decision making can also reduce human subjectivity and open new possibilities and opportunities. However, AI can also embed human biases, produce discriminatory outcomes at scale, and pose immense risk to individuals and society.8

Beyond social benefits and risks, there are clear business reasons to address ethical concerns when operationalizing AI principles. A 2018 Deloitte survey found that 32% of AI-aware executives ranked ethical risks of AI as a top-three AI-related concern.9 Microsoft flagged reputational harm or liability due to biased AI systems as a risk to its business in a 2020 report to the US Securities and Exchange Commission.10 Meanwhile, employees have spoken out on various ethical concerns related to AI research and development in the form of walkouts, resignations, and new unions. Responsible AI is important for more than just large companies: venture capitalists have spurred start-ups to enhance their approach to responsible and ethical AI.11

Businesses can struggle to generate ROI from their AI projects and pilots.12 However, a global 2021 McKinsey survey found that AI’s impact on the bottom line is growing: 27% of respondents reported that at least 5% of earnings before interest and taxes (EBIT) is attributable to AI (up from 22% of respondents in XXXXXXXXXX). Regardless…

Solution

Ayan answered on May 04 2023
Executive Summary
The Responsible A.I. case study from the Berkeley Haas Case Series highlights the importance of incorporating ethical considerations into the development and deployment of artificial intelligence (AI) systems. It acknowledges that while AI has the potential to improve billions of lives, it also poses a number of difficulties, especially in the area of corporate governance. The case study demonstrates the proactive measures Alphabet Inc., the parent company of Google, has taken to overcome the difficulties posed...