Why AI is Reshaping our Conversation about GDP
Posted: 13 December 2017 | By Raja Chatila and Virginia Dignum
You can’t click on a news site or open a newspaper these days without finding an article about the role of ethics in Artificial Intelligence (AI). While this is an encouraging trend, the skeptic might respond that of course manufacturers and governments will pay lip service to ethical AI. What’s the alternative? Advocating evil AI? From a business perspective, however, the issue is not whether society should prioritize ethical AI, but whether there is a Return On Investment (ROI) for doing so.
In a GDP (Gross Domestic Product)-driven world where exponential growth is the metric, and hence the top priority, for societal progress, achieving ROI for AI is pretty simple: automate every task in your organization. “Move fast” to launch any AI-enabled technology as quickly as possible to maximize shareholder profits. However, neither the environment nor human well-being is sustainable in this GDP scenario. Growth is necessarily finite when it comes to the planet’s resources, and humans’ sense of purpose is deeply threatened.
To best honor the remarkable results enabled by AI technologies, society must move beyond measuring worth solely by GDP and start also measuring well-being, for genuine progress in the algorithmic age. Putting human well-being at the core of development, instead of the accelerated generation of maximized profits, provides a realistic goal for the design of AI, as well as a concrete means to measure the impact of these technologies on all stakeholders and society.
Multiple metrics that measure well-being are already in use, including indicators such as the United Nations’ Human Development Index (HDI) and the Genuine Progress Indicator. Business leaders and governments alike have been working for years to implement a Triple Bottom Line (TBL) mindset that honors societal and environmental issues along with financial criteria. There are also now more than 2,100 Certified B Corps (for-profit companies certified to meet rigorous standards of social and environmental performance, accountability and transparency). Organizations like Accenture and PwC are already guiding companies on aligning with the UN’s Sustainable Development Goals, better orienting the corporate sector toward policy makers around the world.
When using established metrics of well-being in conjunction with GDP for development, it’s critical to recognize that human agency, reason, and emotion make us unique. Applied ethics and values-driven design help us understand and honor these distinct human attributes, and responsible methodologies ensure we develop these technologies safely today. That is to say, well-being metrics measure how successfully AI products, services and systems, once released, demonstrably increase sustainable, holistic progress and benefit all humankind.
Let us repeat: At all levels and in all domains, businesses and governments are, or will soon be, applying AI solutions to a myriad of products and services. It is fundamental to realize that people are moving from passively adopting or rejecting technology to being at the forefront of the process, demanding and reflecting on the potential results and reach of AI. The success of AI is therefore no longer a matter of financial profit alone but of how it connects directly to human well-being. Putting human well-being at the core of development, instead of GDP alone, provides not only a sure recipe for innovation but also a realistic goal and a concrete means to measure the impact of AI.
While technology can be legal, profitable, and safe (in a narrowly defined way), it can still have dramatic negative consequences on people’s mental health, emotions, sense of themselves, autonomy, dignity, and ability to achieve their goals. These attributes are key components of what makes us human and of what is being measured via well-being indicators today.
For this reason, ethical methodologies that ensure values-driven design must be prioritized at the very core of technological development, so that AI protects our dignity by design, not as an afterthought.
Developments in AI will continue to contribute to a needed redefinition of fundamental human values including our current understanding of work, wealth, health and sustainability. By prioritizing human well-being as a success metric for the evolution of these values we’ll evolve society in a responsible way while also providing specific guidance for development.
True responsibility in AI is not just about how we design these technologies but how we define their success. Our well-being and our future depend on it. We’re worth the investment.
Raja Chatila, IEEE Fellow, is Executive Committee Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.
Virginia Dignum is Executive Committee member of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.