Artificial intelligence will probably be the foundation of many astonishing technologies over the next few decades. Over the last few years, cognitive computing technology—often regarded as a significant stepping stone towards AI—has made its way from the laboratory into our daily lives. It analyzes our photos on Facebook and in Google services and provides contextual responses in Windows 10.
Image courtesy of Pete Linforth
AI Concerns: Rational or Irrational?
Now, AI technology has reached a crucial juncture in its development. While the open letter from Elon Musk, Bill Gates, and Stephen Hawking discusses serious concerns about the dangers of uncontrolled AI, a study conducted at Stanford University investigates the effect of AI on eight different sectors and concludes that there is no reason to worry about AI being an imminent threat to humankind.
The term "AI" inspires both rational and premature fears. The rational fears concern more immediate issues: digital intelligence leading to economic dislocation, loss of jobs, biased systems, and more.
For example, the decision-making of AI is still open to question. The criteria chosen by programmers, along with bias in the underlying data, can produce an AI with skewed decision-making, which can severely affect areas like transportation and healthcare. Biased programming leads to biased data analysis, and consequently to a biased AI. Such a system could impact everything, from banal matters like which messages end up in your spam folder all the way to how an AI treats racial minorities through its visual perception and facial recognition systems.
However, the premature fears inspired by sci-fi scenarios (such as a group of intelligent machines taking control of humanity) seem unlikely to materialize in the next few decades.
The Partnership on Artificial Intelligence to Benefit People and Society
Addressing these concerns, several organizations, such as The Future of Life Institute and the OpenAI project, have been investigating ways of developing, controlling, and even policing AI technologies. And now, tech giants including Amazon, Facebook, Google's DeepMind, Microsoft, and IBM—companies deeply involved in the development of AI- and machine learning-based products—are forming a consortium called the Partnership on Artificial Intelligence to Benefit People and Society, shortened to the "Partnership on AI".
This partnership brings together companies that have been in constant competition with one another to incorporate AI and build the best products. Now, in a very practical sense, these same companies will regularly discuss AI advancements with each other.
A recent announcement revealed that Apple has agreed to become one of the founding companies in the partnership. Apple has been deeply involved in cognitive computing technology through its personal assistants, image recognition, and voice control solutions. Along with its extensive research capabilities, Apple also brings a huge amount of public interest, likely increasing the amount of attention that the organization will receive from consumers.
Goals and Leadership in AI Development
In funding the project, the members of this partnership have agreed to develop and share AI technology, set its societal and ethical best practices, advance the public understanding of AI, and publish research results on ethics, inclusivity, privacy, and more.
Best practices, which can provide a framework for rigorous safety testing, are of paramount importance for AI technology because the companies involved are increasingly asking people to put their lives in the hands of AI-controlled machines.
Such efforts may also one day flag certain AI-related practices as dangerous. They could help in the creation of rules which, even if not enforced, could help people assess products with better insight.
Eric Horvitz, a managing director at Microsoft's research division, questions the way some companies are testing their AI-based self-driving technologies in the real world. He adds that there might be unknown unknowns out there and that the Partnership on AI is trying to develop best practices for testing AI before its full deployment.
However, he is positive that, in the future, AI will increase highway safety dramatically and help avoid human driving errors which kill 100 people per day on highways.
Any Ulterior Motives?
The founding members of this partnership are a close-knit group of scientists who regularly gather at conferences and tech-focused meetings around the globe. Some find it reassuring that the researchers and scientists of these companies, not their product managers, have come together to form this partnership.
Yann LeCun, the head of Facebook Artificial Intelligence Research (FAIR) group, notes that it is very important that people trust researchers in advancing technology with utmost consideration for human values.
However, it is important to remember that the partnership is being formed by companies that ultimately want to sell us AI-related products; these are not disinterested non-profits. AI products and services may help our batteries last longer and make our snapshots look better, but companies like those in the partnership will shape the evolution of the technology, and that deserves scrutiny.
Some of the partners associated with the organization. Image courtesy of the Partnership on AI
That is probably why Greg Brockman, the co-founder and CTO of OpenAI (an Elon Musk-backed AI research lab), expects the partnership to include non-profits as first-class members.
The partnership evidently aims to bring in academics, companies, non-profits, and specialists in both policy and ethics over time. Even in these very early days, representatives from the ACLU, the MacArthur Foundation, and academic institutions such as UC Berkeley already sit on the Board of Trustees.
In addition, the partnership has secured the support of several organizations, including the Association for the Advancement of Artificial Intelligence (AAAI), the Allen Institute for Artificial Intelligence (AI2), and OpenAI.
Brockman comments that he is happy to see the launch of the group and believes that coordination in the industry is good for everyone.
This partnership is not without controversy. However, unchecked fears about AI threats could push development of the technology underground, further compromising democratic values such as freedom, equality, and transparency. Viewed in that light, this cooperation appears more positive: it could offer a responsible and collaborative path forward for engineers and AI developers around the world.
The first board meeting for the partnership will occur on February 3rd. More updates on the direction of this important step in AI development are expected soon thereafter.