By Ben Hartwig
Artificial intelligence (AI) refers to systems designed and programmed to work or act like humans. Such systems solve complex problems, learn, and improve themselves over time. At the rate the technology is developing, experts believe that AI will eventually mimic human behavior and perform tasks as well as a person.
The positive applications for AI in every significant aspect of human life are beyond measure. The technology is already being deployed in medicine and used extensively in consumer electronics. Many modern automobiles are even fitted with AI to assist the driver with parking, safety, and adaptive cruise control. Despite all the good AI brings to the table, however, there is still the looming threat of its disruptive potential in everyday human life. The writing is on the wall, and regulators must focus on human rights, the rule of law, and AI ethics.
The Effect of Artificial Intelligence on Human Rights
The use of AI and its underlying technologies can affect life in a wide range of areas, including healthcare, education, law enforcement, work, and social responsibility. Several issues need consideration because AI has the potential to violate human rights and undermine the laws that protect them. Combining big data with AI can threaten the right to privacy by enabling increased surveillance and monitoring. Groups with access to cutting-edge AI technology can search public records and other available data far faster than any human could; locating almost anyone, anywhere in the world, can take only seconds.
Malicious bots produce fake news and other content faster than any human writer could, and they have been causing havoc online. The same AI-powered algorithms that social media platforms use to promote ordinary content have helped spread this disinformation like wildfire. AI can also work against equality, the prohibition of discrimination, and access to other fundamental rights, such as political and personal freedom. AI is already predicting marketing trends based on collected user data, a practice many people find unethical.
One example often cited is algorithmic bias. Because the historical data used to train an algorithm inherently reflects biases against certain sectors of society, the resulting system reproduces that human bias in its output. The compromised data becomes perpetuated and embedded in the system, and the algorithm consistently disadvantages a specific group of people.
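To see how this happens mechanically, consider a minimal sketch (the scenario, group names, and approval rates below are hypothetical, invented purely for illustration). If historical decision records approved one group far more often than another, even a naive model that simply learns each group's historical approval rate will reproduce that disparity exactly:

```python
import random

random.seed(0)

# Hypothetical "historical" decision records: group A was approved far more
# often than group B. The bias lives in the labels themselves.
history = [("A", 1) if random.random() < 0.8 else ("A", 0) for _ in range(500)]
history += [("B", 1) if random.random() < 0.3 else ("B", 0) for _ in range(500)]

def train(data):
    """A naive 'model' that just learns each group's historical approval rate."""
    rates = {}
    for group in {g for g, _ in data}:
        outcomes = [y for g, y in data if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

model = train(history)

# The disparity embedded in the historical labels is carried straight into
# the model's learned scores: group A is scored far higher than group B.
print(model["A"], model["B"])
```

Nothing in the training step is explicitly discriminatory; the model faithfully learns whatever pattern the data contains, which is exactly why biased historical data produces a biased system.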
The Ethics Needed for AI
Many people have voiced support for a more human-rights-based approach to AI regulation. The most significant step so far is the 2018 Toronto Declaration on protecting the rights to equality and non-discrimination in machine learning systems. The Toronto Declaration was touted as the “first step” toward making the human rights framework a foundational component of data ethics and AI.
However, the discussions up to this point have only hovered around “ethical” guidance, with little mention of any rights-based or legal framework. The geopolitics of AI has something to do with this, as competition between nations on the cusp of breakthroughs remains fierce. China, the United States, and the European Union are all vying for AI dominance, which makes devising a regulatory framework a real challenge. Regulators need to develop a framework that can control potential AI excesses without stifling innovation in the field.
The European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) has reaffirmed its support for an “ethical” framework for AI regulation. The AI HLEG recently published “Ethics Guidelines for Trustworthy AI,” aimed at guiding the AI community toward the responsible development and use of “trustworthy AI,” defined as AI that is robust, ethical, and lawful. Tech giants Microsoft, Google, and IBM are significant players in the AI industry, and all three have voluntarily published ethical principles for developing their versions of the technology.
Decisions about AI regulation and its impact on human rights must be made today because the consequences of not acting immediately will reverberate long into the future. Companies and governments must undertake due diligence across all AI industries with haste. Artificial intelligence is no longer confined to TV shows and digital assistants on your smartphone or in your home. AI has deeply changed even law enforcement, where agencies now use it for crime prevention. AI is already here. The technology is only going to get better, and regulation must start now.