
Why Reputation Is Hindering AI and Automation Adoption

By Tom Allen

Organizations with diverse services and operations are increasingly adopting technological advancements and artificial intelligence (AI) to leverage their utility. The pandemic has accelerated this rapid and seemingly inevitable shift toward automation.

However, opinions differ on the issues and challenges that come with such a sweeping approach to AI. Even with adoption gaining momentum during lockdown, many still fear the change, in large part because of the negative headlines a few organizations attracted when flaws in their AI systems were exposed.

The pandemic is reshaping the global economy, and cost-cutting to offset losses is a primary concern for every organization. AI adoption plans will therefore be re-evaluated. Yet, as the McKinsey Global AI Survey ("AI proves its worth, but few scale impact") reports, most companies adopting AI are still seeing benefits.

The current focus is on improving business efficiency and planning for the long term in case the virus persists longer than expected. Firms are analyzing potential infection areas, reducing on-site staff to meet social distancing norms, absorbing absences due to sickness, paying for sanitization, and more; all of these factors bear directly on production and service processes. Disruption of the production line concerns the majority of boardrooms.

Though artificial intelligence systems interpret data to make decisions and offer undeniable benefits, AI also carries a profound list of risks. These systems drive decisions, changes, and strategies across a company, and if the data fed into them is unclean and unfiltered, they can pose serious threats to the internal and external reputation of the organization.

Without a doubt, systems learn and adapt according to the data they are given. Each result is based on patterns in that data, and an AI system will learn those patterns and act accordingly. If an organization has a history of discrimination, social inequality, or the influence of a particular mindset, the data will exhibit it, and the AI system will carry the same patterns into future results, creating an invisible flaw that the organization must uncover and tackle. According to an IBM report, the number of biased AI systems and algorithms will only increase over the next five years.
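To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The groups, counts, and hiring outcomes are invented for illustration, not drawn from any real dataset; the point is simply that a model which learns frequency patterns from skewed historical data ends up reproducing that skew.

```python
# Hypothetical sketch: a pattern-learning "model" trained on skewed history.
# All records below are invented for illustration; no real hiring data is used.
from collections import Counter

# Historical outcomes: most past hires came from group "A" simply because
# most past applicants were from group "A".
history = (
    [("A", "hired")] * 80 + [("A", "rejected")] * 20 +
    [("B", "hired")] * 5  + [("B", "rejected")] * 15
)

# "Training": estimate P(hired | group) directly from the historical counts,
# which is effectively the pattern a data-driven system picks up.
counts = Counter(history)

def hire_rate(group):
    hired = counts[(group, "hired")]
    total = hired + counts[(group, "rejected")]
    return hired / total

for group in ("A", "B"):
    print(f"Learned hire rate for group {group}: {hire_rate(group):.2f}")

# Group A comes out around 0.80 and group B around 0.25: the model mirrors
# the historical imbalance and would rank new group-B applicants lower.
```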

The challenge, as Brian Kropp, Gartner's VP of Research, puts it, is that although AI can be a powerful tool to support decision-making, "in cases where AI systems base assumptions on patterns of historical data, there is a danger of bias."

To understand how this plays out, consider the AI-based recruitment system Amazon devised in early 2015, which aimed to automate the filtering and review of job applications. The system was trained on the previous 10 years of CVs, which came predominantly from male candidates. As a result, it learned to rate female applicants lower than male job seekers, and numerous attempts to correct the system failed because of the patterns it had already learned and the underlying issues in the data.

Amazon made negative headlines in the media for an AI that was biased against women in the workforce. The organization's reputation suffered as a result, and so did its standing with female customers.

Prominence and long-term business resilience rest on the value and goodwill of the brand. Loyal customers tend to buy a broader range of products and services from a company with a strong market reputation, and firms with a positive reputation attract more people, as customers associate reputation with trust and with the quality of a product or service.

Therefore, human intervention is crucial to the success of an AI system. Building systems that enable regular human oversight is essential to avoiding reputational hazards. AI and machine learning models are only as good as the data they are trained on, which is why regular human supervision is needed to understand what patterns the system is learning, as the sketch below illustrates. Strategizing and devising solutions on an ongoing basis will ease future complications.
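As one illustration of what that supervision might look like, the sketch below compares a model's selection rates across groups and flags large gaps for human review, in the spirit of the common four-fifths rule of thumb. The predict() function and the applicant records are placeholders invented for this example; a real review would pull live model outputs.

```python
# Hypothetical oversight check: compare selection rates across groups.
# predict() and the sample records stand in for a real model and real data.
def predict(applicant):
    # Placeholder scoring rule standing in for a trained model.
    return "hired" if applicant["score"] >= 0.7 else "rejected"

recent_applicants = [
    {"group": "A", "score": 0.9}, {"group": "A", "score": 0.8},
    {"group": "A", "score": 0.6}, {"group": "B", "score": 0.75},
    {"group": "B", "score": 0.5}, {"group": "B", "score": 0.4},
]

# Selection rate per group: share of applicants the model marks as "hired".
rates = {}
for group in ("A", "B"):
    members = [a for a in recent_applicants if a["group"] == group]
    hired = sum(1 for a in members if predict(a) == "hired")
    rates[group] = hired / len(members)

# Four-fifths rule of thumb: flag the model for human review if one group's
# selection rate falls below 80% of the highest group's rate.
highest = max(rates.values())
for group, rate in sorted(rates.items()):
    flag = " <- review" if rate < 0.8 * highest else ""
    print(f"Group {group}: selection rate {rate:.2f}{flag}")
```

Running this simple check on a schedule, and escalating flagged gaps to a human reviewer, is one lightweight way to keep watch on what the system is actually learning.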
