Conversica Survey Reveals Only 6% of Companies Have Policies for the Responsible Use of AI Despite 73% Recognizing the Importance of Clearly Established Guidelines

While governments scramble to regulate AI, business leaders must act proactively to get ahead of business-critical issues and risks.

Conversica, Inc., the leading provider of conversational AI solutions for enterprise revenue teams, today announced the results of its latest survey of business owners and leaders, which examines enterprise preparedness regarding AI Ethics and Corporate Responsibility.

While governing bodies around the world are still considering AI-specific regulations, leaving AI providers free to launch products and services of varying reliability, almost every organization is falling short of formulating well-defined, in-house principles and guidelines for its own responsible and ethical implementation of the technology. This must change: business leaders need to get ahead of potential risks and issues given the disruptive possibilities of AI.

Survey highlights

  • For organizations already adopting AI, a whopping 86% agree on the critical importance of having clearly established guidelines for the responsible use of AI; the percentage was 73% among all respondents.
  • However, only 6% of companies have policies in place for the responsible use of AI. And among organizations planning to adopt AI in the next 12 months, it’s even lower: 5% have policies.
  • One in five business leaders at companies that use AI have limited or no knowledge of their companies’ policies concerning critical AI issues, including security, transparency, accuracy, and ethics.
  • The main concerns for companies that have adopted AI are the lack of transparency (22%), false information (21%), and the accuracy of data models (20%).



Getting ahead of the issue: Survey details

The survey data shows that companies that already utilize AI or those with plans for its adoption within a year are more likely to acknowledge the importance of having clear ethical guidelines for the responsible use of the technology.

86% of business leaders at organizations already adopting AI say that having AI ethics policies in place is crucial, compared to 73% of the overall group. However, despite this greater recognition, only 5% of leaders at companies adopting AI say clear guidelines are currently in place.

One in five business leaders at companies that use AI have limited or no knowledge of their company’s policies concerning the critical issues associated with the use of AI, including security, transparency, accuracy, and ethics. Even more alarming, 36% claim to be only ‘somewhat familiar’ with these issues. This is a widening risk factor that businesses, especially those already using AI, will have to address in the near future, as they are likely already experiencing the challenges and opportunities this technology poses.

“From an enterprise perspective, these figures are concerning, especially considering the vast array of AI products and services expected to become available in the coming years and the potentially significant impact they will have on the future of business,” said Jim Kaskade, CEO of Conversica. “This could represent a problematic trend for companies that haven’t started planning to enforce responsible and ethical use of AI. Business leaders must get ahead of these issues now.”

According to respondents, the most challenging aspects of making informed decisions about the use of AI in their companies are the lack of resources for data security and transparency (43%) and the difficulty of finding a provider whose ethical standards align with those of the company (40%).

“The main elements business leaders should be looking for are the safe, brand-protective, and compliant use of AI that protects their end users. The minimum AI factors include governance, diverse and representative training data, bias detection and mitigation, transparency and accuracy,” said Kaskade.

AI trends and concerns

Within the next year, business leaders said they intend to use AI for ‘external engagement’, such as customer service/support and marketing/sales outreach (39%). ‘Insights’, including fraud detection, data analytics, and predictive modeling, came in a close second (36%).

Currently, the top concerns about AI vary according to where respondents’ companies are in adopting the technology. Among those that have already adopted it, the top three concerns are the lack of transparency (22%), false information (21%), and the accuracy of data models (20%).

Respondents whose companies have no plans to adopt AI-powered services in the next year most commonly said they have no concerns about AI (29%). Among respondents who did express a concern, the top choice was ‘legal implications, patent infringements, plagiarism, and copyright violations’ (11%).

When asked to what extent they viewed false information generated by AI as a significant concern for their company, 77% of all respondents rated it as either ‘Concerning’ or ‘Very concerning.’ The figure rose to 88% among respondents whose companies already have AI-powered solutions in place, indicating that business leaders with more experience with AI are more aware of the possible brand risks associated with improper usage.

When it comes to employee use of popular AI-based tools like ChatGPT, the majority of respondents (56%) said that their company either already has rules in place, or is considering implementing a usage policy.

“When we talk about the use of AI, disclosure is power. The more companies are transparent and upfront about how they use AI, deploy vigilant systems, and include humans in the loop, the more they can significantly reduce the risks associated with its adoption and protect their brands, employees and end users,” added Kaskade.


