
How Employees Use AI at Work (But Don’t Tell You)

Marcel Deer
Marketing Journalist

The use of generative AI tools like ChatGPT, Copilot, and Gemini is now common in the workplace. Many businesses are actively exploring AI use cases and introducing them to workers. However, some employees are using AI without their employer’s knowledge or oversight. This trend carries risks for staff and employers and is a new challenge for organisations and leaders. 

Salesforce found that 28% of employees are using AI at work; of those, 55% use unapproved tools and 40% use tools that are actively banned. The software company says “despite the promise” of AI, “a lack of clearly defined policies around its use may be putting businesses at risk.” A substantial 84% of UK workers have never received AI training from their employers, yet many are already using it for daily tasks. 

Why are Employees Secretly Using AI?

Salesforce surveyed 14,000 global employees, including 1,000 from the UK, for the November 2023 instalment of its Generative AI Snapshot Research Series, ‘The Promises and Pitfalls of AI at Work.’ According to the results, 51% of respondents believe AI will increase job satisfaction. Looking to the future, 47% believe AI skills will make them more attractive to employers, and 44% expect AI skills to lead to higher pay. 

A LinkedIn survey of 1,000 business leaders in Europe found that 60% expect AI to automate manual, boring tasks, 53% believe it will increase productivity, and 50% expect AI to free up time for creative thinking. 

The expected benefits of AI are clear. The technology automates repetitive, boring tasks, quickly generates text for anything from emails to social media ads and web articles, and provides fast, concise answers to the many questions that come up during the working day. 

A sales agent could use AI all day to research customers and write sales collateral such as emails, follow-ups, and even proposals. A marketing team member could use ChatGPT to create and schedule social media and web content. There are many applications for freely accessible AI tools that workers could be using all day without their employers' knowledge.

The reasons for employees “secretly” using AI range from simply adopting a new tool to work faster and better, inadvertently hidden from the employer's view, to consciously using AI when they are not supposed to because it gets things done.

What are the Risks of Workers Using AI Unsupervised?

Generative AI and new Large Language Models (LLMs) such as ChatGPT and its competitors, along with other off-the-shelf AI tools and the AI functionality built into common work platforms, all carry risks. 

The risks of AI include bias and discrimination, fabrications and hallucinations in the content it produces, and, more simply, accuracy and quality errors. Generative AI can certainly write a sales email, but it can’t reason, personalise, or empathise. It doesn’t know the customer, and its content can be bland and repetitive. So when your customer receives an email they can tell was written by AI with no additional effort from your company, or one containing a glaring mistake, how will that email be received?

A study by MIT students suggests that AI can help reduce gaps in writing ability between employees and help less experienced workers produce better work. The key to making this successful is oversight of AI outputs, and improvement of them where necessary. 

There are further risks, too. Generally, AI won’t protect data that’s shared with it; it can “learn” from that data and surface it to other users, legitimate or illicit. This creates data privacy and compliance concerns, cybersecurity threats, and ethical issues. 

A study of over 15,000 respondents worldwide by the Oliver Wyman Forum, discussed by the World Economic Forum (WEF) in a recent publication on mitigating AI risk in the workplace, found that 84% of those using AI at work say they have publicly exposed their company’s data in the last three months. 

How to Make Sure AI is Used Safely in the Workplace

Using AI whilst mitigating the specific risks for your business is going to take thought, research, and perhaps even some expert advice. We’re including some general pointers here, but your industry and specific AI uses will determine your exact strategy. 

In the study shared by the WEF, 92% of the workers who exposed company data say their employers do have AI data guidelines. Over 40% say they have seen incorrect AI output, and almost half say they have used AI outputs to make decisions without review by others. In the same study, 47% of AI users said they would continue using AI behind their employer’s back, even if it were prohibited at work. 

The WEF recommends that employers provide high-quality training and issue clear guidelines on generative AI use. It also says employers should improve messaging about job security to allay workers' fears. 

Guidelines and training will help AI users protect data, frame appropriate queries, spot bias, and check for plagiarism, inaccuracies, and poor quality. If workers have guidelines and know the risks of AI, they are much better prepared to protect their employer’s business and themselves from potential problems. With clear policies in place, an employer also has much clearer recourse to manage employees who break the rules or create avoidable risks. 

PwC also recommends seven crucial actions for managing AI risks, stating that “many business leaders have yet to grasp the urgency of the challenges that generative AI poses.” The recommendations include:

  • Setting risk-based priorities
  • Revamping cyber, data, and privacy protection
  • Addressing opacity risk (of AI systems)
  • Equipping stakeholders for responsible use and oversight (including employees)
  • Monitoring third parties
  • Watching out for regulation
  • Adding automated oversight (by considering emerging software tools)

Overseeing AI outputs is a particularly crucial, and easy to implement, action for mitigating AI risk. When employees understand that AI can be inaccurate, fabricate information, or produce poor content, they will also understand their evolving role: working alongside AI, combining it with their own skills to produce better outputs. 

If you're considering actively adopting AI for your workplace, read 6 Components of a Successful AI Deployment next. 
