A survey by FDM Group reveals that 35% of businesses view data security as a key obstacle to embracing AI technology, despite its growing popularity. While AI tools are already in use across various sectors, the UK's National Cyber Security Centre has highlighted potential risks.

During London Tech Week, over one-third of businesses expressed concern about data security as a barrier to adopting artificial intelligence (AI), according to a survey by FDM Group. Despite AI's popularity and widespread use across various sectors, 35% of businesses cited data security as their primary concern.

AI technology, pivotal in sectors like cybersecurity, helps identify threats through pattern recognition across large data sets. Yet the UK's National Cyber Security Centre has highlighted potential risks, including bias, susceptibility to manipulation through "data poisoning," and the creation of harmful content.

The survey also found that 64% of businesses are already using AI tools such as ChatGPT, Claude, and Copilot, especially in back-office operations and software development. However, 48% reported lacking the specialist skills needed to use these tools fully.

Hannah Pirovano of Grammarly emphasized the company's commitment to providing secure AI tools, addressing the industry's data security concerns. FDM Group's COO, Sheila Flavell, stressed the importance of developing AI skills within the workforce to maximize AI's potential and maintain a competitive edge.

The survey’s release follows the Seoul AI Summit, where 16 organizations agreed to voluntary AI safety standards to ensure safe and ethical AI development while promoting innovation.


Jaimie explores the ethical implications of AI at AI WEEK. His thought-provoking commentary on the impact of AI on society challenges readers to consider the moral dilemmas that arise from this rapidly evolving technology.
