Samsung Bans Generative AI Tools Amid Security Concerns
While fears of AI taking jobs linger, Samsung has prohibited employees from using generative AI tools such as ChatGPT and Google Bard. The decision follows an incident in which sensitive source code was inadvertently uploaded to ChatGPT, resulting in a data leak.
In response, Samsung is proactively reinforcing security measures to create a safe environment for the future use of generative AI, aiming to enhance productivity and efficiency. However, until these measures are finalized, the company is restricting access.
Samsung's core concern is that data submitted to generative AI tools is stored on external servers, making it difficult to control who can access it, ensure it is deleted, and prevent it from being unintentionally shared with other users. ChatGPT, for instance, retains user input for training purposes unless users opt out.
While many companies promote generative AI adoption, Samsung is not alone in its cautious stance. Financial institutions such as JPMorgan Chase, Bank of America, and Citigroup also limit employee access to these tools, given the sensitive data they handle.
An internal Samsung survey found that 65% of employees consider these tools a security risk. Alongside the ban, Samsung is developing its own in-house AI tools for tasks such as software development and translation.
Employees may still use AI tools on personal devices, provided it is for non-work-related purposes; violations of the policy carry strict penalties. Notably, Samsung's corporate policy does not affect consumers, who can continue to access generative AI tools across Samsung devices.