Samsung Electronics Co. is prohibiting its employees from using generative AI tools such as ChatGPT, according to a Bloomberg report, after the company discovered staff had accidentally uploaded sensitive code to the platform.
Samsung has raised concerns about the security risks of AI platforms as their popularity grows. An internal memo cautioned that data transmitted to AI tools like ChatGPT can be difficult to retrieve or delete.
“Interest in generative AI platforms such as ChatGPT has been growing internally and externally,” Samsung told staff. “While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”
Samsung’s new rules prohibit the use of generative AI systems on company-owned devices and internal networks. Employees who use ChatGPT or similar tools on personal devices are advised not to submit any company-related or personal data that could compromise Samsung’s intellectual property. Violating the guidelines could lead to termination of employment.
“We ask that you diligently adhere to our security guidelines and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” Samsung said in the memo.
Samsung’s Take on AI
Samsung is building its own internal AI tools for translation, document summarization, and software development. It is also working on ways to block the upload of sensitive company information to external services. Last month, OpenAI introduced an “incognito” mode for ChatGPT that prevents chat data from being used to train its AI models.
“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency…However, until these measures are prepared, we are temporarily restricting the use of generative AI.”