Are Artificial Intelligence (AI) Tools Helpful or Trouble?
One of the latest trends in efficiency is new devices that record and then transcribe meetings or personal notes. While these devices can be effective and efficient, many are powered by open-source AI that collects everything you record to train its Large Language Models (LLMs). LLMs are AI programs that use deep learning to understand and generate language.
Your department may face legal liability and penalties for state and federal violations if your staff uses these devices in meetings, or for personal notes and documents that include fiscal, legal, health, proprietary or personally identifiable data. Anything recorded in these meetings, or while staff record personal notes or other documents, may be collected by the open-source AI on these devices and become part of publicly available LLMs.
Devices and service agreements don’t always make it clear that the data you record is being collected and shared. If a device relies on open-source AI, such as ChatGPT or other generative AI models, presume that anything you record is being collected and shared with those models. Staff at state agencies are obligated to comply with state and federal requirements for the protection of data and must take extra precautions when using new devices.
Information collected by open-source AI can also be used by criminals for targeted phishing campaigns and other cyber-attacks.
ACTION STEPS:
- Inform staff that they must not use these devices, even personally purchased ones, for meetings or other work-related purposes.
- Restrict the purchase of these devices for department use unless they are powered by a closed AI system controlled by the department.
Always report any suspicious activity to your security staff immediately. See our CTR Cyber page for more cybersecurity internal controls, and contact [email protected] to report incidents or suspected incidents of fraud or cyber threats, or if you need support from our Statewide Risk Management Team.