AI Security Isn't New: Apply What You Already Know
The business world is losing its collective mind over AI. Every conference, every board meeting, every strategy session features animated discussions about how AI will transform everything. CEOs want to know why your organization isn’t moving faster. Competitors are announcing AI initiatives. Vendors are rebranding every product as “AI-powered.” The pressure to adopt is immense.
Here’s what cybersecurity professionals need to understand: your job right now is to be the voice of calm in the chaos.
Organizations should adopt AI. They need to explore what this technology means for their operations, their customers, their competitive position. But adoption without governance is reckless. And the good news, the part that should let you sleep better at night, is that SideChannel already knows how to do this. We’ve adopted disruptive technologies before. We have playbooks. We have tools. We have experience doing exactly this kind of work.
What we need now is to apply those lessons to AI, implement basic controls, and enable the business to move forward safely.
Start With Policy
The foundation of any AI governance program is a clear policy that tells your workforce what’s acceptable and what’s not. Your employees want to use AI tools. More importantly, they’re using them already, whether you know it or not. They’re copying customer data into ChatGPT to draft emails. They’re uploading proprietary code to AI coding assistants. They’re feeding confidential strategy documents into AI tools to create summaries. They’re doing this because they’re trying to be productive, to do their jobs better and faster.
Your policy needs to address this reality head-on. What types of data are employees allowed to input into AI tools? What kinds of AI applications are they permitted to use? Under what circumstances do they need approval before using a new AI tool? What happens if they violate these guidelines?
The policy doesn’t need to be a 50-page document filled with legal language. In fact, it shouldn’t be. Your workforce won’t read that. Write something clear and concise that answers the questions people actually have. Use examples. Make it practical.
Here’s what effective AI policies typically cover:
Data classification rules. Employees need to understand which data types are off-limits for AI tools. Personally identifiable information, protected health information, credit card numbers, source code, trade secrets, confidential business strategy… these should be clearly defined categories that employees recognize. The policy should explain that inputting these data types into unapproved AI tools is prohibited. (A sketch of how tooling can enforce this rule follows this list.)
Approved versus unapproved tools. Employees need a clear list of AI tools that have been vetted and approved for use. This doesn’t mean you’ve approved every possible AI application. That would be impossible given how fast this space is moving. But you should identify the core tools that meet your security and compliance requirements. If employees want to use something not on the approved list, there should be a process for requesting approval.
Business versus personal use. Your policy needs to address whether employees are allowed to use personal AI tools for work purposes. The answer is often no, but you need to state it explicitly. Likewise, if employees want to use approved work AI tools for personal projects on company devices, you need a position on that.
Attribution and disclosure. Some organizations require employees to disclose when they’ve used AI tools to generate content, code, or other outputs. This is particularly important for client-facing work, creative content, or technical documentation. Your policy should address whether and when disclosure is required.
Training requirements. Your policy should specify that employees must complete AI governance training before using approved tools. This ensures everyone understands the rules and the reasoning behind them.
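To make the data classification rule concrete, here's a minimal sketch of the kind of pre-submission check a data loss prevention (DLP) control might run before text reaches an AI tool. The patterns and category names are illustrative assumptions; production DLP engines use validated detectors (Luhn checks, ML classifiers) and far richer policy logic.

```python
import re

# Illustrative patterns only; real DLP uses validated detectors.
PROHIBITED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str) -> list[str]:
    """Return the prohibited data categories detected in the text."""
    return [name for name, pattern in PROHIBITED_PATTERNS.items()
            if pattern.search(text)]

def safe_to_submit(text: str) -> bool:
    """Refuse to pass text to an AI tool if any prohibited category appears."""
    hits = classify(text)
    if hits:
        print(f"Blocked: text contains {', '.join(hits)}")
        return False
    return True

# An employee pastes a customer record into a prompt:
safe_to_submit("Draft a follow-up email to John Doe, SSN 123-45-6789.")
```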
Once you have a policy, you need to communicate it. All-hands meetings, team discussions, email campaigns, training sessions… whatever it takes. Use multiple channels to ensure everyone gets the message. Make it easy to find and reference. Update it regularly as your understanding of AI risks evolves.
Provide an Approved Tools List
Policy alone isn’t enough. You need to give your workforce concrete guidance about which tools they’re allowed to use. This is where an approved tools list becomes essential.
Your list should identify specific AI applications that meet your security, privacy, and compliance requirements. For most organizations, this means tools that offer enterprise agreements with data protection guarantees, tools that don’t train models on customer data, tools that provide audit logs and access controls.
The list should cover different use cases. Your sales team needs AI tools for different purposes than your engineering team or your marketing team. Think about the actual work people do and identify approved tools for those scenarios.
For each approved tool, provide basic information: what it does, who should use it, any special setup or configuration requirements, and where to get help if there are problems. Make this information accessible: a wiki page, a SharePoint site, a section in your employee handbook. People should be able to find this list in 30 seconds or less.
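One way to keep that information consistent, and to let your scanning tooling consume the same source of truth, is a machine-readable registry. Here's a hypothetical sketch; the field names and tool entries are invented for illustration, not endorsements.

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str
    use_cases: list[str]      # which teams or workflows it's approved for
    data_allowed: list[str]   # highest data classifications permitted
    owner: str                # where to get help
    notes: str = ""

# Hypothetical entries; your vetted list will differ.
APPROVED_TOOLS = [
    ApprovedTool(
        name="Example Enterprise Chatbot",
        use_cases=["drafting", "summarization"],
        data_allowed=["public", "internal"],
        owner="it-helpdesk@example.com",
        notes="Enterprise tenant only; personal accounts prohibited.",
    ),
    ApprovedTool(
        name="Example Coding Assistant",
        use_cases=["engineering"],
        data_allowed=["internal"],
        owner="appsec@example.com",
        notes="Training on our code disabled by contract.",
    ),
]

def is_approved(tool_name: str) -> bool:
    return any(t.name.lower() == tool_name.lower() for t in APPROVED_TOOLS)
```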
You’ll also need a process for adding tools to the approved list. Employees will discover new AI applications and want to use them. You need a workflow for evaluating those requests, assessing the security and compliance implications, and either approving or denying them. Make this process fast. If it takes three months to evaluate a tool request, people will just use the tool anyway and ask for forgiveness later.
Scan Your Asset Inventory
You have policies. You have an approved tools list. Now you need visibility into what’s actually happening across your organization. This is where asset inventory and scanning come into play.
Your IT teams should be maintaining an inventory of all devices people use to do their jobs: laptops, desktops, tablets, phones. This inventory should track what software is installed on each device. You need to scan these devices regularly to identify AI applications and compare what’s installed against your approved tools list.
Look for AI coding assistants, AI writing tools, AI image generators, AI chatbot applications. There are dozens of categories and hundreds of specific tools. Your scanning solution needs to recognize these applications and flag anything that’s not on your approved list.
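Here's a minimal sketch of that comparison step, assuming your endpoint management platform can export installed software per device as a CSV. The column names, keyword list, and approved set are assumptions you'd tune to your environment.

```python
import csv

# Hypothetical approved names and AI-related keywords; tune to your environment.
APPROVED = {"example enterprise chatbot", "example coding assistant"}
AI_KEYWORDS = ("gpt", "copilot", "assistant", "chatbot", "llm")

def scan_inventory(path: str) -> list[tuple[str, str]]:
    """Flag installed software that looks AI-related but isn't approved.

    Assumes a CSV export with 'device' and 'software' columns.
    """
    findings = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            name = row["software"].strip().lower()
            if any(k in name for k in AI_KEYWORDS) and name not in APPROVED:
                findings.append((row["device"], row["software"]))
    return findings

for device, software in scan_inventory("inventory_export.csv"):
    print(f"{device}: unapproved AI tool detected -> {software}")
```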
When you find unapproved AI software, you have a decision to make. Is this a tool that employees are using productively and that you should evaluate for addition to your approved list? Or is this a tool that presents unacceptable risk and needs to be removed? This decision requires input from both security teams and business leaders.
The scanning process needs to be ongoing, not a one-time exercise. New AI tools launch constantly. Employees install new software regularly. You need continuous monitoring to maintain visibility.
Block Prohibited Technologies
Once you’ve identified prohibited AI tools, your IT teams need to take action to block them. You have multiple enforcement mechanisms available.
For installed software, you have direct control. Group policy on Windows, mobile device management for phones and tablets, endpoint management platforms… these tools allow you to prevent installation of specific applications or to remotely remove them if they’re already installed. Use these capabilities to enforce your approved tools list.
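For a quick spot check on Windows outside those platforms, a script can read the standard uninstall registry keys and report anything on your prohibited list. This is a reporting sketch with a hypothetical tool name; actual removal belongs in group policy or MDM.

```python
import winreg  # Windows-only; standard library

UNINSTALL_KEY = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"
PROHIBITED = {"Example Unvetted AI Notetaker"}  # hypothetical name

def installed_programs():
    """Yield display names of installed software from the registry.

    Note: 32-bit entries live under WOW6432Node; omitted for brevity.
    """
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL_KEY) as root:
        subkey_count = winreg.QueryInfoKey(root)[0]
        for i in range(subkey_count):
            try:
                with winreg.OpenKey(root, winreg.EnumKey(root, i)) as sub:
                    yield winreg.QueryValueEx(sub, "DisplayName")[0]
            except OSError:
                continue  # some entries have no DisplayName

for program in installed_programs():
    if program in PROHIBITED:
        # Report only; remediation should flow through your endpoint
        # management platform, not ad hoc scripts.
        print(f"Prohibited AI software found: {program}")
```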
For web-based AI applications, secure web gateways are your primary control. These tools have been used for decades to restrict access to specific websites and web applications. If you don’t want employees using a particular AI chatbot or AI image generator, add it to your blocklist in your secure web gateway. Traffic to that site gets blocked at the network level.
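The gateway console is where your blocklist actually lives, but the matching logic it applies is simple enough to sketch. The domains below are hypothetical; real gateways also handle URL categories, wildcards, and TLS inspection.

```python
from urllib.parse import urlparse

# Hypothetical blocked domains; your gateway manages these in its console.
BLOCKED_DOMAINS = {"unvetted-ai-chat.example", "free-image-gen.example"}

def is_blocked(url: str) -> bool:
    """Block exact matches and any subdomain of a blocklisted domain."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(is_blocked("https://chat.unvetted-ai-chat.example/new"))  # True
print(is_blocked("https://approved-tool.example/login"))        # False
```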
The objection you’ll hear is that these are legitimate productivity tools and blocking them will anger employees and hurt productivity. This is where your policy and approved tools list become critical. You’re not saying employees can’t use AI. You’re saying they need to use approved AI tools that meet your security requirements. If they want to use a specific tool, there’s a process for getting it evaluated and potentially approved.
Address Home Computer Usage
Here’s where things get tricky. Your employees work from home. Many of them use personal computers for work tasks, despite your bring-your-own-device policies. How do you prevent them from copying work data into unapproved AI tools (bring-your-own-AI) on devices you don’t control?
Egress routing and IP allow lists are your answer. Configure your approved AI applications to accept logins only from specific IP addresses (namely, the addresses of your corporate network and your employees’ known home offices). Then enforce this on endpoints by routing their traffic through an approved egress point, such as an always-on VPN, so logins to AI tools always originate from addresses on the allow list.
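Most enterprise AI platforms expose this as a tenant-level IP allow list, and the check itself reduces to a range membership test. Here's a sketch using documentation-only example ranges:

```python
import ipaddress

# Example (documentation) ranges; substitute your real egress addresses.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # corporate egress
    ipaddress.ip_network("198.51.100.0/24"),  # VPN concentrator
]

def login_allowed(source_ip: str) -> bool:
    """Model the allow-list check an AI tool's tenant settings apply."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(login_allowed("203.0.113.42"))  # True: routed through corporate egress
print(login_allowed("192.0.2.9"))     # False: unmanaged home connection
```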
Is this disruptive? Yes. Does it add complexity? Absolutely. Will some employees complain? Count on it. But if you’re serious about data protection and AI governance, this control is necessary. The alternative is hoping that employees follow policy on devices you can’t monitor or control. That’s not a security strategy.
You need to balance security with usability. If you make the controls so restrictive that employees can’t do their jobs, they’ll find workarounds. Work with business leaders to understand their teams’ needs. Implement controls that protect data without creating so much friction that productivity collapses.
Enable Growth While Minimizing Risk
The goal of AI governance isn’t to stop AI adoption. The goal is to enable your organization to explore this technology, to experiment with new capabilities, to find competitive advantages, all while minimizing the risk of data breaches, compliance violations, and security incidents.
You’re helping the business move into uncharted territory, but you’re doing it with guardrails in place. You’re saying yes to innovation while also saying no to reckless behavior. You’re supporting growth and stability for employees, for customers, for everyone who relies on your organization to protect their data and operate responsibly.
This is the work cybersecurity professionals should be doing right now. Not fearmongering about AI. Not dismissing the technology as a fad. Not creating bureaucratic processes that slow adoption to a crawl. Instead, we should be implementing practical controls that let the business move forward safely.
The Path Forward
SideChannel has adopted disruptive technologies before. Cloud computing. Mobile devices. Social media. Bring-your-own-device. Each of these created new security challenges. Each required new policies, new controls, new ways of thinking about risk. And each time, the organizations that succeeded were the ones that found ways to enable adoption while managing risk.
AI is no different. The blocking and tackling required for AI governance is familiar. Write policy. Provide approved tools. Scan for compliance. Block prohibited technologies. Use existing security infrastructure to enforce controls. These are capabilities you already have. You just need to apply them to this new challenge.
Be that voice of calm. Be that voice of reason. Support your business leaders as they explore AI. Help them do it in a way that enables the mission your organization exists to serve. Minimize disruption. Keep moving forward.
The chaos around AI will continue. The hype won’t die down anytime soon. Your job is to cut through that noise and focus on what matters: enabling your organization to adopt valuable technology safely. You know how to do this. Now do it.
Jerod Brennen is VP and Cybersecurity Advisor at SideChannel, where he helps organizations build resilient cybersecurity programs. When he’s not geeking out about security technologies, he’s probably still wondering what his life would have been like as a high school choir director.
Connect with him on LinkedIn or reach out at jerod@sidechannel.com.


