AI adoption is accelerating, and with it, the pressure on in-house legal teams to keep things safe, compliant, and ethical. But in reality, legal's role in AI implementation can (and should) go much further than just red flags and disclaimers. Legal teams are uniquely positioned to set the standard for how AI is used responsibly across the organisation.
We've spoken to legal leaders across industries who are doing exactly that: balancing innovation with integrity, and helping their companies avoid reputational missteps while unlocking real value from AI tools.
Here's what they've learned.
The rollout of AI across organisations isn't just a job for tech teams; it's also a legal, ethical, and reputational issue. And no one else in the business sits at that intersection quite like legal.
Whether it's deciding what data can be used in an AI tool, reviewing vendor terms, or advising on copyright and IP risks from AI-generated content, legal already owns many of the decisions that will shape how AI works in practice.
As Ty Ogunade, Contracts Manager at GWI, put it:
"The main concern for us is around proper usage of AI tools. Making sure the information we're putting into tools like ChatGPT isn't being used to train it."
And as Alex Love, Corporate Counsel at Algolia, added:
"The legal team's key role is giving a holistic view of the risks: both input and output. It's not just about what the tool produces, but what you feed into it."
A well-drafted AI policy doesn't have to be a 30-page document. In fact, it shouldn't be. Legal teams are increasingly creating concise, practical guidance that answers one key question: "What can and can't we do with AI?"
That includes:
And just as importantly: who to ask when something's unclear.
Luis de Freitas, Managing Legal Counsel at Boston Consulting Group (BCG), noted:
"You have to be very clear about which tools are safe and how to use them responsibly, especially when you're rolling out across a global team."
Setting AI policy is not a solo act. The most successful in-house teams we've spoken to don't just hand down rules; they co-create them.
That might mean running workshops with stakeholders from product, data, IT, and HR, or building a cross-functional AI taskforce to explore use cases and flag risks early.
As Luis explained:
"Leadership buy-in is really important... but you also need enthusiasts inside the legal team, people who want to test the tools and make the information flow."
And it's not just about tech implementation. Cheryl Gale, Head of Legal at Yoto, highlighted the importance of culture:
"Transparency is key, and we really do see that from the top down. Legal isn't just there to say no, we're a core part of every business conversation."
The rules are evolving, and your policy should too. AI laws are developing rapidly across the UK, EU, and US, and with them come new obligations around explainability, transparency, and risk classification.
That's why some in-house teams are putting lightweight AI governance in place: regular check-ins, training sessions, and shared registers of approved tools. It's not bureaucracy; it's future-proofing.
As Laura Dietschy, Commercial Legal Lead at Palantir, said:
"The mistake people make is buying point solutions and slapping them on top of problems. You need to start with your data, your risks, your reality."
AI is rewriting how businesses operate, and legal teams have a real opportunity to shape that transformation for the better.
By drafting clear, actionable policies, working cross-functionally, and staying on top of emerging risks, legal teams can go beyond compliance. They can become champions of ethical, effective, and empowered AI use.
And in a world where customers, regulators and employees are all watching closely, that leadership is more valuable than ever.
Need help building your AI policy? Read our precedent here.