### #WIP This is work in progress, please treat it accordingly.

I've written about building AI responsibly before ([[Building AI responsibly]]). Having seen a few more companies come out and share their internal guidance, I wanted to revisit the topic. I've been involved with building AI, Machine Learning, and prediction products since ~2015, with products such as an early warning service for heart attacks, neurological decline prediction tools for dementia, and cost-saving prediction models used in FinTech and at Google.

I don't want to keep talking about AI; it's becoming a bit annoying to see the internet we knew and loved being systematically broken down by profit-seeking big tech companies and those peddling #AI slop. Paul's blog post [Bored of it](https://paulrobertlloyd.com/2025/087/a1/bored/) is a cathartic read.

Whilst all this is true, this genie isn't going back in the bottle. That's clear for all to see. Given this, creating useful guidelines and offering best practice to those wanting to explore this space is a big lever we can pull on in an attempt to stop this train sliding off the tracks entirely.

Just last week, JPMorgan [published an open letter](https://www.jpmorgan.com/technology/technology-blog/open-letter-to-our-suppliers) highlighting a big supplier risk: those adopting AI without proper foresight are already running into serious problems. They cite a 3x increase in security vulnerabilities since AI reached mass adoption, that 78% of enterprise AI deployments lack proper security protocols, and that most companies can't explain how their AI makes decisions! It's really important we start to implement AI correctly.

I hadn't seen a list of principles or guidance docs, so I wanted to make one. Here's my, totally AI free, attempt at that.

### [Channel 4's AI principles](https://assets-corporate.channel4.com/_flysystem/s3/2025-05/Channel%204%20AI%20principles.pdf)

> At Channel 4, we believe AI is here to support human creativity, not replace it.
> So we’re not handing the keys to the machines

The AI principles outline their core beliefs (creativity comes first, championing transparency, inclusive storytelling, and everyday integrity), and give clear examples of where AI is being used today. Another win for #transparency, one of my core [[operating manual]] items. It also makes clear where Channel 4 won't be using AI, the big ones being peddling misinformation, displacing creativity, and working with AI algorithms that perpetuate existing biases.

**PS**: As a small aside, PDFs on the internet aren't #accessible. It's also harder to track analytics, they're not interactive, and they're extremely painful to read on phones, as [Matt McGregor points out](https://shorthand.com/the-craft/the-pdf-is-in-terminal-decline/index.html).

### [Google's AI principles](https://ai.google/responsibility/principles/)

> We believe our approach to AI must be both bold and responsible

This is coming from the company that killed your favourite product ([Killed by Google](https://killedbygoogle.com/)). They are big enough and have some of the best engineering talent, as I saw first hand during my time there. Google's AI policy has 3 principles:

1. Bold innovation
2. Responsible development and deployment
3. Collaborative progress together

This feels mostly like #marketing junk, but there is value in their [responsible AI progress reports](https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf?_gl=1*1utlsas*_up*MQ..*_ga*MTg0NzA3ODUzMC4xNzQ2MjE1ODEw*_ga_KFG60X3H7K*MTc0NjIxNTgwOS4xLjAuMTc0NjIxNTgwOS4wLjAuMA..), which break their AI efforts down into govern, map, measure, and manage. For those interested in governance advice from Google, I'd start here.

### [Microsoft's AI principles](https://www.microsoft.com/en-us/ai/principles-and-approach#tabs-pill-bar-ocb9d4_tab0)

> We're committed to developing AI systems in a way that is transparent, reliable, and worthy of trust.
This was surprisingly lacking in actionable content. It reads entirely as a fluff piece; even their [responsible AI action report](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/final/en-us/microsoft-brand/documents/responsible-aI-transparency-report-2024.pdf) is corporate drivel:

> 99 percent of employees completed the responsible AI module in our annual Standards of Business Conduct training

Right. Great. Well done? I had expected more insight, guidelines, and guidance from one of the largest companies in this space. Then again, they did make Microsoft Teams, so perhaps I'm expecting too much.

### [Wharton: effective chatbots](https://knowledge.wharton.upenn.edu/wharton-blueprint-ai-chatbots/)

> Some tasks are perfectly suited to AI chatbots: high-volume, repetitive tasks in lower-risk industries like travel, retail, logistics, and hospitality

Although specific to chatbots rather than all AI applications, Wharton's effective #chatbot blueprint is a helpful view of how we can implement AI to help the customer, and improve our business finances too.