There's a lot of fun, fear and anger around AI. Teams building new AI products need to be hyper-aware of the unexpected outcomes of the products they're building, and to take steps outside of their core roles to protect users.

I've been involved in building AI, ML, and prediction products since ~2015, including an early warning service for heart attacks, neurological decline prediction tools for dementia, internal cost-saving tools, and insurance and banking products.

Here's how I think about it:

* A team building a new AI product is creating a baby
* Making the baby is fun and exciting
* It's your responsibility to look after what you have created
* Saying 'not my job' isn't acceptable 🚩

This team, highlighted in The Guardian, is responsible for creating an AI which suggested its users combine the company's products to create chlorine gas. The use of chlorine gas is a war crime. Congratulations, you have created a product which suggests committing a war crime. The team is responsible: they set the parameters which made this possible.

Building cutting-edge products and services is a team sport. The team, whatever your role in it, is responsible for the ethics of what is being created.

Get everyone together and run a premortem (similar to a risk assessment, for the non-technical folks reading), and use the UK Government's five principles to guide and inform the responsible development and use of AI:

1. Safety, security and robustness
2. Appropriate transparency and explainability
3. Fairness
4. Accountability and governance
5. Contestability and redress

Have you tried any AI products? What did you think?

#blog #writing #AI