
#173 Top 10 Reasons to Use ISO 42001 AI Management

From The ISO Show



Length:
20 minutes
Released:
Apr 30, 2024
Format:
Podcast episode

Description

ISO 42001 was published in December 2023 and is the first International Standard for Artificial Intelligence Management Systems. It was introduced following growing calls for a common framework for organisations that develop or use AI, to help them implement, maintain and improve AI management practices. However, its benefits extend beyond simply establishing an effective AI Management System. Join Steph Churchman, Communications Manager at Blackmores, on this episode as she discusses the top 10 reasons to adopt ISO 42001.

You'll learn
· What is ISO 42001?
· What are the top 10 reasons to use ISO 42001?
· What risks can ISO 42001 help to mitigate?
· How can ISO 42001 benefit both users and developers of AI?

Resources
· Isologyhub
· ISO 42001 training waitlist

In this episode, we talk about:

[00:30] Join the isologyhub – To get access to a suite of ISO related tools, training and templates, simply head on over to isologyhub.com to either sign up or book a demo.

[02:30] What is ISO 42001? – Go back and listen to episode 166, where we discuss what ISO 42001 is, why it was introduced and how it can help businesses mitigate AI risks.

[02:45] Episode summary – We take a look at the top 10 reasons why you should consider implementing ISO 42001.

[02:55] #1: ISO 42001 helps to demonstrate responsible use of AI – ISO 42001 helps ensure fairness, non-discrimination and respect for human rights in AI development and use. Remember, AI can still be biased: AI models are typically trained on existing data, so any bias in that data carries over into the models. One example is the existing lack of representation for minority groups. We also need to take care when choosing AI over people, as staff being replaced by AI is a very real concern and should not be treated lightly. We've already seen a few cases where this has happened, especially in tech support, where some companies mistakenly think that a chatbot can replace all human staff.

We also need to consider the ethics of AI-generated content. It's predicted that 90% of online content will be AI generated by 2026! A lot of this generated content includes things like images, which raises a real concern about the values we're conveying to people. The content we consume shapes the way we think, and if all we have is artificial, what message does that send? An example of this is Dove's recent advert, which showed AI generating images of very unobtainable ideals of a beautiful face: predictably flawless, almost inhuman, and something that can only be achieved through photo editing. If the internet were flooded with this sort of imagery, it would start to become the expectation to live up to, which can be tremendously damaging to people's self-esteem. Dove then went on to show actual unedited people, in all their varied and wonderful glory, and stated that they will never use AI imagery in any of their future marketing or promotional material. That sends a very strong message: AI definitely has its place, but we need to fully consider the implications and consequences of its use and possible oversaturation.

[05:20] #2: Traceability, transparency and reliability – Information sourced via AI is not always correct. It collates information published online, and as many of us are aware, not everything on the internet is correct or accurate. Data sets carelessly scraped from online sources may also contain sensitive or unsavoury content.

We've had cases where people have managed to 'break' ChatGPT, causing it to spew out nonsense answers that also contained sensitive information such as health data and personal phone numbers. While this data is not usually accessible when requested directly, that does not remove the risk of it being dug up through exploits. AI is like any other technology: it is not infallible. So, it's up to developers to ensure that the data used to train

Blackmores is a pioneering consultancy firm with a distinctive approach to working with our clients to achieve and sustain high standards in Quality, Risk and Environmental Management. We'll be posting podcasts discussing ISO standards here very soon!