
New Scientist CultureLab: Meredith Broussard on trusting Artificial Intelligence

I recently came across a very interesting podcast: a New Scientist interview with Meredith Broussard, an Artificial Intelligence (AI) researcher. She had some very interesting insights into AI systems.

Meredith highlights a phrase she coined herself: 'techno chauvinism'. She defines it as a pro-technology bias, the assumption that technological solutions are superior to all others. She argues we need to step back from techno chauvinism in order to make good decisions about whether, and how, we use technological solutions. With the rise of AI and rapid technological advances, I think this phrase is more important than ever: as a society we need to have real conversations about whether a problem actually requires an AI solution before jumping on the 'AI bandwagon'. Fortunately, cancer screening is an area where AI could genuinely make the screening process more efficient, with the key being to aid the specialised human skillset and intellect rather than replace it. This ties in well with Meredith's next point: AI systems can't replace judgement or interaction. She gave the example of Sports Illustrated trying to use AI to replace its journalists, which was a total disaster. Using AI to aid human judgement rather than replace it is fundamentally at the core of what Astronomical AI stands for. Meredith suggests designing sociotechnical systems that help humans do their jobs better, as opposed to designing systems to replace humans, which is exactly what we are trying to achieve at Astronomical AI.

Meredith talks about how more education is needed into 'what AI is and is not'. When an AI system evaluates a scan, she explains, you have to decide how you want the system to be wrong mathematically: more false positives, or more false negatives. Both options are bad, but the consequences of a false negative are far greater, so systems are tuned to produce more false positives. People think AI can set their minds at ease because they will get a computational answer, but the system is set to report a problem more often than they imagine, so it is not providing the reassurance they are looking for; there is no certainty with AI, and we have to live with ambiguity. I find this very profound because it highlights the unrealistic expectation people have that AI can provide certainty. In reality it can only produce a prediction with some degree of accuracy; nothing can produce certainty, and as Meredith states, when looking at AI we need a mindset that accepts a degree of ambiguity. She also points out a common misconception: that AI is some kind of magic or superior solution that will be our salvation. An AI system is just a machine doing complicated maths and statistics; it is not magic.
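The trade-off Meredith describes can be made concrete with a small sketch. The function and all of the scores and labels below are hypothetical, invented purely for illustration: a screening model outputs a probability that a scan is abnormal, and the decision threshold we pick determines which way the system is "wrong" more often. A lower threshold flags more scans, trading false negatives for false positives, which is exactly the tuning choice she refers to.

```python
def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a decision threshold.

    scores: model probabilities that a scan is abnormal (hypothetical)
    labels: ground truth, 1 = truly abnormal, 0 = truly normal
    """
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Hypothetical scores from a screening model on six scans.
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [1,    1,    0,    1,    0,    0]

# Strict threshold: no false alarms, but the 0.40 abnormal scan is missed.
strict = confusion_counts(scores, labels, threshold=0.7)    # (0 FP, 1 FN)

# Cautious threshold: catches that scan, at the cost of one false alarm.
cautious = confusion_counts(scores, labels, threshold=0.35)  # (1 FP, 0 FN)
```

Screening systems pick the cautious setting because, as the interview notes, a missed cancer costs far more than an unnecessary follow-up; the price is that the system "tells you there is a problem" more often than people expect.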

Meredith discusses the moral dilemmas of using AI, pointing out that it is not as simple as 'AI is good' or 'AI is bad'; it is far more complex, and what matters is AI in context. To me this underlines that an AI system is only as 'good' or 'bad' as the people and datasets behind it, which is how biases and prejudices get transferred from humans and traditional systems into an AI system. She notes that when we think about AI our minds jump to science fiction and Hollywood, but this is abstract and a tendency we need to fight against: we should not be using science fiction as a template for what to build in real life. I think this is very important for countering the stigma of 'AI taking over the world' and recognising that the real-life application of AI is nothing like the science fiction of 'The Terminator'.

Meredith touches on the popular vision of AI predicting the future, and explains that in reality all it does is reflect the past. An AI system uses past data to compute trends and patterns; the resulting set of mathematical patterns, used to produce new predictions, is called a model. If the past data has flaws, the model inherits those flaws in its predictions, so AI is not the foolproof predictor of the future people may imagine. She gives the example of automated mortgage-approval software that rejected a disproportionately high number of applications from people of colour because of historical financial discrimination and residential segregation. What comes out of a computational system is often a reflection of its developers' own biases and life experiences. Those biases were replicated in the system, and although they can be corrected, not enough people are doing this. This is something we are very passionate about at Astronomical AI, and we are actively working to become more of 'these people doing this' that Meredith refers to. We aim to do this by ensuring we have a diverse workforce, and by scrutinising and evaluating the dataset used in our algorithm to ensure a fair representation of scan images.
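The mortgage example can be boiled down to a toy sketch. Everything here is hypothetical and deliberately oversimplified: the "model" just approves an applicant when the historical approval rate for similar past applicants exceeds 50%. The code itself contains no prejudice, yet if the history it is trained on was discriminatory, its predictions reproduce that discrimination, which is the mechanism Meredith describes.

```python
from collections import defaultdict

def train(history):
    """Learn per-group approval rates from past decisions.

    history: list of (group, approved) pairs, e.g. neighbourhoods shaped
    by residential segregation (hypothetical data).
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in history:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def predict(model, group):
    """Approve if the historical approval rate for this group exceeds 50%."""
    return model.get(group, 0.0) > 0.5

# Hypothetical history: applicants from neighbourhood B were routinely
# denied in the past, through no fault of their own.
history = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
model = train(history)

# The model now "predicts the future" by reflecting the past:
# applicants from A are approved, applicants from B are denied.
```

Correcting this means auditing the training data and the model's outputs per group, rather than trusting the maths to be neutral, which is the work Meredith says too few people are doing.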

Next, Meredith poses the question: 'do we trust AI too much?'. She argues regulation needs to be more aggressive, and that we need to apply existing laws to AI systems. She discusses the EU's AI regulation, which classifies AI uses as either high or low risk, and uses facial recognition as an example. Facial recognition used to unlock her phone would be low risk, because there is a passcode as a backup. Facial recognition used by police on real-time video surveillance, however, would be high risk: it is less accurate on people with darker skin tones, leading to people of colour being wrongfully arrested, and it misidentifies women more often than men and frequently misidentifies trans and non-binary people. She feels this needs regulation, and sees the EU's high/low-risk approach as a good starting point. She also asks whether humans will simply green-light AI judgements, rather than quality-check what the systems produce, because they trust AI too much.

I think this is a very interesting topic of discussion, because humans tend to become over-reliant on the systems that aid their knowledge and judgement. They often lose sight of the fact that the system relies on their expert knowledge and judgement to check its outcomes and correct them where possible. This is the only way to eradicate biases in AI, and regulation can really help with this going forward. We need to ensure we do not become over-reliant on AI, or on any system.

Overall, I think this interview with Meredith covered some very profound topics and issues that everyone looking to AI for solutions should take into account. Many of the problems she discusses have been at the forefront of Astronomical AI's mind, and we are actively looking for solutions to them. We are trying to ensure that the AI tool we produce is sound from a moral, intellectual and ethical standpoint, and we think that regulation within AI can really help with this. We hope the product we are creating helps build the fairer and brighter future that Astronomical AI advocates for.

https://open.spotify.com/episode/5jwONwn1hBu8VI-w60wRjJU?si=Cdiaxod-ISkeas48o7OeV3Q&context=spotify%3Ashow%3A7xN0ob-O7y5AR20v2KT7TBp

#AI #ArtificialIntelligence #Future
