A century of women at the Royal Institution

The Ri is celebrating 100 years since Joan Evans gave the first presentation by a woman, making it the perfect venue for the 15th Ada Lovelace Day.

This year marks the 100th anniversary of the first presentation by a woman at the Royal Institution – archaeologist Joan Evans, an expert in English jewellery from the 5th to the 17th centuries, gave a Discourse in June 1923 titled “Jewels of the Renaissance” – making it the perfect setting for the 15th celebration of Ada Lovelace Day!

Ada Lovelace herself attended lectures at the Ri, in the very theatre where ALD 2023 will take place this year – the same theatre where Michael Faraday first demonstrated many of his discoveries. The Ri is still home to his original laboratory and his collection of notes, preserved as part of its internationally significant collection and on display in the Ri’s free museum.

Lovelace was keen to receive tutelage from Faraday, writing to him several times, and their letters can still be seen today at the Institution of Engineering and Technology (IET). Sadly for Lovelace, Faraday declined her request.

The Ri is not just a home for science where everyone is welcome; it continues to champion the known and unknown contributions of women to science, and has hosted many amazing female speakers.

And if that’s not enough inspiration, the Ri has this compilation of 10 mind-blowing science talks by women.

Notable members and fellows of the Royal Institution include Kathleen Lonsdale, a pioneering scientist best known for her groundbreaking work in X-ray crystallography, who worked at the Ri at numerous points throughout her career; Angela Burdett-Coutts, “the wealthiest woman in England after Queen Victoria” and a campaigner for children’s education, whose membership application was signed by Michael Faraday; and Agnes Clerke, renowned astronomer and author of A Popular History of Astronomy during the Nineteenth Century.

If podcasts are more your speed, then try these two episodes: Tackling climate change with innovation features the Ri’s Director Katherine Mathieson in conversation with Alyssa Gilbert, the Director of Undaunted, a partnership with Imperial College London that supports climate-positive startups; and How did patriarchy develop across the world?, in which award-winning science journalist Angela Saini and former Australian Prime Minister Julia Gillard discussed the roots of gendered oppression.

The Ri has long championed women in science and it’s an honour for Ada Lovelace Day to be returning to the venue for the third time with our science cabaret, featuring some of the smartest and most innovative women in STEM from across the UK: Prof Jennifer Rohn, urologist; Dr Anjana Khatwa, Earth scientist and presenter; Dr Sophie Carr, mathematician; Dr Aarathi Prasad, writer, broadcaster, and geneticist; Dr Azza Eltraify, senior software engineer; Dr Antonia Pontiki, biomedical engineer; Rosie Curran Crawley, presenter.

Join us, in person or online, on Tuesday 10 October for seven fascinating talks that will entertain, inform and surprise you.

Ada Lovelace Day is back on 10 October 2023!

Thanks to generous support from The Royal Institution, Stylist, Redgate, The Information Lab’s Data School and dxw, Ada Lovelace Day Live 2023 will now go ahead on the evening of Tuesday 10 October.

The Royal Institution will be hosting ALD Live as part of their autumn program of public events, and Stylist have come on board as our media partner, providing outreach and promotional support.

“We’re delighted to be hosting this year’s Ada Lovelace event at the Royal Institution,” said Katherine Mathieson, Director of the Royal Institution. “We’re looking forward to welcoming a wide range of people on the day, in-person and online, to meet and celebrate some inspirational women working in computing and technology. It’s a perfect fit for our mission of bringing people and scientists together to celebrate their interest and passion for science. We’re a home for science and everyone is welcome.”

“When we heard that Ada Lovelace Day was under threat we wanted to help save it,” said Lisa Smosarski, Editorial Director at Stylist. “As a champion of gender equality, we had always admired the day as a truly authentic way of championing women in STEM and for showcasing the pioneering work of women like Ada. Considering women are still hugely underrepresented in this field, this day is still very important and much needed. By adding the Stylist brand network and influential audience, we’re thrilled the day will run in 2023 and for many more years to come.”

Over the next twelve months, we will be working hard to build an Ada Lovelace Day that can serve women and girls in STEM long into the future. As part of that work, we are relaunching our newsletter on Substack, where we’ll keep you up to date on Ada Lovelace Day news, as well as publishing profiles of women in STEM and highlighting books and podcasts by and about women in STEM. We will also have a membership option for those who would like to support us financially, so sign up now and pledge your support. We do still need additional sponsors, so if your company wants to get involved, drop Suw Charman-Anderson a line.

We are delighted to be back and we hope you’ll join us at the Royal Institution in October for a fascinating and entertaining evening featuring seven women in STEM who will share their experiences, insights and expertise and whose stories we hope will inspire and empower the next generation.

Do you have an ecological #FieldworkFail you’d like to share?

We are looking for ecologists to share their experiences of working in the field as part of our Fieldwork short comedy film project. If you’d like to take part, you can either do so by answering some or all of the questions in the form below, or you can arrange an hour-long interview with Suw – just pick a convenient date and time via Calendly.

Introducing Fieldwork

Everything you need to know about our latest creative project.

If you’ve ever been on a science field trip, you’ll know that, in amongst the experiments and data gathering, things can go hilariously wrong. The longer you spend in the field, the more likely you are to have had animals carry off your equipment, experienced unexpected malfunctions, or seen creatures other than your target species appearing in your camera traps.

We are collecting examples of #fieldworkfails from ecologists, particularly in the UK, and listening to their experiences of working in the field to inform the development of a comedy drama. The first output will be a short film script, which Suw Charman-Anderson will be writing, but we may also use data collected as the basis for other outputs, including this newsletter.

Our aims are both to entertain and to increase awareness of ecology as a subject and as a career path. Television and film can have a powerful effect on people’s perceptions of a subject. The X-Files inspired a generation of women to become interested in science, technology, engineering and maths with what is now known as The Scully Effect. Bones encouraged women into science, as has Black Panther’s Shuri.

Can we do the same for ecology?

Our new Fieldwork newsletter

I’m going to be chronicling the entire process of writing and making the Fieldwork short film here on the ALD blog and also in a Substack newsletter. I’ll talk about my background research, possibly share some snippets from my interviewees, and explore life in a field station.

I’ll also be sharing my journey into the world of comedy writing, delving into the complexities (or simplicities) of character, structure and joke writing. I dabbled in stand-up comedy many years ago, so this isn’t entirely new to me, and I’m very excited by the idea of re-finding my funny.

If you’re interested in comedy writing, then this project is very definitely for you.

I’m an ecologist! Can I take part?

Yes, you can! Just drop me a line and I’ll let you know when our online survey and interview schedule is ready.


Fieldwork is part of the International Collaboration on Mycorrhizal Ecological Traits, organised by the University of York, University of Edinburgh, Dartmouth College and Ada Lovelace Day. It is funded by the Natural Environment Research Council (NERC), Grant Number: NE/S008543/1.


Subscribe to Fieldwork on Substack

If you’d like to follow this project, you can subscribe to Word Count, Suw’s creative writing newsletter on Substack, a hybrid email newsletter/blog publishing service which we are using as part of our program of public outreach.
Word Count has four sections: a weekly writing newsletter, plus Essays, Fiction and Fieldwork. When you subscribe, you’ll be able to control exactly which emails you receive, so if you only want news about Fieldwork, you can unsubscribe from the other three sections.
Subscribing to a single section in Substack can be a little bit fiddly, but you only have to do it once.
  1. Visit https://wordcounting.substack.com/s/fieldwork.
  2. Put your email address in the box and click Subscribe.
  3. Pick your subscription plan. A free plan is available.
  4. Skip the recommendations by clicking Maybe Later, or choose which additional newsletters look interesting to you.
  5. Select which newsletter sections you’d like to receive, e.g. untick Fiction and Essays if you do not wish to receive those emails.
  6. Click Continue, then either share to Twitter or untick the box and continue.
  7. If you’re not already a Substack member, create a sign in.
  8. Visit your settings at https://wordcounting.substack.com/account and unselect Word Count: Mews, News & Reviews if you do not wish to receive Suw’s weekly writing newsletter.

You can also follow us on Twitter at either @suw or @iCOMET_York.

TrustyAI – an open source project looking to solve AI’s bias

By Rebecca Whitworth, Manager, Software Engineering at Red Hat.

Artificial intelligence (AI) is an exciting area of technical development. As the tools and methodologies advance, many organisations are looking to use AI to improve business efficiencies, bring innovation to customers faster, gain actionable market insights and more.

However, the rush to put AI in place without always knowing what it can be used for, or how to use it, can lead to problems with the systems and the data itself. We have heard many stories of when AI makes the “wrong” decision due to built-in biases, and in some cases, the outcome can be life or death.

This is a challenge that open source project TrustyAI – built on Red Hat OpenShift and Kogito – is looking to address.

AI gone wrong

AI is essentially a set of priorities and decisions a system has been programmed to follow. And because humans are responsible for that programming, our flaws become the system’s flaws. This happens in two main ways. One is cognitive bias from the person, or people, who train the model, which builds bias into the AI’s logic. The second is a lack of data, or a lack of diverse data. For example, we’ve seen cases of datasets that are dominated by white males, meaning AI trained on this data filters out anyone who doesn’t fit those characteristics.

There is a growing amount of research about the transfer of bias from humans to AI. As AI becomes more prevalent, it is being used to decide critical life moments like whether you get into college, get a mortgage, qualify for a medical operation, and even for determining a prison sentence, which makes the negative outcomes easier to spot.

The data science community is very aware of the need to stop systems perpetuating bias, and all the big tech companies are now looking at this. IBM’s AI Fairness 360, Google’s “What-If Tool” and Amazon SageMaker Clarify all go to show how significant and competitive this field has become.

Identifying the problem

If you aren’t proactively looking for bias, you may not notice if the AI you are using for hiring is selecting people based on their gender and name instead of their technical skills and experience, for example. There hasn’t been a simple and reliable way to test the balance and accuracy of an AI model, so the first time a business knows it has an issue is when it gets some negative PR or someone brings a case against it. This black box opaqueness has also become a legal risk with the advent of the General Data Protection Regulation (GDPR), where anybody in Europe has the right to ask for their data and how it is being used.
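A basic first check is possible even without specialist tooling. The sketch below (plain Python with made-up hiring data, not TrustyAI’s API) compares selection rates between groups and computes a disparate impact ratio; in hiring audits, a ratio below roughly 0.8 (the “four-fifths rule”) is commonly treated as a red flag:

```python
# Hypothetical interview decisions: 1 = invited to interview, 0 = rejected.
# The data and the 0.8 threshold are illustrative, not from any real system.
decisions = [
    ("female", 0), ("female", 0), ("female", 1), ("female", 0),
    ("male", 1), ("male", 1), ("male", 0), ("male", 1),
]

def selection_rate(group):
    """Fraction of applicants in `group` who received a positive outcome."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_f = selection_rate("female")  # 0.25
rate_m = selection_rate("male")    # 0.75

# Disparate impact ratio: values below ~0.8 suggest the system may be
# selecting on group membership rather than skills and experience.
ratio = rate_f / rate_m
print(f"female rate {rate_f:.2f}, male rate {rate_m:.2f}, ratio {ratio:.2f}")
```

A check like this only flags a disparity; it cannot tell you *why* the model behaves that way, which is where introspection tools come in.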

TrustyAI is a standalone piece of software that plugs into any AI application, regardless of what environment or technology it’s running on. It introspects the system, looking at all the various inputs, and maps how these influence the outputs. That analysis is served up in easy-to-visualise charts, which make it clearer if any biases exist. The user can alter the values to see the effect of different criteria.

For example, with a rejected mortgage application, you’d expect a different outcome if the applicant had a higher credit rating or earned more money. But if TrustyAI finds that changing only the applicant’s ethnicity or postcode produces a different outcome, then it’s clear the system has bias. We call this Explainable AI – XAI for short.
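The postcode example can be sketched as a counterfactual probe. The “model” below is a stand-in (any callable mapping an applicant to a decision, not TrustyAI code); the point is that only the protected attribute changes between the two calls, so a changed decision can only come from that attribute:

```python
# A stand-in "model": approves a mortgage based on income and credit score,
# but also penalises one postcode area -- exactly the kind of proxy bias
# a counterfactual probe is designed to expose. Entirely hypothetical.
def model(applicant):
    score = applicant["income"] / 10_000 + applicant["credit"] / 100
    if applicant["postcode"].startswith("E"):  # proxy for a protected group
        score -= 5
    return "approved" if score >= 10 else "rejected"

def counterfactual_probe(applicant, attribute, alternative):
    """Flip a single attribute and report whether the decision changes."""
    original = model(applicant)
    flipped = model({**applicant, attribute: alternative})
    return original, flipped, original != flipped

applicant = {"income": 60_000, "credit": 720, "postcode": "E1"}
orig, new, changed = counterfactual_probe(applicant, "postcode", "SW1")
print(orig, new, changed)  # a flipped decision with all else equal signals bias
```

Here the probe reports a rejection that becomes an approval when only the postcode changes, which is the signal that the model is using a proxy for a protected characteristic rather than financial merit.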

TrustyAI in action

TrustyAI has been built with Kogito (an open source, end-to-end business process automation technology) and Red Hat OpenShift. So it is cloud-native and can be deployed to any system running on any environment. It is written in Java and integrates with Python, the language data scientists generally use, so a user only needs to open up a notebook and apply library functions to their datasets. There’s no reconfiguration required.

Open source has been critical to TrustyAI. Tapping into open communities has enabled us to develop at a faster pace. For example, KIE (Knowledge Is Everything), an open source community for business automation, has been a brilliant partner. Its repository hosts the TrustyAI code and its members continually perform testing, offer fixes and suggestions and share their datasets. Without this collaboration (our in-house project team is myself, three developers and one architect) progress would have been much slower.

We’re really excited about the impact that TrustyAI can have. Reputational management can make or break a business, and there’s a real clamour to mitigate the risks of AI. Understandably, businesses don’t want to slow down their adoption of AI, and TrustyAI helps give them the confidence to push forward, while also prioritising transparency and accountability.

As a project team, we are looking at how to apply TrustyAI across Red Hat’s portfolio to add strategic value — both internally and for our customers — while continuing our research on all things AI, from cognitive bias and ethics to developments in data science and algorithmic best practices.

About Red Hat

Red Hat is the world’s leading provider of enterprise open source software solutions and services, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers develop cloud-native applications, integrate existing and new IT, and automate, secure, and manage complex environments. Follow Red Hat on Twitter: @RedHat