Ada Lovelace Day is back on 10 October 2023!

Thanks to generous support from The Royal Institution, Stylist, Redgate, The Information Lab’s Data School and dxw, Ada Lovelace Day Live 2023 will now go ahead on the evening of Tuesday 10 October.

The Royal Institution will be hosting ALD Live as part of their autumn program of public events, and Stylist have come on board as our media partner, providing outreach and promotional support.

“We’re delighted to be hosting this year’s Ada Lovelace event at the Royal Institution,” said Katherine Mathieson, Director of the Royal Institution. “We’re looking forward to welcoming a wide range of people on the day, in-person and online, to meet and celebrate some inspirational women working in computing and technology. It’s a perfect fit for our mission of bringing people and scientists together to celebrate their interest and passion for science. We’re a home for science and everyone is welcome.”

“When we heard that Ada Lovelace Day was under threat we wanted to help save it,” said Lisa Smosarski, Editorial Director at Stylist. “As a champion of gender equality, we had always admired the day as a truly authentic way of championing women in STEM and for showcasing the pioneering work of women like Ada. Considering women are still hugely underrepresented in this field, this day is still very important and much needed. By adding the Stylist brand network and influential audience, we’re thrilled the day will run in 2023 and for many more years to come.”

Over the next twelve months, we will be working hard to build an Ada Lovelace Day that can serve women and girls in STEM long into the future. As part of that work, we are relaunching our newsletter on Substack, where we’ll keep you up to date on Ada Lovelace Day news, as well as publishing profiles of women in STEM and highlighting books and podcasts by and about women in STEM. We will also have a membership option for those who would like to support us financially, so sign up now and pledge your support. We do still need additional sponsors, so if your company wants to get involved, drop Suw Charman-Anderson a line.

We are delighted to be back and we hope you’ll join us at the Royal Institution in October for a fascinating and entertaining evening featuring seven women in STEM who will share their experiences, insights and expertise and whose stories we hope will inspire and empower the next generation.

Do you have an ecological #FieldworkFail you’d like to share?

We are looking for ecologists to share their experiences of working in the field as part of our Fieldwork short comedy film project. If you’d like to take part, you can either answer some or all of the questions in the form below, or arrange an hour-long interview with Suw – just pick a convenient date and time via Calendly.

Introducing Fieldwork

Everything you need to know about our latest creative project.

If you’ve ever been on a science field trip, you’ll know that, in amongst the experiments and data gathering, things can go hilariously wrong. The longer you spend in the field, the more likely you are to have had animals carry off your equipment, experienced unexpected malfunctions, or seen creatures other than your target species appearing in your camera traps.

We are collecting examples of #fieldworkfails from ecologists, particularly in the UK, and listening to their experiences of working in the field to inform the development of a comedy drama. The first output will be a short film script, which Suw Charman-Anderson will be writing, but we may also use the data collected as the basis for other outputs, including this newsletter.

Our aims are both to entertain and to increase awareness of ecology as a subject and as a career path. Television and film can have a powerful effect on people’s perceptions of a subject. The X-Files inspired a generation of women to become interested in science, technology, engineering and maths with what is now known as The Scully Effect. Bones encouraged women into science, as has Black Panther’s Shuri.

Can we do the same for ecology?

Our new Fieldwork newsletter

I’m going to be chronicling the entire process of writing and making the Fieldwork short film here on the ALD blog and also in a Substack newsletter. I’ll talk about my background research, possibly share some snippets from my interviewees, and explore life in a field station.

I’ll also be sharing my journey into the world of comedy writing, delving into the complexities (or simplicities) of character, structure and joke writing. I dabbled in stand-up comedy many years ago, so this isn’t entirely new to me, and I’m very excited by the idea of re-finding my funny.

If you’re interested in comedy writing, then this project is very definitely for you.

I’m an ecologist! Can I take part?

Yes, you can! Just drop me a line and I’ll let you know when our online survey and interview schedule is ready.


Fieldwork is part of the International Collaboration on Mycorrhizal Ecological Traits, organised by the University of York, University of Edinburgh, Dartmouth College and Ada Lovelace Day. It is funded by the Natural Environment Research Council (NERC), Grant Number: NE/S008543/1.


Subscribe to Fieldwork on Substack

If you’d like to follow this project, you can subscribe to Word Count, Suw’s creative writing newsletter on Substack, a hybrid email newsletter/blog publishing service which we are using as part of our program of public outreach.
Word Count has four sections: a weekly writing newsletter, plus Essays, Fiction and Fieldwork. When you subscribe, you’ll be able to control exactly which emails you receive, so if you only want news about Fieldwork, you can unsubscribe from the other three sections.
Subscribing to a single section in Substack can be a little bit fiddly, but you only have to do it once.
  1. Visit https://wordcounting.substack.com/s/fieldwork.
  2. Put your email address in the box and click Subscribe.
  3. Pick your subscription plan. A free plan is available.
  4. Skip the recommendations by clicking Maybe Later, or choose which additional newsletters look interesting to you.
  6. Select which newsletter sections you’d like to receive, e.g. untick Fiction and Essays if you do not wish to receive those emails.
  6. Click Continue, then either share to Twitter or untick the box and continue.
  7. If you’re not already a Substack member, create a sign in.
  8. Visit your settings at https://wordcounting.substack.com/account and unselect Word Count: Mews, News & Reviews if you do not wish to receive Suw’s weekly writing newsletter.

You can also follow us on Twitter at either @suw or @iCOMET_York.

TrustyAI – an open source project looking to solve AI’s bias problem

By Rebecca Whitworth, Manager, Software Engineering at Red Hat.

Artificial intelligence (AI) is an exciting area of technical development. As the tools and methodologies advance, many organisations are looking to use AI to improve business efficiencies, bring innovation to customers faster, gain actionable market insights and more.

However, the rush to put AI in place without always knowing what it can be used for, or how to use it, can lead to problems with the systems and the data itself. We have heard many stories of AI making the “wrong” decision because of built-in biases, and in some cases the outcome can be a matter of life or death.

This is a challenge that open source project TrustyAI – built on Red Hat OpenShift and Kogito – is looking to address.

AI gone wrong

AI is essentially a set of priorities and decisions a system has been programmed to follow. And because humans are responsible for that programming, our flaws become the system’s flaws. This happens in two main ways. The first is cognitive bias: the person, or people, who train the data model can build their own biases into the AI’s logic. The second is a lack of data, or a lack of diverse data. For example, we’ve seen cases of datasets dominated by white males, meaning that AI trained on this data filters out anyone who doesn’t fit those characteristics.

There is a growing amount of research on the transfer of bias from humans to AI. As AI becomes more prevalent, it is being used to decide critical life moments: whether you get into college, get a mortgage, qualify for a medical operation, or even what prison sentence you receive. The higher the stakes, the easier the negative outcomes are to spot.

The data science community is very aware of the need to stop systems perpetuating bias, and all the big tech companies are now looking at this. IBM’s AI Fairness 360, Google’s “What-If Tool” and Amazon SageMaker Clarify all go to show how significant and competitive this field has become.

Identifying the problem

If you aren’t proactively looking for bias, you may not notice that the AI you are using for hiring is selecting people based on their gender and name instead of their technical skills and experience, for example. There hasn’t been a simple and reliable way to test the balance and accuracy of an AI model, so the first time a business knows it has an issue is when it gets some negative PR or someone brings a case against it. This black-box opaqueness has also become a legal risk with the advent of the General Data Protection Regulation (GDPR), which gives anybody in Europe the right to ask what data an organisation holds about them and how it is being used.
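
One way to picture the kind of check that has been missing is a simple comparison of selection rates across groups. The sketch below is a minimal, hypothetical Python example, not TrustyAI’s actual interface: the model object, its predict() method and the column names are assumptions made purely for illustration.

    import pandas as pd

    def selection_rates(model, applicants: pd.DataFrame, sensitive_col: str):
        # Compare how often the model selects candidates from each group.
        # `model` is any classifier with a scikit-learn-style predict() method;
        # `sensitive_col` names a protected attribute such as "gender".
        features = applicants.drop(columns=[sensitive_col])
        selected = model.predict(features)
        summary = pd.DataFrame({sensitive_col: applicants[sensitive_col].values,
                                "selected": selected})
        # A large gap between groups' selection rates is a red flag worth investigating.
        return summary.groupby(sensitive_col)["selected"].mean()

A selection rate of, say, 0.45 for one group and 0.12 for another would not prove bias on its own, but it tells a business where to look before the negative PR or the legal case does.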

TrustyAI is a standalone piece of software that plugs into any AI application, regardless of what environment or technology it’s running on. It introspects the system, looking at all the various inputs, and maps how these influence the outputs. That analysis is served up in easy-to-visualise charts, which make it clearer if any biases exist. The user can alter the values to see the effect of different criteria.

For example, with a rejected mortgage application, you’d expect a different outcome if the applicant had a higher credit rating or earned more money. But if TrustyAI finds that by only changing the applicant’s ethnicity or postcode you arrive at a different outcome, then it’s clear the system has bias. We call this Explainable AI — XAI for short.
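
A rough sketch of that counterfactual test might look like the following Python. It is only an illustration of the idea, assuming a predict_fn callable and dictionary-style applicant records; the field names are hypothetical and this is not TrustyAI’s API.

    def counterfactual_flips(predict_fn, applicant: dict, field: str, alternatives):
        # predict_fn maps one applicant record to a decision, e.g. "approved" or "rejected".
        # Returns the alternative values of `field` that change the model's decision.
        original = predict_fn(applicant)
        flips = []
        for value in alternatives:
            variant = dict(applicant, **{field: value})  # change only this one field
            if predict_fn(variant) != original:
                flips.append(value)
        return flips

For a mortgage model, calling counterfactual_flips with field="ethnicity" or field="postcode" and getting back anything other than an empty list is exactly the signal described above: the decision depends on an attribute it should not depend on.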

TrustyAI in action

TrustyAI has been built with Kogito (an open source, end-to-end business process automation technology) and Red Hat OpenShift. So it is cloud-native and can be deployed to any system running on any environment. It is written in Java and integrates with Python, the language data scientists generally use, so a user only needs to open up a notebook and apply library functions to their datasets. There’s no reconfiguration required.

Open source has been critical to TrustyAI. Tapping into open communities has enabled us to develop at a faster pace. For example, KIE (Knowledge Is Everything), an open source community for business automation, has been a brilliant partner. Its repository hosts the TrustyAI code and its members continually perform testing, offer fixes and suggestions and share their datasets. Without this collaboration (our in-house project team is myself, three developers and one architect) progress would have been much slower.

We’re really excited about the impact that TrustyAI can have. Reputational management can make or break a business, and there’s a real clamour to mitigate the risks of AI. Understandably, businesses don’t want to slow down their adoption of AI, and TrustyAI helps give them the confidence to push forward, while also prioritising transparency and accountability.

As a project team, we are looking at how to apply TrustyAI across Red Hat’s portfolio to add strategic value — both internally and for our customers — while continuing our research on all things AI, from cognitive bias and ethics to developments in data science and algorithmic best practices.

About Red Hat

Red Hat is the world’s leading provider of enterprise open source software solutions and services, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers develop cloud-native applications, integrate existing and new IT, and automate, secure, and manage complex environments. Follow Red Hat on Twitter: @RedHat

Ada Lovelace: A growing legacy

While Ada Lovelace lives on as an inspiration to women working and researching in STEM subjects through the activities of Ada Lovelace Day, her impact can also be seen in the literature. Using tools available from Digital Science, Simon Linacre looks at the ever-increasing amount of research surrounding one of science’s most fascinating figures.

Firstly, a confession: before starting work for Digital Science in early 2022, I didn’t know who Ada Lovelace was. I knew the name, and may have known she had been a scientist of some description, but I had no idea what she had achieved or when she had achieved it. Put it down to my preference for the arts in my education, a patriarchal society or just sheer ignorance, but I simply didn’t appreciate what an inspirational figure she was. Of course, having worked with Digital Science in its support of Ada Lovelace Day, I now know a lot more, but I wanted to dig deeper. As someone who has spent most of their career in scholarly communications, I wanted to find out what her legacy looks like in the current literature.

At first glance, it is pretty significant. According to the Dimensions linked database – which covers 131 million publications – there are over 15,000 mentions of ‘Ada Lovelace’ in those publications, as well as mentions in 74 policy documents, 29 patents and even eight grant applications. Those doing the mentioning include some major institutions in scientific research, such as Oxford, Cambridge and University College London.

Having said that, most publications originated in the US and not the UK, which shows that her influence has not been limited to her country of birth.

This wider influence is also in evidence when it comes to research categories. While it is unsurprising that the most common research category with mentions of her name is Information and Computing Sciences (716 mentions), not far behind are Philosophy and Religious Studies (486), Language, Communication and Culture (271) and Human Society (240). Lovelace has clearly had quite a broad spectrum of influence, far outside of her ‘home’ of computing sciences.

And the influence appears to be growing. As the chart shows, despite a lull around 2018, interest in Ada Lovelace has grown steadily in recent years, with the total output of publications mentioning her name accelerating more recently.

Graph showing the increasing number of mentions of Ada Lovelace in academic publications

Perhaps one reason for her popularity is that a high proportion of the articles mentioning her are open access: 79% of them are free to read online.

Focusing again on the present day, Ada Lovelace has also made something of a splash online. When we look at Altmetric data, we see that mentions trackable on the internet have grown rapidly since 2016, the year after her bicentenary, and especially in the last couple of years. In 2021 alone (see below) there were over 8,000 recorded instances, including 6,662 Twitter mentions, 1,028 Wikipedia mentions and 312 news items.

Bar graph showing increasing mentions of Ada Lovelace in Altmetric's data.

So, who was Ada Lovelace? These days, she is best known for being the first person to publish what would today be called a computer program.

Throughout her childhood she was fascinated by machines, and at the age of 17, she was introduced to the engineer and inventor, Charles Babbage, and his general purpose mechanical computer, the Analytical Engine. Lovelace and Babbage became life-long friends, and Lovelace came to understand the Analytical Engine in depth.

When Italian mathematician Luigi Menabrea published an article about the Analytical Engine in French, she translated it into English, correcting some errors as she went. Babbage suggested that she add her own footnotes, which she did, tripling the length of the paper.

In these footnotes, Lovelace wrote a set of instructions for the calculation of Bernoulli Numbers. Although Babbage had written fragments before, her more elaborate program was complete and the first to be published. She also speculated on the future capabilities of the Analytical Engine, suggesting that, given the right instructions, it could be used to create original pieces of music and works of art. She recognised the enormous potential of machines like the Analytical Engine, and her vision bears a striking resemblance to modern computer science.
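
For a modern flavour of the kind of calculation Note G set out, here is a short illustrative Python sketch that computes Bernoulli numbers from the standard recurrence; it is not a reconstruction of Lovelace’s notation or of the Engine’s sequence of operations.

    from fractions import Fraction
    from math import comb

    def bernoulli_numbers(n):
        # Returns B_0 .. B_n as exact fractions, using the recurrence
        # sum_{k=0}^{m} C(m+1, k) * B_k = 0 for m >= 1, with B_0 = 1.
        b = [Fraction(1)]
        for m in range(1, n + 1):
            s = sum(comb(m + 1, k) * b[k] for k in range(m))
            b.append(-s / (m + 1))
        return b

    print(bernoulli_numbers(8))  # e.g. B_1 = -1/2, B_2 = 1/6, B_4 = -1/30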

Lovelace’s work was truly ground-breaking and her achievements become even more impressive when one remembers that she was working from first principles with only Babbage’s designs and descriptions to guide her and no working computer to tinker with. Yet, the importance of her paper was not recognised until Alan Turing’s work on the first modern computers in the 1940s.

As we can see from the data, Lovelace is now a widely cited and mentioned mathematician who is increasingly influencing a broad range of STEM and social science areas. Not only does the legacy of Ada Lovelace live on, it has never been bigger or more important.

About Digital Science

Digital Science is a technology company serving the needs of scientific research. We offer a range of scientific technology and content solutions that help make scientific research more efficient. Whether at the bench or in a research setting, our aim is to help to simplify workflows and change the way science is done. We believe passionately that tomorrow’s research will be different – and better – than today’s. Follow Digital Science on Twitter: @digitalsci