AI Career Spotlight: Simon Fothergill

In this career spotlight series, we showcase the career paths, daily work, and impact of people working in AI. Whether you’re an aspiring researcher, an engineer, or simply interested in AI, these stories will give you an idea of the possibilities ahead of you.

Today, we speak with Simon Fothergill, Lead AI Engineer. As an Engineer, Simon focuses on building systems that not only work, but also contribute positively — balancing technical depth with responsible leadership, and bringing clarity to the complex.

“Leadership is… taking responsibility for replacing ambiguity with clarity, by uniting people in a common direction at the opportune moment.”

Simon’s journey started with an undergraduate degree project, and since then he’s built his career in the field of natural language processing.

Tell us a bit about your job

I am an engineer because I love to build things and get them working so they positively contribute to people’s lives.

The AI industry is currently like a sports arena full of lego, with new models and bricks being poured in through every door and window, as all sorts of people build, build, build!

Companies currently need AI engineers with:

  • Deep technical, hands-on skill to choose the right bricks and assemble them in the right way.

  • Technical leadership and product/project/team management skills to navigate this sea of bricks and surf the waves.

  • Breadth of experience across applications to cope with the uncertainty of this sea.

Whether I am consulting or in a full-time role, I look for positions that offer a mix of these opportunities and broaden my experience.

I enjoy working with domain experts who can inspire and refine product definitions and the corresponding AI models.

I develop evaluation frameworks that allow product owners to navigate project iterations.

I specify requirements for fairly and authentically collecting and annotating data, to a suitable level of saturation. This allows me to experiment with safe and transparent product iterations, one subdomain at a time. Building beta systems to collect high-quality data as soon as possible can sometimes be necessary.
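The kind of per-subdomain evaluation described above can be sketched in a few lines. This is a minimal, illustrative harness, not Simon's actual tooling: the function names, the toy keyword "model" and the accuracy metric are all assumptions made for the example.

```python
from collections import defaultdict

def evaluate_by_subdomain(examples, predict):
    """Score a prediction function per subdomain, so product owners can
    see where a model is ready and where more data is still needed.

    examples: iterable of (subdomain, input_text, expected_label) tuples
    predict:  callable mapping input_text -> predicted label
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for subdomain, text, expected in examples:
        total[subdomain] += 1
        if predict(text) == expected:
            correct[subdomain] += 1
    # Accuracy per subdomain: iterate one subdomain at a time
    return {d: correct[d] / total[d] for d in total}

# Illustrative usage with a toy keyword-based "model"
examples = [
    ("contracts", "termination clause", "legal"),
    ("contracts", "indemnity terms", "legal"),
    ("news", "match report", "sport"),
]
predict = lambda text: "legal" if "clause" in text or "terms" in text else "sport"
scores = evaluate_by_subdomain(examples, predict)
```

Reporting a score per subdomain, rather than one global number, is what lets a product owner decide which subdomain to iterate on next.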

Design diagrams and code reviews are great ways to improve as a software engineer, so I invest significant time in them.

How did you get into the field of AI? What excites you about working in AI?

I worked on an AI system for my final year undergraduate project and didn’t want to stop!

AI is the automated prediction of information. I love the magical contradiction of making things happen all by themselves. Predicted information, whether useful knowledge in itself or not, is the reflection of our world through the mirror that is the AI model. The better shaped and polished the mirror is, the more it reveals about the world and the better we can understand and contribute to it. Recently I’ve worked on modelling phenomena ranging from the risk profiles of legal proceedings, to the inspiration of journalism, to the empathy of psychotherapy.

Can you talk about some of the career choices you’ve made along the way?

I think the level of responsibility one wishes to take in developing AI systems is currently an important factor in career direction.

It takes a village… just because a company has more money doesn’t necessarily mean more happens there.

I have ended up on the natural language processing side of things more than the computer vision side, largely by chance, and I have built on that.

I’ve favoured applications over generic platforms, as I learnt about the parts of the world being modelled. I ‘build’ rather than ‘buy’, as soon as possible, for responsibility, IP generation, customisation and development speed. I keep models as simple as possible for as long as possible: Occam’s Razor.

I’ve always had the role of a hands-on, full-stack, AI software engineer, albeit with an increase in scope, management and leadership responsibilities. But my community name has changed around me, from “software engineering” to “data science” to “machine learning engineering” to “AI engineering” and even now back to “software engineering”.

How did you develop the leadership skills you need for your role?

Leadership is not the management of scope, schedule or ways of working, but taking responsibility for replacing ambiguity with clarity, by uniting people in a common direction at the opportune moment. Justifying the value chain can help, as can humility and serving, protecting, teaching and gently challenging those around you.

These ideas have come from Jesus, my parents, 1–1 conversations with mentors (previously highly successful businessmen), and plain common sense; I have reflected on them over time.

What’s your best piece of advice for anyone early on in their AI career?

Being clear is more important than being right.

What are you excited for in the future of AI?

The current “efficiency (gold) rush”, towards AI models that we hope reflect our world and automatically predict something of business value, can be worth the risks of inaccuracy, opaqueness and the uncanny valley, especially for non-critical, commonsense or human-in-the-loop applications.

However, exciting opportunities to mitigate these risks include:

  • Not using AI to solve problems that should not be solved by AI.

  • Not always deferring to a UPF (Ultra Processed Foundation model).

  • Recognising that current foundation models are not deep enough for some niche domains.

  • Judging when the iterative, empirical process of AI development is likely to fail to reveal a suitable model.

  • Remembering that algorithms do not ‘deserve’ to be trusted in the same way humans deserve to be trusted.

  • Guarding against over-use of co-pilots, which can weaken a human user’s ability to critique output.