AI Career Spotlight: Peter Wooldridge
In this career spotlight series, we showcase the career paths, daily work, and impact of people working in AI. Whether you’re an aspiring researcher, an engineer, or simply interested in AI, these stories will give you a firsthand look at the possibilities ahead of you.
Today we talk with Peter Wooldridge, Director of Machine Learning at Monolith.
My advice to that would be: learn how to solve problems. Problem solving transcends technology and is the key meta-skill that enables adaptability and longevity in, I think, any field.
Peter’s journey into AI began in Software Engineering, where he built a solid foundation before using his maths background to transition towards AI.
Tell us a bit about your job
As Director of Machine Learning at Monolith, I lead three specialised teams: ML Engineering, Forward Deployed, and Research Science. My role is to ensure these teams have what they need to deliver, along with clear direction on short- and medium-term goals.
On a good day, I try to start with a simple plan: “what one key thing would I like to have done by the end of the day?” With numerous meetings and 1:1s, this focused approach helps me stay realistic and productive.
Our teams work on distinct challenges: ML Engineering builds models for our SaaS platform, Forward Deployed collaborates with OEM customers to implement machine learning in their testing processes, and Research tackles some of industry’s hardest unsolved problems that could become new product features.
One project I’m particularly proud of is our anomaly detection system, which began life as a simple customer experiment but evolved into one of our platform’s flagship features. What started as a scrappy proof-of-concept with a single automotive client now helps engineers across multiple industries instantly pinpoint hidden irregularities across hundreds of data channels.
Beyond day-to-day execution, my role has a strategic element — planning 3–6 months ahead, determining which initiatives to pursue, and establishing success metrics. This involves integrating signals from across the business and market into a coherent strategy.
I work with technical teams, participate in executive meetings, and speak directly with customers. Customer meetings are one of my favourite aspects of the job, as they give me a first-hand understanding of customers’ pain points, which strongly informs our planning.
How did you get into the field of AI? What excites you about working in AI?
I was lucky enough to start my career at IBM, joining their software development graduate scheme in 2010. Starting at a company like IBM offered two major advantages: exposure to incredibly varied work, and access to world experts in virtually any technical domain you might encounter.
After three years working in a very talented team, learning the foundations of what it means to build software in the real world, I saw an opportunity to join IBM’s Emerging Technologies Services team. I had only a cursory understanding of AI at that point, having taken some courses on SQL and big data. Back then, the industry was buzzing with excitement about big data and distributed computing platforms like Hadoop.
I was drawn to AI because it allowed me to move beyond pure software engineering, which never quite felt like my natural home. It was a great opportunity to fuse my then newly developed software engineering skills with my maths background.
In 2013, I transitioned into a Data Science Consultant role that opened up opportunities to travel all over Europe and work with a variety of companies across industries. I typically collaborated with IBM consulting teams on diverse AI projects, ranging from large-scale ETL pipelines to distributed ML models built on massive datasets. Back then I was coding in a mixture of Python, R and SQL and sometimes Java.
The rapidly evolving nature of AI makes it both an energising and sometimes challenging field to work in. Although AI is all about automation and streamlining, what keeps me in the field is actually the people. Working alongside really talented individuals is what makes it fun for me.
Can you talk about some of the career choices you’ve made along the way?
One of the big transitions in my career was moving from an IC (individual contributor) role into leadership. When I first made the move, it was tough. I felt as though my coding was my currency, and I was holding on to it tightly. In the initial stages of management, I tried to juggle both coding and leading. This can work for a while, but as a team scales, juggling the two means doing neither particularly well, so I had to choose. Once I’d made peace with the transition, I felt liberated to focus on leadership.
One of the most valuable things I’ve learned in leadership is how to handle unclear, vague requests. Initially, this lack of clarity frustrated me. But I discovered that true leadership is about seeking clarity yourself and playing it back to help others. I’ve found this is best done by asking better questions and reflecting people’s thoughts back to them to uncover what’s really in their head. This approach transformed not just how I led, but how I solved problems entirely.
I believe the higher-level perspective gained in leadership also makes me a better developer, when I do code. Understanding the “why” behind technical decisions and having the confidence to seek clarity when requirements are vague are skills that would enhance any technical work I do. The best technical solutions come from understanding the problem deeply rather than just implementing what’s asked for.
How did you develop the leadership skills you need for your role?
I don’t think I’ve ever really done any formal training on this. I like to observe people who I see as strong leaders and try to understand what it is that is resonating with me.
Something I admire in strong leaders is their ability to communicate clearly. It shows in little details, like the way Steve Jobs used pauses for effect on stage. Communication is such an important skill to develop as a leader in tech, especially communicating technical concepts to non-technical audiences.
Communication skills can definitely be learned and developed. I’ve found the most effective method is simply recording yourself (quite easy these days, with so many meetings recorded) and noting down what you need to work on. Alternatively, share the recording with someone who’s a good communicator and ask for brutally honest feedback. It’s painful in the beginning, but there may be certain tics or habits that only others can spot, and it’s worth finding them out.
I think the other thing about leading, specifically in a startup ecosystem, is learning to make decisions under uncertainty. The growth of the company is directly correlated with the speed with which decisions are made. Very rarely are choices black and white, but the worst of both worlds is usually trying to hedge your bets and half-pursue two options, which results in mediocre results across the board. Deciding at a leadership level is often about clarifying all the things you won’t be doing in order to stay on track to your goals. Being able to exclude things is a highly valuable way to make a team more effective.
I’m not sure there’s a playbook for improving this — it comes from experience — but I think decisiveness can start with really small behavioural patterns. For example, when someone asks, “where do you want to go for dinner tonight?”, we have agency in how we respond. We can say “I don’t mind, what do you feel like?”, or we can make a proposal: “let’s go to this Italian place.” These small behavioural patterns are what eventually translate to the bigger things.
What’s your best piece of advice for anyone early on in their AI career?
One fundamental challenge people face today, more than ever, is how to stay relevant in a rapidly changing field like AI. How do you keep being useful at the pace AI is evolving?
My advice to that would be: learn how to solve problems. Problem solving transcends technology and is the key meta-skill that enables adaptability and longevity in, I think, any field.
That doesn’t mean you shouldn’t learn Python, but thinking about problems from first principles means that if (or when) programming language X falls out of fashion, you can stay agile and adapt.
From a programming perspective, this looks like applying algorithmic thinking or data structure knowledge (arrays, trees, graphs, hash tables etc.) to problems rather than obsessing over a specific syntax. This is where I’ve seen experienced developers able to switch programming languages super quickly — they’re not starting from scratch because they’re able to apply universal problem-solving principles, just with a different syntax.
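To make this concrete, here is a minimal sketch (my own illustrative example, not from the interview) of the kind of language-agnostic thinking Peter describes: recognising that counting occurrences is a hash-table problem. The same pattern translates almost line-for-line into Java, Go, or Rust — only the syntax changes.

```python
def word_frequencies(text: str) -> dict[str, int]:
    """Count how often each word appears — a hash-table pattern in any language."""
    counts: dict[str, int] = {}  # hash table: word -> number of occurrences
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_frequencies("the cat sat on the mat"))
# {'the': 2, 'cat': 1, 'sat': 1, 'on': 1, 'mat': 1}
```

A developer who sees “hash table” here, rather than Python’s `dict` syntax, can reach for `HashMap` in Java or `map` in Go without starting from scratch.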
This principle extends beyond programming languages to the field of AI itself. For example, I’d say it’s probably more valuable to learn the key linear algebra operations (matrix multiplication, eigenvectors, vector spaces, etc.) that underpin virtually every modern ML algorithm than it is to learn the specific architecture behind transformer networks. The former has a greater return on investment, since it aids understanding of almost any modern ML algorithm.
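As a small illustration of the point above (my own sketch, with arbitrary shapes and names), a dense neural-network layer — a building block of nearly every modern architecture, transformers included — reduces to exactly the matrix multiplication Peter highlights:

```python
import numpy as np

# A dense layer is just y = W @ x + b: one matrix multiply plus a bias.
# Shapes and values here are illustrative.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # weight matrix: 3 inputs -> 4 outputs
x = rng.standard_normal(3)       # input vector
b = np.zeros(4)                  # bias vector

y = W @ x + b                    # the core linear-algebra operation
print(y.shape)  # (4,)
```

Understand matrix multiplication once, and this same operation reappears everywhere: attention heads, embeddings, and convolutions (which can be expressed as matrix multiplies) all build on it.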
What are you excited for in the future of AI?
2025 is going to be the year of agentic applications. Agentic AI systems are those that can plan, reason, and take actions to accomplish specific goals with minimal human oversight. I’m expecting to see people creating agents for things that don’t really need them, but amidst the hype, I believe we’ll witness some truly transformative applications.
What excites me most is how these technologies will reshape knowledge work. For instance, in engineering testing — my field — I envision AI agents that can autonomously design experiments, analyse results, identify anomalies, and suggest next tests. This could compress months of testing cycles into days, fundamentally changing the product development workflow.
Another big breakthrough will come as lightweight model architectures enable more on-device intelligence. Running sophisticated models locally addresses many of the privacy concerns that currently limit adoption in sensitive industries. We’re starting to see this with tools like Ollama that run models on laptops, opening possibilities for AI applications in healthcare, finance, and other regulated sectors.
That said, there’s still significant work needed to make agentic apps reliable in production environments. The gap between impressive demos and reliable business applications is substantial. The teams that can build robust systems, align them with business processes, and create effective human-AI collaboration will be the ones delivering value beyond the initial hype.