Are AI engineers the future?
Each time a new technology comes along, jobs are eliminated and others are created. We get a glimpse of the latter and speak with AI engineering guru Aurimas Griciūnas to learn more.
Isn’t an “AI engineer” just an engineer who knows how to apply AI in their field?
It might seem like a fad – and let’s be honest, “AI” is being plastered everywhere at the moment – but it’s a fundamentally different role to that of a traditional software engineer, and an example of a role that is only growing in popularity.
Aurimas Griciūnas writes extensively about AI engineering in his newsletter SwirlAI. His content is among the best I’ve seen, going deep in a way that remains accessible to a non-technical audience; refreshing in the age of self-proclaimed AI gurus.
We talk to Aurimas about:
What is an AI engineer?
How can software engineers make the transition?
Common pitfalls
How to hire an AI engineer
What is an AI engineer?
“
From an implementation perspective, AI engineering is very similar to software engineering. From a systems perspective, it’s closer to machine learning engineering, because of all the nondeterminism in the system and the need to evaluate it, experiment with it, and evolve it.
When you look under the hood of an AI product, you’ll find nondeterminism wrapped up in complex systems. Where traditional software products are deterministic (“if X then Y”), AI products are nondeterministic (“if X then Y, Z, or K”).
Combine this with a system of agents in which large language models pass information to each other to reason, and you need a set of skills that neither software engineers nor machine learning engineers excel at: AI engineering.

“
Most software engineers are not trained for nondeterministic systems. They don't know how to evaluate them other than getting feedback from users (e.g., thumbs up or down and overall satisfaction). Because of this, an AI engineer needs a combination of different skill sets, which is why we need a new title to describe their role.
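To make this concrete, here is a minimal sketch of what evaluating a nondeterministic system beyond thumbs up/down can look like: run the same prompt many times and measure the pass rate against a programmatic check. Everything here is illustrative, an assumption for the example: `flaky_model` is a stub standing in for a real LLM call, and the grader is a deliberately simple exact match.

```python
import random

def flaky_model(prompt: str, seed: int) -> str:
    """Stand-in for an LLM call: same prompt, varying output."""
    random.seed(seed)
    return random.choice(["Paris", "Paris", "Lyon"])  # mostly right, sometimes not

def passes(output: str, expected: str) -> bool:
    """A simple programmatic grader; real systems often use rubrics or LLM judges."""
    return output.strip().lower() == expected.lower()

def eval_pass_rate(prompt: str, expected: str, n: int = 20) -> float:
    """Run the same prompt n times and report the fraction of passing outputs."""
    hits = sum(passes(flaky_model(prompt, seed=i), expected) for i in range(n))
    return hits / n

rate = eval_pass_rate("Capital of France?", "Paris")
print(f"pass rate: {rate:.0%}")
```

The point of the sketch: a deterministic test asserts a single output, whereas a nondeterministic one asserts a distribution, which is a genuinely different habit for most software engineers.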
Aurimas breaks this down as a combination of AI researcher, ML engineer and software engineer.

How can engineers transition into AI engineering?
One interesting topic we cover is who is best placed to move into the role of AI engineer from the three roles mentioned above.
“
For early-stage startups looking to hire their first AI engineer, the best background is that of a machine learning engineer. It’s easier to learn to code than it is to understand statistics. But it’s also important to note that they’ll still need to upskill.
They’ll need to get comfortable distributing the kind of compute that microservice architectures require, and become skilled at building monitoring and observability into these systems.
Regardless of the discipline, we’re starting to see specialisms converge. It’s ideal if an individual is exceptionally skilled in one area, but the ability to 1) upskill in adjacent areas and 2) use AI to scale themselves will be necessary for everyone.
The same logic applies to marketing, which we discussed in our previous post, “Should you replace your marketing team with AI?” – it led to the conclusion that the era of the “T-shaped marketer” is over. Change is coming here…
“
This is a new field that is continually evolving. But you’ll need to decide which areas to dive deeper into. An AI engineer doesn’t need to train machine learning models. However, they should stay up to date with the latest news and research. There are also boot camps that give engineers a solid foundation to learn the skills needed for this role.
Aurimas is running one of these himself. Check out his Maven course, End-to-End AI Engineering Bootcamp. Given how insightful his newsletter is – it’s taught me a bunch already – I’d highly recommend checking it out.
I’m keen to dive into what he’s seen when building these systems in practice.
Common pitfalls in AI engineering
“
You need to be prepared for imperfect systems. You’ll need to cluster input questions into those that are answerable and those that are not, and spend a significant amount of time improving the answers for each cluster.
It’s not enough to feed your data into an LLM and expect it to work. Even when things do work, you’ll need to be prepared for them to stop working. This is why setting up tests and observability is so essential. Dealing with non-deterministic systems is not easy.
Blanket comments about engineers being eliminated are more hype than substance. If anything, AI products require more, not less, human expertise than non-AI products.
But the above also explains why there are so many sloppy “AI products”. We’ve all used them: they look amazing during a demo, but when you use them in a real-life scenario, they fall over at the first hurdle.
Maybe we need a new mantra for AI products, as launching an “embarrassing MVP” that loses trust only leads to high churn (typical for many AI products).
“
Another common pitfall is not involving a business stakeholder early enough. Business people are usually more intelligent than engineers might think, and because of their experience, they’ll probably be better placed to write prompts.
If you’re trying to automate a part of their job away, simply asking them to review your prompts can lead to solid gains.
Hiring in AI engineering
I’m especially interested in getting Aurimas’ thoughts on how to interview AI engineers:
“
I would focus primarily on system design. I’d present candidates with a hypothetical scenario in which we collaboratively build a system and ask how they would approach it.
We’d start by identifying the problem and defining the metrics we’d optimise for, rather than jumping straight into building an LLM system. Ideally, a candidate should be able to articulate a story about how they would initiate and continuously develop such a system.
What I like about this is that it’s not simply an output. It’s testing the thought process in a collaborative environment.
Again, different discipline, but quite similar to the theory outlined in a previous post about GTM hiring: the value in an interview process isn’t the outputs but rather the thinking and inputs that go into creating those outputs.
“
After the “proof of concept” stage, I’d ask about how a candidate would proceed.
How would they define their evals?
When would they start implementation?
How would they evaluate the end-to-end system as well as the smaller pieces?
How would they control the costs of all this testing whilst guaranteeing that the system evolves in the right direction?
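As a rough illustration of how a candidate might answer the evals questions above, here is a toy two-stage pipeline (retrieve, then generate) scored both at the component level – did retrieval find the right document? – and end-to-end – does the final answer contain the expected fact? The documents, retriever, and test cases are invented for the example; real pipelines would call an actual retriever and model.

```python
# A sketch of component-level vs end-to-end evals for a hypothetical pipeline.

DOCS = {
    "d1": "Refunds are accepted within 30 days.",
    "d2": "Support is available 24/7.",
}

def toks(text: str) -> set[str]:
    """Crude tokenizer: lowercase words, basic punctuation stripped."""
    return set(text.lower().replace("?", "").replace(".", "").split())

def retrieve(question: str) -> str:
    """Toy retriever: pick the doc sharing the most words with the question."""
    return max(DOCS, key=lambda d: len(toks(question) & toks(DOCS[d])))

def generate(doc_id: str) -> str:
    """Stand-in for the generation step: echoes the retrieved document."""
    return DOCS[doc_id]

CASES = [
    {"q": "When are refunds accepted?", "gold_doc": "d1", "must_contain": "30 days"},
]

def evaluate() -> dict:
    retrieval_hits = answer_hits = 0
    for case in CASES:
        doc = retrieve(case["q"])
        retrieval_hits += doc == case["gold_doc"]             # component-level
        answer_hits += case["must_contain"] in generate(doc)  # end-to-end
    n = len(CASES)
    return {"retrieval_acc": retrieval_hits / n, "answer_acc": answer_hits / n}

print(evaluate())
```

Splitting the metrics this way answers the “end-to-end system as well as the smaller pieces” question directly: when `answer_acc` drops but `retrieval_acc` holds, you know where to look.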
Wrap up
It’s not all doom and gloom. AI will inevitably create new jobs. Once you start to understand how AI products work, you know that building them isn’t straightforward and will require a new set of skills.
The doomsters claiming the end of software engineering are clickbaiting.
Another emerging role is that of the full-stack AI engineer. Like the AI engineer role, it spans several disciplines, requiring product thinking and rapid iteration.
In our next post, we'll talk to Lawrence Jones from incident.io about what this role requires in practice and get a peek into the day-to-day responsibilities.