Beyond the Hype: A Research Scientist's Take on AI Hiring and Execution
If you're short on time, read the 30-second version of this post.
Anyone who claims to be an expert on LLMs is talking nonsense. That includes me. GPTs didn't exist a couple of years ago, so everybody is basically starting from scratch.
Pavan Kumar, Head of AI at Digs
There's an interesting relationship between how much expertise someone has and how much they claim to have: the two are somewhat inversely correlated. In the case of Pavan Kumar, that holds. He's an applied AI scientist who exited his previous AI startup and has amassed decades of experience blending deep research with production-grade AI. He publishes and reviews papers for highly respected venues like CVPR, ICLR, and NeurIPS.
Fair to say, despite what he'll tell you, he knows what he's talking about. More than most, he is well-placed to advise on AI in the workplace. In this conversation, we speak to Pavan about:
Hiring in AI and common mistakes to avoid.
How you should be using and staying current with AI.
The future of AI and its potential impact on entry-level jobs.
Hiring in AI and Common Mistakes to Avoid
The biggest challenge in hiring engineers for AI applications isn't simply understanding the theory; it's finding people who can bridge the gap between the theory (research) and the practical (putting that theory into production).
Hiring for this very niche space is very difficult. You can have good researchers, but from my various startup experiences, what happens is good researchers produce lousy code. It's not maintainable, not productizable.
This disconnect between theoretical and practical knowledge is particularly important in application-level AI startups (ones that make AI models accessible in products for business customers or consumer entertainment). The ideal candidate needs both the curiosity to experiment with new approaches and the discipline to write production-grade code. It's a rare combination.
When interviewing candidates, Pavan notes a typical pattern:
When we do interviews for development with LLMs, if a candidate is good at programming, they often don't understand how to experiment. And with more tenured engineers, there's usually more resistance to experimenting. Conversely, if they know the research and are keen to experiment, their programming tends to be weaker.
So, how do you evaluate both of these capabilities? Pavan looks for candidates with broad experience rather than narrow specializations, which he believes is especially important for early- to mid-stage startups.
For example, I'm more efficient at backend development. I'm much less efficient at front-end code right now, but I do tinker with it. It would help to have an all-rounder skill set and curiosity, especially in the beginning. They need the ability to take on a project and get stuck in to figure out a solution.
One positive signal Pavan looks for is side projects involving LLMs.
If I saw someone playing around with LLMs in their spare time or having side projects where they were actually experimenting, that would be a positive signal because it shows that they are learners and curious.
When a candidate does something in their spare time, it's often a great signal that they're genuinely passionate about it. It isn't a prerequisite for every job, but I've seen it be an excellent indicator of self-motivated candidates.
However, he cautions hiring managers to look carefully. He wants to see side projects with real depth, not candidates who have simply bought into the hype with surface-level implementations.
It's a very positive signal, but because of the hype, many people are jumping into that space. I want to see depth in these side projects rather than many side projects implemented in a shallow manner.
How You Should Be Using and Staying Current with AI
The pace of change in AI at the moment is staggering. What was current three months ago is already outdated. It's so fluid.
It's such an exciting time to be in and around startups. We are witnessing the redefinition of entire industries right before our eyes. But it can feel like a double-edged sword. How do you stay up to date with all the latest developments?
Pavan recommends several key resources:
Hugging Face: A platform and community hub for artificial intelligence and machine learning, mainly focused on natural language processing. They host a collection of AI models, libraries, and collaborative tools. Looking for a ChatGPT for X? Give it a go.
Academic papers, especially ones with code you can tinker with yourself. Here's a list to get you started.
Academic conferences: Attending conferences like ICLR, CVPR, and AAAI.
"Not only should you consume the theory, you have to implement these models in practice". Pavan says he sets aside 4-5 hours weekly for this, with most of that time spent on practical implementation rather than just reading.
We also asked for tips on using AI based on how his AI startup, Digs (a platform for managing construction projects), does things.
It's not only the obvious things like adding gen AI tools to your engineering stack. You can rethink things from the ground up. For example, connecting your database to an existing AI tool to avoid having to build whole dashboards from scratch. For every task, taking a moment to ask whether there's an AI-first way of doing it is powerful.
I've spent a lot of time thinking about how to make tasks "AI-first." Not every task is a good fit for AI, but the ones that are can save you a lot of time. Figuring out the task AND where AI is most powerful can help you scale yourself effectively.
Writing content is a great example. I recently went to a talk where a fantastic copywriter, Joanna Wiebe, gave an excellent presentation on how to use AI for writing. The crux of it: use AI for the grunt work (research analysis, ideating, outlining). What remains is what makes great writing: originality, deep thought, and empathy with what your reader cares about, none of which she recommends delegating to an AI.
There's a similar way of thinking about modern AI systems, which Pavan outlines:
Building an agent involves complex things behind the scenes, like how do I deal with vector databases? How do I integrate this vector database with my existing database? Existing data can live in a PostgreSQL relational database, but vector databases are a new data paradigm. You still need to have a deep knowledge of how these systems work before you can outsource tasks to an LLM.
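As a rough illustration of the simplest version of this, here's a sketch using pgvector, an open-source extension that adds a vector column type and similarity search to PostgreSQL. The connection string, table, and toy 3-dimensional embeddings are placeholders for illustration, not anything from Digs' stack; real embeddings typically have hundreds of dimensions.

```python
# Sketch: keeping embeddings next to relational data in PostgreSQL
# via the pgvector extension. DSN, schema, and vectors are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=app user=app")  # hypothetical DSN
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(3)  -- toy dimension for the example
    );
""")
cur.execute(
    "INSERT INTO documents (body, embedding) VALUES (%s, %s);",
    ("hello world", "[0.1, 0.2, 0.3]"),
)

# Nearest-neighbour lookup: <-> is pgvector's L2 distance operator.
cur.execute(
    "SELECT body FROM documents ORDER BY embedding <-> %s LIMIT 5;",
    ("[0.1, 0.2, 0.3]",),
)
print(cur.fetchall())
conn.commit()
```

Even in this toy form, the questions Pavan raises show up immediately: what dimension to store, which distance operator to search with, and how the vector rows relate to the rest of the schema.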
The Future of AI and Entry-Level Jobs
One topic that needs more attention right now is the impact AI is having (and will have) on entry-level jobs. What happens to entry-level roles if we have tools that allow more experienced folk to scale themselves, and they no longer need an entry-level person to help with tasks they've done a million times before?
Pavan doesn't believe we are there. At least not yet.
I don't buy the argument that seniors will only use LLMs to orchestrate in future. We are not there yet. Maybe one day. I've used tools like Cursor, which is great in general. But, sometimes, it spews complete nonsense. LLMs quickly hit a wall if you're working in a niche area and have a problem.
In Pavan's view, the good news for entry-level developers is that while AI won't replace them, it can accelerate their learning.
Expertise only comes with practice. The good news: building that muscle memory used to take entry-level talent years, but now they can condense it down to weeks.
I fully agree with this view. A founder we recently interviewed, Tonia, told us she's using ChatGPT as a tutor to help her digest academic papers. I've been using it to understand topics that I previously knew very little about. Give it a try: upload a PDF into Claude or ChatGPT and ask it to explain anything you don't understand in basic terms. For those curious enough, learning at speed is now possible in a way it wasn't until quite recently.
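If you'd rather script this workflow than use the chat UI, here's a hedged sketch using pypdf and the OpenAI Python client; the model name, file path, and crude truncation are placeholders, and the Anthropic API works similarly.

```python
# Sketch of the "AI as tutor" workflow: pull the text out of a paper
# and ask a model to explain it in basic terms. Requires
# `pip install openai pypdf` and an OPENAI_API_KEY environment
# variable; "paper.pdf" and the model name are placeholders.
from openai import OpenAI
from pypdf import PdfReader

reader = PdfReader("paper.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model
    messages=[
        {"role": "system",
         "content": "You explain academic papers in plain, basic terms."},
        {"role": "user",
         "content": f"Explain this paper simply:\n\n{text[:20000]}"},
    ],
)
print(response.choices[0].message.content)
```

The crude `text[:20000]` cut is just to keep the sketch inside a context window; a real version would chunk the paper and ask follow-up questions section by section.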
Pavan emphasises, though, that there's no replacement for understanding the fundamentals.
There is no substitute for practice or hard work. It helps build knowledge that becomes especially useful when an AI hits a wall. Understanding the fundamentals means you can usually figure out what's gone wrong.
Again, my personal experience ties in with this. Giving Claude an SQL problem is easy. Knowing how to debug it when it inevitably doesn't work is where the years I spent pulling my hair out over queries come in handy.
Conclusion
Pavan's perspective cuts through the hype surrounding AI hiring and implementation. The key takeaways:
Look for candidates who can bridge research and production, prioritizing broad experience over narrow specialization.
Stay current through hands-on implementation. It's not just reading about new developments but putting them into practice.
Do not expect AI to replace entry-level roles, but do expect it to accelerate how quickly entry-level talent learns the fundamentals.
Despite all our progress with LLMs, there's still a lot left to figure out. As Pavan outlines, half the problem for hiring managers is knowing who to look for. In the case of candidates, understanding how to best stand out for these roles is non-trivial, too. We are hopeful, however, that we can resolve these problems in the not-so-distant future.