UX researchers are not paid to talk to people.
In this post, we talk to Hugo Alves, co-founder of Synthetic Users, a startup that provides human-like AI participants for user research. The reception to his product has been polarising...
If you're short on time, read the 30-second version of this post.
I've been called a wanker on LinkedIn. It's a professional social network. You'd think that even if people disagreed with your idea, they wouldn't resort to name-calling. We discovered early on that this is not a product that people would be lukewarm on.
Hugo Alves, CPO & Co-founder at Synthetic Users
Something primal goes off in our brains when our sense of identity feels threatened. It's a truism that those of us working in startups should focus on building things people want, and so far the most effective way of doing that has been talking to people. However, as with so much else right now, AI is calling into question how we get to that destination. Critics often lash out at Hugo because they perceive his product as an attack on their identity.
The outcome UX researchers are paid for is insight that helps the business build better products for people. Talking to people is one way to get there, but it's not a requirement.
The real question isn't about the outcome but the process. Are these insights better gathered by talking to people, or by conversing with a "synthetic" participant that emulates a person's opinions? That's still an open question, one that will make many UX researchers uncomfortable but that Hugo is keen to exploit:
I don't think researchers provide value by talking to people. They provide value by learning about the user's needs. The first sentence will be provocative to many, but we've used it intentionally. Our website says 'user research without users', and we put it there to bait people into saying things about us online because that's how we got our early growth.
That first sentence is what drove my curiosity about this startup. At Prolific, where Hugo and I met, researchers would pay to connect with actual human participants. But we'd see a lot of tension at the "submit your survey" stage. Am I asking the right questions? Is this the right audience and the correct number of people? These are all concerns that a product like Synthetic Users could help to alleviate. Researchers could use it as a warm-up before going to human participants.
In this post, we talk to Hugo about:
The Founding Story Behind Synthetic Users
What He Looks For In Candidates When Hiring
How He Sees AI Impacting The Future Of Hiring
1. The Founding Story Behind Synthetic Users
The story behind Synthetic Users begins with Hugo's "aha" moment while exploring GPT models.
When GPT-3 was released, I immediately joined the waitlist. My first experiment was a bedtime story generator for my co-founder's daughter, something I built in 30 mins.
That project kicked off a series of experiments and a lot of tinkering by Hugo and his co-founder, Kwame. Rather than taking the more traditional founder route of finding a problem first and then searching for a solution, the pair went in the opposite direction. "What's something that most people won't even believe is possible?" became their guiding question. The answer was Synthetic Users. If you're still rolling your eyes at this point in the post, this excerpt might help:
I should explain that "user research without users" has nuance. It's not all user research without users. Researchers can conduct some studies without users. People already do this by reading academic papers, scrolling through subreddits or finding information online. This technology will not replace 100% of user research, but the percentage you can cover without (or before) talking to people will grow to a large extent.
If you've been following recent posts on Hiring Humans, you'll notice these are the ideas I'm bullish on: startups that actively design products to scale human capabilities rather than replace them entirely, and that focus on identifying where human input can create 10x experiences by drawing on the strengths of both AI and human intuition.
Another point is that human and synthetic participants share a key flaw in user research: the desire to please. If you haven't read The Mom Test, I recommend it to anyone wanting to run user research. TL;DR: don't tell people your idea before running research on them; most of them will want to be nice and tell you why they like it, which is not helpful. It's also what you'll get today if you go to ChatGPT or Claude and ask its opinion: it will most likely tell you what you want to hear.
At Synthetic Users, we have an agent-like experience. We design to get negative feedback by asking questions like: What challenges do you foresee? What concerns do you have? Because I never wanted my product to be a checklist or a way to validate anything you throw at it.
These flaws are what make me curious about Synthetic Users. Humans are flawed, especially in a research scenario. Many detractors of Synthetic Users work on the assumption that human research participants are not flawed, but in practice some participants can be lazy, biased, or simply distracted. Surely there's space to disrupt that?
Given how much time Hugo has spent thinking about Synthetic Users, we were keen to get his take on AI in hiring. When you think about it, AI recruiting agents are synthetic recruiters, and automated job applications are synthetic job applicants.
2. What He Looks For In Candidates When Hiring
Hugo has strong views on how candidates should use AI, but it's a nuanced take.
I value candidates who use AI but do not mindlessly delegate tasks. If they can showcase their use of GitHub Copilot or Cursor, that's great, as long as they are treating it as an assistant. But things like cover letters should be done by hand. Even if it's not super polished, it signals that you think this role is worth your effort. We need to remember that human communication is a lot about signalling. If you go to ChatGPT and input the description of the company and your CV to generate a cover letter, it's signalling that you don't care that much. I can smell AI from a mile away, so when candidates do this, they're making it easier for me to filter them out.
As we've mentioned before in this newsletter, showing an employer you care is often the best way to stand out. It might require more time and effort, but it will force you to narrow your applications down to companies where you're likely to be a good match anyway (a good thing!).
The concept of a "startup tourist" also comes to mind. When it's a small group of people setting out to disrupt an industry and to create the "next big thing", founders need to look out for the candidates who care and go the extra mile. The best candidate will not be the one who mindlessly submits a generic application to a job; it will be the one who finds a way of standing out and getting the hiring manager's attention, a skill that will be immensely valuable when working at a startup.
We then discuss the red and green flags Hugo has observed in job applicants:
Red flags: no personal projects. Is there anything you've been experimenting with to test a new technology? The absence of this signals a lack of curiosity. Conversely, if they've heard about a new technology and went on to tinker with it to solve a personal problem, these candidates usually pleasantly surprise me.
The biggest green flag he still relies on, though, is referrals. And despite his enthusiasm for AI, he still thinks this is the biggest shortcut to hiring:
When people put their name on the line for someone else, that's huge. AI will not disrupt referrals. We're still people; much of our work centres around relationships and trust. Someone putting their credibility on the line for someone else they've worked with before is still the ultimate hiring hack.
However, he has a refreshing take on the future of recruiting.
3. How He Sees AI Impacting The Future Of Hiring
Neither candidates nor companies want a mismatch. The issue is that neither is good at communicating what they want or at positioning themselves to attract the other. AI can help here through embeddings: from various forms of input, it can build a higher-dimensional representation of both the candidate and the opportunity. These embeddings will enable the 10x hiring experience.
This paragraph describes what great recruiters do manually today. They deeply understand what each side is looking for and help position either the job or the candidate. They gather these inputs through conversations with individuals they've built relationships with over many years, and their empathy lets them genuinely understand what each side needs and wants.
Right now, in a CV, we mention JavaScript, level 3; Python, level 5; and DevOps, level 2. But this is too concrete and broken. It's not about technology but about tastes, interests, and actual experience.
We fully agree that resumes are broken in this way, and the same thing happens on the hiring manager's side with poor job descriptions. Many startups are currently burning a lot of VC funding on AI agents that scrape rigid, broken information from the internet. Many of them will spend their time trying to make a match at this level, hunting for the perfect keyword combination in very shallow data. As you can tell, I don't think this will work out.
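To make the embeddings idea concrete, here is a minimal sketch of similarity-based matching between a candidate profile and job descriptions. It is purely illustrative: the `toy_embed` function, the example texts, and the scoring are all our own assumptions, and a real system would use a learned embedding model (dense vectors that capture tastes and experience) rather than this bag-of-words stand-in, which is exactly the shallow keyword matching the quote above warns against.

```python
import math
from collections import Counter

def toy_embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words count vector.
    # A production system would use learned dense vectors instead.
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical candidate profile and job descriptions.
candidate = "built side projects with llm agents and enjoys rapid prototyping"
job_a = "prototype llm agents and rapid features in a small startup"
job_b = "maintain legacy payroll systems with strict change control"

# Rank each opening by its similarity to the candidate's profile.
scores = {job: cosine_similarity(toy_embed(candidate), toy_embed(job))
          for job in (job_a, job_b)}
```

With richer, learned embeddings, the same ranking step could compare nuanced representations of taste and experience rather than overlapping keywords.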
We wrap up our conversation by discussing Hugo's views on entry-level hiring.
Entry-level hiring is messed up right now. There needs to be an incentive for someone to hire an entry-level candidate. When you can get roughly the same output by giving an experienced hire extra AI-augmented capability, you really question the need. Entry-level candidates will need some differentiation because LLMs replace low-level tasks that previously justified an entry-level candidate.
Our advice for entry-level candidates is to figure out how to become an "AI-first" hire. Their competition, in many cases, is going to be an LLM, so if they are the ones helping companies rethink their processes from the ground up with LLMs, that's where their differentiation will allow them to shine.
We recognise that we're not laying out all the answers here, and that this will require a new way of approaching work for many entry-level candidates who already have to navigate a lot of noise. But this is no longer the future; it's the present. The only constant is change, and we'll have to figure much of it out as we go.
Conclusion
We need more founders like Hugo who genuinely challenge existing norms. Too many startups are trying to sprinkle AI on top of existing solutions without embracing the opportunity to tear up the rulebook and rethink problems from first principles.
Whilst that may upset some and make their sense of identity feel attacked, it's the only way we'll get to genuine progress. Beneath the provocative messaging behind Synthetic Users lies a more nuanced vision: using AI not to replace humans, but to enhance their capabilities.
In the same way that Synthetic Users aims to help researchers ask better questions before meeting real users, AI could help both candidates and companies better understand each other before the first conversation, ultimately avoiding costly hiring mistakes.
The key insight isn't whether AI will change these fields (it will); it's how we ensure those changes lead to better outcomes and solve the underlying problems better than we could before. As Hugo puts it, we're still people, and much of our work centres around relationships and trust. In spite of everything AI changes, that is a constant we can rely on.