
When Henry LeGard started Verisoul, the fraud landscape was already shifting. Then generative AI accelerated everything. “The big unlock was access,” LeGard said. “AI made it possible for anyone, anywhere, to create intelligent fraud at scale. You no longer need to code. You no longer need to speak the language. The barrier collapsed.”
LeGard had spent years working at the intersection of fraud, identity, and data, including leadership roles at Neustar and TransUnion. What he saw was not just more fraud, but a structural change in who fraud was affecting. Traditionally, fraud tooling was built for financial services. Large banks had teams, budgets, and layered systems stitched together from dozens of point solutions. But as AI-enabled fraud became more profitable, it spilled into industries that had never needed fraud prevention before.
“We now work with companies you would never expect,” LeGard said. “AI startups, market research firms, developer tools. These teams do not have fraud departments. They want something that works out of the box and covers everything.”
That insight became the foundation of Verisoul. Instead of selling a narrow signal, the company built a unified platform designed to determine whether a user is real, unique, and trustworthy across the entire lifecycle.
From a Space Raffle to a Company
Verisoul’s origin story did not begin with a pitch deck. It began with a problem. Before the product existed, a friend approached LeGard and his cofounders with an unusual request. He was planning a public raffle that involved giving away two tickets to space on Blue Origin. The concern was obvious. Bots, fake accounts, and duplicate entries would overwhelm the system. The team tried stitching together existing fraud tools to solve it. Device fingerprinting. IP intelligence. Email signals. Social checks. The experience was painful.
“That was the moment we realized how hard this was to do well,” LeGard said. “And how broken the tooling ecosystem actually was.”
Customer validation came quickly after launch. One early customer was so convinced by the product that he asked to sell it himself. He showed up at conferences, evangelized the platform, and shared his experience switching from legacy solutions.
Several customers went a step further and asked to invest. Today, a meaningful portion of Verisoul’s cap table is made up of customers. “That level of conviction is rare,” LeGard said. “It tells you you are solving a real problem.”
Fraud Is Not Just Bots
One of the most common misconceptions LeGard sees is how people think about bots. “People imagine everything is automated,” he said. “In reality, a lot of fraud comes from sophisticated fraud farms. Teams of people clocking in and out. Hands on keyboard.” To LeGard, fraud is not a moral abstraction. It is economics.
“These are normal people chasing profit,” he said. “They are exploiting incentives in systems. That framing matters because it changes how you defend against it.”
AI has made content almost useless as a signal. Language, images, even voice can now be generated flawlessly. That forced Verisoul to rethink where detection happens. The company uses an internal framework that focuses on the actor, the behavior, and the content. As content becomes indistinguishable, the product shifts toward unspoofable signals tied to the actor and their behavior. “We assume content will be perfect,” LeGard said. “So we design for a world where that no longer helps you.”
Where the Pain Is Highest Right Now
Fraud shows up wherever users can extract value, but LeGard sees particularly acute pressure in AI companies and market research. Many AI products are product-led growth businesses that offer free credits. That makes them prime targets for attackers who want to scrape frontier models or extract value at scale.
“One fake account might pull ten or fifteen dollars,” LeGard said. “Multiply that by millions and it becomes existential.” Market research faces a different version of the same problem. As surveys and panels are flooded with synthetic responses, companies lose confidence in their own data.
“If you cannot trust that a real human answered the question, the insight collapses,” he said. Most Verisoul customers already have some form of protection when they arrive. Internal scripts. Blocklists. One-off vendors. None of it keeps up. “Fraud and cybersecurity are unique,” LeGard said. “You are fighting an active adversary. If you stand still, you lose.”
Looking Ahead to an Agentic Future
LeGard believes the next challenge will be distinguishing between malicious automation and legitimate AI agents acting on behalf of real people. “There will be bots, fraud, and fake accounts,” he said. “But there will also be authenticated agents that represent you.” That shift will force companies to redefine their terms of service and rethink how trust is established. Verisoul is already designing for that future. The goal is not just stopping fraud. It is preserving trust as the internet becomes more automated.
“The only thing that matters is customers relying on us to protect their entire user base,” he said. “If we do that well, everything else follows.” For Verisoul, success is quiet. Real users creating real value in systems that were designed to trust them.