
For most designers today, working with AI still means generating pixels.
Image models can produce stunning visuals, but those images often break down the moment someone tries to integrate them into a real design workflow. Logos need to scale. Icons need to be editable. Fonts need to maintain mathematical precision across screens, billboards, and print.
That gap is what Joan Rodríguez set out to close.
Rodríguez, founder of Quiver AI, is building a new category of generative AI focused not on pixels but on vector graphics. He launched the company alongside co-founders Pascal Wichmann, Haotian Zhang, and Nicklas Scharpff, combining deep research expertise with engineering and product leadership. The company trains foundational models capable of generating structured design assets such as logos, icons, and typography directly as scalable vector graphics, or SVGs.
The idea emerged not in a startup incubator, but during Rodríguez’s PhD.
“At Quiver, we are both an AI research lab and a product company,” Rodríguez explains. “Our focus is training foundational models for vector graphics generation. In the bigger picture, we are working on frontier AI for design.”
The Research Breakthrough
Rodríguez’s journey began with a deceptively simple question: could AI generate vector graphics the same way language models generate code?
During his doctoral research, he began experimenting with training models to produce SVG files directly.
SVGs differ fundamentally from traditional images. Instead of storing pixels, vector graphics represent shapes mathematically through paths, curves, and coordinates. The result is infinitely scalable and editable.
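The distinction is easy to see in the format itself: an SVG is structured text, closer to code than to an image. A minimal sketch (not Quiver's actual output, and the icon shape here is invented for illustration) shows how the same path geometry can be emitted at any display size without touching the shape data:

```python
# Illustrative only: an SVG stores shapes as path commands in a fixed
# coordinate system ("viewBox"), so resizing never touches the geometry.
def make_icon_svg(size: int) -> str:
    """Return a simple arrow icon as an SVG string at the given pixel size."""
    # Line segments and joins in a 24x24 coordinate space -- no pixels anywhere.
    path = "M 4 12 L 16 12 M 11 7 L 16 12 L 11 17"
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{size}" height="{size}" viewBox="0 0 24 24">'
        f'<path d="{path}" fill="none" stroke="black" stroke-width="2"/>'
        f"</svg>"
    )

icon_16 = make_icon_svg(16)      # favicon-sized
icon_1024 = make_icon_svg(1024)  # billboard-sized; identical path data
```

Only the `width` and `height` attributes differ between the two outputs, which is what "infinitely scalable" means in practice: rendering happens from the math, not from stored pixels.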
“I started my PhD working on SVG generation models,” he says. “I developed a model called StarVector and published it as my first research paper. The idea was to generate vector graphics directly through code.”
At first, the model was far from perfect.
“The initial version was actually terrible,” Rodríguez says with a laugh. “But it showed potential. It was a small model, yet it could already generate interesting outputs.”
That early potential turned out to be the beginning of something much bigger.
The Moment Everything Changed
After publishing the StarVector paper on arXiv, Rodríguez watched as it began circulating within the AI community. A technical Twitter account that regularly highlights new research papers shared the work. Almost immediately, the post went viral.
“People were extremely excited about the idea,” Rodríguez recalls. “Companies started reaching out, researchers started contacting me, and suddenly there was a lot of inbound.”
That response revealed something important: the problem he was solving was widely felt across the design and developer ecosystem. It was also around this time that Rodríguez met his future co-founder.
Pascal Wichmann, a developer based in Germany, had spent nearly a decade quietly collecting datasets of vector graphics. He had been crawling design repositories and building one of the largest SVG collections available.
“When Pascal reached out, we instantly connected,” Rodríguez says. “He had been gathering vector graphics data for years, and we started talking almost every day about what could be built in this space.”
Together, they began scaling the technology.
Scaling the Model
Inspired by the trajectory of large language models, Rodríguez and Wichmann began applying the same scaling principles to vector generation. They expanded their dataset and increased model size from roughly one billion parameters to larger architectures approaching seven billion parameters. With more data and compute, the results improved dramatically.
“What we realized is that you could apply the same recipe as GPT,” Rodríguez says. “Scale the parameters, scale the data, and the model starts producing much better results.”
The improvements attracted attention from investors as well. Andreessen Horowitz eventually led an investment into the company, helping turn the research project into a full-fledged startup. Rodríguez describes the moment he realized the opportunity in simple terms.
“It felt like discovering oil,” he says. “I found something really exciting. The idea worked, and now it just needed resources to scale.”
Pixels vs. Programs
The core insight behind Quiver is that modern generative AI models focus almost entirely on pixels. That works well for media like photography, film, and social content, but professional design requires something different.
“Pixels depend on the medium where the image is displayed,” Rodríguez explains. “Vector graphics are mathematically defined. That allows you to scale them infinitely and manipulate them precisely.” Because vectors are defined by curves, shapes, and coordinates, designers can edit them in tools like Figma or Illustrator without losing quality. This makes them essential for elements such as logos, icons, fonts, and user interface components. Rodríguez believes generative AI needs to understand these structures natively.
“You can define curves, control points, and color layers very precisely,” he says. “That is why vector graphics are used everywhere in design.”
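What "manipulate them precisely" looks like can be sketched with a toy example (again illustrative, not Quiver's tooling): because a curve is stored as control points and a fill is stored as a value, both can be edited exactly, with no rasterization and no quality loss, using nothing but the Python standard library.

```python
# Toy sketch of lossless vector editing: move a curve's control point and
# recolor a shape by rewriting SVG attributes directly.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)  # keep the default namespace on output

svg = (
    f'<svg xmlns="{SVG_NS}" viewBox="0 0 24 24">'
    '<path d="M 2 20 Q 12 2 22 20" fill="#336699"/>'
    "</svg>"
)

root = ET.fromstring(svg)
path = root.find(f"{{{SVG_NS}}}path")

# The quadratic Bezier "Q 12 2" names a control point; raising it (y: 2 -> 0)
# sharpens the arc. The edit is exact and reversible because the shape is math.
cmds = path.get("d").split()
cmds[5] = "0"
path.set("d", " ".join(cmds))
path.set("fill", "#cc3300")  # recolor the layer, equally lossless

edited = ET.tostring(root, encoding="unicode")
```

This is the property pixel-based generation cannot offer: a designer (or a tool like Figma or Illustrator) can open the output and adjust any curve, point, or color after the fact.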
Early Feedback From Designers
Quiver is currently in beta, and Rodríguez has been closely observing how designers interact with the tool. Even though the model is still evolving, early users are already integrating it into their workflows. Some generate icons and then refine them manually in Figma. Others experiment with typography or logo concepts.
“One designer told us that even though the output isn’t perfect, it already saves them an hour of work,” Rodríguez says. That is the point. Quiver is not built to replace the designer — it is built to give them a faster start and more time for the work that actually requires judgment. That kind of feedback is exactly what the team needs as they continue refining the system.
A Growing Ecosystem
Beyond individual designers, Quiver is beginning to attract interest from companies building AI-powered creative tools. Platforms that already serve generative models are experimenting with integrating vector outputs into their pipelines. Companies that route AI models or build application workflows can potentially use Quiver’s technology to generate icons, logos, or typography automatically.
At the same time, marketing agencies and brand builders are exploring ways to automate visual production tasks. “We are seeing interest from many directions,” Rodríguez says. “From individual designers to companies that want to integrate vector generation into their products.”
The Next Frontier for AI Design
Quiver’s long-term ambition extends far beyond generating icons. Rodríguez wants to build foundational models capable of powering the entire visual layer of digital products. “Our goal is to build frontier AI for design,” he says.
In practice, that could mean AI systems capable of generating complete visual systems, from logos to typography to interface elements, all structured as editable design assets. The company is already preparing the next iteration of its models.
“We are launching a new version soon,” Rodríguez says. “It will be significantly better and introduce new features.” If the early response to StarVector is any indication, the next wave of development could attract even more attention.
For Rodríguez, the mission remains clear: bring the same transformative power that generative AI brought to text and images into the world of design. And this time, the output will not just be pixels. It will be code.
