
As “vibe-coding” gains traction across tech and creator communities, solo builders are prioritizing speed, intuition, and iteration over technical perfection. One such builder is New York–based product manager and creator Jakyoung Goo, also known as Amanda Goo. With a background in AI product management and a visual creative practice, she is building Doodlely, a sketch-first AI canvas designed to reduce friction in text-centric workflows.

Q: Can you introduce yourself?
Amanda Goo: I wear three hats. I’m a product manager who has shipped AI products at fast-growing startups such as BoldVoice (YC S21), Sendbird (YC W16), and Hyperconnect (acquired by Match Group), working on multimodal AI across both B2B SaaS and consumer-facing platforms. I’m also a solo builder and creator exploring how visual assets scale across formats. I’m based in New York and pursuing a Creative Technology master’s at New York University’s Interactive Telecommunications Program (NYU ITP).
Q: When did Doodlely really begin?
Goo: The real start was June 2025, when I joined Lovable’s accelerator program, Lovable Shipped Season 1. I had been feeling this frustration for a long time, but that program pushed me to commit. Within about six weeks, I built the first MVP. For the first time, I handled everything end to end—user interviews, product definition, UI, backend, database, landing page, and early user acquisition.
Q: What core problem were you solving?
Goo: I’m a visual thinker. Almost every idea I have—whether it’s for product design, illustration, or branding—starts as a sketch. But most AI tools are still overwhelmingly text-driven. That forces people like me to translate visual ideas into words and jump between multiple tools. The constant context switching breaks focus. I wanted a workflow where ideas could stay where they start: inside notes and sketches.
Q: How is Doodlely different from existing tools?
Goo: In Doodlely, the canvas is the conversation. You sketch, generate, and iterate in the same space. The system is still powered by text internally—because text is currently the type of input AI understands most reliably—but users don’t have to lead with it. From their perspective, sketching feels like the most natural way to express intent.
Behind the scenes, Doodlely breaks the flow from sketch to generation to iteration into smaller steps, using different LLM layers connected through prompt chaining. Visual inputs on the notes are first interpreted to infer intent, translated into structured prompts, and then executed across models. Editability was just as important: instead of asking users to type more instructions, Doodlely lets them draw directly over generated images.
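In pseudocode, the chain she describes might look roughly like the sketch below. This is a hypothetical illustration of that prompt-chaining pattern, not Doodlely’s actual code: every name in it (Intent, interpret_sketch, build_prompt, run_image_model) is an assumption, and the model calls are stubbed.

```python
# A minimal, hypothetical sketch of the prompt-chaining flow described
# above. All names here are illustrative assumptions, not Doodlely's
# actual implementation; the model calls are stubbed out.
from dataclasses import dataclass


@dataclass
class Intent:
    subject: str  # what the sketch depicts, e.g. "a jacket silhouette"
    goal: str     # what the user appears to want, e.g. "a clean technical flat"


def interpret_sketch(strokes: bytes, notes: str) -> Intent:
    # Step 1: a vision-capable LLM infers intent from the strokes and any
    # surrounding notes. Stubbed with a fixed result for illustration.
    return Intent(subject="a jacket silhouette", goal="a clean technical flat")


def build_prompt(intent: Intent) -> str:
    # Step 2: translate the inferred intent into a structured text prompt,
    # since text is still the input models execute most reliably.
    return f"Render {intent.subject} as {intent.goal}, preserving proportions."


def run_image_model(prompt: str, reference: bytes) -> bytes:
    # Step 3: execute the prompt against an image model conditioned on the
    # original sketch. A real version would call a generation API here.
    return reference  # stub: echo the reference image


def iterate(previous_output: bytes, overdrawn: bytes, notes: str = "") -> bytes:
    # Editing loop: strokes drawn over a generated image re-enter the same
    # chain as new visual input, so no extra typed instructions are needed.
    intent = interpret_sketch(overdrawn, notes)
    return run_image_model(build_prompt(intent), previous_output)


if __name__ == "__main__":
    sketch = b"<sketch bytes>"
    first = run_image_model(build_prompt(interpret_sketch(sketch, "cleaner lines")), sketch)
    revised = iterate(first, b"<strokes drawn over the output>")
```

The notable step is the editing loop: the user’s strokes over a generated image simply feed the chain again as new visual input, which is what keeps iteration drawing-first rather than prompt-first.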
Q: What did you learn from early user interviews?
Goo: User personas shaped everything. Fashion designers talked about wanting to turn rough garment sketches into cleaner visuals without spending hours redrawing technical flats. Illustrators wanted to explore compositions and styles quickly while staying in a sketching mindset. Character designers cared about iterating on silhouettes, poses, and proportions visually, rather than rewriting prompts over and over.
Across roles, the need was consistent: they wanted to stay in flow, iterate visually, and avoid translating intuition into text unless absolutely necessary.
Q: What milestones helped validate the idea so far?
Goo: On December 8, 2025, Doodlely won first place at the Monday.com × Lovable × Railway Full-Stack Founders Hackathon in New York, which drew around 150 participants. That was strong external validation. More recently, I showcased Doodlely at the NYU ITP Winter Show, where about 100 people tried it hands-on. Watching real users pick it up intuitively—and seeing how quickly they understood the sketch-first interaction—gave me much stronger confidence in the product direction.
Q: How has that experience influenced your thinking about AI interfaces?
Goo: Right now, text prompting dominates because it’s the most “AI-readable” input. But from a human perspective, sketching is often far more intuitive. I think the important question is how we design systems that can bridge that gap—connecting human intuition with machine understanding. That’s the part I want to contribute to: making AI adapt to how people naturally think and create.
Q: Where is Doodlely today?
Goo: Doodlely is currently in beta. I’ve opened sign-ups, and anyone interested can try it at https://doodlely-website.lovable.app/. It’s still evolving, but each round of real usage helps clarify what works and what needs to change.
Conclusion
Doodlely’s journey reflects a broader shift in how products are being built in the age of AI. By combining vibe-coding, product thinking, and a creator’s intuition, Amanda Goo is exploring what happens when AI tools start from how humans naturally create. As more visual thinkers look for alternatives to text-heavy workflows, sketch-first systems like Doodlely may point toward a more human-centered future for creative AI.