# Bart Zyglowicz

> Product engineer. Full-stack. AI-native.

I design and ship products across UX, frontend, backend, and AI workflows: from experimentation-heavy product teams to AI-native systems with real integrations, operational visibility, and controllable execution.

## What I do

I work across UX, frontend, backend, integrations, and operations, and I care about what actually ships. I build around models with workflows, tools, evaluation, retries, and human control instead of pretending a prompt is a product. I come from experimentation-heavy teams and care whether users get real value.

I gravitate toward systems where things happen in the real world: background jobs, fulfillment pipelines, API integrations, business workflows with operational weight. I started in product design, which is probably why I still care about how technical systems feel to use, not just whether they work.

## Experience

- Booking.com, 5 years. Built internal ML tooling and experimentation interfaces that helped product and data teams work with models in a high-scale environment.
- Cohabs, 4 years. Owned product work across web and mobile, including member-facing flows, backend services, and internal tools for operations and support.
- HiCoach, 1 year. Founding product engineer from concept to launch, spanning product direction, UX, backend API, and mobile/web implementation.

## How I work

- I like building products, not just shipping tickets.
- I prefer practical systems over clever abstractions.
- I care about observability and control when AI is involved.
- I think good UX is part of engineering quality.
- I learn fastest by shipping and iterating.

## FAQ

Q: What kinds of roles are you looking for?
A: Remote product engineering roles where AI is part of the product or workflow, and where I can own meaningful work end to end.

Q: Are you open to on-site or hybrid?
A: I'm looking for remote positions only.
I've worked effectively in distributed teams across time zones and have a strong async communication style.

Q: Are you more frontend or backend?
A: Both, realistically. My background started more frontend/product-side, but a lot of my recent work is backend workflows, integrations, and AI system design.

Q: How much of your recent AI work is production vs. experimental?
A: Upscale Print is live. The marketing agent system that runs its Instagram and ad operations is also running in production.

---

## Case Study: Upscale Print

- URL: https://zyglowicz.pl/case-studies/upscale-print
- Live product: https://upscaleprint.com
- Role: Founder / Product Engineer
- Timeline: 2025 to present
- Stack: Next.js 15, React, TypeScript, tRPC, Drizzle ORM, PostgreSQL, Stripe, Topaz Image API, Replicate, Prodigi, Vercel, Playwright

A live e-commerce product that turns phone photos into gallery-quality wall art. AI handles image analysis and enhancement behind the scenes, connected to payments, fulfillment, and order tracking.

### Introduction

Upscale Print is a live e-commerce product. You upload a phone photo, the system analyzes and enhances it using AI, and you get a gallery-quality print delivered to your door.

I built the whole thing: product concept, UX, frontend, backend, the image processing pipeline, Stripe checkout, print fulfillment through Prodigi, and the operational tooling that keeps it running without babysitting.

The hard part isn't calling an AI model. It's building the workflow around it so that image quality gets assessed correctly, enhancement gets routed dynamically, failures get retried, and the customer never has to think about any of it.

### Problem

Most people have photos worth printing, but they don't know it. The gap between "nice photo on my phone" and "something I'd hang on a wall" is bigger than it seems:

- Image quality varies wildly.
- People can't tell if a photo will look good at print size.
- Enhancement tools are fragmented and technical.
- Print fulfillment is its own operational mess.
- The whole experience feels risky if the product doesn't build confidence at every step.

I saw a chance to use AI as hidden infrastructure. Not as a feature to market, but as the part of the system that makes the product viable in the first place.

### What I owned

I owned the product end to end:

- Product concept and positioning
- UX and user-flow design
- Frontend implementation
- Backend architecture
- Image processing pipeline design
- Integrations with Stripe, Topaz, Replicate, and Prodigi
- Operational tooling and failure handling
- Deployment and production setup

### Outcome

The product is live at upscaleprint.com:

- Customers upload, configure, pay, and receive prints through a single flow
- Each image is analyzed for quality and content before enhancement
- Enhancement routing adapts to the source image instead of one-size-fits-all processing
- Background jobs handle enhancement, upscaling, and fulfillment submission asynchronously
- Admin tools surface job status, failures, and order state for manual intervention when needed

### System overview

This isn't a frontend that calls one AI API. It's a workflow system where the customer experience, async processing, and operational reliability all have to work together.

Flow: Upload → Checkout → Payment → Job Queue → Image Analysis → Enhancement → Upscale → Fulfillment → Tracking

Steps:

1. A user uploads a photo
2. The system stores the file and captures order context
3. After payment, background jobs are created for processing
4. The image is analyzed for quality and content characteristics
5. Enhancement settings are chosen based on the analysis
6. The image runs through the enhancement and upscaling pipeline
7. The result is stored and attached to the fulfillment order
8. The print order is submitted to Prodigi
9. Failures are retried or flagged for manual intervention

### Key decisions

AI as workflow infrastructure, not a headline feature.
AI powers the core promise: making ordinary photos printable with minimal friction. The customer never interacts with "AI" directly. It's embedded in the pipeline, doing analysis and enhancement where it creates real value.

Job-driven architecture for unreliable work. Enhancement, webhooks, and fulfillment all fail sometimes. The system uses background jobs with retries and follow-up actions instead of assuming everything works on the first try.

Dynamic enhancement routing. Not every photo needs the same treatment. The system uses quality assessment and object detection to decide how each image should be enhanced. A low-res landscape and a sharp portrait get different processing paths.

Operational tooling as a first-class concern. Most product demos skip what happens when things break. This system has failure handling, alerting, job visibility, and admin tools built in from the start. A product that can't be operated isn't really a product.

Simple surface, complex internals. The customer sees a clean upload-to-order flow. The system behind it handles quality assessment, dynamic routing, async processing, payment state, and fulfillment coordination. Hiding that complexity is a product decision, not just a design one.

### What mattered most

- Building a real product end to end, not just a technical demo
- Using AI where it creates genuine leverage, not where it sounds impressive
- Treating operational reliability as part of the product
- Bridging customer experience and backend complexity
- Making practical architecture decisions under real constraints

### What I'd improve next

- Better signals to help customers understand which photos will print well
- Deeper instrumentation on processing quality and order outcomes
- Customer feedback loops on enhancement results
- Tighter operational dashboards as order volume grows

Building this proved to me that the interesting part of AI products isn't the model. It's making the whole system feel simple and reliable when a dozen things can go wrong between upload and delivery.

---

## Case Study: Marketing Agent System

- URL: https://zyglowicz.pl/case-studies/marketing-agent-system
- Role: Builder / Product Engineer
- Timeline: 2025 to present
- Stack: Python, TypeScript, Mastra, SQLite, APScheduler, Anthropic/OpenRouter, Google Ads API, Instagram Graph API, fal.ai, Playwright, Pillow, Docker Compose

An AI agent system that runs the marketing for Upscale Print. It plans content, generates creative, publishes to Instagram, and feeds performance data back into future decisions.

### Introduction

This system runs the marketing for Upscale Print. It plans weekly content, generates images and copy, evaluates creative quality, publishes to Instagram, and feeds performance data back into future planning. The @upscaleprint Instagram account is managed almost entirely by this system.

I built it because I needed real marketing for a real product, and I wanted to find out what it actually takes to make an agent system do useful, ongoing work. Not a demo, not a one-shot prompt chain, but something that runs on its own, handles failures, and improves over time.

The architecture separates reasoning from execution. A TypeScript agent layer handles planning and decisions. A Python worker layer handles scheduled jobs, API calls, and side effects. Both share a SQLite database that serves as the control plane, audit trail, and source of truth.

### Problem

Most "AI agent" demos are impressive for about five minutes. Then you ask: can this run on its own? What happens when an API call fails? Who approved that budget change? Where can I see what it did last Tuesday? The answers are usually bad.

These aren't theoretical concerns. Hidden state that nobody can inspect, side effects that fire without oversight, no separation between the model deciding something and the system actually doing it.
They show up fast when you try to run an agent against real APIs with real consequences.

So I built a system that treats these problems as design requirements. Models handle reasoning and evaluation. Explicit jobs handle execution. Everything is logged, retryable, and inspectable.

### What I owned

I owned the system end to end:

- Architecture and data model
- Agent and workflow design
- Tools, handlers, and integrations
- Scheduling and execution patterns
- Operational safeguards
- Deployment, documentation, and testing

### Outcome

The system has been running since early 2026:

- The @upscaleprint Instagram account is actively managed by the system
- Content planning, image generation, copy, and publishing run on a weekly cycle
- Generated creative goes through vision-based quality evaluation before publishing
- Google Ads campaigns are analyzed and optimized, with approval gates on budget changes
- Performance metrics feed back into planning so the system improves over time
- Every action is logged with full audit trails

### System overview

Two processes, one database. The agent layer (TypeScript/Mastra) handles conversation, planning, and specialist reasoning. The worker layer (Python) handles scheduled jobs, API integrations, and execution. Both read and write to a shared SQLite database that serves as the control plane and audit trail.

Flow: Agent UI → Agent Layer → Shared SQLite State → Python Worker → External APIs

Steps:

1. The system pulls recent performance data and account context
2. A strategist agent builds a weekly content plan
3. Posts are stored in the database with status tracking
4. Copy and image prompts are generated for each post
5. Creative assets are generated and evaluated by a vision model
6. If quality doesn't pass, the system revises and retries
7. Approved posts are queued and published to Instagram on schedule
8. Engagement metrics are collected after publishing
9. Performance data feeds back into the next planning cycle

### Key decisions

Separate reasoning from execution. The agent layer proposes, analyzes, and plans. The worker layer does things. When a model suggests something, that suggestion goes through an explicit job before anything happens in the real world. This makes the system retryable, inspectable, and safe to leave running.

Shared SQLite state as the control plane. Instead of spreading state across agent memory, process variables, and external services, everything lives in one SQLite database. Both processes read and write to it. Debugging is straightforward, recovery is possible, and the audit trail is automatic.

Approval gates on risky actions. Not everything the model recommends should happen automatically. Budget changes in Google Ads are a good example. The system flags these for human review instead of executing them directly.

Evaluation loops, not blind trust. Generated images go through a vision-model review before publishing. If the creative doesn't meet quality criteria, the system revises and retries. This matters especially for brand consistency, where first-try output from image models is often not good enough.

Observability as a feature, not an afterthought. The system was built to be operated, not just launched. Status visibility, health checks, structured logging, and durable records of every action. If something breaks at 3am, I can see exactly what happened.
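As a rough illustration of the first two decisions (the model proposes, an explicit job executes, and all state lives in one SQLite database), here is a minimal Python sketch. The table, column names, and handler signatures are hypothetical, not the production schema:

```python
import json
import sqlite3

# Hypothetical jobs table: one row per proposed action, doubling as the audit trail.
SCHEMA = """
CREATE TABLE IF NOT EXISTS jobs (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    kind TEXT NOT NULL,
    payload TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'pending',  -- pending | done | failed
    attempts INTEGER NOT NULL DEFAULT 0,
    last_error TEXT
)
"""

MAX_ATTEMPTS = 3


def propose_job(db: sqlite3.Connection, kind: str, payload: dict) -> int:
    """Agent layer: record a proposed action. Nothing executes yet."""
    cur = db.execute(
        "INSERT INTO jobs (kind, payload) VALUES (?, ?)",
        (kind, json.dumps(payload)),
    )
    db.commit()
    return cur.lastrowid


def run_pending_jobs(db: sqlite3.Connection, handlers: dict) -> None:
    """Worker layer: execute pending jobs with bounded retries."""
    rows = db.execute(
        "SELECT id, kind, payload, attempts FROM jobs WHERE status = 'pending'"
    ).fetchall()
    for job_id, kind, payload, attempts in rows:
        try:
            handlers[kind](json.loads(payload))
            db.execute(
                "UPDATE jobs SET status = 'done', attempts = ? WHERE id = ?",
                (attempts + 1, job_id),
            )
        except Exception as exc:
            # Stay pending for a retry until the attempt budget is spent,
            # then mark failed so an operator can step in.
            status = "failed" if attempts + 1 >= MAX_ATTEMPTS else "pending"
            db.execute(
                "UPDATE jobs SET status = ?, attempts = ?, last_error = ? WHERE id = ?",
                (status, attempts + 1, str(exc), job_id),
            )
        db.commit()
```

Because every proposal and outcome is a row, "what did it do last Tuesday" becomes a SQL query rather than log archaeology.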
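The evaluation loop on generated creative can be sketched in the same spirit. Here `generate` and `evaluate` stand in for the image-generation and vision-review model calls; their signatures are assumptions for illustration, not the real interfaces:

```python
def produce_approved_creative(generate, evaluate, prompt: str, max_rounds: int = 3):
    """Generate -> evaluate -> revise until approved, or give up.

    Assumed signatures (illustrative only):
        generate(prompt) -> asset
        evaluate(asset) -> (approved: bool, feedback: str)

    Returns (asset, round_number) on approval, or None after max_rounds,
    at which point the post is flagged for human review instead of published.
    """
    for round_number in range(1, max_rounds + 1):
        asset = generate(prompt)
        approved, feedback = evaluate(asset)
        if approved:
            return asset, round_number
        # Fold the evaluator's feedback into the next attempt's prompt.
        prompt = f"{prompt}\nRevision notes: {feedback}"
    return None
```

The bound matters in both directions: an unbounded loop against a paid image API is a cost bug, and a loop that publishes rejected creative anyway defeats the point of the review.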
### What mattered most

- Agents need structure around them, not just freedom
- Separating reasoning from execution makes agent systems safe to run unattended
- Observability is not optional when real APIs and real money are involved
- Evaluation loops matter more than prompt quality for consistent output
- The hard part of AI systems is everything around the model call

### What I'd improve next

- Richer dashboards for monitoring system health and content performance
- Better operator UX for approvals and manual intervention
- Stronger measurement of how agent decisions affect business outcomes
- Expanding to more channels beyond Instagram and Google Ads
- Automated evaluation of agent quality over longer time horizons

Running this system taught me that the hard part of agents is not the model. It's everything you build around it to make it safe, observable, and actually useful day after day.

---

## Contact

- Email: b@zyglowicz.pl
- Phone: +48 882 170 636
- LinkedIn: https://www.linkedin.com/in/bartekzyglowicz/
- GitHub: https://github.com/tumski
- CV: https://zyglowicz.pl/bart-zyglowicz-cv.pdf