Robin Guignard-Perret on his Learning Journey to Create Video Editor Tellers.ai: A Serious Insights Interview from CES 2026

In a market where AI-driven creativity tools proliferate, Robin Guignard-Perret stands at the intersection of automation and artistry, challenging the boundaries of what it means to edit video in the age of intelligent agents. In this Serious Insights interview, Guignard-Perret unpacks the philosophy and technology behind their AI video editing startup, revealing a commitment to transparency, user empowerment, and ethical stewardship.
The conversation traverses the evolution from journalism-focused tools to a platform designed for broad creative control, while navigating the complex terrain of data privacy, platform risk, and shifting expectations for both creators and audiences. The result is a nuanced exploration of how AI can augment, rather than replace, human intent, inviting editors, developers, and organizations to rethink their relationship with technology and the stories they tell.
Key Takeaways from our Robin Guignard-Perret interview:
- From Journalism to Generalization: The company's origins in journalism led to a realization that the core technology, automated AI-driven video editing, could serve a much broader market. Unlike competitors focused on video generation, their platform emphasizes real footage and agent-controlled, web-based editing, offering both automation and granular creative control.
- Agentic Workflows and User Empowerment: Inspired by coding assistants, the platform allows users to delegate repetitive tasks to AI while retaining the ability to intervene and fine-tune edits. This hybrid approach supports both one-line prompts and detailed scripting, catering to a spectrum of user expertise and workflow needs.
- Transparency and Trust: Every AI action is visible in real time, with explicit disclosure of models used and credits spent. This transparency builds user intuition and trust, ensuring that editors remain in control and can easily override or adjust AI decisions.
- Data Privacy and Content Governance: The company does not train its models on user data and complies with strict European regulations. For enterprise clients, only low-resolution proxies are uploaded, adding an extra layer of security and protecting intellectual property.
- Platform Risk and Human Oversight: While acknowledging risks like hallucinations or incorrect edits, Guignard-Perret emphasizes that an operator always reviews outputs before publication, framing the tool as a productivity enhancer rather than an autonomous publisher.
- Business Model and Go-to-Market: The platform is usage-based, with plans for optional subscriptions. The go-to-market strategy targets post-production companies, marketing agencies, and video-centric organizations, while keeping the tool accessible for a wide range of use cases.
- Technical Challenges Ahead: The hardest problems on the horizon involve making video upload, analysis, and transcoding fast and seamless, especially given the complexity and size of video data compared to text.
- Vibe Coding and the Future of Work: The discussion extends to "vibe coding": using AI to automate code and content migration. Guignard-Perret notes that while AI can handle much of the work, domain expertise and clear intent remain essential. The rise of citizen developers underscores the need for better prompt-writing and a deeper understanding of both the tools and the problems they address.

The Serious Insights Interview with Robin Guignard-Perret
This interview was lightly edited from an automated transcript.
Why are you different? What’s the reason for your existence?
So the reason for our existence, if we go back to the story of the company, is that we built the company for journalists. We worked on tools for journalists.
At some point, we realized that turning articles into videos was a huge key differentiator. It was very important for the industry, so we worked on that for a while, and then we realized that we could go much further than that, and journalism was a small market, so we decided to go back to the drawing board.
We took the technology that we developed and turned it into something that could go much further, and unlike other AI video companies, we decided to focus on real footage from the start.
We focused on automating video editing instead of generating videos or just putting out different video generation models, which a lot of companies right now are doing. They want to orchestrate the different AI models, which we are doing now, but it's only the last step of our process. The core of our technology is indexing, analyzing, and cutting videos. We have a custom, cloud-based video player for the web that can render everything the agent does. Basically, what we have is web-based video editing software that can be fully controlled by an AI agent.
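To make the idea of an agent-controlled editor concrete, here is a minimal sketch of a timeline that exposes small, inspectable operations an agent could call and a web player could render immediately. The class and method names are illustrative assumptions, not Tellers.ai's actual API.

```python
# Hypothetical sketch of an agent-controllable timeline (not Tellers.ai's actual API).
from dataclasses import dataclass, field

@dataclass
class Clip:
    asset_id: str      # reference to an indexed source video
    source_in: float   # in-point in the source, seconds
    source_out: float  # out-point in the source, seconds

@dataclass
class Timeline:
    clips: list[Clip] = field(default_factory=list)

    # Each method is a small, inspectable action an agent could invoke.
    def append_clip(self, clip: Clip) -> None:
        self.clips.append(clip)

    def trim_clip(self, index: int, source_in: float, source_out: float) -> None:
        self.clips[index].source_in = source_in
        self.clips[index].source_out = source_out

    def move_clip(self, old_index: int, new_index: int) -> None:
        self.clips.insert(new_index, self.clips.pop(old_index))

    def duration(self) -> float:
        return sum(c.source_out - c.source_in for c in self.clips)

# Example: an agent assembles a rough cut from two indexed assets.
timeline = Timeline()
timeline.append_clip(Clip("interview_cam_a", 12.0, 34.5))
timeline.append_clip(Clip("broll_city", 3.0, 9.0))
timeline.move_clip(1, 0)
print(f"{len(timeline.clips)} clips, {timeline.duration():.1f}s total")
```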
How do you balance the automation versus creative control with the editor? The editor has a certain workflow behavior, and then somebody says, I want to do a little tweak here.
How do you adjust for that? Actually, we did not invent this workflow. We got inspiration from Cursor and the coding assistants, which we are using. We are two co-founders, and we are very technical. We have always worked on AI, and now we are using this coding agent. It writes half of our code, basically. And the workflow is very nice. Basically, we are developers. We are experts, so we know what we want to do. Sometimes there are repetitive tasks.
It doesn't make sense to do them ourselves when AI can do it, and it's very straightforward. Basically, we want to offer the same level of control to video editors or anyone who wants to create videos. If you want to just give it a one-line prompt and wait for your video, you can do that. But if you want to give it a full script, for instance, we are working with a reality TV post-production company where they have 2,000 hours of raw video footage, and they want to turn it into broadcasts.
So basically, they have the scenarists, the writers, who input the script and turn it into a rough timeline, and then they send this pre-made timeline to the actual video editor.
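As a rough illustration of that script-to-rough-timeline step, the sketch below matches script beats against transcripts of indexed footage using simple text similarity. The indexing format and the scoring are assumptions made purely for illustration; the platform's actual pipeline is not described in the interview.

```python
# Illustrative sketch: matching script beats to indexed footage to build a rough cut.
from difflib import SequenceMatcher

# Indexed segments: (asset_id, start_s, end_s, transcript snippet)
indexed_segments = [
    ("ep01_cam_a", 120.0, 128.0, "I never expected the judges to pick my dish"),
    ("ep01_cam_b", 305.5, 311.0, "the kitchen was complete chaos this morning"),
    ("ep02_cam_a", 42.0, 50.0, "we talk about tonight's elimination"),
]

script_beats = [
    "Contestant reacts to the judges picking her dish",
    "Chaos in the kitchen before service",
]

def best_match(beat: str):
    """Return the indexed segment whose transcript best matches a script beat."""
    return max(indexed_segments,
               key=lambda seg: SequenceMatcher(None, beat.lower(), seg[3].lower()).ratio())

rough_timeline = [best_match(beat) for beat in script_beats]
for asset_id, start, end, text in rough_timeline:
    print(f"{asset_id} [{start:.1f}-{end:.1f}s]  \"{text}\"")
```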
Okay. I may have another question for you at the end of this that has nothing to do with it. So I’m writing an article right now on vibe coding.
Okay, so we are the vibe editing company.
[Daniel W. Rasmus: Exactly. Well, I was talking to a company the other day called AppZen. They do back-office financial workflows. And they were talking about vibe re-engineering. So they had high-level system SMEs, similar to video editors, who were modifying prompts on the fly through agents and doing kind of vibe-change reengineering of processes rather than saying, 'We're going to do this big change effort.' So yeah, interesting.]
Give a little walkthrough, if you can, of the video generation pipeline. How you deal with scripting, pacing, audio, all that kind of stuff.
So basically, as I told you, we worked on this specific video player that lets us create a specific video editing timeline and everything. We rebuilt everything so that it worked perfectly with the agentic workflows. When you ask something of the agent, you will be able to see each modification in real time. If the agent is moving a clip, you will see it. If the agent is searching for some footage, searching inside your assets, analyzing videos, cutting or reshaping the timeline, aligning the clips with the audio, analyzing whatever, or generating videos with AI, you will see every step very clearly. It's full transparency.
We always show the AI models that are being used explicitly. We show the amount of credit that is spent, and you can see the real-time video being built on the site. That gives a lot of control and also a lot of intuition-building, so the user understands what the agent is doing, and when they want to do something manually because it's faster, they can do it very easily.
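A minimal sketch of the kind of action log that underpins this transparency appears below: each step records the action taken, the model involved, and the credits spent. The event names, models, and costs are invented for illustration and are not Tellers.ai's actual schema.

```python
# Minimal sketch of a transparent agent action log (illustrative only).
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AgentEvent:
    action: str
    model: Optional[str]   # None for purely local operations
    credits: float

def log_event(event: AgentEvent) -> None:
    # In a real UI this would stream to the browser so the editor
    # sees each modification as it happens; here we just print it.
    stamp = datetime.now(timezone.utc).strftime("%H:%M:%S")
    model = event.model or "local"
    print(f"[{stamp}] {event.action:<28} model={model:<15} credits={event.credits:.2f}")

log_event(AgentEvent("search_footage('sunset')", "embedding-model", 0.05))
log_event(AgentEvent("cut_clip(asset=broll_01)", None, 0.00))
log_event(AgentEvent("align_clips_to_audio()", "speech-model", 0.20))
```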
What metrics do you track to assess output quality and user satisfaction? Do you evaluate semantic coherence, engagement outcomes, production accuracy, or latency/efficiency measures?
Yeah, what we want is people exporting the videos. We want people to be happy enough with the video and then take it somewhere else.
How are you addressing data privacy and content governance?
So we don’t train our model with user data. We really want to be as clean as possible. We are Europe-based, so we also have to abide by all the regulations there, which are stricter than in the US.
Europe is backing away a little bit because we're not going there, so…
Yeah, so basically, for now, we are definitely not training on any user data. It is all cloud, and whenever you upload your assets to the platform, they will stay on your account. And for enterprise, we have specific command-line tooling that will only upload a proxy of the videos, so a low-resolution format of the videos, if they really want to keep the intellectual property of the big source files. That adds security on top of it; it's an extra layer of security.
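For context on proxies: a common, generic way to produce a low-resolution proxy before upload is an ffmpeg transcode like the one sketched below. The exact settings Tellers.ai's enterprise tooling uses are not specified in the interview; this is only a plausible example.

```python
# Generic sketch: generate a low-resolution H.264 proxy with ffmpeg before upload.
import subprocess
from pathlib import Path

def make_proxy(source: Path, height: int = 540) -> Path:
    """Transcode `source` into a small proxy file, leaving the full-res master untouched."""
    proxy = source.with_name(source.stem + f"_proxy{height}.mp4")
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", str(source),
            "-vf", f"scale=-2:{height}",      # scale to target height, keep aspect ratio
            "-c:v", "libx264", "-crf", "28",  # heavier compression than the master
            "-preset", "veryfast",
            "-c:a", "aac", "-b:a", "96k",
            str(proxy),
        ],
        check=True,
    )
    return proxy  # only this file would be uploaded; the source stays local

# Example (requires ffmpeg on the PATH):
# make_proxy(Path("interview_master.mov"))
```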

Platform risk, hallucinations, incorrect edits, how do you… Because we've been told now that it's going to happen.
I wouldn't call them risks, because we are working on video editing, which means, first, it's not very dangerous to create a video, and also we always have an operator behind it. It's not something where you click a button and then you get a video that is published on Instagram or YouTube or whatever; you always have an operator, and the operator is the one responsible for what is being produced. So for us, it's more a question of productivity and ease of use, and of making sure the user is in a state of flow, so he can keep editing the video and seeing what the agent is doing in a way that is very fluid.
So user intent. If somebody wants to do a deep fake, is that possible?
Yeah, many of the best models, even when we integrate them, for instance Sora, don't allow us or anyone to generate deepfakes of other people. But some other models, like Veo, allow it, so everyone is navigating this. I believe that as long as we are providing a tool, and not something that is directly publishing, but more a creativity tool, then I still want to hope for humanity that people can get tools in the end to create movies and stories.
And then maybe the government will have to tell people that they should not do that stuff. I'm not naive. I know that we could install some restrictions, but it's very hard to navigate.
Yeah, I do a lot of writing about how it's not the AI's fault, it's our fault, right? Whatever we do with it. I mean, yes, it makes mistakes, but it's not doing that on purpose either. When we're doing deepfakes, we're doing it on purpose, and we know what we're doing here.
I think the general population is getting educated, so it will be less and less impactful to have deepfakes, because people will know that it's possible, and we will start labeling real content. For now, we are labeling AI content, but it's actually a big belief of mine that in like two years we will have to label real content, with media sources, and that's what will matter, because we should just consider everything potentially fake by default.
What's the go-to-market motion right now (creator economy, agencies, enterprise content teams, or education), and how does that influence roadmap priorities?
Yes, we want the tool to be very generalizable. We want to keep the interface very simple, so anyone can use it, and very generic, so it can work on any use case. But because we are a startup and we need revenue, we also want to focus on very specific verticals for the go-to-market. So the tool will remain usable by a lot of different people and use cases, but our go-to-market is focused on big post-production and production companies, then marketing agencies, and then some smaller companies that have a strong video strategy.
On the business model side, you mentioned a bit of this: SaaS, usage-based APIs or agent calls, white-labeling of workflows, partnerships with media systems, and integration with digital asset management or media management systems. Are you looking at any of that stuff?
For our clients in reality TV post-production, we integrated with Avid. It's a big post-production system, so it means that we can analyze the videos in their format, and we can then export the timeline so that video editors can finish it in their regular software once they have the rough cuts and the work is basically done.
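The interview doesn't specify the interchange format used for the Avid handoff (Avid workflows often rely on AAF). Purely for illustration, the sketch below writes a rough cut as a simple CMX3600-style EDL, one of the older formats editors use to move timelines between systems.

```python
# Illustrative sketch: exporting a rough cut as a CMX3600-style EDL.
# The real integration format is not described in the interview.

def to_timecode(seconds: float, fps: int = 25) -> str:
    """Convert seconds to HH:MM:SS:FF timecode at the given frame rate."""
    frames = round(seconds * fps)
    f = frames % fps
    s = (frames // fps) % 60
    m = (frames // (fps * 60)) % 60
    h = frames // (fps * 3600)
    return f"{h:02d}:{m:02d}:{s:02d}:{f:02d}"

def write_edl(title, events, fps=25):
    """events: list of (reel, src_in_s, src_out_s) tuples, laid back-to-back on the record side."""
    lines = [f"TITLE: {title}", "FCM: NON-DROP FRAME", ""]
    record = 0.0
    for i, (reel, src_in, src_out) in enumerate(events, start=1):
        dur = src_out - src_in
        lines.append(
            f"{i:03d}  {reel:<8} V     C        "
            f"{to_timecode(src_in, fps)} {to_timecode(src_out, fps)} "
            f"{to_timecode(record, fps)} {to_timecode(record + dur, fps)}"
        )
        record += dur
    return "\n".join(lines)

print(write_edl("ROUGH CUT", [("REEL01", 10.0, 15.0), ("REEL02", 62.5, 70.0)]))
```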
As a strategy, we also want to push our API a lot, so white-labeling in a sense. We have a command-line tool that is inspired by the command line from Claude Code, so like vibe coding, right? People, you know, the geeks, can just write a prompt in their terminal to generate a video and render it from there. That also allows them to automate some processes more easily.
And also, the command line is very useful when you have many videos to upload. As I mentioned, if you want to upload only a low-resolution format or run a process always in the background, that’s a big strategy for us. Regarding the business model, we are currently fully usage-based. We may add a subscription option for storage on the platform, but we want to keep it as simple as possible.
For example, if you want to spend nine dollars on tokens to access the platform and just do very affordable editing or talk with an agent to analyze a single video, you can do that. We don’t force you into a $500 token package or a full-year subscription.
And then, looking ahead, what is the hard technical problem over the next 12 to 24 months that you expect to have to tackle?
It's a lot about uploading, analyzing, and transcoding the video in a way that is very fast. Because when you work with video, if we compare it with vibe coding: vibe coding is text, with huge amounts of open-source training data. Video is much harder to train on and to manage because it's much heavier. And for a user to have a fluid experience, a nice experience on their phone, for instance, we need to make sure that when they upload a video, it's processed fast, and they can start doing the video edit fast. Right now, it takes a few minutes every time you upload a video for it to be analyzed, cut, and indexed in the platform.
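To illustrate why ingest latency dominates the experience, the sketch below runs proxy transcoding and analysis concurrently so the editor becomes usable as soon as the slowest stage finishes. The stage names and timings are hypothetical stand-ins, not Tellers.ai's actual pipeline.

```python
# Illustrative sketch: overlapping ingest stages so total wall time approaches
# the slowest stage instead of the sum of all stages.
import time
from concurrent.futures import ThreadPoolExecutor

def transcode_proxy(path: str) -> str:
    time.sleep(2)          # stand-in for an ffmpeg transcode
    return f"{path}.proxy.mp4"

def analyze_and_index(path: str) -> dict:
    time.sleep(3)          # stand-in for transcription / shot detection / indexing
    return {"shots": 42, "transcript_words": 1800}

def ingest(path: str) -> None:
    start = time.time()
    with ThreadPoolExecutor() as pool:
        proxy_future = pool.submit(transcode_proxy, path)
        index_future = pool.submit(analyze_and_index, path)
        proxy, index = proxy_future.result(), index_future.result()
    print(f"ready to edit in {time.time() - start:.1f}s: {proxy}, {index}")

ingest("upload_1234.mov")
```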
Let me go back to the vibe coding question. So that's all the questions I came up with for you ahead of time as I did my homework. So, on the vibe coding side, how did you use it as you were developing, and what lessons did you have as a developer? Do you like vibe coding? Would you prefer that it wasn't a thing? What's your experience?
That's a good question. The takeaway has changed every month for one year. Basically, I think now it can really do tons of work, so I think we are not far from the point where, for the vast majority of coding that you have to do, the AI will do 90% of the work.
Then you still have to validate, but it's still a huge value to know what you want to do, because it will avoid mistakes. You're the only one who has the long-term vision, and it will be very hard for you to prompt all of that long-term vision and the underlying constraints that you have in mind and that you can't even really materialize. So, when you're coding, you're always thinking about the next six months and how this decision will impact the next development.
So when you have really no experience with code and you write code, the AI will make some decisions that might not be relevant for you in two months, given your expectations and needs in the long term. So yeah, I think the first takeaway is that being a developer and knowing what you are doing is still a huge added value, and it will remain one.

But many, many people can still develop with vibe coding. I think you will have to learn some best practices, but I think you can do a lot with vibe coding.
[Daniel W. Rasmus: It’s interesting, I talked to a company, and they were telling me about content migration, legacy content migration, PDFs and stuff, getting it ready for doing some AI work.
And they asked a developer to move it out of an old repository and into a new one. The AI did that, so, going to your intent point, it did the job, but it screwed up a lot of the content with hallucinations. The developer didn't understand the intent of the people who asked it to do the job.
So these guys then went out; these were human resources people with no coding background. They started doing vibe coding. The problem I see with vibe coding, and they very clearly expressed it, was that they started doing it and then had to start learning developer stuff like GitHub, and they said, we didn't want to learn all of that. We thought we could just do it.
And so, they ended up doing this whole migration thing, and it was very successful for them, and they created a citizen developer program. Could you imagine HR people going, oh, let's get other people to do this, right? But yeah, I thought it was interesting that, as with most of the AI stuff, it's about knowing what I want, becoming goal-based.]
Absolutely, and wanting to have the details if you need them. When you want to do something, it's like the agency part of yourself: you are the only one who knows what you want to do. If you have very light expectations, then you can just roll with it, and you will get what you expect.
But if you want to build something greater that is very aligned with your vision or your constraints, then you need to go into the weeds and spend the time with it, and I think the people who know the most about the problem will be more relevant than the developers of the past, who knew nothing about the business processes they were supporting.
[Daniel W. Rasmus: I spent a lot of time in IT with people who didn't know what they were doing for other people. Yeah, I find it interesting, the development of language as the interface. We're going to have to start teaching people to express intent effectively. As I teach college courses, AI has changed my prompting for students. I used to assume they would understand what I wanted, but maybe the way I phrased my prompts wasn't clear enough, which is why I got bad outputs. So I now write my prompts for students like I'm writing them for an AI.]
Yeah, I think thatโs a good exercise. You learn to write better requirements. So it works with humans, and it works with AI as well.
One last thing about vibe coding: you talked about GitHub, I think. Actually, we had someone who worked as an intern in our company for one month and wanted to learn about Codebeats. He wanted to build the website by doing vibe coding, because he was doing a lot of vibe coding. I said, okay, sure, but you need to learn Git. So, you need to learn Git and GitHub.
It's like the only requirement that I ask of you, because whenever you make a modification, you know how to save it, you know how to revert it, and you know how to manage it and work with other people. So I think, unless you're developing your website in WordPress, you have got to learn Git. Right now, if you want to be a vibe coder, learn Git, and you will have a lot of added benefits.
About Robin Guignard-Perret

Robin has been focused and passionate about AI for 15 years. He was part of the first batch of students at School 42 and built his first AI startup in 2015. After that, he worked on AI in medical imagery, voice recognition and the media industry. He has been working on tellers.ai for more than 2 years.
For more serious insights on AI, click here.
For more serious insights on management, click here.
Did you enjoy the Robin Guignard-Perret interview? If so, like, share or comment. Thank you!
