AI is a hot topic in the tech industry, but how does it intersect with Vue.js?
In this special episode, Michael and Alex host a panel at Vue.js Nation 2025 and are joined by two amazing guests:
Patrick van Everdingen, AI Solutions Engineer
Daniel Kelly, Lead Instructor at Vue School
The four developers discuss how AI and Vue can work together. Will we all lose our jobs to AI? How might AI influence the job market, and which tips are most important for Vue.js developers to know when using AI in their projects and workflows? You'll get answers to all these questions, and more, in this episode.
Links marked with * are affiliate links. We get a small commission when you register for the service through our link. This helps us to keep the podcast running. We only include affiliate links for services mentioned in the episode or that we use ourselves.
Chapters
Welcome to DejaVue
Guest Introduction
Will we all lose our jobs to AI?
How have you integrated AI into your daily workflow?
What is your best tip/advice for using AI with Vue.js?
Does the role of documentation diminish with AI?
How do framework and library authors need to adapt to AI?
Where does environmental responsibility intersect with AI?
LLMs and Privacy
How will AI influence the job market?
Where can people find you?
Creators & Guests
Host
Alexander Lichter
Web Engineering Consultant • Founder • Nuxt team • Speaker
Host
Michael Thiessen
Full-time Vue educator
Guest
Daniel Kelly
Lead Instructor at Vue School
Editor
Niki Brandner
Audio Engineer and Video Editor
Guest
Patrick van Everdingen
Speaker, Panel Host, Full Stack Developer
What is DejaVue?
Welcome to DejaVue, the Vue podcast you didn't know you needed until now! Join Michael Thiessen and Alexander Lichter on a thrilling journey through the world of Vue and Nuxt.
Get ready for weekly episodes packed with insights, updates, and deep dives into everything Vue-related. From component libraries to best practices, and beyond, they've got you covered.
Michael Thiessen:
Welcome to DejaVue.
Alexander Lichter:
It's your favorite Vue podcast. You just don't know it yet. I guess, I mean, maybe some of you are here since the very first episode, and we are live with the panel at Vue.js Nation 2025, everybody. Yeah.
Alexander Lichter:
Michael, how do you feel about that?
Michael Thiessen:
I'm feeling great. We're going to talk about AI and Vue, and it's going to be awesome. So if you have any questions for these fine folks for this panel, put them in the chat, and we might be able to get to them. And we'll, yeah, we'll see where this whole panel goes.
Alexander Lichter:
Absolutely. And if you wanna... wait. Wait. Wait. What's, like, DejaVue?
Alexander Lichter:
Never heard about that? Well, we are a weekly podcast all around the Vue ecosystem. We have amazing guests. We had, like, Evan You, of course, a couple times, Daniel Roe, lead of the Nuxt team, but also Matt Pocock, right, the TypeScript wizard, CJ from Syntax, who talked about his, well, first true love, Vue.js.
Alexander Lichter:
And, of course, today, we also have amazing guests. So, Michael, go ahead and introduce our first guest today.
Michael Thiessen:
Yeah. So our first guest is Daniel Kelly, and you may have seen him yesterday giving an awesome talk on Vue and AI. He is the lead instructor at Vue School. And so he's he's got, like, ten years of full stack experience. He does lots of teaching, you know, all of that amazing stuff.
Michael Thiessen:
So can't wait to hear from him about, what you're what you're doing with AI. How are you doing today? Are you feeling a little bit better?
Alexander Lichter:
Perfect. And, of course, we can't only do it with one guest. We have to have two people on here at least. And our other lovely guest is a full stack developer. He has a knack for TypeScript, really good expertise, also in GenAI.
Alexander Lichter:
Actually, we talked a couple of hours ago about a super interesting project. And maybe if you're, like, a DejaVue hardcore fan, or at least listened to episode five, you have heard and maybe seen him already, because he talked about server-sent events and his side project CareerDeck AI. Welcome to the panel, Patrick van Everdingen. Patrick, how are you doing?
Patrick van Everdingen:
I'm doing great, Alex. Thank you. How are you?
Alexander Lichter:
Good. Yeah. Psyched about the panel. And, yeah, as Michael said, really curious what the chat is saying. A big round of applause right now for all our amazing guests, and we should start straight away with the first question, which is a very simple one.
Alexander Lichter:
Will we all lose our jobs to AI? What do you think, Patrick, Daniel?
Daniel Kelly:
Patrick, you wanna take a stab at that one first?
Patrick van Everdingen:
I think we'll not lose our jobs to AI. I think we'll have, new jobs coming in thanks to AI, and we'll always have some sort of job where AI will play a big role in our lives. Yeah. I don't think we'll lose our jobs. No.
Patrick van Everdingen:
What do you think, Daniel?
Daniel Kelly:
Yeah. I I definitely agree with that. We're gonna lose our jobs as they currently stand. Right? They're they're not gonna be the same jobs.
Daniel Kelly:
They're gonna be the same in some respect, in a lot of the same respects. I mean, we're still needed in that development life cycle, but, yeah, they're gonna be different. It's not what we originally, you know, trained for, a hundred percent. Right? It's gonna look different.
Michael Thiessen:
So with these AI models getting better every couple months, we see that they're capable of more and more and, you know, surpassing all these benchmarks and whatever hype we see on social media. Why do you think that they aren't gonna take our jobs? Like, there's obviously the other side that's like, oh, they can code better than the best humans, so, of course, it's only a matter of time. But, like, what would you say to that point?
Michael Thiessen:
And why, yeah, why do you think that they're gonna need us, or that we will need to use AI, rather than just having a button press that builds the app for us?
Patrick van Everdingen:
Well, I think an important aspect of using AI responsibly is having the human in the loop, someone who keeps track of what an AI system is doing. Whether it's coding, marketing, or writing content, you always need to have some sort of human approval in this entire process. I think we're doing that right now with coding. Cursor or Claude generates code for us. We check if it's valid. Hopefully, yeah.
Patrick van Everdingen:
Hopefully. Will the code quality get better? I think so. If we compare to where we were standing two years ago, the steps that generative AI and code assistants are taking are very huge. I remember having that wow moment when Claude 3.5 Sonnet came out.
Patrick van Everdingen:
It just blew my mind how good it was with coding. And I imagine we'll have some of these moments in the nearby future as well. I mean, they're constantly evolving these models, and, yeah. But still, in the end, someone needs to keep track of what the output is, and we will still have a job at the end of the day.
Daniel Kelly:
Yeah. Most certainly. And, you know, I think, in the short term, it does take over the responsibilities of, you know, the things that some people would be getting hired for now. Right? Because the people that are already on the team can do those things faster.
Daniel Kelly:
But in the the longer term, the web is just going to look different because we're capable of doing more. Right? And so you think about, you know, the early days of TV and video. Right? You got, like, black and white TV, and you've got, you know, things that just don't look as good, and then suddenly new technology comes out that makes processes and things faster to distribute it to more people and things.
Daniel Kelly:
You know? Does that mean people lose jobs in television because, you know, things get faster and better and logistics improve? No. It actually creates a lot more jobs. You know? So I think that a lot of the things that we do on the web now, you're going to be able to do faster.
Daniel Kelly:
But it's the whole experience of the web is going to increase because we're able to do those things faster. And that's just gonna create the demand for even more. And so there's ultimately, there's gonna be plenty of jobs.
Patrick van Everdingen:
Yep. To add on that: a while ago, there was this article, The Rise of the AI Engineer.
Patrick van Everdingen:
I still remember when I read that, I was a bit skeptical. Like, what's the definition of an AI engineer? Are you responsible for training the models? Are you responsible for implementing those models in software?
Patrick van Everdingen:
And slowly, this year, last year, I was starting to see more of those real job listings appear on LinkedIn. So I think there's a whole new demand for engineers who are implementing these models in existing software. So that also is another aspect. We lose certain kinds of jobs. Certain jobs go faster, but we also get new jobs because of this.
Patrick van Everdingen:
Yeah.
Michael Thiessen:
I've heard it said in the past that software developers are not programmers. Like, we solve problems with technology, and writing code just happens to be the output of that. It's like an implementation detail. And this was well before the AI of the last couple years. And I think that's still true.
Michael Thiessen:
And it's just that coding is one of the things that we do, and maybe that changes. We're wrangling AI a bit more and more over time.
Alexander Lichter:
Yeah. And I think that's also a key part, Michael, what you just mentioned. Like, not seeing the programmer as a code monkey. And hopefully the, let's say, annoying part, like, I don't know, the boring tasks, can be automated away and stripped away like it is right now. Like, okay, maybe scaffold some test cases for us.
Alexander Lichter:
We just have to check if they're valid. But if we don't do that, well, then we have a problem. So it's also maybe a bit of using AI for what it is good at. It's a bit like, I don't know, if you only have a hammer, everything is a nail. So, like, okay.
Alexander Lichter:
I just use that to, like, generate me everything, and the outcome will not be good if you don't really check it in a way.
Patrick van Everdingen:
Yep. Yeah. Garbage in, garbage out, they say. The quality of the output of your generated code is only as good as how well you prompt the AI, how well you communicate what the context of your project is and what it needs to do. And my experience with these kinds of things is: the longer the task that you define for the AI to solve, the higher the probability that the output of the code becomes less usable.
Patrick van Everdingen:
That's my experience. I like to break things up in smaller chunks. And if you know how to prompt these smaller chunks, I think your life as an engineer becomes much easier and faster.
Daniel Kelly:
Yeah. For sure. And it's kind of like having a little bit of extended memory, less so than, like, another strategic part of your brain. Right? It helps you remember where some things are and what patterns are in the code, but it's not necessarily always thinking strategically like a human could.
Patrick van Everdingen:
Yep.
Michael Thiessen:
So we'll go to our next question, which is how have you integrated AI into your daily workflow when you're writing code? Why don't I go to you, Daniel? You gave a talk yesterday. I think you covered some things about this. Maybe you could give just, like, a couple things from that talk.
Michael Thiessen:
Don't don't redo your whole talk here.
Alexander Lichter:
Yeah. Watch watch the talk if you haven't seen it. Yeah. Exactly.
Daniel Kelly:
Honestly, for me, so in content creation, I don't code every single day. But when I do, a lot of the ways I'm using AI is to brainstorm directions for my app. Like, you know, if I'm building this example application for some, you know, video series, what features can I build in this app to showcase the concepts that I'm teaching? Right? And it can help me say, okay.
Daniel Kelly:
Try this feature out. If you wanna do, you know, real time features, try building a chat app or, you know, a real time stock prices thing. Or it just gives me some ideas. That's one way that it helps me. But then it goes further: you know, I can ask it, and it'll suggest some tooling and some libraries or things like that that can help me in this context. But then in terms of actually coding, when I am working in the code, of course, I use the autocomplete stuff.
Daniel Kelly:
It's extremely handy. That's just, you know, kind of the second memory part of your brain. Okay? And then every once in a while, I use that composer, you know, to bootstrap a new component and things like that.
Patrick van Everdingen:
Do you trust composer mode with the YOLO mode enabled?
Alexander Lichter:
Have you heard of it?
Daniel Kelly:
No. I do not.
Alexander Lichter:
Maybe
Patrick van Everdingen:
Have you tried it?
Alexander Lichter:
Maybe give a little context to everybody out there who doesn't know what the composer or YOLO modes are, for people not using Cursor. Maybe that's a good part there, and then I'm happy to hear more about experiences with that.
Patrick van Everdingen:
Yeah. So composer mode is a feature within the Cursor IDE, which in turn is a fork of Visual Studio Code. It looks exactly the same as Visual Studio Code, has the same extensions and plugins, and it is a feature that allows a large language model to, I would say, semi-autonomously construct a new feature for you. And that is not tied or limited to just code generation in a file that you're working in. It also allows a large language model to access the file system, create new files, and run stuff in the terminal. So I think it's a really innovative feature that kind of helps you get in the right direction, given that you have enough supervision.
Patrick van Everdingen:
If you give it enough supervision, it will work pretty well. And this YOLO mode, it's a flag that you set in the settings. It basically means: alright, I'm gonna give up all control over my code editor, and I want the large language model to do whatever it wants, which also means running commands in the terminal. And I once tried it, and I said, hey.
Patrick van Everdingen:
Let's build this new component. It has these requirements. Ensure this, this, this, make an API call there. And I pressed enter, went out to make coffee, returned after ten minutes, and it had just created 42 new files and messed up the entire code base.
Patrick van Everdingen:
I don't think it's very useful yet, but I can see where things are going in the coming years with this new feature. I don't know how your experience has been, Daniel.
Daniel Kelly:
Forty-two new files. That's wild. Yeah. No, I haven't tried it.
Daniel Kelly:
The fact that it had access to run commands scared me, and I just didn't wanna trust it to do that. And I didn't wanna take the time to, like, learn how to sandbox it or something. So, yeah.
Alexander Lichter:
It's also, like, it spamming: oh, rm -rf /... oh, oops.
Daniel Kelly:
Yes. Right. Yeah. Yeah.
Alexander Lichter:
Or, like, oh, just the wrong prefix. I want to remove the component.
Michael Thiessen:
There's a checkbox that says disallow, you know, removing files. So they may
Alexander Lichter:
Ah, okay.
Michael Thiessen:
have built that into it so it's a little safer.
Alexander Lichter:
Then it can only execute some malicious things. That's good. Yeah.
Michael Thiessen:
Exactly.
Alexander Lichter:
Yeah. Yeah. But at least that's a good way to make sure that not everything is lost accidentally. Like, oh, yeah, at least the git folder is not gone. Too bad.
Daniel Kelly:
Yeah.
Alexander Lichter:
But, I mean, it also fits in a way with what the two of you described: AI is useful if you have the supervision, if you check your code. I think that also leads pretty nicely into another point, which is: what is your best tip or advice for other developers doing AI-assisted development in Vue.js specifically? Besides, of course, checking what the output is. Mhmm.
Daniel Kelly:
I would say just, you know, take it in small chunks. And we kinda already said that. Right? Don't try to do everything at one time, and this is going to be the most helpful thing. Remember, you're still the brain in the operation.
Daniel Kelly:
It's just it's just helping.
Patrick van Everdingen:
Yep. Yep. I agree. My advice would be: use Cursor's documentation feature.
Patrick van Everdingen:
Basically, Cursor allows you to ingest both local and online documentation of your favorite framework, or of an obscure plugin that nobody's ever heard of. And it basically uses that as a reference to write your code. When I started with this, it greatly reduced the amount of hallucinations, because it uses grounded facts and documentation to write code. I think that's really useful. And, yeah, I'm completely on team Claude 3.5 Sonnet.
Patrick van Everdingen:
I think that's super good stuff. Yeah. And
Alexander Lichter:
Also expensive, I heard.
Patrick van Everdingen:
Sorry?
Alexander Lichter:
Also expensive, I heard, or at least compared to other models.
Patrick van Everdingen:
Quite alright nowadays. I have a Cursor subscription, $20 a month. It basically gives you 500 fast requests with Claude Sonnet, which I think, yeah, suits me pretty well. Yeah.
Daniel Kelly:
They have, so-called, premium models, a few that you can choose from, and Claude costs no more than the latest OpenAI model on there.
Alexander Lichter:
Yeah. That's nice. Yeah.
Patrick van Everdingen:
And what's also good to know is that Claude is also available within GitHub Copilot in Visual Studio Code as of a couple weeks ago. So if your company has a GitHub Copilot subscription, you can always switch from GPT to Claude.
Alexander Lichter:
Yeah. That's what I did for most of the things, because I'm mainly using GitHub Copilot in VS Code, and then it's like, oh, yeah, Claude, let's try. It gives better results, actually. So I was really happy that that happened.
Daniel Kelly:
Perfect results.
Patrick van Everdingen:
What I kind of dislike about the versioning of these large language models is that they kept referring to Claude 3.5 Sonnet as Claude 3.5, and then they released a new Claude 3.5, which they didn't call 3.6. They just called it 3.5 (new). So, yeah, ensure that you refer to the right version of Claude, which I think has a date stamp in the model name itself. So always check, because your experiences may vary depending on the model that you're using. It's sometimes confusing in, yeah, large language model land.
Alexander Lichter:
It also sounds a bit like the versioning strategies could have been, well, improved a little bit. I mean, at least in software development, we have kind of figured it out with semantic versioning. But there are also other suggestions. But, yeah, that's good.
Daniel Kelly:
Just ask AI what to name it.
Alexander Lichter:
Just ask. That's a good idea. Yeah. Alright. There is one question.
Alexander Lichter:
A lot of questions from the audience, actually. So let's do something that's quite related, because, Daniel, you mentioned the extended memory, and it's about the role of documentation. It's also a fair point from the perspective of a maintainer. The question is: does the role of docs diminish? Do docs get auto-generated from prompt questions? Also, do people still read docs?
Alexander Lichter:
Why can't they just use AI to figure out what the library is about, like Michael showcased in a recent video as well? What do you think about that?
Daniel Kelly:
The way that we create docs visually for people to ingest, yeah, I think the need for that does diminish some, like having a website you can go to with docs that look nice. But does your app still need documentation? Yes. Absolutely. I mean, that's how you're teaching the language model about your app in a lot of ways.
Daniel Kelly:
So, in fact, that's one of the things I mentioned in my talk. And to be honest, I haven't tried it myself. This piece of advice is something I've heard from others online: you should ask the composer inside of Cursor to document changes for you in your app as it's, you know, doing things in the composer. Ultimately, it understands English really well, right, or whatever language. And so you definitely want to keep well-written documentation around about how your project works.
Daniel Kelly:
Yeah. Not just for your sake, but for the LLM's sake as well.
Alexander Lichter:
And do you think that, like, from a maintainer's perspective, couldn't you just say: okay, I have a lot of tests, and the code is expressive enough? Then, as you just said, if AI reads through the code base and generates docs from that, do you think this will have, like, a similar impact to writing the docs by hand manually right now?
Daniel Kelly:
I mean, I think part of it could. But imagine writing a super expressive variable name versus writing a comment. Sometimes you've got variable names that are good enough, and then you've got variable names that are precise. And a lot of those variable names that are really precise are actually a pain to write every time. Right? So I kinda see the difference there being: writing things out in the comment-doc style captures more of the essence of what's happening, versus, you know, something that's happening in the actual code and variable names and things.
Michael Thiessen:
I'll say, from a content creation, like, technical writing standpoint, I've experimented a bit with trying to get AI to help with writing articles, or anything like that. And it's not very good, especially compared to, like, really well-written documentation. It just doesn't have all of the nuance that you would expect, or references to, like, design decisions. Like, in the Vue docs, it might say, well, we did this because of this, or it'll reference another part of the documentation. And in theory, you could build a whole system that does all this, but it would be very complex. I think the best use is in that, like, question-answering dynamic.
Michael Thiessen:
It can answer a very, very specific question in the context of, like, what you're doing with a specific file or whatever. But just, like, as general documentation of, okay, I wanna learn this thing, and I'm gonna read through and see what it does? It's not there yet.
Alexander Lichter:
Yeah. I've also had a similar experience, actually. Especially because it also doesn't have all the context. Let's say you start a project and you feed everything into, like, the context window of the LLM: here are all our transcripts from our meetings and why we made certain decisions, which no one ever has, like, from years ago.
Alexander Lichter:
Just imagine, like, Vue is an almost eleven-year-old framework. It's really difficult, plus probably you can't even document all of that. Sadly, some knowledge is also only bound to certain people, as in, well, knowledge transfer needs to be done in general. And with Nuxt, it's also similar. Like, there are some decisions made in team meetings.
Alexander Lichter:
Of course, we have recordings that could be transcribed, but, yeah, having the full context. And then also understanding what might be important for the developer. Not like, okay, this is how you can use it, but also maybe when you shouldn't use it. What is the danger, in a certain situation, of maybe, I don't know, using a composable in the wrong way?
Alexander Lichter:
There's always a classic example, like useFetch in a normal method or something. Like, making that possible: I think the best is that you actually, well, use the framework and then note, okay, this is a problem. And then start to generate docs with the help of AI. Say, okay.
Alexander Lichter:
Hey, I wanna write a certain part, give me some suggestions. Then you could also use it a bit for, like, finding ideas, or for phrasing, for the non-native English speakers.
Patrick van Everdingen:
Well, I think the limitations of the context window might be a problem that gets solved in the near future. I mean, compare it to GPT-3.5, which I believe had a context window of 16,000 or 32,000 tokens, whereas right now we have models that are capable of handling, I think, 256,000 tokens up to a million tokens. So, well, the technology gets better. We can stuff more context into the context window, so to say.
Patrick van Everdingen:
But for these kinds of things where you, like, give the model context, there's something in Cursor called .cursorrules. Maybe you guys have heard of it. Mhmm. It means that you can put all kinds of things in there that you prefer. Like, for example: I prefer the Options API over the Composition API.
Patrick van Everdingen:
Ensure to always output options API code. Yeah. That's an example.
Alexander Lichter:
But you wouldn't do that.
Patrick van Everdingen:
No, I wouldn't do that. No. But there are people who might have this preference. You can also stuff kind of all kinds of decisions in there.
Patrick van Everdingen:
You're not limited to just code as a language; you can also, yeah, put normal English in there. It will help you steer the output of your code. So I think we're getting there, that we're able to stuff all kinds of things into the context, and it doesn't matter how big the context is. But we'll see.
Alexander Lichter:
I guess the problem is also, like, you have to have the information somehow. Right? If it only exists in the heads of the people, then you can't, well, stuff it in. But that's a good point to mention, that the window size problem might be solved at some point.
Patrick van Everdingen:
Yep. Yep.
Daniel Kelly:
And in terms of, my apologies, Michael. Go ahead.
Michael Thiessen:
No. You you have something to say. Go ahead.
Daniel Kelly:
Well, as far as docs are concerned, and maybe not even necessarily docs being used as context for the AI, but just docs being useful for the developer. You know, one of the reasons we, well, I say we, we all like TypeScript, but one of the reasons that people who like TypeScript like it is because it keeps you from jumping out of your IDE to your browser to documentation as much. Right? And so I think AI will allow us to remain inside of our IDEs even more. It already is.
Daniel Kelly:
Right? And so that's another way in which the importance of docs will diminish some.
Alexander Lichter:
Maybe that's also a good point to mention that some frameworks, for example, I know Svelte is doing that, are providing LLM-friendly documentation, just like an llms.txt file with, like: hey, this is Svelte, here's how you do things. A bit like how you could build your custom Cursor rules saying, hey, this is how you use certain methods of the framework you use, but provided by the framework by default. So maybe that's a good segue.
Alexander Lichter:
So, how do you think framework and library authors have to adapt to make libraries or frameworks more, say, AI-friendly, for, like, more accurate code generation, for reasoning, for just, like, using things?
Patrick van Everdingen:
That's a very good question.
Alexander Lichter:
It's difficult. Right?
Patrick van Everdingen:
Yeah. Yeah. How can yeah.
Patrick van Everdingen:
It's not easy.
Alexander Lichter:
Maybe, while you're thinking about it, just to add on that: especially nowadays, AI is really good at generating React code, obviously, because it has a lot of training data on that, given the, like, big React ecosystem and a lot of AI tools targeting React by default. v0, for example, has Vue support by now, but had mainly React support before. But on the other hand, we can't just say, hey, we only use the most popular tool forever now just because AI is good at it, let's say. And, I mean, AI can also generate valid Nuxt and Vue and so on.
Alexander Lichter:
So, yeah, is there is there something that you could think of how how maintainers could improve things?
Patrick van Everdingen:
A while ago, I read an article written by some folks at Anthropic, the company behind Claude. And they had some suggestions for getting better structured output from Claude given a certain prompt as input, and they recommended structuring your prompts in XML tags. And it could be XML tags that are nonexistent elsewhere, for example instructions or context. I can't remember the name of the research paper anymore, but it mentioned that it could greatly improve the quality of the output of an LLM. So my first thought would be: when writing documentation for frameworks and libraries, place code examples in an XML tag named code example, or a discussion tag if you have a controversial take or a hot topic or something.
Patrick van Everdingen:
That's the only thing I can think of right now.
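The XML-tag structuring Patrick mentions might look something like this in practice. The tag names here are illustrative, not a fixed standard; the idea is simply that clearly delimited sections help the model separate instructions from reference material:

```xml
<instructions>
  Generate a Vue 3 single-file component that shows a counter.
  Follow the style of the reference snippet below.
</instructions>
<context>
  The project uses TypeScript and the Composition API.
</context>
<code_example>
  <!-- reference snippet the model should imitate -->
  <script setup lang="ts">
  import { ref } from 'vue'
  const count = ref(0)
  </script>
</code_example>
```

The same pattern can wrap documentation sections, as Patrick suggests, so a model consuming the docs can tell a code example apart from a discussion or caveat.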
Daniel Kelly:
Yeah. I think that's very interesting. I think maybe more people will even rely on the more traditional Markdown-style documentation that you see on GitHub. Right? And you won't have a link to a web page from the GitHub docs or readme.
Daniel Kelly:
You'll just expose that Markdown file easily so you can add it to the context of your IDE's composer, and you've got a lot going for you right there, I think.
Alexander Lichter:
Yeah. So, basically, having it all structured, so to say, either with, like, XML tags, like, okay, I have my own, let's say, structured system, or, well, Markdown is also structured in a way. And not relying on fancy websites with, like, big animations. Like: here's the info as a dump for the LLM to learn. Yeah.
Daniel Kelly:
Yeah. Yeah. But, yeah, I mean, to your point, Patrick, you could have even more structure over the current, you know, just-Markdown format of headings and things like that, using XML to really tell it more semantic things. And maybe there becomes some kind of standard around that, I don't know.
Daniel Kelly:
That could be interesting.
Michael Thiessen:
Yeah. Yeah. I've seen recently, there's this llms.txt file that people are trying to popularize for websites. It's similar to, like, your robots.txt file, where crawlers will read that file for some information. But, basically, this file is, like: take all of the content of your website and just stuff it into a single file.
Michael Thiessen:
And then the LLM can just go and read that, instead of having to, you know, parse out all of the irrelevant UI that it doesn't need to really read
Alexander Lichter:
Mhmm.
Michael Thiessen:
Then you can just put it in there. So I wonder if that could be combined. Well, you could make this llms.txt file structured with the XML tags and stuff like that.
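For reference, the proposed llms.txt convention is itself just structured markdown: a title, a short summary, and sections of links to markdown resources. A sketch for a made-up library could look like this (the library name and URLs are invented for illustration):

```markdown
# MyVueLibrary

> A hypothetical Vue 3 component library. This file is a single,
> markdown-only entry point for LLMs, in the spirit of llms.txt.

## Docs

- [Getting Started](https://example.com/docs/getting-started.md): install and basic usage
- [Components](https://example.com/docs/components.md): full component reference

## Examples

- [Counter demo](https://example.com/examples/counter.md): minimal Composition API example
```

Served at the root of a documentation site, a file like this lets a model or agent pull in the whole surface of the docs without scraping any page chrome.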
Patrick van Everdingen:
Yep. Yep.
Alexander Lichter:
That's also what Svelte is doing. For example, they have, like, a system tag saying: this is the developer documentation for, I don't know, for example, SvelteKit. And then they have markdown, like, notes and so on and so on. So it's kind of a mix; it also goes in that direction.
Alexander Lichter:
Yeah. But I really think, like, in the future, there will be some kind of alignment, at least to make things more graspable, like, with fewer tokens and more accurate for LLMs. So it will be interesting. Okay, Michael, what else have we got, question-wise?
Michael Thiessen:
Yeah, we have a couple of questions about environmental responsibility. So I'm gonna pop up this one. Yes. Where does environmental responsibility intersect with our adoption of LLMs? Because we know that these LLMs are using tons and tons of energy, and even, like, Meta is investing in building their own nuclear reactors and, like, you know, crazy things like that, because they're just so power hungry.
Michael Thiessen:
So how have you thought about this, or do you have any ideas on this question? A bit of an ethical or moral dilemma here.
Alexander Lichter:
And before you answer: "AGI will solve that" is not a valid answer. I'm not making it easy here.
Patrick van Everdingen:
I would say, before you consider using AI or implementing AI in existing software or applications, try to think: do I need AI? Do I need AI? There are these examples I see online, primarily within a realm of AI and LLM engineering called agentic workflows. They often give, like, an example of, okay, let's build an agent.
Patrick van Everdingen:
Well, it will independently and autonomously search the Internet and look for the price of a certain stock on the stock exchange. Well, we have an API, maybe a couple of them, around for this exact purpose. Why would you feel the need to spin up a large language model to do something that an API call can do much faster and with much less power?
Patrick van Everdingen:
That would be my first concern.
Patrick van Everdingen:
And the second would be: yeah, try to consider using your own local large language model that you host with something called Ollama.
Patrick van Everdingen:
It allows you to run models locally. That way you're not dependent on some sort of third party that uses power, yeah, in a way that it doesn't need to. And I'm also thinking, I don't know much about this topic, but I also read somewhere that most of the energy usage goes into training the models, like making them train on all the data that gets fed in. So try to consider not training your models too much, or fine-tuning them too much. But that's just an idea I have.
Daniel Kelly:
Well, going off of that, Patrick, the thing you mentioned about training being one of the most energy-heavy parts: the DeepSeek model, like, I saw someone just mentioned it in the comments as well. You know, it took them far less time to train this DeepSeek model. I mean, this exploded on the Internet, like, Monday, and they did it with graphics cards that were, you know, the old stuff. Right? And I imagine that uses less power as well.
Daniel Kelly:
I'm not a hardware guy. That's above my pay grade. But, yeah, I think the technology is certainly going to improve, and we're already seeing that.
Alexander Lichter:
Yep. Pretty good points. And I think you're touching on two topics that we also had in mind. I mean, we can stick with DeepSeek for a little bit. Do you have a take on DeepSeek R1, the big thing in AI recently?
Alexander Lichter:
Well, a lot of big things, but one of the most recent ones.
Patrick van Everdingen:
I'd say, DeepSeek, it's pretty impressive that you can run a reasoning model, because that's what it is, a reasoning model, that is able to compete with OpenAI's flagship reasoning model, which costs, I don't know, $200 a month, I think.
Alexander Lichter:
They make no profit on that. They're not even profitable at two hundred dollars. That's the craziest part. Yeah. So
Patrick van Everdingen:
I think it's very good for our field of work that these new innovations, which drive innovation and lower costs, are coming out. I think it's awesome that you can just run it yourself. It helps you with certain reasoning tasks. Given the Vue.js ecosystem and coding, I've tried to use it to code.
Patrick van Everdingen:
I think I'll stick with Claude.
Daniel Kelly:
Oh really?
Daniel Kelly:
I think it's still a bit faster.
Patrick van Everdingen:
And, for example, I tried to, to give an example, create a server route, use useAsyncData, and it just made up something with Express.js. And I'm like, yeah, it's probably not trained on the Nuxt documentation, maybe briefly, or not as much as GPT-4 or Claude. So in my experience, it's awesome that it's open source.
Patrick van Everdingen:
It's awesome that you can run a reasoning model in your own house for free. But I'm not using it for anything production-wise. No, I think it's not that good yet. That's my take.
Daniel Kelly:
Patrick, were you interacting with it, like, through the chat interface, or do you have any IDE integrations with DeepSeek that work yet?
Patrick van Everdingen:
Yeah. So I implemented it in my Cursor workflow using Ollama.
Daniel Kelly:
Okay.
Patrick van Everdingen:
You use it to run local models; you can pull the latest large language models and then use them. So I run it in the terminal, actually.
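For reference, the workflow Patrick describes looks roughly like this on the command line. Model names and tags change over time, so treat these as examples and check the Ollama model library for current ones:

```shell
# Pull a model once, then chat with it locally in the terminal.
ollama pull deepseek-r1        # download the DeepSeek R1 reasoning model
ollama run deepseek-r1         # start an interactive local chat session

# Ollama also exposes a local HTTP API that editor integrations can talk to:
curl http://localhost:11434/api/generate \
  -d '{"model": "deepseek-r1", "prompt": "Hello"}'
```

The local HTTP endpoint is what lets tools like Cursor treat a locally hosted model as just another completion backend.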
Patrick van Everdingen:
And what's also interesting, of course it's a little political take, but I believe it's also a model that's censored. If you ask about certain topics, it will refuse to answer, especially in the web UI. But this is also the case with some topics in OpenAI's models, for example.
Patrick van Everdingen:
There's always some sort of censorship.
Alexander Lichter:
Some sort of bias. Yeah.
Patrick van Everdingen:
Yeah, bias, censorship, but you can circumvent this with certain jailbreaking prompts, which I think is interesting as well. So, all in all, yeah, I think it's awesome that it has come out. Everybody can use it, but I'm not using it for anything right now. No.
Patrick van Everdingen:
What about you, Daniel?
Daniel Kelly:
No. Honestly, I haven't tried it yet. I've heard good things, obviously, but, yeah, I'd be excited to give it a whirl.
Michael Thiessen:
So our next question, and I think maybe our last question here as we, start to wrap up, is,
Alexander Lichter:
We still have a few, but let's find it.
Michael Thiessen:
So we'll do a question on data privacy.
Alexander Lichter:
Yes.
Michael Thiessen:
So the question is, here we go. Perfect. We're sending all sorts of data over to these LLMs.
Michael Thiessen:
And how do you think about privacy? Are you concerned about this at all?
Daniel Kelly:
So, in terms of using Cursor, like, there are ways to ignore certain files, you know, to prevent Cursor from sending data in certain files to the LLM altogether. And I think that's a good start. But, obviously, you can't hide all the files; otherwise it's useless. Right?
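As a sketch, Cursor's ignore file works along the lines of a .gitignore. The entries below are invented examples of the kinds of files you might want to keep out of the model's context; check Cursor's documentation for the exact semantics:

```
# .cursorignore - files Cursor should not index or send to the model
.env
.env.*
secrets/
config/credentials.json
*.pem
```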
Daniel Kelly:
I don't know. In terms of code, it doesn't bother me that much for the most part, because a lot of the things that we're doing, technically speaking, like, people already have done. Right? It's not, like, secret information. It's just architectural stuff.
Daniel Kelly:
But I don't know. I mean, I really haven't tried to tease out of an LLM: hey, what is this company's certain piece of software doing? I haven't tried to ask it to give me that information.
Daniel Kelly:
I don't know. Yeah, I typically don't concern myself with it too much. I'm probably a little haphazard about it.
Alexander Lichter:
I guess it also depends on what you work on, so to say. I guess when you work, I don't know, on a super secret government project, and they're like, yeah, all the files are sent over to, I don't know, some server, then people might be like, oh, yeah.
Alexander Lichter:
Let's maybe not do that. I don't know. Yeah. But, Patrick, what do you think?
Patrick van Everdingen:
Well, I know for a fact I have a couple of colleagues who work at a financial institution here in Holland, and they have a policy of not letting their engineers work with AI coding tools. As a matter of fact, everything from OpenAI is blocked; not even ChatGPT can be used. So as a last resort, yeah, they have to use local coding models, which you can also pull with Ollama. And those are quite great.
Patrick van Everdingen:
You have CodeLlama, made by Meta. Works pretty okay-ish. There's a Chinese model, Qwen, which also works really well for coding. And another aspect of ensuring privacy is that you can also run your local large language models not connected to the Internet. Your data does not get sent to a server.
Patrick van Everdingen:
Everything stays on your laptop, even without any Internet. So that would be a good way to mitigate this issue.
Alexander Lichter:
Yeah. I guess it also applies if we think about things beyond coding, like applications dealing with sensitive data, be it, like, I don't know, medical data, whatnot. Then that would also be a valid way to do that.
Patrick van Everdingen:
Yep. Yep. Yeah. Certainly. One thing you could also check out, if you're not too technical, if you haven't worked with Ollama or can't work with it yet, there's this thing called the WebLLM project.
Patrick van Everdingen:
It's very, very much a work in progress. But what it does is it allows you to basically pull a Llama model into the browser cache and run a large language model within your browser. And when I tried that for the first time, my mind was blown. Like, you don't need any Internet. The model gets cached in your browser cache.
Patrick van Everdingen:
You can just run a local large language model within your browser. I also think that's the future we're headed towards: smaller, optimized language models that are very good at one particular task. Google is doing some experiments with this, an AI API which runs in the browser. I think that is a way we can tackle privacy issues in the near future. Yeah.
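A rough sketch of what using WebLLM looks like in browser code. The model ID and API details here are assumptions based on the @mlc-ai/web-llm package and change between releases, so treat this as an illustration and check the WebLLM docs:

```javascript
// Hypothetical sketch: running an LLM entirely in the browser with WebLLM.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Downloads the model weights once and caches them in the browser;
// afterwards inference runs fully locally, no data leaves the machine.
const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC");

// The chat API mirrors the OpenAI-style completions interface.
const reply = await engine.chat.completions.create({
  messages: [{ role: "user", content: "Summarize what ref() does in Vue 3." }],
});

console.log(reply.choices[0].message.content);
```

Because the weights live in the browser cache and inference runs on the client, this is one concrete way the privacy concerns discussed above can be sidestepped entirely.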
Michael Thiessen:
Yeah. The WebLLM thing is very interesting. I'll have to check that out a bit more.
Patrick van Everdingen:
Yeah. Do it.
Alexander Lichter:
I mean, that's just a start as well. Like you said, it's very much in development, and I guess there are a lot of things coming, no matter if it's, like, okay, decrease the size while keeping the same reasoning capabilities, versus running them. And I think, especially given that, I mean, I'm living in the EU, Patrick, you as well.
Alexander Lichter:
Like, I mean, I'm German, so we are the land of the GDPR, which is not necessarily a bad thing, but, well, let's not get into that. The main point is that a lot of companies are very much privacy-focused. And I mean, for them, having a solution just like, hey, okay, you wanna use AI for internal things?
Alexander Lichter:
Here, you can do it. That is a really good way, especially for, yeah, governmental or near-government organizations, or just people who are like, okay, I don't wanna leak anything to any service outside. Sweet. Okay. We're slowly nearing the end, but we can do one more lovely question.
Alexander Lichter:
And maybe it would be great to cycle back real quick.
Alexander Lichter:
We talked a little bit about it already: the job market. I think the dev job market is, in general, not that easy right now and, well, hasn't been for the last couple of years. But the question is: do you think AI will reduce the gap in the programming job market? Or, as we briefly touched on, will it increase the demand for other IT roles?
Alexander Lichter:
Will people, like, search less for front-end developers and more for people using AI? What's, yeah, what's your take on that?
Patrick van Everdingen:
I think we're looking at two processes that run in parallel. I think, on one hand, the tech market is quite bad in general. I mean, I'm speaking for Europe. These aren't the golden days anymore. I remember, like, two, three years ago during COVID, when a lot of companies had much more money.
Patrick van Everdingen:
Yeah. So that has dried up a little bit in Europe. So it's getting a bit tougher to stand out from other software engineers on the market. I think AI also has a role in, yeah, convincing companies that you might not need as many software engineers, because they can do everything with AI. I think that's the premise that many companies in Europe are now seeing.
Alexander Lichter:
Or even worse, the project manager will say, hey, we can do everything with ChatGPT, and then you need people to fix that. Yeah. Yeah. Yeah.
Alexander Lichter:
Yeah.
Patrick van Everdingen:
Like, oh, I saw a TikTok reel. It said, you can make a website with ChatGPT. Why would I need you? Well,
Daniel Kelly:
not the case.
Patrick van Everdingen:
But just like I said earlier, I think there will be a demand for other IT roles, which could be the AI engineer who's gonna work with and implement all these large language models.
Patrick van Everdingen:
And there will be software engineers, front-end engineers, who might need to implement and integrate large language models, or fine-tune data, or create a data ingestion pipeline. I think there are more possibilities arising right now due to the AI hype and boom.
Daniel Kelly:
Yeah. For sure. I mean, it's definitely going to, you know, create as many jobs, I think, as it takes. They're just gonna look a little bit different. And, honestly, as developers, we're in a very good position to take a lot of those roles because of the knowledge that we have.
Alexander Lichter:
Yeah. Great. I think that's a really good closing word here for everybody who might be concerned: as we said before, nobody will lose their jobs. It might just be different. And maybe there will also be some new roles out there as well.
Alexander Lichter:
Patrick, Daniel, thank you so much for joining this lovely panel. Could you very briefly, just in one or two sentences, say where people can find you or follow you?
Daniel Kelly:
So I'm on Twitter at danielkelly_io. Same on GitHub. I might mention there's an underscore between "danielkelly" and "io"; it's in my tag there. But, yeah, that's mostly where I'm at. Yeah.
Daniel Kelly:
And, of course, my apologies, Patrick. Of course, I'm with Vue School, so vueschool.io. Definitely come there; we've got some AI training and stuff there as well. Awesome.
Patrick van Everdingen:
Yeah. You can find me on X, or Twitter, under regexparser. Or if that's too hard, you can look for Patrick, the LLM engineer. And I'm also on LinkedIn with my full name. You can find me there.
Patrick van Everdingen:
There's only one Patrick van Everdingen, so you'll find it.
Alexander Lichter:
The only one.
Patrick van Everdingen:
The only one.
Alexander Lichter:
Perfect. Thank you so much. And that's almost the end of the show. Thank you so much, Vue.js Nation, for giving us the chance to talk a bit about AI and Vue and what the future might look like. Thanks once again to our wonderful panelists.
Alexander Lichter:
Also make sure to subscribe and join next year's event, right? Vue.js Nation 2026, over at vuejsnation.com. And Michael, you've got something else for us.
Michael Thiessen:
Yeah, there's also Front End Nation, which is coming up later this year. June. Yeah.
Alexander Lichter:
Everybody. June.
Michael Thiessen:
Oh, you can go sign up for that. The website is frontendnation.com. You can go check that out.
Alexander Lichter:
Yeah. And if you wanna hear more of DejaVue, check out the latest episodes. We have them on YouTube and on your favorite podcast platform: Apple Podcasts, Spotify. Google Podcasts is, well, not there anymore. Wherever you wanna listen. For example, we talked about our 2025 predictions with Daniel Roe, who speaks later today.
Alexander Lichter:
So shout-out to him, and also with Justin Schroeder about FormKit and a bit more AI. Definitely check it out.