The Ruby AI Podcast

CRMs Don’t Have to Suck: Rebuilding Business Software with AI and Ruby with Thomas Witt

Valentino Stoll, Joe Leo Season 1 Episode 17


Many “AI startups” today are little more than thin wrappers around large language model APIs. But what happens when those APIs improve and the platforms absorb those features?

In this episode of The Ruby AI Podcast, Valentino Stoll and Joe Leo talk with builder and investor Thomas Witt, founder of Vendors.ai and operator of the pre-seed firm Expedite Ventures. Thomas shares why he believes the next generation of durable companies must deliver real value deep in the product stack rather than bolting chat onto existing software.

The conversation explores why traditional CRMs are widely disliked and how an AI-native CRM might look completely different. Instead of rigid forms and required fields, Thomas describes a system where conversations themselves become the primary data source. Emails, meetings, and messages are embedded, searched semantically, and transformed into structured knowledge automatically.

They also dive into the architecture required to support this shift. From Ruby on Rails and Hotwire to DynamoDB, vector search, async Ruby, and multi-model LLM workflows, Thomas shares practical lessons from building AI-heavy production systems.

Along the way the discussion touches on agentic coding workflows, LLM-as-a-judge evaluation patterns, telemetry for prompt chains, and why small teams may soon replace the massive engineering orgs we’ve grown used to.

If you’re curious where Ruby, Rails, and AI systems are heading next, this conversation offers a fascinating glimpse.

Show Notes

Guest: Thomas Witt
Founder of Vendors.ai
Investor at Expedite Ventures

Topics we explore

• Why many AI startups are just “wrappers” around LLM APIs
• What an AI-native CRM looks like when conversations become the database
• Why Thomas chose Ruby on Rails with minimal JavaScript using Hotwire and Stimulus
• Using Amazon DynamoDB instead of relational databases for AI workloads
• Hybrid keyword + vector search with OpenSearch and Elasticsearch
• Async Ruby patterns using fibers, the Async ecosystem, and the Falcon web server
• Orchestrating many concurrent LLM calls within a single user interaction
• Background job systems and queues such as Amazon SQS
• Code quality workflows with StandardRB and RuboCop
• Using models like Claude, OpenAI Codex, and Gemini together in multi-model workflows
• Observability and prompt tracing with Langfuse
• Why AI tooling may enable much smaller engineering teams

Mentioned in the Show

Vendors.ai – Thomas’s AI-native CRM platform
Hotwire – HTML-over-the-wire approach for modern Rails apps
Falcon – Fiber-based Ruby web server
Ruby AI Builders Discord – Community of Ruby developers building AI tools
Chaos to the Rescue @ Artificial Ruby

Valentino Stoll  00:00
All right. Hello, everybody. Welcome back to another episode of the Ruby AI Podcast. I'm one of your hosts today, Valentino Stoll, and joined by Joe. Joe?

Joe Leo  00:08
Hi, I'm Joe. I'm the other host. I'm really excited today because I don't have to be the one to say a bunch of provocative or controversial things, because we are joined by Thomas Witt, who, in addition to being a builder and a Ruby and AI enthusiast, says plenty of controversial things for the three of us.

Joe Leo  00:29
So, Thomas, welcome to the show. It's great to have you.

Thomas Witt  00:32
Thank you very much. Really happy to be on.

Valentino Stoll  00:35
Thomas, I kind of want to dig right in. You've got a startup that you're going to tell us all about, and you're a businessman and an engineer, which I respect because I like to think of myself as that as well. You said something that I thought was interesting recently where you talked about your, it's a pre-seed investment firm, right? What's the name of it?

Thomas Witt  00:54
Yeah, Expedite Ventures.

Valentino Stoll  00:56
Expedite Ventures.

Thomas Witt  00:57
Collector of CTOs.

Valentino Stoll  00:58
Okay. And you said that, and I want to get this right here, you said that, quote, many AI startups are actually naked, implying or outright saying that, hey, a lot of startups that say they are AI, I'm using air quotes here, are really just wrappers around a Codex or a Claude Sonnet API.

Valentino Stoll  01:19
And I'd love it if you could tell us a little more about that.

Thomas Witt  01:22
Yeah. So basically we started Expedite Ventures around 2020, and we've been through many phases. So we had obviously also a Web3 phase where we got a lot of Bitcoin and Ethereum pitches. And then, two or three years ago now, we started getting a lot of AI pitches.

Thomas Witt  01:42
And basically everybody started building AI, obviously with that watershed moment of ChatGPT being released. And we simply saw a lot of very thin wrappers, what have now become known as ChatGPT wrappers, basically. And I think there's nothing wrong with a ChatGPT wrapper because in theory everybody can build anything,

Thomas Witt  02:03
but like the devil is obviously in the details. So if you execute really well, which is key to most, it really works. But basically people are building features and not products. That is our observation. So you have this like whatever optimization for ads or something.

Thomas Witt  02:20
So yeah, most likely Shopify will build that into their stuff or Meta will build that in whatever you do. Or we've seen a lot of like observation startups of how well I am doing on ChatGPT. Well, I mean, there's people like Semrush and whatever. So it's just another feature to build that in. So we think, and now they're talking about the SaaS apocalypse basically,

Thomas Witt  02:42
and we think you really got to deliver a lot of like value and really get into like the value chain at a very core point in the company to create a product which is really lasting and will not just be replaced by ChatGPT.

Valentino Stoll  02:56
I mean, I think you're spot on. And I wrote about this recently in relation to the tech selloff that you just hinted at, with the, you know, all SaaS products are doomed, right? You know, with the kind of three states where it's like, if you're a company that does one thing, it's over, right? You may as well pack up your bags because AI can do that one thing. If you're a platform and maybe,

Valentino Stoll  03:16
and you're sold into the enterprise, maybe you can hang on for a couple of years because enterprises are slow to rip things out, right? Or maybe you're doing something really well with AI, in which case you can just hang on for a little bit as long as OpenAI and Claude allow you to exist, which I think is a really ridiculous thing. But, and you know, look, markets overreact.

Valentino Stoll  03:36
That's what happens. People overreact to things. But what I'm really curious about, and I know we want to get into this on this episode, is here you are. You're building a platform. I mean, anybody who looks at a CRM, typically they think platform, right? You think Salesforce, you think HubSpot, you think these systems that inevitably branch out into different places in your marketing and your sales processes.

Valentino Stoll  03:56
So here you are building this. You have a stated goal, I would assume, to not be just a feature and to not be a thin wrapper around an existing AI API. So how do you do it?

Thomas Witt  04:09
I think first of all, it's important to understand the state of the whole CRM industry we're in. I mean, basically every company needs a CRM. So at least if you sell something, which most companies do. But the problem is they all suck. So nobody wants to actually use them. Show me that one person who says,

Thomas Witt  04:28
"Oh my God, I can log on in my CRM in the morning and it's delightful joy." No, it's not. It's terrible. You get bombarded with form fields and have to fill out stuff. Basically I tried to set up one of those when we started Expedite Ventures just to manage our deal flow. And I said, "No, that's it. I'll just use Excel or Google Sheets. That's better." Yeah. So everybody's frustrated by CRMs.

Thomas Witt  04:49
That's generally a good point to start a company with because usually you're not the only one. And I think we are interestingly at a tipping point for general AI applications because many people have been focused on B2C apps when it comes to AI. And B2B has largely been untouched.

Thomas Witt  05:09
And the existing companies, like you named Salesforce and HubSpot, by the way, they both lost 60% to 70% of their value on the stock market in the last year. So if you shorted it, you're a happy person. Otherwise you might not be. Obviously they see what's coming and the only answer they have is to bolt on a chatbot. That's basically for me the equivalent of Clippy in Word,

Thomas Witt  05:29
if you remember that one. That's what everybody does. And if they are like very fancy, like HubSpot, they build an MCP server. But I think that's not the answer to it. Especially in the CRM case, there are two main aspects. First, I think just putting everything in form fields doesn't cut it. For example,

Thomas Witt  05:48
we treat conversations as a basis of our data structure. As we are having a conversation now, you're having Google Meets, you're exchanging emails, you're exchanging WhatsApps or iMessages or whatever. And that contains a lot of data. You might not even know that you might need it later.

Thomas Witt  06:05
And fortunately with AI, we are now at a point where with embeddings and semantic databases and all that kind of stuff and vector databases, computers can actually understand the meaning of what you're saying. You don't have to fill out form fields anymore. For example, one thing we built in is we don't have any required fields in the CRM.

Thomas Witt  06:24
We just have fields we don't know yet about. That's basically one philosophy. So that's important because that's a totally different data model. Whether you say, "Okay, I want to have a zip code, I want to have a name, and I want to have an opportunity stage in percent." Rather you say, "I'll try to see your Google Meetings, your Google emails, your Meet transcripts and say,

Thomas Witt  06:43
'Hey, I see you're in New York next week and you have two hours.'" Maybe you want to meet Peter because you have done business with him a year ago, but you didn't for the last year and you made a million with him. So maybe you want to have a coffee with him. Here's an idea for an email.

Thomas Witt  06:58
So that basically demonstrates, I think, that you have to rethink on the one hand way on what kind of data you're building up. Although that influences a lot the decisions, what kind of data stores you're using, by the way, also in Rails. And on the second thing, I think we also need to rethink UIs. Basically we are building in the product in a way that we don't expect that people will use our UI.

Thomas Witt  07:20
Of course we have a UI and we need a UI at this point, but we think maybe in three years or two years we will live maybe in ChatGPT or in Messenger or whatever. And I think this whole you can use a B2B app solely by text, chat, voice will be, in my opinion,

Thomas Witt  07:39
a very defining pattern how people use B2B software. And you need to be prepared for that.
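
A toy sketch of that idea, conversations as the primary data store queried by meaning rather than by form fields. Everything here is a hypothetical illustration: the hand-made three-number "embeddings" stand in for vectors a real embedding model would return, and `ConversationStore` is not part of any product mentioned in the episode.

```ruby
# Toy sketch of "conversations as the database": each snippet is stored with
# an embedding vector and retrieved by cosine similarity, not by required fields.
class ConversationStore
  Snippet = Struct.new(:text, :embedding)

  def initialize
    @snippets = []
  end

  def add(text, embedding)
    @snippets << Snippet.new(text, embedding)
  end

  # Return snippets ranked by cosine similarity to the query embedding.
  def semantic_search(query_embedding, limit: 3)
    @snippets.sort_by { |s| -cosine(s.embedding, query_embedding) }.first(limit)
  end

  private

  def cosine(a, b)
    dot = a.zip(b).sum { |x, y| x * y }
    dot / (norm(a) * norm(b))
  end

  def norm(v)
    Math.sqrt(v.sum { |x| x * x })
  end
end

store = ConversationStore.new
store.add("Met Peter in New York, closed a $1M deal", [0.9, 0.1, 0.0])
store.add("Weekly standup notes",                     [0.0, 0.2, 0.9])
best = store.semantic_search([0.8, 0.2, 0.1], limit: 1).first
# best.text => "Met Peter in New York, closed a $1M deal"
```

In a real system the embeddings would come from a model API and the ranking from a vector index, but the retrieval contract is the same: text in, semantically nearest conversations out.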

Joe Leo  07:46
What do you say to that, Valentino?

Valentino Stoll  07:46
You make a lot of great points. And I wonder, where do you decide, as you're diving into all this, what the most valuable touchpoint is? You know, a lot of people start to question whether they should be building anything for an AI application when it could be,

Valentino Stoll  08:05
you know, one of these model companies could just decide they're going to offer it? Well, how do you not just decide, "Oh, I'm going to build for ChatGPT integration" or something along these lines versus what is the value? You know, how do you weigh that value of product building at this point? Because I feel like a lot of people, they get in that feature building mentality and they're like,

Valentino Stoll  08:26
"Well, yeah, I'll just build the features until the model company gives it to me and then I'll just stop building that feature and then have it for free and still pay the model company." Where do you see that product value and product building really translate for you?

Thomas Witt  08:42
First of all, the very old saying is still true that people massively overestimate the short-term impacts of technology and massively underestimate the long-term impacts. So there will be huge impacts for every one of us, especially in the software industry in the next 10 years.

Thomas Witt  08:58
And the next year maybe not so much because specialized applications are like really hard to build. I mean, we barely have self-driving cars. So if AI is so great, why doesn't it just build overnight Tesla self-driving car software? No, it can't. It's hard. It's really hard to do. And obviously, for example,

Thomas Witt  09:19
when we started, we talked, which is generally a great recommendation, which I would do to anybody who's starting a company, talk to a lot of people without writing a single line of code. We interviewed 300, 400 people. How do you use your CRM? Or do you use the CRM at all? So you learn a lot at that point and half of them don't need the CRM at all.

Thomas Witt  09:36
They're just fine with Excel because it's just a very small company and maybe they save a lot of money.

Valentino Stoll  09:41
And also because Excel is actually great software. It's been 40 years. It's great. Everybody wants to dump on it. It's great software.

Thomas Witt  09:48
If your software competes with Excel, your main competitor on the slide should always be Excel.

Valentino Stoll  09:53
Yeah.

Thomas Witt  09:53
But that's it. And from the others you learn a lot. And the main question was, "Hey, why don't you just build a better frontend for HubSpot and Salesforce?" And that's the short-term reaction. And we could have obviously done that because, for example, you can talk with our system and all that stuff, and we could have plugged that into HubSpot or whatever via MCP.

Thomas Witt  10:12
But that doesn't cut it because first of all, you're not really building a platform and you're not owning the data. And therefore you have a very limited understanding of the data. And what I just said, that the conversation is the basis for everything for us that simply is not possible with the data model of all legacy CRM vendors.

Thomas Witt  10:31
And I think we are going through some phase. Obviously we have the old ones like Microsoft and HubSpot and Salesforce. And they're all fine for enterprise. By the way, for example, we are not targeting enterprise. We think there is so much stuff you have to consider, integrating an ERP system, whatever. I think Salesforce is great at that. They just have that market. That's fine. But there's a huge market around that.

Thomas Witt  10:52
There were a lot of other competitors who came up, especially around the data augmentation thing, like Attio, Clay, Augment, vFlow, and whatever they're all called. And I think they solved one thing with agentic stuff, that they basically find out more information about the people you're talking to, mainly. That's what most of these systems are about.

Thomas Witt  11:12
But nobody really changed that radical thinking about CRMs. Like what if you only have a chat prompt to interact with your system? Hey, show me the sales reports of last month. Oh, now divided by state and now down to the city level or by a sales rep and now by product and whatever.

Thomas Witt  11:32
And it just gives you the graphics without clicking through interfaces. And I think this is where we maybe will end up in because yes, look at Claude Cowork. That's what people expect now.

Valentino Stoll  11:43
Yeah, you make a lot of great points there. The question of do we build a product has always existed. And I think 37signals is notorious for proving everybody wrong, that you can just make something well built and have minimal features added to it and it just works great for the customers you're trying to serve. Right?

Thomas Witt  12:04
Well, the question was always why doesn't Google build it, for example.

Valentino Stoll  12:06
Right. Exactly. So you've decided you're going to build this company and you reach for Rails. Why? I'm curious.

Thomas Witt  12:16
First of all, I really love Rails. I got into Rails in 2007. And funnily, I heard that long six-hour interview with DHH. And I can relate to a lot of points because before Rails, I thought I was a really shitty programmer because I really hated to get into details about pointers and C and whatever.

Thomas Witt  12:37
So mentally I understood it, but I didn't like to do it, to be honest. And Rails really, even though that's a very old phrase, it really gave me joy in programming. Everything came together and it worked. If you're doing a web application, obviously. I mean, it might be different if you're doing self-driving cars.

Thomas Witt  12:54
And therefore, we were based on Objective-C at that point in the company, and we made a big plan to rewrite our whole, it was kind of a management system and customer experience suite, and ported it all to Rails. And that was very successful and we were really happy with it and built software which was used by millions and hundreds of millions of users back in the day.

Thomas Witt  13:13
And then AWS came along, and there I learned a lot. For example, I was always a fan of non-relational databases for many applications because many applications are not actually relational. Even a CRM is not relational. You have very simple relations in a CRM.

Thomas Witt  13:30
You have people who work at companies and opportunities who might belong to companies and might belong to people, but that's it.

Thomas Witt  13:40
And especially now in the AI phase, where you basically get back structured output, which is literally JSON, it's really interesting putting that in a database and then also indexing it, for example, in OpenSearch or Elasticsearch at the same time. And then basically just understanding it and slicing and dicing the data,

Thomas Witt  14:02
which is really, really hard when you have a relational system, which is not really built for that kind of structured, unstructured data, I think. But back to your point, I love Ruby. I love Rails. I built a successful company out of it and sold it, and there wasn't even a question.
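
The structured-output point above can be sketched as a dual write: keep the JSON an LLM returns as a schemaless document, and index the same document for slicing and dicing. This is a minimal stand-in, a plain Hash and an in-memory index instead of DynamoDB and OpenSearch; all class names here are invented for illustration.

```ruby
require "json"

# Takes an LLM's structured (JSON) output, writes it to a key-value store,
# and indexes the same document for search. Both backends are duck-typed:
# in production they might be wrappers around DynamoDB and OpenSearch clients.
class StructuredOutputWriter
  def initialize(kv_store:, search_index:)
    @kv_store = kv_store         # responds to []= like a Hash
    @search_index = search_index # responds to index(id, doc)
  end

  def write(id, llm_json)
    doc = JSON.parse(llm_json) # structured output is literally JSON
    @kv_store[id] = doc        # primary record, no rigid schema
    @search_index.index(id, doc) # same doc, indexed for slicing and dicing
    doc
  end
end

# In-memory stand-in for a search index.
class InMemoryIndex
  def initialize
    @docs = {}
  end

  def index(id, doc)
    @docs[id] = doc
  end

  def search(field, value)
    @docs.select { |_, d| d[field] == value }
  end
end

store = {}
index = InMemoryIndex.new
writer = StructuredOutputWriter.new(kv_store: store, search_index: index)
writer.write("conv-1", '{"person":"Peter","city":"New York","amount":1000000}')
```

The same `write` call could drive `Aws::DynamoDB::Client#put_item` and an OpenSearch `index` request; the point is that one JSON document feeds both systems without a relational schema in between.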

Thomas Witt  14:17
And interestingly, every time I look, I don't want to start a rant here, but every time I look into TypeScript and React and Python, I always think, how do people get away with building all these packages which you have to install and then it doesn't work? And Ruby just works.

Thomas Witt  14:34
It is a very beautiful language and a beautiful ecosystem and with very friendly people. I've just seen it. I was at Rails World in Amsterdam and nothing changed over the last 15 years.

Valentino Stoll  14:45
You are ranting, which is good. We are pro rant on the Ruby AI podcast. So thank you for that.

Thomas Witt  14:51
I don't understand it. Basically when we started, we said we don't do any JS. We couldn't really hold to that. So I really was saying, no, we don't want Yarn. We don't want NPM or whatever in our system.

Thomas Witt  15:04
We have to now for certain stuff, like Tailwind, but we still have that philosophy of never using JavaScript unless it's absolutely necessary, with Stimulus and whatever. And I think it's tremendous what that whole Stimulus and Hotwire ecosystem has produced. It is so amazing how you can work with that stuff and it's just great. It's awesome.

Valentino Stoll  15:25
I'm curious. I would like to drill in on the architecture. So you've spoken about DynamoDB. So clearly you're a proponent of NoSQL. You're a proponent of non-relational databases. And you also wrote a little bit about the product you just shipped, the open source library you just shipped,

Valentino Stoll  15:44
which is the AWS SDK HTTP async. And I'm curious to know, so what problem were you seeing in production that made this necessary?

Thomas Witt  15:54
Yeah. So part of what I like about DynamoDB, besides it being a non-relational database suitable for the kind of data we have, is that you don't have to deal with managing the database. And I think that's the point people, in my experience, running at least a scaling B2B app, constantly underestimate.

Thomas Witt  16:14
I've heard, yeah, but it all works with Postgres. Yeah, it does, obviously, but you really need to have somebody dancing around it, and it works until it doesn't. Then you forgot an index here or it doesn't scale there. You need to think about a lot of things.

Thomas Witt  16:26
And the beauty of DynamoDB is, if you design it right, you can throw literally petabytes of data at it and it will scale, if you exactly know what you're doing. And that's what I like about it.

Thomas Witt  16:39
What I had to look into is, we first used the ruby-openai gem and then moved to RubyLLM because there was a new hot kid on the block, and it still is. And one of the things it proposed was the use of async. And that's really interesting. And that is something I actually hadn't touched a lot.

Thomas Witt  17:00
I knew it existed, but it became very clear that you have to deal with async when you're building a modern Ruby application with AI, because it's not CPU-intensive. You're basically waiting for HTTP calls to LLMs all the time. And that's when I had to relearn a lot of stuff I learned in Ruby, actually.

Thomas Witt  17:21
So I learned using threads is actually bad. Or just using the simple ||= operator, obviously, is very, very bad. That was already bad with threads. But now I have to use fiber-local storage, or I had to use Concurrent::Map, and sleep doesn't work because it blocks the reactor.

Thomas Witt  17:40
I have to use the async-aware sleep. So I felt I had to relearn a lot of idioms which were totally natural to me for the last 10 years. And I must say RubyLLM brought that to me. And we are running, for example, Falcon in production as a web server, which is the web server of the async ecosystem.

Thomas Witt  18:01
And everybody says, yeah, it's so easy. And apparently Shopify uses it. But when you really use it in production, there are a lot of very undocumented features, to put it mildly, or bugs. And I totally admire the work Samuel Williams, the socketry guy, has done. But it's far from, oh, I just install it and then it works.

Thomas Witt  18:21
I recently just updated from 0.28 to 0.29 and it totally crashed because of a different behavior. And to be honest, AWS support of Ruby is okay-ish, but not great, because I don't think they have a very big team. They're very responsive and they're very nice, but I don't think they put a ton of resources into it,

Thomas Witt  18:41
which is a shame. So shout out to AWS: put more resources into Ruby and not that much into TypeScript, Go, and Python, I would say. And the main problem is that DynamoDB is also basically a service which you call via HTTP. So it's basically the same thing. And I saw there's a lot of stuff which blocks each other or simply does not await correctly.

Thomas Witt  19:03
And it's really, really hard to debug because it's a library you don't own. So I basically tried to make some kind of patch to support more modern libraries. And there's this async-http library, which works well, but it's really hard. So I think many Ruby developers have to relearn a lot of things when it comes to async. That's my observation. I had to.

Valentino Stoll  19:23
Yeah. Yeah. I feel like it's not just async. I feel like we're all relearning how to build applications because we are leaning more and more on these kinds of LLM-heavy, IO-bound workflows. And you're right. Like I feel like some web servers,

Valentino Stoll  19:42
we won't name names, but like some web servers just aren't built for heavy IO use. And that's acceptable. And that was where things were before is we didn't have a lot of IO-bound tasks other than your database. And if you can couple that to your user, then it can work well for very specific servers.

Valentino Stoll  20:03
And we're kind of like shifting to like this new way of serving customers where the user isn't bound to that request or the data and many things are involved in accessing the data at the same time in a more async fashion. And so we're kind of like, yeah, doing this dance of like relearning what the best conventions and configurations are.

Valentino Stoll  20:25
And that's going to even change, right? Like maybe everybody's workflow isn't making the most use of these LLM calls in the same ways. And it might not make sense to just use Falcon, right? Or to use async for everything. And it is going to be a long process of finding the best use cases for how much you're leaning into what.

Valentino Stoll  20:45
I guess my question for you is, at what stage did you decide how are you mapping this out when you're building the thing? Did you decide going into it that you kind of knew that you were going to be so heavy with all of the LLM use and chaining or you had to make use of async or did you find that out after the fact?

Valentino Stoll  21:05
Where in your building process did these kind of shifts change?

Thomas Witt  21:09
I had to basically because before we were using Ruby OpenAI Gem, so that used threads and it somehow worked-ish, but it wasn't like very natural. But the thing is, for example, what we are doing is we are often not just firing one prompt, but we're firing a magnitude of prompts when we get in user input.

Thomas Witt  21:28
For example, when you're talking to the system, there's something, obviously we have to first interpret what you're actually saying. So we get basically a stream of data back. At the same time, we send that stream data of data back to find out what are the entities in that stream. Joe is a person. Valentino is a person. Ruby is something else or whatever.

Thomas Witt  21:49
37signals is the company. So that gets sent back and forth. And when you're then done talking, we try to immediately show an executive summary. So what we understood is you met with ABC. You talked about blah, blah, blah. That means it's another back and forth with a different model, because it needs to be a faster model,

Thomas Witt  22:09
which is like better on summarizing stuff and so on. And suddenly we have another model which tries to get tasks out of it and try to understand, okay, I met with him. What's the next steps? We want to present him like suggestions, what to do.

Thomas Witt  22:23
And that's when it became really clear that just even one conversation with the system fires a lot of like different prompts, different interaction with different model providers. It's not even just OpenAI. It's also Gemini and others. And there's text-to-speech and blah, blah, blah, blah, blah.

Thomas Witt  22:41
So that's when we found out we need to look into that async stuff anyway, because if you take that from a multi-tenant perspective and see multiple people are doing that at the same time, you can't make them wait for so long.

Valentino Stoll  22:54
Yeah. That makes a lot of sense. And it makes me think, too, Ruby kind of came from this, you know, Lisp, Smalltalk phase of language building, trying to be object-oriented as a base.

Valentino Stoll  23:10
And do you see like that your programming of Ruby leans more into that style of programming? Like has your use of Ruby changed based on how these things are evolving? Do you find yourself still making service classes? Are those like tricks still in play or do you notice new kind of patterns evolving?

Thomas Witt  23:31
What I definitely see is there is a lot less stuff going on in models and a lot more stuff going on in services and concerns to these services and stuff. So that is definitely a total difference from what we did before. For example, when we composed the prompts, I was very Ruby-driven at the beginning.

Thomas Witt  23:52
So I basically created a class, LLM service user input, which inherits from LLM service. And then I included LLM prompt with context, LLM prompt with actor, LLM prompt with entities, with performance indicators, and whatever. And it simply wasn't manageable. So the classic inheritance stuff didn't work.

Thomas Witt  24:12
But there was stuff which worked. So for example, as you said, we did our own prompt components. Maybe we'll turn it into a gem and release it publicly. It's simply like ViewComponents. And you could learn a lot from how ViewComponents were built when building prompt components, because it technically is the same. They can all inherit from each other, but not all at the same time.

Thomas Witt  24:32
They get inputs. They get outputs. They have a part which does calculations. They have a part which just generates markdown. So that's really interesting to do. So that changed a lot. But this whole service orchestration, and having an overview of which model calls what at a certain point, is really,

Thomas Witt  24:51
really tricky, because it's not just the model. You have agents. You have tool calls, which are basically classes in themselves, or subclasses, which might or might not be different between different services. You have all the, what many people don't do at the beginning, telemetry and tracking. So for example, we have big chains in Langfuse.

Thomas Witt  25:11
Shout out to them. Great product. Everybody should use it. It's not that easy to integrate in Ruby if you want to do it well, but that's not on Langfuse. That's on Ruby, because you have traces and spans and exactly what I described. You have one, let's say, one thing or one input, and that triggers a lot of different prompts, and you want to see what happened at what point.

Thomas Witt  25:33
And then comes the next model out and you want to see, okay, what would have happened if I ran that with not 4.1, but 5.2? And this is when it really gets tricky. And this is, which is not totally Ruby related, but basically our whole LLM chains do many things, at least 10 things. And then comes chain invalidation.

Thomas Witt  25:52
You change your system prompt, you can't continue conversations on the same system prompt. You want to design them so that they're cached by OpenAI, and so on and so on. And that is really hard. I don't think I have fully found the answer to it, but our application, Vendors, looks very different now than a Rails application I might have done five years ago, I would say.
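
A rough sketch of prompt components built like ViewComponents, as described above: small classes that take inputs and render a markdown fragment, composed into one system prompt. All class names are invented for illustration; this is not the actual, unreleased gem.

```ruby
# Base class: every prompt component renders a markdown fragment.
class PromptComponent
  def render
    raise NotImplementedError
  end
end

# "Actor" fragment: who the assistant is acting for.
class ActorComponent < PromptComponent
  def initialize(name:)
    @name = name
  end

  def render
    "## Actor\nYou are assisting #{@name}."
  end
end

# "Context" fragment: facts pulled from conversations, CRM data, etc.
class ContextComponent < PromptComponent
  def initialize(facts:)
    @facts = facts
  end

  def render
    "## Context\n" + @facts.map { |f| "- #{f}" }.join("\n")
  end
end

# The system prompt is just the composition of its components.
class SystemPrompt
  def initialize(*components)
    @components = components
  end

  def render
    @components.map(&:render).join("\n\n")
  end
end

prompt = SystemPrompt.new(
  ActorComponent.new(name: "a sales rep"),
  ContextComponent.new(facts: ["Peter works at ABC", "Last deal was $1M"])
).render
```

Composition instead of a deep include/inheritance chain is exactly the lesson Thomas draws from ViewComponents: each fragment owns its inputs and its rendering, and the orchestrating object just concatenates them.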

Valentino Stoll  26:14
So how do you see that scaling? Do you have to like rethink your scaling methodology as well? Is it not just like vertical over horizontal scaling? Like do you find yourself like maybe considering serverless more because that maybe aligns with your objects more?

Valentino Stoll  26:31
Does any of that kind of change in your building process or is that pretty much like the same considerations?

Thomas Witt  26:38
No, I don't think scaling is our problem, at least, because of the way we build it. So DynamoDB scales like there's no tomorrow. OpenSearch hosted on AWS also scales like there's no tomorrow. So it will take a lot of revenue until we get to limits on that point. Rails also scales.

Thomas Witt  26:56
Falcon is really fast with that async stuff once you've got it managed. And we have auto scaling; we got all our infrastructure totally automated. So it scales up and down and we have different environments and blah, blah, blah. And we are using very classic patterns of queuing. So we're using SQS on AWS, but it could also be Redis or whatever.

Thomas Witt  27:16
So basically every time we call an LLM, it fires a background job, which gets picked up by a fleet of workers. And then the workers might update your Turbo stuff in the front end, which is kind of a really interesting thing, but it really works well, that pattern.
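That enqueue-then-broadcast flow can be sketched in plain Ruby. The queue, worker, and broadcast sink below are toy stand-ins for SQS, the worker fleet, and a Turbo Streams broadcast, and the LLM call is stubbed:

```ruby
# An "LLM call" is enqueued as a background job, a worker picks it up, and the
# result is "broadcast" to the front end. Everything here is an in-memory
# stand-in, not real SQS/Turbo code.
class JobQueue
  def initialize
    @jobs = Queue.new # Thread::Queue from the Ruby core library
  end

  def enqueue(payload)
    @jobs << payload
  end

  def dequeue
    @jobs.pop(true) # non-blocking pop
  rescue ThreadError
    nil # queue empty
  end
end

class LlmWorker
  def initialize(queue, broadcasts)
    @queue = queue
    @broadcasts = broadcasts
  end

  def work
    while (job = @queue.dequeue)
      result = fake_llm_call(job[:prompt])                  # real code would call the model API
      @broadcasts << { target: job[:target], html: result } # stand-in for a Turbo broadcast
    end
  end

  private

  def fake_llm_call(prompt)
    "summary of: #{prompt}"
  end
end

queue = JobQueue.new
broadcasts = []
queue.enqueue(prompt: "last email thread", target: "deal_42")
LlmWorker.new(queue, broadcasts).work
```

In a real app the worker loop would run in a separate process and the broadcast would go over Action Cable rather than into an array.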

Thomas Witt  27:35
So we really try to keep the web front end as lean as possible and try to do as much stuff in workers. And I think that pattern scales very well. And that's nothing new in Ruby. So that's very pretty standard stuff with Sidekiq or whatever people have been doing for decades. And so there's a lot of patterns,

Thomas Witt  27:56
which are very natural to Ruby or the ecosystem, which come very normal when you're building AI applications. I felt I didn't have to bend over backwards just to get something done. It was always clear in what way you do it. I think it's just, you have to think: how do I do it so that I keep it maintainable? That's the main point.

Thomas Witt  28:16
And not just for me, but also for agents. You said you talked with a lot of people about how they build it. And obviously we also use a lot of agentic coding. And that requires very strict discipline about documentation. So we have a huge folder, md-docs, where we basically document every feature we're doing in detail.

Thomas Witt  28:35
Maybe we should have done that 10 years ago in all applications as well. But that keeps it a bit more manageable. But even if I don't remember, oh, how was that actually built? I can just ask Claude or Codex, hey, and it gives me an answer in like 30 seconds. So that's good.

Valentino Stoll  28:51
Hey, I'm curious about that last part because you talked about sort of Claude Code rules and how there's potential there to improve the style of code, right? Which I think is, I like that viewpoint because there's so much focus on just churning out code and not even looking at it,

Valentino Stoll  29:11
let alone kind of observing style or design as it evolves. Now you just mentioned that you use a lot of these rule files, right? In an MD library. Are there other ways that you enforce style, for example, through repo conventions or through the tests themselves?

Thomas Witt  29:27
Yeah, absolutely. I mean, I was always a pain in the ass when it comes to formatting code. I hated when stuff looked different. So the first thing I did was put a CI pipeline on, well, we're using bin/ci since Rails 8.1. And so we have one linter phase and it calls a lot of different linters.

Thomas Witt  29:48
We're using Herb. Big shout out to Marco. Great library. We used ERB Lint before. We're using Rails' formatter. We're using Rufo. We are using RuboCop. So basically every piece of code gets checked three or four times, and it's built into all our agent rules: you must not deliver any code before the formatter has actually run.
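The shape of such a lint phase, run every check in order and fail fast, can be sketched without the real linters; the two checks below are stubbed stand-ins for tools like Rufo or RuboCop, which the real pipeline shells out to:

```ruby
# Toy bin/ci-style lint phase: registered checks run in order, and the first
# failure short-circuits the run. Real checks would be shell commands.
class LintPhase
  Check = Struct.new(:name, :runner)

  def initialize
    @checks = []
  end

  def register(name, &runner)
    @checks << Check.new(name, runner)
  end

  # Returns the name of the first failing check, or :ok if all pass.
  def run(source)
    @checks.each do |check|
      return check.name unless check.runner.call(source)
    end
    :ok
  end
end

pipeline = LintPhase.new
pipeline.register(:trailing_whitespace) { |src| src.lines.none? { |l| l.match?(/[ \t]+\n/) } }
pipeline.register(:tabs)                { |src| !src.include?("\t") }
```

Failing fast keeps the agent feedback loop tight: the agent gets one concrete rule violation back instead of a wall of output.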

Thomas Witt  30:08
And you can't even check it in, both on GitHub as well as on the deployment, if the Rufo, RuboCop, whatever style rules don't match. So I think that's super important. And it might sound a little bit anal, but it really helps to have a very consistent code base. Although, take Ruby 3.4 with this it syntax,

Thomas Witt  30:30
for example, I really like it. So I said, I want to have the syntax consistent in the whole code base. And that's a task which can be done really well. I think we put a lot of work into the orchestration between Claude and Codex because, just for our use case, I don't know how it is for everybody, Claude is really great at planning, but Codex is in our experience much better at implementing.

Thomas Witt  30:52
It follows the rules of the actual implementation much, much better. But obviously the planning itself is better in Claude. So what we did is we created a lot of agents and skills to do that. And interestingly, it was a bit of an investment in the future, because you had to nudge Claude to actually use it. And with every release, it uses it more.

Thomas Witt  31:11
It's even better with agent teams. And we even wrote shell scripts which say, okay, you touched a controller, and which remind it: you have not run that skill. So you need to run that skill to check whether it complies with our rules. And you really have to put in effort. You have to understand how these tools work.

Thomas Witt  31:31
If you have a bad CLAUDE.md and AGENTS.md and a bad setup around that, I think you're not getting very far if you have a huge app to maintain. So that's where I'm putting a lot of effort. And we have a lot of standardized slash commands. So I can say, for example, /vendors-plan in Claude, and that does this plan and then mandatorily calls Codex.

Thomas Witt  31:52
You can run Codex as an MCP. I even wrote a small NPM package called mcp-agents where you can call Gemini, and if the two disagree, it asks Gemini for a third opinion. And what you get back, once they've all figured it out with each other, is really gold, especially as Rails is so structured in terms of what goes where.

Thomas Witt  32:12
And that really works out extraordinarily well for us.
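The plan/review/tie-breaker loop described above might be sketched like this, with all three "models" stubbed as lambdas; in the real setup these would be calls out to Claude, Codex, and Gemini (for example over MCP):

```ruby
# One agent drafts a plan, a second reviews it, and only on disagreement is a
# third consulted. The callables are stubs, not real model clients.
def orchestrate(planner:, reviewer:, tie_breaker:, task:)
  plan = planner.call(task)
  return { plan: plan, verdict: :approve, consulted_third: false } if reviewer.call(plan) == :approve

  # Reviewer rejected the plan, so a third model breaks the tie.
  { plan: plan, verdict: tie_breaker.call(plan), consulted_third: true }
end

planner     = ->(task)  { "plan for #{task}" }
agreeable   = ->(_plan) { :approve }
disagreeing = ->(_plan) { :reject }
third       = ->(_plan) { :approve }

fast_path = orchestrate(planner: planner, reviewer: agreeable,   tie_breaker: third, task: "refactor")
slow_path = orchestrate(planner: planner, reviewer: disagreeing, tie_breaker: third, task: "refactor")
```

The point of the structure is cost: the third opinion is only paid for when the first two models actually disagree.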

Valentino Stoll  32:16
Yeah. What do they call it, the conferring of judges or something like that?

Thomas Witt  32:21
Kind of that.

Valentino Stoll  32:21
Yeah. Yeah. Group think. Yeah.

Thomas Witt  32:24
Totally. There was a big Shopify talk at the last Rails World about that. I think they took it to a totally different level. We are not that elaborate. But for instance, Langfuse also has features to do that LLM-as-a-judge. And I think that gets more and more important. And it's not just about the coding side, but also when we get back an output, like an important output when it comes to money or whatever,

Thomas Witt  32:46
you might want to double check that with a different model.
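A minimal version of that double-check, accept an extracted amount only when a second, independent model agrees, could look like this; both extractors here are stubs standing in for LLM calls:

```ruby
# Cross-check an important extracted value with a second model; on
# disagreement, return nothing rather than a possibly wrong amount.
def verified_amount(text, primary:, judge:)
  first  = primary.call(text)
  second = judge.call(text)
  first == second ? { amount: first, verified: true } : { amount: nil, verified: false }
end

primary = ->(text) { text[/\d+/].to_i }             # stand-in for model A's structured output
judge   = ->(text) { text.scan(/\d+/).first&.to_i } # stand-in for model B

agreed = verified_amount("invoice total 1200 EUR", primary: primary, judge: judge)
```

A production version would compare normalized structured outputs (currency, scale) rather than raw integers, but the accept-only-on-agreement shape is the same.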

Valentino Stoll  32:49
Yeah. I do that too. I do that with text. I do that with like strategy and stuff like that. 'Cause something comes back and it's like, okay, well I think this is smart, but it's going to outsmart me because it's a word calculator. It knows how to put these great words in place. So I take it, you know, whatever it might be. And I'm like, all right, well let's see what Gemini has to say about this. Let's see what Claude, you know, has to say about this.

Valentino Stoll  33:09
Let's see what OpenAI has to say about it. Yeah, it really does work.

Thomas Witt  33:12
In 90% of the cases, Codex finds something where Claude says, oh, great finding by Codex, I haven't thought about that, let me put that in. And that is.

Valentino Stoll  33:21
It's funny that you get that, because a lot of times I get: that LLM is wrong and here's why. Which I love. Like, oh, you're just throwing shade at Claude or whatever. It's so funny. I wonder if there's like a competition.md file that's secretly stored somewhere. Yeah. Yeah. They don't do too great at these points.

Thomas Witt  33:41
Exactly. Yeah. Maybe. But especially when it comes, for example, I could have never found all the problems or stuff we did wrong with fibers if I didn't have a rule or an agent for that, which checks it. And with fibers, at least I'm not smart enough to do that. I have no chance to find all these edge cases where,

Thomas Witt  34:00
oh, you need to put that in a fiber because that could have a concurrency with A, B, C. No chance. And it really is so much better in terms of code quality.
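A toy illustration of the fiber hazard being described: two fibers each contribute a field to the same record. Replacing the whole object (last writer wins) silently drops the other fiber's field, while merging field by field keeps both:

```ruby
# Last-writer-wins: each fiber replaces the whole hash, so whichever runs
# second clobbers the first fiber's field.
naive = {}
Fiber.new { naive = { summary: "intro call went well" } }.resume
Fiber.new { naive = { entities: ["ACME Corp"] } }.resume # summary is now lost

# Field-by-field merge: both fibers' contributions survive.
merged = {}
Fiber.new { merged.merge!(summary: "intro call went well") }.resume
Fiber.new { merged.merge!(entities: ["ACME Corp"]) }.resume
```

This is deliberately simplified (the fibers run to completion on `resume`, so there is no real interleaving); under a fiber scheduler with I/O in between, the same whole-object write becomes a genuine race.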

Valentino Stoll  34:10
Yeah. So I'm curious. It seems like you're investing a lot in the coding agent help, which is a common pattern that's evolving as well. And so do you see yourself having to manage that a lot or is it once you get the good mechanisms in place,

Valentino Stoll  34:28
it kind of just starts going on autopilot a little more? How has that workflow been as far as like onboarding new members and getting other people to use the tooling? Is it pretty smooth sailing or do you find yourself like circling back and you're just working on it?

Thomas Witt  34:44
I think it's a bit of 80-20. You always have to force yourself not to over-engineer that stuff, because there's always some new feature which you could, oh, I didn't try that, and if the agent now calls the skill and blah, blah, blah. Well, you can overdo it for sure. And we did also try it with stuff like a Ralph loop, which basically constantly loops over the whole code,

Thomas Witt  35:04
gives it to all the tools, finds things, produces a big file. And when it's done, it starts from the beginning. So sometimes it's good. Sometimes it's not. So I think you can totally overdo it. I think it's important for onboarding new people to have some kind of standards in that. It's really hard.

Thomas Witt  35:21
So we always say we plan with our /vendors-plan or /rails-plan slash command, which calls Codex, because it simply makes the output better. So that's important. And I think, for example, we are a really small team. It's my co-founder and I and our first employee.

Thomas Witt  35:40
And we found it really hard to find employees. I was also at Rails World looking around, hey, maybe there's some person I can work with. And I found there's either junior people who are totally hyped about AI but lack a bit of, maybe it's that gut feeling? That says no, that architecture feels wrong, because you don't have the experience.

Thomas Witt  36:01
And so many senior people I talked to said, oh no, I don't know, it's not as great as if I wrote the code myself. And so there's a big disconnect. And I feel like, I'm in my late forties, so I'm sometimes not the most flexible person, but I met a lot of people where I thought,

Thomas Witt  36:21
okay, this is like when I met a COBOL programmer in the early days and we were talking about Rails. And the thing is, either you adapt to that, because it's so good and it's going to get better every day. Every second week an innovation comes out which kind of blows my mind.

Thomas Witt  36:37
But I tried the agent team features of Claude Code and it started five tmux windows and did different evaluations and came back and said, this is your problem. Okay. Wow. Good luck. So I think there has to be a certain recalibration of mindset in the programmer community. And I mean, apparently even DHH recalibrates.

Thomas Witt  36:57
So that means a lot. I don't know. What's your opinion on that?

Valentino Stoll  37:02
I'm torn because I lean heavy into all of it. I've also lived long enough to have seen the alternative. And so there are benefits to both sides. And like sometimes waiting can be more fruitful in some things. I'm having a hard time finding that balance myself.

Valentino Stoll  37:20
Nobody on this podcast is young or mentally or physically flexible. However, we all lean into this because to me, it makes me feel like when I was a younger, less experienced engineer with a ton more to learn. And now,

Valentino Stoll  37:39
yeah, I know that there's always a ton to learn and I'm no expert at nearly anything, but I have thought highly of myself as a software engineer in the Ruby world. And now I look at it and say, wow, there is so much to learn.

Valentino Stoll  37:52
I think, Thomas, that your example, of learning that you had a lot to learn with respect to fibers and asynchronous Ruby, and using AI in one capacity to boost your learning and in another capacity to boost the output, is a perfect example of what I would shoot for as an IC,

Valentino Stoll  38:13
as a contributor to a project. There's the threshold where it's like, oh, I don't know as much as I thought I did. And hey, here's a bunch of tools that help me to not only learn it, but also continue to be productive. I really don't see a downside. Show's over. That's it.

Thomas Witt  38:33
Mic drop. Go generate some.

Valentino Stoll  38:34
We have a soul smile. Let's dig into the, 'cause you did mention, you know, maybe adopting some new technology like Falcon. That wasn't exactly straightforward, even based on your path and experience, right? It still isn't. I'm curious first, like what are those areas that are pain points working with it,

Valentino Stoll  38:53
like figuring it out, maybe getting used to the new style of working with a web server? What facets of your traditional Rails development over the past, I imagine, decade or more, probably more, are the bottlenecks for you?

Valentino Stoll  39:15
What are challenges? Where could somebody maybe more junior be more beneficial because they don't have that backlog of experience, maybe causing some friction?

Thomas Witt  39:26
So obviously we talked about fibers and async, and that's clearly the biggest thing. And not just from a technology perspective, when do you write which code, but also what actually happens. For example, there are a lot of places in the code where the same data gets written to the same objects.

Thomas Witt  39:47
For example, the executive summary might come in first or the extracted entities might come in first, but you want it all in the same object. So stuff like, if you were in a relational database world, a transaction, but even a transaction might be blocked. So what do you do if the data has been updated? Are we talking about the same data or not? For example, we're also never deleting data.

Thomas Witt  40:08
We have versioned data and we can come back, and we have another LLM which basically tells you in natural language what the difference between those versions is, what actually changed. So that kind of stuff. I think you very much have to get used to the fact that, at the same point, data can be written from multiple places. At least for us that's a different point.
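The append-only versioning described here can be sketched as follows; the natural-language part is stubbed with a simple key list, where the real system would have another LLM phrase the diff:

```ruby
# Never delete: every write appends a new version, and the difference between
# the last two versions can always be recomputed.
class VersionedRecord
  def initialize
    @versions = []
  end

  def write(attrs)
    @versions << attrs
  end

  def latest
    @versions.last
  end

  def change_description
    return "initial version" if @versions.size < 2

    before, after = @versions[-2], @versions[-1]
    changed = (before.keys | after.keys).reject { |k| before[k] == after[k] }
    "changed: #{changed.join(', ')}" # an LLM would turn this into prose
  end
end

deal = VersionedRecord.new
deal.write(stage: "lead", value: 10_000)
deal.write(stage: "negotiation", value: 10_000)
```

Because old versions are never destroyed, a late-arriving writer can only add a version, not overwrite history, which sidesteps part of the concurrent-write problem above.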

Thomas Witt  40:28
And before you could say, okay, I collect all the data, I just do one transaction and then it's written. And that's, I think, not the case anymore. So data can come in at any time from any agent, from any long-running job, from any deep research or whatever. And you sometimes have to deal with stale data, sometimes even stale prompts.

Thomas Witt  40:46
And what you also have to rethink in that context is that we are obviously building a multi-tenancy app. So that takes it to some kind of 3D-chess level, because every customer has different prompts. Somebody who sells drinks as a customer has totally different prompts, customized to that person, than somebody who sells SaaS software.

Thomas Witt  41:07
And also totally different data structures, because, let's say you have a service versus a product: you have a day rate and you have a product budget and whatever, whereas with SaaS you have churn and a monthly plan and all that kind of stuff. So the data is vastly different from client to client, and all the prompts are too.

Thomas Witt  41:29
And keeping track of that, multiplied by which model they are using, is really tricky. And there's also not a lot Rails does for you in terms of multi-tenancy stuff. And then making that observable with Langfuse or whatever and tracking everything, because we are tracking everything, because we're saying the conversation is the basis of our data.

Thomas Witt  41:49
So if we get an email, we have to pull it back maybe two years later and reanalyze it for a certain angle, which the client wants to know about. So where do you save it? How do you embed it and whatever? And that especially in a multi-tenant context is complicated, I would say.
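A naive sketch of that tenant-scoped semantic lookup: every conversation chunk is stored with a tenant id and a vector, and a query only ranks chunks belonging to the same tenant by cosine similarity. The vectors here are hand-made toys; real ones would come from an embedding model, and the store would be OpenSearch rather than an array:

```ruby
# Tenant-scoped vector search over stored conversation chunks.
class ConversationIndex
  Entry = Struct.new(:tenant_id, :text, :vector)

  def initialize
    @entries = []
  end

  def add(tenant_id, text, vector)
    @entries << Entry.new(tenant_id, text, vector)
  end

  # Best-matching chunk for this tenant, or nil if the tenant has no data.
  def search(tenant_id, query_vector)
    @entries.select { |e| e.tenant_id == tenant_id }
            .max_by { |e| cosine(e.vector, query_vector) }
            &.text
  end

  private

  def cosine(a, b)
    dot = a.zip(b).sum { |x, y| x * y }
    dot / (norm(a) * norm(b))
  end

  def norm(v)
    Math.sqrt(v.sum { |x| x * x })
  end
end

index = ConversationIndex.new
index.add("tenant_a", "pricing email", [1.0, 0.0])
index.add("tenant_a", "intro call notes", [0.0, 1.0])
index.add("tenant_b", "unrelated thread", [1.0, 0.0])
best = index.search("tenant_a", [0.9, 0.1])
```

Filtering by tenant before ranking is the important part: a query can never surface another customer's conversations, no matter how similar the vectors are.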

Valentino Stoll  42:09
Would you say you're on board with DHH's new proposal for like getting every customer their own kind of server in their closet?

Thomas Witt  42:20
Yeah. That's a new idea. When I started my company in 1999, I was sitting with CDs in data centers installing software. So if we're going back to that, I don't want that back, to be honest. That was terrible. That was terrible. No, I don't think so. But I think this, again, this multi-tenancy is a thing.

Thomas Witt  42:38
And I think also the orchestration stuff. For example, we think about a lot of like embeddings and how to logically understand data. And for example, OpenSearch and Elasticsearch made a lot of advancements in order to combine classic search with the actual understandings of something in terms of embeddings, vector search.

Thomas Witt  43:00
And I think that is something which is barely touched, or I wouldn't say understood, but explored by the Rails community so far, and it will become much, much more important, because it won't be just about the relational data. And that's basically also the big gripe I have about, let's say, the direction of Rails in general:

Thomas Witt  43:20
it's very focused on Active Record. And as we are not tied to Active Record, you wouldn't imagine how many libraries we find which somehow more or less require Active Record. And there's already an alternative: there's Active Model. It's very easy to turn Active Model into kind of any database bucket, but no, they have to have these foreign keys, they have to have these transactions or whatever.

Thomas Witt  43:40
And I think that would be something, even with all this new Solid Queue stuff and Solid Cache stuff and whatever they all build, that is all tied to Active Record. And I think we have to get used to that.

Thomas Witt  43:52
I mean, not everybody has to run DynamoDB, but having a vector database or OpenSearch or something alongside it will become very normal if we deal with all these embeddings, because literally what our system does is translate structured data to unstructured data and back, all the time. And Active Record is simply not built for that.

Thomas Witt  44:11
You end up with an SQL form-field disaster like the HubSpot one we talked about in the beginning.
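The "only lists, strings, and numbers" point can be illustrated with a plain Ruby model (no Active Record) that round-trips through a primitive-only item, the way a DynamoDB- or JSON-shaped store would hold it; all names here are made up:

```ruby
require "json"

# A plain model serialized to an item of primitives (strings, numbers, lists)
# and rebuilt from it, with no ORM in between.
class Deal
  attr_reader :name, :budget, :tags

  def initialize(name:, budget:, tags:)
    @name, @budget, @tags = name, budget, tags
  end

  # Flatten to the primitive shapes a DynamoDB-style store accepts.
  def to_item
    { "name" => name, "budget" => budget, "tags" => tags }
  end

  def self.from_item(item)
    new(name: item["name"], budget: item["budget"], tags: item["tags"])
  end
end

deal = Deal.new(name: "ACME rollout", budget: 50_000, tags: ["saas", "q3"])
# The JSON round trip stands in for writing to and reading from the store.
restored = Deal.from_item(JSON.parse(JSON.generate(deal.to_item)))
```

Mixing in `ActiveModel::Model` would add validations and form compatibility on top of exactly this shape, without pulling in any of Active Record's relational assumptions.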

Valentino Stoll  44:16
So I actually am curious about that, because you've mentioned DynamoDB a couple of times. The last time I used a non-relational database in a Rails app was not that long ago, but I remember there being a lot of overhead, and sort of some unexpected overhead, with instrumentation, right? Like orchestrating applications to talk to each other, similar to what you're describing.

Valentino Stoll  44:38
So do you find that that is a significant sort of headwind in development, against a NoSQL database?

Thomas Witt  44:44
It is. Totally. First of all, when people come and try to learn our code base, it's really hard because the usual patterns do not apply. And you see that with LLMs. You really have to teach them "no Active Record" in every CLAUDE.md. And then it.

Valentino Stoll  44:58
Right. That assumption, just like with humans, is gonna be there. Right.

Thomas Witt  45:02
Exactly. And everybody behaves the same. So that's really hard. And I would say that's kind of a blocker, or not a blocker, but something which is not so straightforward in our development. And it's not just us with DynamoDB. If you use MongoDB or whatever, you will run into similar problems. So it's not just this "oh, I don't like AWS anyway" thing.

Thomas Witt  45:22
And again, in the end, OpenSearch or Elasticsearch is also nothing other than a large DynamoDB or a large MongoDB, in a way. So yeah, there is significant headwind, because there are a lot of things you take for granted, from very simple helpers to, let's say, single-table inheritance.

Thomas Witt  45:40
You simply can't have a class which inherits from another class and then query the base class and get all the results for it. That simply does not work, because the database does not know how to do it. So I looked into a lot of Active Record code, or investigated it together with Claude Code, to understand what Rails does.

Thomas Witt  45:56
And often these are very smart decisions, but unfortunately they're very tied into that relational database thing.

Valentino Stoll  46:03
Yeah. It's the double-edged sword, right? Conventions are made to make it easier for LLMs. But if you wanna do your own thing that maybe is just a little bit outside of the conventions, I was just gonna say, yeah, you're focusing on those customizations.

Thomas Witt  46:18
But in general, I mean, for example, when I started my career back in the mid-nineties, something which was very popular was functions in databases, PL/SQL, if you remember that. That was the most horrible thing you could do, because you write something in the database and it behaves totally unexpectedly. So the worst thing. And still, that's what I don't like about relational databases.

Thomas Witt  46:39
They work great until they don't, because there is an index missing, or something, or this transaction blocks another, or whatever. You have no idea with inner join, left outer join, blah, blah, blah. You really have to be an expert sometimes. And I think a lot of stuff which is put into the database could also be very well solved in a model or in a service in Rails,

Thomas Witt  46:59
especially when it comes to validations and stuff. And if you go down to very primitive types, DynamoDB basically only has lists and strings and numbers, and that's it. That makes your life easier and also makes your code easier. So there are advantages to doing it that way.

Thomas Witt  47:14
And I hope that the Rails community at least acknowledges that there's something like Active Model and that this stuff can be solved differently. And I think, again, with all that AI stuff, it will, because basically you're dealing with JSON all the time. And good luck mapping that JSON to a relational database schema all the time. It's not working.

Valentino Stoll  47:34
You know, there's a popular alternative to active record for relational databases called SQL that maybe even predates active record in some ways. And it works great. And, you know, you try and use that kind of in your own application and try and make agentic use out of it.

Valentino Stoll  47:55
And it's gonna struggle a little bit in comparison. And so I kind of wonder, do you have any ideas here on how we can better integrate these agentic coding use cases with these maybe one-off conventions? They are conventions, they are popular,

Valentino Stoll  48:15
sometimes the use case does apply, right? And so how do we maybe make things better so that it can work for these conventions, and set up maybe some kind of convention or integration or tooling where we can establish the patterns and make it easier?

Thomas Witt  48:32
Yeah. That's a good question. And I've thought about it a lot, or think about it daily while we're building Vendors AI. So I think we have to rethink the way we deal with services, because regardless of the relational versus non-relational debate, most stuff sits behind an HTTP endpoint these days. So the email we are getting

Thomas Witt  48:53
is in S3, obviously, but then you have your own endpoints at OpenAI where, for example, when you pull the output of a batch, you have to fetch that batch. So it's another endpoint. And then you have your database, and then you have your telemetry service, which tracks all that. And you have five different model providers you're talking with. And then you have OpenSearch. So clearly that pattern,

Thomas Witt  49:13
I mean, you can include it all and it works well with these Service.call patterns and service objects. So it's not bad. It's good. But structuring it and keeping track of it, I think there must be more to it.

Thomas Witt  49:25
So I think we need to have some joint brainstorming in the Rails community to think about how to make services better, and maybe tie services more to HTTP endpoints. And you do a lot of glue code with retry stuff, and every library is different when you hit a limit.

Thomas Witt  49:44
Capacity limit reached, so you retry on that, and Google handles it differently than OpenAI, and blah, blah, blah. So that's not great. And I think there is more to it. I have no idea how to do it or what would be better, but I think that needs to be a direction Rails should be thinking about.
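The provider-specific retry glue being described often ends up looking something like this: normalize each provider's "rate limited" error into one retryable set in front of a shared backoff loop. The error classes are stand-ins and the backoff is stubbed to zero so the sketch runs instantly:

```ruby
# Each provider raises its own error for "slow down"; collecting them in one
# list lets a single retry loop serve every client.
class OpenAiRateLimit < StandardError; end
class GoogleResourceExhausted < StandardError; end

RETRYABLE = [OpenAiRateLimit, GoogleResourceExhausted].freeze

def with_retries(max_attempts: 3, backoff: ->(_n) { 0 })
  attempts = 0
  begin
    attempts += 1
    yield
  rescue *RETRYABLE
    raise if attempts >= max_attempts # give up after the last attempt
    sleep backoff.call(attempts)      # e.g. ->(n) { 2**n } for exponential backoff
    retry
  end
end

calls = 0
result = with_retries do
  calls += 1
  raise OpenAiRateLimit if calls < 3 # fail twice, then succeed
  "ok"
end
```

A real version would also honor provider `Retry-After` headers and add jitter, which is exactly the per-provider glue that currently has to be hand-written for each client library.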

Valentino Stoll  50:02
Yeah. I don't have a solution. So.

Thomas Witt  50:04
No, me neither. I'm not smart enough for that. But maybe somebody.

Valentino Stoll  50:10
The overall goal of this podcast is to just find people that maybe have the answers for me. You actually do have a lot of answers out there, you know, related to Falcon and Async and all of your new adjustments. So it's exciting to hear where the world is going, 'cause everything is shifting every day.

Valentino Stoll  50:29
So. Yeah. I agree. I like the sort of, I'm not gonna use the overloaded term agile, but your agile approach, by which I mean agility, I don't mean a framework, right? To the constraints that you're facing and the opportunities that are there in equal measure.

Thomas Witt  50:45
Totally. I mean, we couldn't build Vendors AI in traditional ways, because there's so much stuff to orchestrate, and the existential 10x, I definitely feel it. I don't know whether it's 10x or 8x or maybe 6x, but with a three-person team, you can get really, really far.

Thomas Witt  51:01
I think that's like Sam Altman with this quote, "Oh, there will be a one-person billion-dollar company." Maybe not a one-person company, but I mean, hey, come on, WhatsApp built a multi-billion dollar business with, I think, 30 people, 40 people. So what I definitely think is that the time of huge teams is over.

Thomas Witt  51:20
And you can't tell me that sometimes people have like 200 developers. What do they do all day? I don't know. I mean, to.

Valentino Stoll  51:28
In fairness, I never really knew.

Thomas Witt  51:31
No. It was totally, totally, I mean, there are e-commerce sites which employ 600 developers. What do they do? It's mind-blowing. So I think like really having small teams which all understand the code base, which can commit, which can deploy like anytime and which have all these modern tools at your disposal.

Thomas Witt  51:52
I mean, for like $400, you get both an OpenAI Max and a Claude Max subscription, and that gets you very, very far. So I think it will be the age of smaller companies doing a lot of great software where you would've needed to be Salesforce.

Valentino Stoll  52:08
Mm-hmm. I definitely feel the X-developer thing, whatever number that is. For me at first, it was definitely, well, now I get to complete all my side projects and the ideas that I have. Right.

Thomas Witt  52:20
Exactly. That's a good thing in summer.

Valentino Stoll  52:22
You know, at some point that transition will fall off a cliff and you'll run out of things that you want to side project. And I wonder what that world looks like when people are kind of exhausted by the maybe speed at which they can experiment with them and maybe what draws their attention then. Right.

Thomas Witt  52:42
Oh no. I think there's so much stuff left to be built. I mean, just, I don't know how it is in the US, but just take a look at e-government. There are so many processes which could be digital where you think, I have to fill a form out for that. If just that would get better, our lives would all become exponentially better. So there is a lot of stuff which is not digitized yet.

Thomas Witt  53:03
And I think even maybe software where you need a lot of consulting and whatever. So I think the role of all these consulting agencies will change as well. But I strongly believe there will still be a need for software like CRMs and Basecamp, but with intelligence built in. I don't think that OpenAI comes around and now you manage all your projects in OpenAI and nobody uses Basecamp anymore.

Thomas Witt  53:25
It's really hard to imagine. And sometimes it's so specialized and there's so much domain knowledge in it. And sometimes you see how great it is, but sometimes you think: you didn't catch that, you don't understand that, I'm telling you now for the third time and you're still implementing it wrong, because apparently you didn't get the problem. So make no mistake: obviously in two years we will laugh about many of these problems,

Thomas Witt  53:46
but good product management, and, as everybody says, having a good product manager, doing good documentation, all these very basic skills which were valid 25 years ago, they are now more relevant than ever. And if you master them, instead of just starting to develop something in some language because it's so cool and everybody uses React,

Thomas Witt  54:07
so do we. I think it can get you very far.

Valentino Stoll  54:11
Yeah. Yeah. Maybe you're right.

Valentino Stoll  54:13
Maybe we are seeing the revitalization of the Excel era, when everybody was once able to, you know, solve all their problems in Excel and then they hit limitations. And now they get a step further, you know, where they could solve all their problems with ChatGPT or something, except for the

Valentino Stoll  54:33
nice packaging that comes with using a product that somebody has focused on a very specific use case. I already know what's going to happen to you, Valentino. I already know the future. First, you'll never run out of side projects. And what's going to happen is that what you deem a side project today is going to be laughable three years from now, because you're gonna say,

Valentino Stoll  54:53
oh, my side project is basically rebuilding Salesforce. Your little kind of experiment will grow in size and complexity. That's your. It's funny you mention that. I finally installed OpenClaw. Honestly, I'm not here on that.

Thomas Witt  55:08
Here you go. Yeah. Great.

Valentino Stoll  55:09
I pay for domains. I have so many domains. I've had them so long and I'm like finally just like, all right, here's a list of all the domains I have. What's valuable? What could you make a product out of? And then, oh, go build those products. Right. Well, let's say I have three pull requests that I have to review that are supposedly done. Yeah. All I gotta do is enter my Stripe credentials.

Thomas Witt  55:31
Yeah. Three billion-dollar companies, already done. Just need to release them. Maybe one thing I would add, because when I thought about it, and I already said it before: I think what's massive is how quickly these platforms, let's just talk ChatGPT, could amass such a sheer amount of users.

Thomas Witt  55:52
I don't think even the iPhone, I mean, when I had my first iPhone, obviously I had it on the first day and I was totally excited about it. Many people came out with, no, but I have my Nokia Communicator or whatever. So the adoption curve was not that steep, because, oh, it was so expensive. What was it, $600? Now people pay 2,000 for a phone. So that changed. But make no mistake.

Thomas Witt  56:12
They have a huge user base and I'm sure they will try to become a platform themselves where people log in, just like Google was for a long time. And I see it with everyone around me, non-technical people, who ask every single question to ChatGPT. And I think we should prepare for a world, in terms of UIs, where we will live in other applications.

Thomas Witt  56:33
I mean, we had that a little bit with the iPhone, where we had apps which also lived in something else. And you see what kind of great market of billion-dollar companies emerged simply out of that. And I think once they really start to monetize that as a base and main interface, and they already start doing it with these SDKs where you can have little mini apps for, I think, Booking.com or something.

Thomas Witt  56:53
But I think this will inevitably come, and maybe the others will also jump on the bandwagon. Meta will say everything now runs through WhatsApp, and whatever you type will also be routed to 15 different companies, whatever. I don't know.

Thomas Witt  57:06
But I think that's the thing we should prepare for, to go away from UIs and prepare for voice and chat as a main means of communication with apps. I think that will be the biggest change in the next five to 10 years.

Valentino Stoll  57:19
Yeah. I'm game.

Valentino Stoll  57:23
You know, we'll have to have you on next year when you're a billion dollar company and we'll have to learn how you scale. We will publicly sympathize with you for having to share the billion dollars with two other people. That'll be the sad part. Exactly. Thomas, it was really great having you on the show.

Thomas Witt  57:40
It was great having you.

Valentino Stoll  57:41
It was a lot of fun talking with you. And, uh, yeah, we'd love to talk with you again.

Thomas Witt  57:46
Yeah. Absolutely. Looking forward. And everybody who's looking for a CRM, reach out to me. Happy to onboard. We are in private beta, but I'm happy to onboard customers. Yes. Vendors.ai, or write me an email at Thomas@Vendors.ai and I'm happy to walk you through.

Valentino Stoll  58:01
If people wanted to find you on social, are you on anywhere?

Thomas Witt  58:04
I'm on X, on GitHub, on LinkedIn. Thomas Witt basically everywhere. Either it's Thomas Witt or Thomas underscore Witt. So it's kind of easy to find me.

Valentino Stoll  58:12
Yeah. And he's active in the Ruby AI Builder Discord. So.

Thomas Witt  58:16
Yeah. That's what I would really recommend to everybody, to join that Discord, the Ruby AI Builders, because there are many people there with sometimes very controversial opinions, but you can learn a lot.

Valentino Stoll  58:27
Yeah. There's lots of great stuff in there. I always like seeing the controversial chatter. You know, sometimes I'll find I'm on one side and then quickly on the other by the end of it. Well, that's a good thing. If you're in New York right now, come and see Valentino, our very own,

Valentino Stoll  58:47
give a talk at Artificial Ruby tonight. I made a Ruby gem called Chaos to the Rescue that uses method_missing to patch itself in real time.

Valentino Stoll  59:02
I love it. It's a lot of fun. It's mostly fun. We'll see. I mean, maybe it will be a serious thing someday.

Thomas Witt  59:10
Method missing Claude.

Valentino Stoll  59:12
Yeah. Method missing Claude. Honestly, the ultimate loop. I finally got it the other day. When I'm in an IRB session, if I type quit, it actually exits the program. Oh.

Thomas Witt  59:23
Nice.

Valentino Stoll  59:25
Instead of saying, I don't know what quit is.

Thomas Witt  59:27
Yeah. That was right. Def quit.
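For readers curious what the method_missing trick joked about above looks like, here is a minimal, hypothetical sketch (the class and method names are illustrative, not taken from the actual Chaos to the Rescue gem): an object that, instead of raising NoMethodError, defines the missing method on the fly, "patching itself in real time."

```ruby
# Hypothetical sketch of self-patching via method_missing.
# ChaosConsole is an illustrative name, not the real gem's API.
class ChaosConsole
  def method_missing(name, *args)
    if name == :quit
      # In a real IRB-like loop this would exit the session;
      # here we just return a marker so the sketch is testable.
      "exiting"
    else
      # Define the missing method on the class, so the next call
      # dispatches normally and skips method_missing entirely.
      self.class.define_method(name) { |*a| "handled #{name}" }
      public_send(name, *args)
    end
  end

  # Keep respond_to? consistent with the dynamic dispatch above.
  def respond_to_missing?(name, include_private = false)
    true
  end
end

console = ChaosConsole.new
console.hello  # first call goes through method_missing and defines :hello
console.hello  # second call hits the freshly defined method directly
console.quit   # returns "exiting" instead of raising NoMethodError
```

The `quit` branch mirrors the IRB anecdote above: intercepting an unknown message and giving it real behavior rather than an error.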

Valentino Stoll  59:30
All right. Well, thanks again, Thomas. And until next time, folks, happy hacking. See you, everybody.

Thomas Witt  59:35
Bye.
