Conversation with May Habib, CEO of Writer on Successfully Transforming Enterprises with Generative AI
Transformative Workflows, RAG and Knowledge Graphs, Build your own LLM, Quality and Compliance, People and Process Change, Human-Centric AI, Future of Work
In this edition of the Uncharted Algorithm, I speak with May Habib, the CEO and co-founder of Writer, a leading Generative AI platform built specifically for enterprises.
May shared her fascinating journey from growing up in an entrepreneurial family in Lebanon to working in finance and investing, before ultimately diving into the world of AI and natural language processing (NLP).
May and her co-founder, Waseem Alshikh, recognised the transformative potential of language models early on, well before ChatGPT, which prompted them to build Writer with the mission of transforming work through AI.
We get into how May’s involvement with AI evolved from a content localisation startup into a full-stack Generative AI platform. May emphasised the importance of being laser-focused as an AI founder and having the courage to start fresh when pivoting, rather than bolting new tech onto legacy products.
Some of the key insights and takeaways from our discussion:
Generative AI is enabling true work orchestration and knowledge work transformation in enterprises. May shared examples of how Writer's customers are collapsing months-long workflows into weeks and achieving incredible efficiency gains. The key is focusing on high-impact internal use cases vs. just automating shallow tasks.
Scaling enterprise AI solutions requires overcoming three major challenges: 1) Maintaining high quality and accuracy as apps get smarter, 2) Navigating compliance and risk, and 3) Driving organisational change and restructuring.
Retrieval augmented generation (RAG) with knowledge graphs is crucial for enterprises that need highly accurate, deterministic outputs from language models. Fine-tuning is overrated.
The Generative AI stack is evolving rapidly but May predicts consolidation ahead, with many point solution providers getting absorbed or fading away. The real opportunity lies in deeply understanding customers' needs and workflows.
Fears around job loss from AI are becoming entrenched, even as the technology gets rapidly normalised. May believes the future of work will be humans focusing on what they love, augmented by AI handling the rest. Change management is key.
One thing that really struck me was May's laser-focused, customer-centric approach, bold vision, and commitment to human-centric AI. Writer's strategy, from building their own LLMs (large language models) to enabling last-mile customisation, offers a glimpse of how the enterprise AI platform of the future is taking shape.
I hope you enjoy this conversation as much as I did. May's insights and experiences building a fast-growing AI company are invaluable for anyone trying to innovate in this space.
Below is the transcript of our conversation.
Introduction
Aditya Kaul: Hello and welcome to the Uncharted Algorithm podcast. In today's episode, we have a special guest, May Habib, the CEO and co-founder of Writer, a leading Generative AI platform built specifically for enterprises. Welcome May!
May, you have an impressive background. You grew up in Lebanon, graduated from Harvard. You worked at Lehman Brothers and Mubadala before diving into AI and NLP, specifically with a startup called Qordoba.
So, tell us a little bit more about your journey and what made you dive into becoming a startup founder and specifically why AI and NLP?
May Habib: I've always wanted to be an entrepreneur. I grew up in an entrepreneurial family - entrepreneurs by necessity versus by choice in that they couldn't really get any other jobs. So very similar to other immigrant stories, but I knew I wanted to start something big, not a mom and pop.
The finance and sovereign wealth fund years were definitely a detour from that. But funnily enough, in the sovereign wealth fund period, I worked on semiconductors and investing in semiconductors and researched a little company called NVIDIA at the time. So in so many ways, everything I've done has led me here.
I really connected with my co-founder, Waseem, when we first met more than 10 years ago now on language problems. I was really fascinated by the social inequities that arise from folks speaking different languages. We initially partnered to build a machine translation company, and during those years building up the localisation startup, we discovered and started using transformers.
And, you know, once we saw it, we couldn't un-see what would happen to language generation. We called it source language at the time, English. We are now in 32 languages, so life really has come full circle as Writer, but using enormously different techniques to get there. And it is very cool what role semiconductors and Moore's Law and compute have really played in that journey.
So I do feel like I've had a very holistic career, lots of different parts supporting one another. But it definitely didn't necessarily feel like that while in it.
Content Localisation to GenAI Platform
Aditya Kaul: So what made you transition from Qordoba to Writer, moving from content localisation and management to specifically a GenAI platform, which Writer is, and we'll talk more about that. But could you share that journey specifically? I'm mainly interested in viewing this from an AI founder lens.
May Habib: We had to start a new company. And the reason for that is, even though the skills and the people on the product and AI side were the same folks, the product we felt needed to be completely different. You just could not do both - the beginning of that problem, writing the source content, and the translation part of the problem at the same time.
And now it is absolutely obvious why, even for transcreation in a Generative AI based environment. So we do transcreation, people use our platform for transcreation. We don't do localisation, and we're very clear about that, especially since we serve compliance-oriented industries. Translation is still absolutely a must if you absolutely need, for example, an insurance policy in six languages that says exactly the same thing.
So all to say that when we were in it, really trying to make the decision of are we going to incorporate Generative AI into the localisation business or are we starting a brand new business, there were even so many more thorny and hairy issues that we couldn't have even anticipated that we would have faced had we continued on the path that we were on.
In a lot of ways, I think we exited early. And we weren't massive, we were single digit revenues, mid single digits, about 50 people. Starting a new company was obviously the right choice, especially now that I see many products, whether it's in the customer service space or the marketing space, even systems of record, that are building GenAI into their applications and seeing very poor adoption of the GenAI features. People aren't there in your app for that.
And so you just got to be, as an AI founder, super thoughtful and honest about the actual change management and activation energy required to get people to do the thing you want them to do. And if it's bolted on to something else, it's just going to be so much less likely.
So there were sort of gut decisions that played into this being a fork in the road that I think are even starker in retrospect.
Writer’s Core Mission
Aditya Kaul: Wonderful. So what is Writer's core mission and how do you see GenAI transforming enterprises?
May Habib: Our mission and vision is to transform work. When we started Writer a few years ago, we saw that AI was going to get better than people at reading and writing, certainly faster, and all three of those things have happened. The company is still called Writer because the natural language interface to AI is English. If you can write it, you can build it. That is the world we are going into, and soon.
It's been extraordinary to watch builders, non-technical builders, from both tech companies and your Vanguards or L'Oréals, who are really diving in, understanding the technology, understanding the building blocks and really transforming work. And I don't mean something that used to take me an hour now takes me a minute. I'm saying something that used to take us four months now takes four weeks. True orchestration and reinvention with AI at the heart, that's really what we are about.
Automation 1.0 wasn't like this. I used to be a bookkeeper in high school, so I did a lot of this automation 1.0 - taking an invoice and punching it in and that being automated. That's really task-based automation. What we're able to do now, what our customers can do now on the Writer platform, is large-scale work orchestration and knowledge work orchestration. Stuff that used to be done cross-app, cross-functionally, across months, now those processes can be completely rebuilt.
“The building blocks that we have built are a platform that we call a fully integrated, full-stack Generative AI platform. And this is what we think enterprise Generative AI is just going to look like. It's the LLM with built-in RAG that makes it very easy to plug into structured and unstructured data, very easy to do the continuous data management that you need for those RAG pipelines to continue to be relevant and scalable because you want to build 50 apps a quarter, not three apps a year.”
And again, these apps should not be kind of narrow substitutions, but true collapsing of work. That's what the product enables.
On top of the LLM and RAG are reusable AI guardrails. So everything from brand to compliance, to things that you want to audit for your own responsible AI strategies, etc. And then the ability to basically create wrappers around that technology. So easy to build interfaces. You can use the Writer product to host your apps or embed them into your workflows and other tools.
That's what we call an enterprise Generative AI platform today. We certainly have lots of competition. It is a new, dynamic, and fast-moving space. So it's a very confused buyer today. But we can come in with a very differentiated path to fast ROI and really deep, meaningful transformation.
Platform Shift
Aditya Kaul: I would love to dive into some of the product details, especially the RAG aspect of it, and we'll get to that. But before that, I wanted to just look at the bigger picture for a little bit more. I was listening to a YCombinator podcast, and within this recent batch that they had in winter 2024, they see a lot of energy, a lot of enthusiasm around AI. And they genuinely feel that there's a big platform shift happening, something like what happened from mobile a few years ago, maybe a decade or so ago, and the PC platform shift. Do you feel that as well, as a fast-growing AI startup CEO? As you talk to customers, is there a big platform shift happening from your vantage point?
May Habib: I love that so many folks in the enterprises we talk to are learning how to be an AI dev. Today, for simple zero-shot calls to LLMs, it doesn't require the kind of NLP know-how that it used to require for a lot of use cases, for the use cases that we focus on. There is just such a need to very crisply understand and integrate with the data that's used for retrieval because the accuracy requirements are so high. We're talking healthcare and financial services.
The thing that you sometimes see is folks confusing a platform shift and the need to learn new platform skills with the requirement to rebuild the platform. And I do think that is a mark of just the early yet to mature part of this space. But absolutely, AI as a medium, AI as clay is very exciting. And it's not even the AI, it's intelligence.
Going back to the capabilities really being human-level already. "What would you build if intelligence were not a constraint?" is a very interesting question for a company.
Still, they're finding it really hard. You would think that you could easily build a data product now over the best unstructured data that you sit on if you are, let's say, a custodian bank. But that's still very much an old-school product management process.
So yes, there has definitely been a platform shift and new platform skills, but there's just no getting out of the fact that the way we build products and the way you get users to use them and buy them, I don't think that has fundamentally changed with the introduction of human-level intelligence being essentially free.
Real-World Problems and Scaling GenAI
Aditya Kaul: Yeah, that's well said. In terms of what type of problems that you're trying to solve for your customers, what are some of the main real-world day-to-day challenges that you see enterprises using GenAI for? And I'm also curious how many enterprises are doing POCs versus actually applying GenAI in production.
May Habib: I love knowledge management use cases and agent assist use cases. But it would be very easy to take even those families of use cases very shallowly, like pulling up a customer record or being able to query against a knowledge base. Those are good use cases, but they're not transformational.
“But I'll give you an example of a transformational use case. You've got a call centre that costs you four to five billion a year because there are 10,000 people that sit in it. You're a financial institution and it used to be that calling in to figure out whether they could take a loan out against their 401k needed to go to a specialist and added eight minutes to the initial call in addition to the specialist call. Well, if you could actually really quickly and intelligently help that first agent support that customer, that's a real transformation.”
If you are a wealth management company and you've got a set of support agents, you've got a call centre where very experienced wealth advisors are calling in to ask about products. This is kind of internal support. And it is like a 25-year-old answering the phone and there is like a real cultural mismatch in this exchange that hurts the company's brand and costs the company money because they're calling in three or four times because they can't figure it out. Those are the types of really deep transformations that are happening and are possible.
And then, in the digital marketing space, like in CPG and retail, these are incredibly powerful use cases and acceleration in time to market. We're talking three times faster to get your Maybelline out, to get your Tylenol out, brands that we work with. It's just an incredible pace of change. But what's behind it is humans doing really hard lifting.
To get to your second point, Aditya, around are they POCs or are they scaling, there are three big mountains you gotta scale to scale.
The first is quality. If your app is dumb as hell, it's not going anywhere. If it can't be smart at scale, if it can't be smart over time, if there's no way to keep it fresh, everyone is going to have very high expectations. And if you can't maintain the quality of how the app works, it's just going to be really hard to be successful or to have confidence that this can scale. So that's the first mountain.
Some people scale that and a few companies have climbed that mountain. Once people have climbed that mountain, they face another mountain around compliance and risk. And even after the first set of approvals to even get the first POC out, we're starting to notice people not wanting to be the last person to have said yes. And so a new gate gets introduced and that is a whole other set of people and process.
“And then the third big mountain, in terms of scaling POCs, is inaction or maybe more positively said, the courage that it requires to switch stuff off, headcount that we thought we needed that we didn't, restructuring that now has to happen, re-skilling that now has to happen, moving agencies out, moving contractors out, restructuring those agreements.”
So we're still sort of like, "Oh, the POCs don't scale for most people because it's a quality problem," but there are other problems waiting. So we're going to be working our way through this for a long time.
Internal vs Customer Facing Applications
Aditya Kaul: Interesting that people processes are coming to the fore in this sort of platform shift happening, but there's a culture aspect to it and the human aspect to it.
So in terms of the overall landscape of all these companies and the use cases, what percentage, just very broad numbers I'm looking for, what percentage of these are sort of internal facing versus external customer facing? Any sense of that?
May Habib: We build almost nothing that is customer facing. If you're a global brand, let's say you are in the hair business, your users don't want to talk to an AI stylist on your website. It's a graveyard of chatbots that nobody uses that help people check the AI box. And so for commercial reasons, also like first principle reasons, those are things that we don't touch. We don't build conversational AI bots for websites, we don't do pricing models, we don't do mortgages. We're not interested in stuff where there isn't going to be a human in the loop.
And so the body of use cases that you choose to focus on really determines the kind of company you end up building, the kind of product you end up building. And that's why I'm so bullish on the space.
“The whole media wants everybody to believe that hyperscalers are going to invest their way to absolute domination. But there is a whole universe of work in the last mile that hyperscalers will never get to touching. And there is a lot of building that needs to happen because we're not going to focus on all the use cases or even all the verticals or even all the sub-verticals. And so there's a lot of other companies to be built.”
It is just the reality, once you get really deep in with customers, their needs are very specific.
Finding the Right Use Case
Aditya Kaul: Talking a little bit about how these use cases emerge. I was just curious, do they emerge from the customers themselves? I guess it is a spectrum, but in general, I was just curious, what is the trend that you're seeing? Are you having to go in and kind of work almost like a McKinsey or a consulting project to carve out these use cases, because a lot of these GenAI use cases are not very obvious. So just curious how you see that.
May Habib: So, we're a product company. So anything that we do needs to be productisable for us. The unit of productisation gets more and more refined the deeper we get into a vertical and the more comfortable and more repetitions we get on families of use cases.
And so we build solution maps for a vertical before we let sales loose on it. It is a distillation of sometimes hundreds of conversations, sometimes hundreds of different apps people have built on the platform and experimented with, but it essentially says to customers, this is where the gold is, this is what Generative AI is really good at. And these are high business impact use cases.
“You know, some of these numbers are so crazy in terms of ROI that we pare it down a lot. Because no CIO is going to believe 50x ROI. They have been burned so many times. That's what we're seeing. And so you soft pedal it a bit and really get very practical. We challenge ourselves not to say GenAI in pitches. That's when you know you're having a real conversation with somebody.”
So the use cases are specific. With a lot of respect to the consultancies, they're not getting this specific.
Aditya Kaul: So you're getting the customers to work with your platform, with the LLMs, and with the data that they have, and then attack specific problems that they're facing. And then from that exercise, something emerges. Is that an accurate way of saying it?
May Habib: With a thesis for sure. So we come in with a thesis where it's like, in asset management, the big areas are - everything's a three, right? One, two, three. These are the big solution sets.
You do have a conversation, you're not forcing yourself on the customer. They're telling you stuff, but you are helping them really see mistakes that others have made before them.
And so it's very important that our go-to-market team have a real depth of understanding of the industry they're serving and can talk customer.
“The magic isn't when they learn how to use our building blocks, although that is very magical. It's when we can together diagram out a workflow and then build an app that actually can mimic and collapse that workflow.”
We do have to understand a lot of it. We do have to interview a lot of people. So from a data ingestion strategy and consulting perspective, yes, it looks similar. Have we done it a lot before so we can collapse it in half a day and not two months? Yes.
But we've built a collaborative platform for business users and engineers to collaborate on this very thing of last mile data, last mile workflow, last mile annotation. And so that is where the quality comes from.
And in the methodology from there, it is very much about essentially building the scaffolding for how you're going to engage with the customer as a platform on an ongoing basis. Those initial discovery conversations and initial workshops are very important.
RAG, Knowledge Graphs and Large Context LLMs
Aditya Kaul: Talking a little bit more about the technology. I was specifically interested in the RAG aspect of it. You have a knowledge graph-based RAG system on your platform. What are your thoughts on RAG in general? I know there's a lot of talk, a lot of excitement about it. There's a lot of excitement about large context models as well. What's your view on how this is going to play out, especially from an enterprise context?
May Habib: Yeah, large context windows are not a solution to retrieval use case problems. They are amazing for the kind of more open-ended and generative types of use cases, but where you are really trying to do work across data sets and looking for insights that live between and across documents, that is not a solution to high 90s kind of accuracy. That's what we've seen.
“I mean, we've got really big context windows too, but there is not a substitution of that for graph-based RAG in retrieval use cases.”
So what we've done for retrieval use cases is train a separate LLM basically to build nodes and edges on folks' data in a use case specific way. And so you get a much higher accuracy rate as a result of just having a much denser understanding of the data, much richer relationships and context to draw on.
And so that has been a real breath of fresh air for customers. They can sometimes feel pretty disillusioned in building applications that are wrong 6-10% of the time, or that stop working once you get up to even 20 documents, or that need full-time data people monitoring and looking at the data for the accuracy to be usable.
So we've built a whole GUI around it as well, so business users can update files - the continuous data management here is really important - or update connections depending on where the connectors are. It's a really essential part of getting the engineering right.
It used to be once a month, then once a week. Now, once a day, we hear "It took me months of experimenting with insert model and ‘hyperscaler X’ here, and I got it to work in Writer in two days." That's the kind of thing we want to see because the cumulative effects of everybody building AI that works and that gets used, building 50 Generative AI applications a quarter and not three shitty ones a year, it makes a really big difference to the culture and how leaned in people are.
How many business people want to work on AI and then how you end up climbing those other two mountains, right? Like, okay, this stuff works, it's good, and we can know how to scale it. We've got now the activation energy we need to solve the people and process problems.
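May's description of graph-based RAG, a model that builds nodes and edges over customer data so retrieval can draw on relationships between documents, can be sketched in miniature. This is a toy illustration, not Writer's actual implementation: the triples below are hypothetical and hard-coded, where in a real pipeline an extraction model would produce them from documents.

```python
from collections import defaultdict

# Hypothetical (entity, relation, entity) triples, as an extraction
# model might derive them from a set of benefits documents.
TRIPLES = [
    ("401k", "allows", "loan"),
    ("loan", "requires", "vesting check"),
    ("vesting check", "handled_by", "benefits team"),
    ("401k", "managed_by", "benefits team"),
]

def build_graph(triples):
    """Index triples as an adjacency list keyed by entity."""
    graph = defaultdict(list)
    for head, relation, tail in triples:
        graph[head].append((relation, tail))
        graph[tail].append((f"inverse:{relation}", head))
    return graph

def retrieve_context(graph, entity, hops=2):
    """Collect facts within `hops` edges of the query entity.

    Multi-hop traversal is what lets graph-based RAG surface
    relationships that live *between* documents, not just within
    a single retrieved chunk.
    """
    seen, frontier, facts = {entity}, [entity], []
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for relation, neighbour in graph[node]:
                facts.append(f"{node} --{relation}--> {neighbour}")
                if neighbour not in seen:
                    seen.add(neighbour)
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return facts

graph = build_graph(TRIPLES)
context = retrieve_context(graph, "401k")
# `context` would be prepended to the LLM prompt as grounding facts.
```

The point of the sketch is the traversal step: a flat vector search over chunks would return the `401k` paragraph, but the two-hop walk also surfaces the vesting-check requirement that lives in a different document.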
Palmyra LLM
Aditya Kaul: Got it. So you have your own Palmyra model that you've built. When was that decision made? Was it right at the start that you said, okay, well, let's just do our own LLM? And I also had a question around the fact that you have a closed source version, which is the 72B model, and the open source as well. But maybe first, the first question was around what made you decide on your own model?
May Habib: It was a really important decision for the company. And we made it before ChatGPT came out on the scene. And of course, folks who weren't really sure what was under the hood at Writer were very curious, including our own investors, that we were investing in our own models 18 months ago. That was pretty contrarian. Now, every Generative AI company out of Y Combinator is building their own models.
And for us, that investment in keeping the training data up to date and doing the vertical specific models and the instruct and the chat versions of the models, etc., is one that we've decided to make, because the ability for us to serve applications that are at the deterministic level that enterprises need, and the consistency, it's just impossible when you are hooking up to a closed source model, somebody else's closed model. You just, there's nothing you can control to your customer.
And so for us, it was really a no-brainer. We're an ML company. We had been working with Transformers since before anybody was doing it. And I, we just never felt like we weren't up to the task and the challenge. And our models are up there on the leaderboards and we open source kind of the lagging ones. And the most powerful models, we make very transparent to our enterprise customers, obviously under NDA, etc.
But the training data, just the methodology, all of our policies around how we treat the customer interactions with the model, all of that is transparent and auditable. And especially in a world where the RFPs and the responsible AI work and the EU AI Act are all really leaning into models that are fair for commercial use and you aren't going to get folks into trouble, we can just answer depth of inquiry in a way that we couldn't, even with open source models, right? Like other people's open source models.
And a lot of these enterprises are working with open source models that they themselves can't scale, because they themselves aren't meeting kind of the bar for internal AI security. So again, sort of like the decision to even start Writer, one that has proven to make even more sense in retrospect. I guess that's what the mark of a good decision is, but actually felt pretty obvious even in making it.
Enterprise LLM Benchmarks
Aditya Kaul: Yeah, it seems to be working out for you pretty well. And as I said, it's already on the leaderboard doing pretty well. I saw on the benchmarks itself. Do you see that there's a dearth of sort of good benchmarks specifically for enterprise use cases or are the current ones good enough? What's your thought on that?
May Habib: Good question, Aditya. I 100% think that we need new benchmarks and benchmarking companies. And the Stanford team is working on it. I think they are the most ahead, but they certainly need everybody's help. And what use cases folks are leaning into and actually finding value in really determines the benchmarks, because what you're really looking for is what model behaviours are best suited for what types of use cases.
In our AI studio, when you're building an app in Writer, there's a model dropdown and you can choose between 18 of our models now. You get a lot of help. So depending on the use case, folks know which type of model is required, from vision to instruct to chat. And there are different models that have got different context window sizes. Again, there are different trade-offs and different types of use cases that are best suited for that.
So that's what folks need help with. There are 1500 LLMs. If LLMs were the blanket answer to everybody's AI dreams, everyone would have the AI program of their dreams inside their companies. And we don't have that. So the model benchmarks are powerful, but I think are going to be seen as crude compared to what is going to come.
LLM Stack
Aditya Kaul: Talking a little bit about the AI stack or the LLM stack specifically, how do you see that evolving from your vantage point? Again, you have a sort of a full stack solution that you're offering. There's lots of activity happening in various parts of the stack, specifically from tooling. There's obviously verticalisation startups. There's other platform companies, there's hyperscalers you mentioned. Overall, where do you see it evolving in the next 12 to 24 months? Where are the opportunities? Where is the consolidation happening?
May Habib: Yeah, good question. I do think there'll be some consolidation for sure.
“I think we've realised nobody wants to build their own LLM - enterprises don't want to build their own LLMs. So I think there's a whole host of companies built on the premise that enterprises will be building their own LLMs at scale.”
There are companies built on the premise that fine-tuning will be huge. And we do very little fine-tuning today. The majority of the use cases are, it's chained prompt engineering, it's example writing, and it is kind of tuning and choosing the right behaviour of the model, it's retrieval, versus fine-tuning.
So again, I think some will survive, but it's not gonna be a big market on its own, it'll be a cottage industry.
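The "chained prompt engineering" May contrasts with fine-tuning can be illustrated as a pipeline where each prompt's output feeds the next prompt's template. This is a minimal sketch under stated assumptions: `call_llm` is a hypothetical stand-in for any hosted model API, stubbed here so the chain's plumbing runs end to end.

```python
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a hosted model endpoint.
    return f"[model output for: {prompt[:40]}...]"

def chain(steps, user_input: str) -> str:
    """Run prompt templates in sequence, feeding each output forward."""
    result = user_input
    for template in steps:
        result = call_llm(template.format(input=result))
    return result

# Hypothetical three-step chain for a compliance-aware rewrite.
STEPS = [
    "Extract the key claims from this draft:\n{input}",
    "Check each claim against the style guide:\n{input}",
    "Rewrite the draft applying the corrections:\n{input}",
]

final = chain(STEPS, "Our new policy doubles the vesting period.")
```

Each step stays a plain zero-shot or few-shot call; the behaviour tuning happens in the prompts and examples rather than in model weights, which is the trade-off May describes against fine-tuning.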
Then, from the hyperscalers' perspective, and the folks that are building Generative AI adjacencies, I think AI attached to a lot of different structured data types makes a ton of sense. Generative AI and assistants and being able to chat with databases, that's a market. And it's probably a market that lives inside of a database company versus being a separate company.
So do I think that chatting with anything outside of your actual CRM data is going to be a big business at Salesforce? Probably not. And so I do think there are folks who will probably retreat on some of the AI functionality that they are pitching. I mean, there are kind of system of record companies that are out there pitching building domain-specific LLMs to customers.
“I think there's a lot of grasping at straws in the EPD (enterprise product development) teams of the large tech companies trying to figure out where the Generative AI revenue is. And I think folks will retrench to stick to really exciting, powerful features that are closer to what their systems do today.”
People & Process Issues
Aditya Kaul: Right. I wanted to touch upon the people process aspect. One question I had was around, I do a little bit of consulting for clients on the GenAI side. I've noticed a lot of them, while they're very excited, there's sort of a nervous anticipation and fear as well, mixed feelings, I guess, when they actually start playing around with some of these models and seeing what it's capable of. You yourself mentioned about the amazing, unbelievable ROIs that you see yourself and you kind of downplay it.
So the question I wanted to ask was, a lot of them, I sense that they fear for their own jobs or the team that they're leading. So there's this really uncomfortable, nervous anticipation in some sense. Do you see that as well? And how do you handle that, this human element to this technology as you're working with customers?
May Habib: It's a really good question. I feel like sometime last fall, it became very uncool to talk about job loss associated with AI. So you really shut up about it. The fear has not gone away. Outside of AI, major layoffs are being done, nobody's on a hiring spree, and it has been normalised to trim a few per cent of your workforce. People don't even write about it anymore.
And so I do think the fear is becoming entrenched. I was on with a CIO yesterday, of a Fortune 100 pharma company. There is an operations leader inside of their company that literally nobody can talk to, will not talk about modernisation, digitisation, AI at all. And look, if we continue on this track of super high revenue per employee, companies can actually decide which parts of their business they're just going to keep analog, because that's how those people want to work. And I think there's a beauty in being able to decide what you keep old school and what you bring AI into.
I used to say AI is for the repetitive, the drudgery, those team workflows nobody wants to touch. Now the reality is that one person's drudgery is another person's creativity - AI is good at all of it. And so it's like, "I'll do what I love doing, analog, and I'll write AI tooling to do the rest."
I think that's a lot more what work looks like in the coming years - being able to democratise the ability to work with free intelligence. That's what we're trying to make happen.
And hopefully, when we see this in our customers, those users get very excited about learning those skills, because those skills are transferable. They're the skills of the future. And we see it as our responsibility to make sure that, best efforts, no user is left behind. We're going to do workshops, 'AI Days', and office hours to get to mass adoption at your company. That's our problem.
Entrepreneurship Lessons
Aditya Kaul: Great. So towards the end of the podcast, I wanted to ask you a little bit about the entrepreneurship aspect. As a founder, you've navigated pivots, fundraising, and challenges, and you're currently in the middle of scaling a startup in the growth stage. So what have been some of the toughest moments in this journey, and how did you cultivate the resilience to push through them?
May Habib: Yeah, that's a really good question, Aditya. I always had folks around me - family, my best friends, my husband - in the days when I was truly unsure, and it's been a long time since I was. In the early days of Writer: are we a consumer company? Are we prosumer? Are we going to do this enterprise thing? I was really torn up about that.
And then there was what felt to me like an abandonment of our translation mission. There were very deeply personal reasons why I was in that space, and choosing to focus on English felt like selling out a bit. Now we're in 32 languages, so I can breathe a little, but that also felt like a pretty tortured decision. And you're raising money, and there are all sorts of existential things.
It hasn't always been obvious what the next step should be. So I always had people around me who were like, "Come on, you're having fun, toughen up, suck it up, take the next step." There was never anyone who was like, "Oh, this is a lot of work, May, what a sacrifice." So that's probably me knowing what I needed mentally and surrounding myself with those folks.
And then there's my co-founder - I mean, we have an infinite trust battery; I say that all the time. With a strong co-founder, when they're down, you're up, and when they're up, you're down. Nothing is ever very bad and nothing is ever amazing. And it's always been an amazing partnership that we have had - a very creative partnership.
He runs enterprise product development, I run go-to-market. Together, we do vision and product. And he knows a ton about customers and I know a ton about ML and the AI we do. So it's a division of responsibility. But we are really deeply involved in all aspects of the business.
And so it's just a very tight partnership, and we like giving people visibility into the tension in our relationship too. We think out loud with each other, and we think very differently. But we don't avoid showing conflicting points of view or arguing in front of the team - we want people to see that that is perfectly acceptable. We don't run things by each other before we talk with the team or with leaders. It's not like mom and dad decide and then bring it to everybody. We want to demonstrate what's possible when you have deep trust with somebody, and how much you can challenge each other.
Book Recommendations
Aditya Kaul: Is there a good book that you can recommend for anybody who's trying to build a startup and, again, scale it?
May Habib: Yeah. There are a couple that I love.
I love "The Great CEO Within" by Matt Mochary. I've never met him, but it has really helped me. And I love "The 15 Commitments of Conscious Leadership." Both of those books, plus "Amp It Up" by Frank Slootman - a classic - and "The Mirando Method" by Mitch Mirando, which has always been kind of a touchstone. Those books have pretty much stayed around my desk, and I consult them every once in a while.
Keeping up with AI Advances
Aditya Kaul: Last question, how do you personally stay engaged with the latest advancements in AI? I know you have a busy schedule, you're meeting customers, you're growing the company. So how do you maintain the curiosity, creativity to drive Writer in the first place?
May Habib: I don't do any personal social media, so in all of my free in-between time, Twitter/X is the best way to read all the latest research and see all the latest demos. My co-founder and I probably exchange 15 DMs a day on Twitter/X - kind of a running conversation. But yeah, that's my favourite way.
Our team also publishes, and I join those conversations. And it is a very applied, problem-driven program: the problems that we see with customers, and that we run into with accuracy and trying to get things to work - we're solving real problems, not theoretical ones.
Aditya Kaul: Wonderful. I appreciate your time today, May. It was a great conversation, learned a lot, and thank you again for taking the time.
May Habib: Aditya, super thoughtful questions today.