MAU [Talk]

Ep. 006 Cem Kansu: Retention & Growth Tactics

January 05, 2021 Cem Kansu | VP of Product at Duolingo Season 1 Episode 6

In this episode, Cem Kansu, VP of Product at Duolingo, the world’s largest language learning platform with over 400 million users, chats with Adam about the tactics that have led to Duolingo’s ongoing success in user retention and growth.

To connect with Cem directly, catch him on LinkedIn @CemKansu or on the MAU Vegas website, MAUVegas.com.

MAU[Talk]:

Hey guys, welcome to MAU [Talk], a new podcast from MAU Vegas, the premier mobile acquisition and retention summit. Today we have Cem Kansu from Duolingo. Cem is going to talk to us about the specific marketing tactics and advertising tools that Duolingo tests and utilizes in their mobile app to drive growth. Take it away, Adam.

Adam Lovallo:

Cem, welcome to the podcast. Thank you.

Cem Kansu:

Hey, there. Happy to be here.

Adam Lovallo:

Cem Kansu, in case people want to look him up, Director of Product at Duolingo and many-time MAU speaker. Although I think, Cem, I'll call you out: I think for the 2020 conference edition, I reached out to you and said, "Hey, do you want to be a speaker?" And I think you said, "Only if I can speak on the main stage." He's moving up in the world! That was daunting for me. But I respect that, and you know what? You are a main stage caliber speaker. So that's credit to you.

Cem Kansu:

We'll make this a main stage podcast, then.

Adam Lovallo:

Absolutely, absolutely. We're already off to a good start. Okay, so Duolingo is, I mean, pretty ubiquitous, but just to be safe, would you tell us a little bit about the app, and broadly speaking your role, and then we'll get into some of the more tactical stuff?

Cem Kansu:

Sounds good. Yeah. So Duolingo, for anybody that doesn't know, is the world's largest language learning platform. We teach more than 30 languages on our apps and our website. We're a heavily mobile company; more than 80% of our traffic is mobile. We have more than 400 million users worldwide, and we're very international: roughly only 20% of our users are in the US, the rest are international. So that's our short story. We're also the most downloaded and the top grossing education app in both of the app stores. Personally, I've been at Duolingo for four years now, and I lead product in two of our product areas. One of them is monetization, that's all of our revenue products. And the second one is user engagement, that's where we drive retention and stickiness.

Adam Lovallo:

Can you remind me, Duolingo for years was pre-monetization, deliberately not monetized. When did you start collectively experimenting with monetization stuff?

Cem Kansu:

So Duolingo's story is actually very interesting. There's a pre-2016 and a post-2016, and I can maybe put myself at that marker and say pre-me and after me. The first idea for Duolingo's monetization was actually translation, which was crowdsourced translation: as language learners solved exercises, they would translate content.

Adam Lovallo:

And this was, like, a Mechanical Turk sort of a thing?

Cem Kansu:

Exactly, exactly. So as crowds translated certain content, similar to, actually, the more common example is reCAPTCHA. Our founder Luis was the founder of reCAPTCHA, which is a way to digitize books: every time you type a character, you're actually digitizing a book's text. And the idea was that would apply to language learning, where you would solve exercises, translate, and that translation would be sold as a translation service to various websites or companies, say CNN needs to get an article translated to Spanish. That was the business model. This had two major issues. One of them was translation is a race to the bottom, meaning you do it for $10 an hour, someone else in the Philippines does it for $2 an hour, and then somebody new comes in and undercuts even that price. So you can't really build a profitable business easily. That's one. Two, you always need a third-party verifier if new language learners are doing your translation, because it's just not super high quality. So there's always a third party that you have to bring into the mix to have high-quality translation. Long story short, this model didn't work out. And around 2016, we wanted to do more, I guess I would say, traditional consumer monetization. And we started testing things like ads, in-app purchases, and subscriptions. These found good product market fit, and that's around the time I joined. And then we did all of that. Today, we're primarily a subscription-driven business. One thing that is unique is our mission. What we stand for is access to language education, so our content is free to everyone. Whether you pay on Duolingo or not, you can access the whole language learning content that we provide; you get additional bells and whistles, like features added to your experience, if you're a subscriber. So that's a freemium subscription business model that also adds ads in the free user experience.

Adam Lovallo:

And the ads are just general advertising from networks and stuff, or is it some special placements you guys have created?

Cem Kansu:

No, it's programmatic ads from the usual programmatic ad networks you would think of.

Adam Lovallo:

Got it, got it. Okay, awesome. I love it. So, let's talk about this notion of product-driven growth, which is, I mean, not a new idea, but I think in vogue, you know; the Reforge courses and stuff are really proliferating this conceptually, or at least as a label.

Cem Kansu:

It's hotter now. Yes, that's true.

Adam Lovallo:

No doubt, no doubt. So, practically speaking, what does that mean versus product management as it's traditionally defined, or even growth teams as they're typically defined, in your world?

Cem Kansu:

Yeah, well, actually, one thing that really resonates with the concept of product-driven growth is our business model itself. If you look at most subscription apps, which are generally premium apps, not freemium, you have to have this marketing loop, right: you pay to acquire users, you get some LTV off of them, and you use that LTV to acquire more. Instead of doing that, what Duolingo did is primarily use the free product as the marketing engine for the subscription. Since we offer free content, we get a lot of users, and it's very engaging; Duolingo is built like a game, and we run a lot of product-driven growth work to keep it engaging as well. We use the free product to basically have users learning a language on Duolingo already, and upsell them as they engage with the product to say, "Hey, there's an even more fun and more feature-heavy version of Duolingo, would you choose to upgrade?" So instead of acquiring users to the subscription, our goal is to get users because the product is free and good, and then eventually convince them to upgrade. So that's one piece that is ingrained in our business model that has to drive through product-driven growth. But I think the other piece is the problem we're trying to solve. In language learning, the hardest part is actually not how to teach somebody a language. The hardest part is keeping people motivated to keep studying. Anything with self-taught learning has this problem, like meditation or working out; it's all about how you keep yourself motivated. Language learning is the same. And that's why we put a lot of work into making it fun, engaging, and sticky. So our growth has primarily come from increasing the retention of the app, and we've done quite a bit of gamification stuff. We're famous for our push notifications; we use a lot of humor. Duo is our owl mascot, and we use his voice in our notifications to gamify the otherwise dry notification of saying, "Hey, come back, use our product." We say things like "Duo misses you," and we really personalize the messaging. We're also famous for this concept of a streak. Not that we invented it, but it worked really well on Duolingo, because learning a language really requires a daily habit, and this helps you build a daily habit. So we run a lot of experiments on all of these dimensions to basically keep upping our retention curves. And since we started, I think our D1 retention roughly increased three to four times over what it was, by basically inching higher and higher every time. But this is, again, eight years of experimentation at this point.

Adam Lovallo:

Okay, love it. Let's talk about, we'll talk about monetization, we'll talk about retention, I think those will be the two focus areas. But you run tons of experiments in product-driven growth. Can you describe the infrastructure, systems, and I guess to a lesser extent, the processes that support that? Like, for example, are you using some off-the-shelf A/B testing SDK thing, or have you built a thing? How has that evolved over time? How do you physically do it?

Cem Kansu:

Well, everything we now have is custom built, meaning we built it to our own needs, because we just realized off-the-shelf only goes so far. And as with every build-versus-buy decision, it's a long-term commitment; we have a whole team that has to support our A/B testing and analytics tools. But they're in-house built, they're custom built, and when they first came out they were pretty janky, but now they've gotten really good, and it saves us so much time when we're trying to run, like, thousands of experiments a year. So that's kind of our setup, and every PM, or anybody, I guess, that works on product, is fluent in experiments. They know how to read data, they know how to talk about stat sig, so we make sure everybody's trained on that and then let them use these tools to run as many experiments as reasonably possible. That's our setup. Another thing, you mentioned process. One process we use a lot is the concept of guardrails. I don't think this is unique to Duolingo, but it lets us run many experiments independently without crossing territory over various teams. We set guardrail metrics. So we say to the monetization team, look, you can run whatever you want, it's going to be a fully randomized experiment. But if it turns out you up your metric but hurt another core metric, you just have to stop doing that and figure out how to fix it. We make teams run very independently, but we put guardrails in place. So revenue could go up, but DAU shouldn't go down, anytime a team is running stuff on their own. And now we're roughly reaching 400 employees, so it's gotten to a place where we really do need process, because we don't see everything that's going on every day.
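A minimal sketch of the guardrail idea Cem describes, assuming hypothetical metric names and a simple relative-change rule; Duolingo's actual in-house tooling is not shown here:

```python
# Sketch: a guardrail check on an experiment readout. Metric names and
# thresholds are hypothetical illustrations of the concept, not Duolingo's system.
from dataclasses import dataclass

@dataclass
class MetricResult:
    name: str
    control_mean: float
    treatment_mean: float
    significant: bool  # was the difference statistically significant?

    @property
    def relative_change(self) -> float:
        return (self.treatment_mean - self.control_mean) / self.control_mean

def check_guardrails(results: list[MetricResult], guardrails: dict[str, float]) -> list[str]:
    """Flag any protected metric with a significant drop beyond its allowed threshold."""
    violations = []
    for r in results:
        max_drop = guardrails.get(r.name)
        if max_drop is not None and r.significant and r.relative_change < -max_drop:
            violations.append(f"{r.name} fell {abs(r.relative_change):.1%} "
                              f"(allowed {max_drop:.1%}): stop and fix")
    return violations

# Example: the team's revenue metric went up, but an engagement guardrail dipped.
readout = [
    MetricResult("revenue_per_user", 0.120, 0.128, significant=True),
    MetricResult("daily_active_learners", 1.000, 0.985, significant=True),
]
print(check_guardrails(readout, guardrails={"daily_active_learners": 0.005}))
```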

Adam Lovallo:

Do you? Can a certain user get exposed to multiple segments at different stages of the product? Or are you holding like, tons of little cells of people that are like only controlled for their one experiment and not exposed to other stuff?

Cem Kansu:

No, one user would get exposed to a lot of stuff. At any point in time, we'll probably have more than 180 experiments running in the app. Like, if you pull it up today, you're getting treated into roughly 100 different things, and by random design you will get assigned various experiment buckets by that nature. And if someone else pulls out a phone next to you, there's a high chance they'll get a completely different setup, a different combination.
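A sketch of how one user can sit in many experiment buckets at once, assuming simple deterministic hashing per (user, experiment) pair; the experiment names are invented and this is not Duolingo's assignment code:

```python
# Sketch: independent, deterministic bucketing across concurrent experiments.
# Hashing the (experiment, user_id) pair splits users independently per
# experiment, so one user lands in many treatments at once while the person
# next to them gets a completely different combination.
import hashlib

def bucket(user_id: str, experiment: str, arms: list[str]) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

experiments = {
    "practice_reminder_timing": ["24h", "23.5h"],
    "purchase_page_copy": ["control", "two_weeks_free"],
    "streak_repair_flow": ["control", "variant_a", "variant_b"],
}

for user in ("user_123", "user_456"):
    print(user, {name: bucket(user, name, arms) for name, arms in experiments.items()})
```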

Adam Lovallo:

Have you ever spoken to anyone in, like, the mobile industry that runs a comparable number? I mean, maybe this is commonplace, but that sentence you just said, to me, is unprecedented. Is that normal, like, in big companies?

Cem Kansu:

I think big-big, we don't really consider ourselves big. But I think for big tech companies this is kind of the norm, and I say big tech at, like, the public company level, like Facebook, Google, etc.

Adam Lovallo:

Sure, sure sure sure.

Cem Kansu:

They obviously run a lot because they also have the staffing to. I think we do punch above our weight, in the sense that the number of employees we have versus the number of experiments we run is proportionally very high compared to any other company I've seen. I think it's also cultural. We have this rough mandate that, like, don't just YOLO and push out a change; always push it out as an A/B test so you get some data. Because, you know, even though you might be sure of the effects, even if you're fixing a bug, we push to put it out with an A/B test, because things go wrong and we want to know what happens every time and have a record of it. So it's a bit cultural. And maybe we sometimes A/B test way too much; only changing the outline of a button maybe doesn't need an A/B test. But since we've made it so easy to run these, we just run a lot of them.

Adam Lovallo:

I see. Okay, that makes sense. And the systems that you've created, I presume work a little bit differently, whether you're talking about iOS, Android, and I mean, is there like a web app client, too?

Cem Kansu:

Yep, yep. It's, so, it's very different depending on where the change is going. For monetization, we are roughly iOS first, because a lot of our revenue comes from that platform. So we test features iOS-only first, and if they work well, we bring them to other platforms. That's generally our approach. Some of our experiments are fully on the backend, so they can treat all users. When we run pricing experiments, for example, we generally try to run them cross-platform and get data cross-platform. But most feature experiments are obviously natively built into the clients, so they're going to go into a single platform.

Adam Lovallo:

Got it. Okay, awesome. Let's talk about, I guess, where everything is in the context of experimentation, but let's talk about some retention stuff. First question: obviously you do push. Do you get really strong opt-in rates, just given the nature of the product? Or, relative to what you know in the industry, would you say your opt-in rates are kind of normal?

Cem Kansu:

Our opt-in rates are above average. And we've obviously spent a lot of time optimizing the opt-in experience as well, on when and how to do it. The way we do it now, so one of the things around our onboarding is we basically try to make sure that the user gets value before we ask them to do anything: sign up for an account, opt into notifications. That feels, I guess, like best practice at this point anyway, but it has worked really well for us. We call this concept delayed onboarding. The moment you install Duolingo, you basically don't do anything: you pick a language and you jump into a lesson. If you pick Spanish, we're gonna throw some Spanish exercises at you. And then we start, I guess, asking you new things. One is account creation, and another one is notification opt-in. And the way we present it now is we basically say, in order to build a language learning habit, you need to do this daily; if you want us to remind you, we can send you push notifications. That's roughly the messaging we use. And it really fits, because people get that you're not going to learn a language in a single day, you need to do it consistently, and we just ask them within that context. We do a lot of push, and push has been a very, very strong retention lever for us. Our standard push setup is we use what we call practice reminders. So if you come to the app today and you do a Spanish lesson at 3pm, we are going to try to send you a notification at 3pm the next day if you don't come back, because most likely you did it at a time that's going to work the next day as well, meaning you did it in the morning, you were at home or on your commute, whatever it might be. What we've found is that timing works really well, so we try to schedule the next practice reminder literally 24 hours after.
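A rough sketch of the practice-reminder logic as described, assuming a simple "last session end plus a fixed offset, skip if the user already came back" rule; the function names are illustrative, not Duolingo's code:

```python
# Sketch: schedule the next practice reminder a fixed offset after the end of
# the user's last session, and only send it if they haven't practiced since.
from datetime import datetime, timedelta

REMINDER_OFFSET = timedelta(hours=24)  # the starting point described above

def schedule_reminder(session_end: datetime) -> datetime:
    """A lesson finished at 3pm schedules a reminder for 3pm the next day."""
    return session_end + REMINDER_OFFSET

def should_send(reminder_time: datetime, session_end: datetime,
                last_activity: datetime, now: datetime) -> bool:
    """Fire only if the reminder is due and the user hasn't returned on their own."""
    return now >= reminder_time and last_activity <= session_end

session_end = datetime(2021, 1, 4, 15, 0)
reminder = schedule_reminder(session_end)  # 2021-01-05 15:00
print(should_send(reminder, session_end,
                  last_activity=session_end,          # no practice since
                  now=datetime(2021, 1, 5, 15, 0)))   # True -> send the push
```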

Adam Lovallo:

And that logic, I mean, obviously that's all totally automated?

Cem Kansu:

Right. Totally automated. Exactly.

Adam Lovallo:

And is everything on the retention side that you're describing, sorry to interrupt you by the way, I assume that's all homegrown? I mean, you're not using a third party?

Cem Kansu:

Yeah, all homegrown.

Adam Lovallo:

Yeah, yeah, yeah. Did you ever, at any point in the lifecycle of this business, were you on one of the messaging platform things? Or was the logic already too complex that it overwhelmed those?

Cem Kansu:

I think it was always, yeah, I think it was always in-house. One, our volume is really high, so third-party services were just expensive to begin with. And we're a very engineering-driven company, so we love writing our own stuff as much as we can.

Adam Lovallo:

And if you were to change that logic, the logic that would make it 24 hours instead of 22 hours, is that the sort of thing that could be run as an experiment in and of itself?

Cem Kansu:

Yes, we've run so many of those. Turns out the magic number is actually not 24, it's 23 and a half.

Adam Lovallo:

Well, that makes sense, right?

Cem Kansu:

I guess it does, because the moment we schedule is when you ended your session, which only takes 15 to 20 minutes. So 23 and a half almost puts it right around the time the next day when you would start doing your lesson. So that's what we learned: 23 and a half is the magic number. We've tested all kinds of stuff. The one barrier we try to put in place is obviously not being spammy; that's our mental barrier. We could send one every hour, right? But obviously you don't want to do that. But we've tested a lot of stuff around notifications. Now we're also getting into the game of using machine learning to optimize the copy we use inside the notifications. The one obvious learning there was when we use the same copy every day, response just goes down really fast, because what we're saying at the end of the day is "come practice your Spanish." You can say it in 10 different ways, but if you give it to ML to optimize, it starts finding the strong ones automatically, and it starts using recency when optimizing. So two days in a row you respond to the strongest message, but that decays. When it starts decaying, it starts rotating in other messages, and that's hard to do manually. So we let the ML optimize, and that's given us good gains on retention as well.
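A toy sketch of the copy-rotation behavior Cem outlines, assuming a simple scorer that discounts a message's historical response rate the more recently it was sent, plus a small exploration share; the messages, numbers, and model here are invented for illustration, not the real system:

```python
# Sketch: pick tomorrow's notification copy by combining each message's
# response rate with a fatigue penalty for recently used copy, so the
# strongest message dominates until its effect decays and others rotate in.
import random

def pick_copy(stats: dict[str, dict], fatigue: float = 0.5, epsilon: float = 0.1) -> str:
    """stats[message] = {"response_rate": float, "days_since_sent": int}"""
    def score(s: dict) -> float:
        # Copy sent today scores 0; copy rested for several days recovers
        # toward its full response rate (1 - fatigue**days approaches 1).
        return s["response_rate"] * (1 - fatigue ** s["days_since_sent"])

    if random.random() < epsilon:                 # occasional exploration
        return random.choice(list(stats))
    return max(stats, key=lambda m: score(stats[m]))

stats = {
    "Duo misses you!":               {"response_rate": 0.09, "days_since_sent": 1},
    "Keep your 7 day streak alive":  {"response_rate": 0.07, "days_since_sent": 4},
    "Time for 5 minutes of Spanish": {"response_rate": 0.05, "days_since_sent": 9},
}
print(pick_copy(stats))  # the rested streak message outscores yesterday's winner
```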

Adam Lovallo:

And the model is optimizing at the user level? I mean, it's literally making this selection at the user level? Wow, that's extremely hardcore. That's awesome. Wow. What about, I have to ask about rich push. And I know rich push differs on the iOS and Android side, although they're sort of converging functionally. Do you bother with any of that? Or have you done much experimentation on that stuff?

Cem Kansu:

Not yet, we are going to try it on Android, because rich push looks really rich there, you can really customize it. It's not that rich on the iOS side, from what it looks like. So we're gonna try it on Android. I have high hopes, because it just looks really nice the way we designed it. So hopefully, if anyone has Android, they'll see this soon. But we're gonna give it a shot. The one worry I have about it is it's kind of spammy a little bit. Because imagine, iOS would never allow what Android is allowing, because it just breaks the OS design. And if every app did a rich notification, your notification center would look pretty bad. I feel like this channel is gonna close, is how I feel about rich notifications. Personally, I don't think this will last very long, because it's a spammy channel a little bit.

Adam Lovallo:

Although to be fair, I mean, correct me if I'm wrong, I think rich push technically has been supported on the Android side for, like, multiple years at this point.

Cem Kansu:

That's true, but it's adopted a lot, I feel like.

Adam Lovallo:

Oh, really? Have you seen that? Okay, fair enough.

Cem Kansu:

That's what I'm seeing. But obviously this is anecdotal, so it's hard to say. But as long as the channel lasts, obviously we're going to test it out, see how it performs, and hopefully it doesn't feel spammy to our users.

Adam Lovallo:

Yeah, of course. What about-- do you do any SMS stuff? Or even email, you know, regular old email like, are those important? Or is it really push, push or nothing?

Cem Kansu:

We do a lot of email, actually. Before push became a thing for us, we started out as a website, so email was actually our first channel. And we do a lot of email. We do various things with email. One, there are programmatic emails that trigger as a reminder to practice; they're individualized, or they can send you a weekly report on how much you've learned and how you're progressing. And we also do very old school email newsletters to the entire user base that say, "Hey, here are our new features for the month of July," or "It's New Year's season, hey, come create a language learning habit," whatever the theme might be. We do both of these and they drive quite a bit of use. And the reason it also works well for us is we're a very sporadic-use app. Somebody decides to learn Italian, they learn it for two months, and then they're like, oh, you know, I already went to Italy, so my motivation died, and they disappear from Duolingo. And then six months later it's January, they get an email, and they come back. So we trigger a lot of sporadic use through email. SMS, we only do in markets where push doesn't work, and for us that's been China, primarily. It's also the culture there that most apps do send you SMS messages or WeChat messages. So we're actually using SMS in markets where push just, for some reason, either doesn't deliver or people are blind to it. That's been China, and it's working really well for us. The problem with SMS is it's expensive; I think you're paying quite a bit per SMS for delivery. But we've been okay with that cost, because it has really helped our retention numbers.

Adam Lovallo:

Got it. Okay, cool. That's awesome. All right. Now, shifting gears to the monetization side, and this is a totally open-ended question: what are some fun experiments that you've run recently, or features that have worked, that aren't a big secret?

Cem Kansu:

Yep, yep. Well, we've run a lot of stuff on monetization. Let me see. The one piece we really nerd out about is how our purchase flow looks and works. We have millions of users passing through, and the moment we present something, we can easily A/B test the effects. So we test pricing, we test copy, we test free trial length, we test how Duo smiles on the purchase page; every little detail we play around with. And funnily enough, everything makes a difference: you change one word to something else, and it's a 5% change in conversion. The purchase page is so sensitive and high ROI that we run a lot of stuff there. But one big lever that has been very important for us is the free trial. Adding the concept of a free trial to our subscription has done wonders for adoption. I guess this is obvious to companies who've tried it; it wasn't to us at first, but it just makes it so easy to not worry about entering into a subscription: you test it out, and if you're happy, obviously it converts automatically. And how we present that has made big differences as well. Say, there's a difference between saying "14 days free trial" versus "two weeks free trial"; it's actually the same length, but saying "two weeks" does 5% better, for example. So there are a lot of examples there. But if you come across our purchase page, it is heavily optimized, is what I can say. And at any point in time, we're also changing and testing new things on it.

Adam Lovallo:

Are all of the experiments held to the same level of, you know, statistical rigor/significance/whatever? Like, is that a universal company standard? Or do you guys sometimes, I don't know, cheat? Or, the opposite way, have an even higher standard for something that you consider so critical that you just, you know, want to be even more confident?

Cem Kansu:

I think from a metric standpoint we are always using the same standard. However, there are cases where you run something and it's dead neutral, like no metric moves, but we just like it as a better user experience, so we launched it anyway. So there's no metric gain to be made, but "we liked it and we did it" becomes the answer. Or sometimes we take a really small hit for the fact that, you know, it opens the door for new feature experiments. Free trials, for example, were a big win, but they also opened up a lot of new experimentation area, so we would have been okay even taking a hit, maybe. So those types of examples exist, but for the majority it's: if your p-value is less than .05, that's the criteria we use in our dashboards, that's when we call it stat sig, and everybody works under the same assumption.
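For reference, a minimal example of the "p < .05" call on a conversion experiment, using a standard two-proportion z-test; the user counts are made up, and this is not the dashboard Cem refers to:

```python
# Sketch: two-sided two-proportion z-test for a conversion A/B experiment.
from math import sqrt, erf

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail

# Control: 2,000 of 100,000 converted; treatment: 2,150 of 100,000.
p = two_proportion_p_value(2_000, 100_000, 2_150, 100_000)
print(f"p = {p:.4f} ->", "stat sig" if p < 0.05 else "not stat sig")
```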

Adam Lovallo:

I have a question for you. This is very specific, and maybe you'll say no, that's stupid. But I once worked at a company where we ran experiment after experiment, p-value every time, and okay, we're only pushing winners, only pushing winners. And then a year later we looked back at the numbers, and I was like, well, these gains should be compounding, right? They're all statistically significant. And so therefore what was at the time, you know, 1x should by definition now be 2x, because I could see the wins. Like, that's math. And in fact it was somehow, you know, 1.1x. So I don't know, I'm just curious: is that a stupid experience that I had that defies logic? Or do you guys ever think about, do you ever look at things in those terms?

Cem Kansu:

Yes, yeah.

Adam Lovallo:

What is the reaction to that?

Cem Kansu:

Well, this happens to, I think, every team that runs a lot of A/B experiments and then looks back. There's one thing that I've learned over the years, which is, if you run different experiments at different parts of the user flow, you're going to end up treating different numbers and sets of users, right? For example, let's assume a 10-step flow to something, let's say conversion to a subscription. You might be getting a 5% gain in step one, or you might be getting a 2% gain on step 10. When you do that math, you're not necessarily taking into account that step one might have had a million users coming in, but by step 10 you might have had, I don't know, 10 users, because everybody dropped off. So which step you got the 5% uplift in determines what the outcome will be. When we do the compounding math, we're not taking into account how many users we treated. So the standard math doesn't really apply when you look back and add up all the experiments, unless you always had the same number of users on the first page.
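A worked toy version of that point: the same relative lift is worth very different amounts depending on how many users reach the step where it was measured, which is why naively multiplying the per-experiment wins overstates the year-end total. All numbers here are invented:

```python
# Sketch: why stat-sig wins at different funnel steps don't simply multiply.
funnel = [
    # (step, users reaching it, baseline rate at that step, measured relative lift)
    ("step 1: start a lesson",  1_000_000, 0.50, 0.05),
    ("step 10: purchase page",     10_000, 0.02, 0.05),
]

for step, users, rate, lift in funnel:
    extra = users * rate * lift
    print(f"{step}: +{extra:,.0f} extra users from a {lift:.0%} win")

# +25,000 at step 1 versus +10 at step 10: treating both as "1.05x" and
# compounding them is why the naive prediction overshoots the real numbers.
```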

Adam Lovallo:

Right, right, right, right.

Cem Kansu:

Exactly. So when you experiment with different parts of the user flow, the compounding math doesn't apply directly. The one very good way to measure this is holdouts, right? Like, you run for six months, and then you take the version of what the app looked like six months ago, actually bring it back and run it as an experiment, and compare, you know, cumulatively, how it has done. Most people don't do it, because it's just extra work, and you might as well move on with your life rather than worry about the past. But what I found is, if you really want to get a good measurement of your long-term compounding effect, you basically bring back what the app looked like six months ago and test that as a version.

Adam Lovallo:

And is that something like in your job day to day, does that ever happen or not?

Cem Kansu:

It's just too much work to get that data again.

Adam Lovallo:

A ton of technical work just to be able to basically merge back a bunch of changes. I have seen companies, have you guys thought about, or do you ever hold global control groups into perpetuity? Because that's really common in advertising, because it's trivial: it's like, hold out no more than 1%. Okay, great. Now that's gonna be forever the case. Have you ever considered doing that, or done that? I assume it's technically possible.

Cem Kansu:

We have tried doing it, but it always goes wrong. And the reason I say that is, this is what happens. We say, all right, for this experiment, and we actually did it with most of our monetization experiments, like when we put ads into the app, we were like, well, let's go measure the long-term effect, we only measured the two-week effect. Great, let's keep a 5% holdout for our ads experiments, so 5% of our users never see ads, and let's keep it running for six months. What happens is that condition gets forgotten really easily, because it's not the majority of your users. So when you start making changes, one, the team after a couple of months starts forgetting that those users who never see ads exist, the whole company forgets, and then you start introducing changes that start diluting your clean holdout. The second problem is the condition never gets tested, because it's a minority that people forget about. Because it's the holdout, people test the ad-based version, and then the app probably breaks for the holdout after a few months as well, since you're not testing that condition consistently. So your holdout is polluted, because it either has bugs or other experiments that had ads in them started polluting it. This is what happened every time we tried to do a holdout. Now my take is, let's not do a holdout; let's just test the experiment longer, then close it and move on.

Adam Lovallo:

Yeah, yeah. I mean, at some level you've got to just live in reality. Conceptually it's great, and then, you know, you don't realize that the experience is just broken for 5% of people for six months, because you don't look at it. I've never actually seen anyone do that outside of paid ads, where it's really easy to do, and live their lives and function. It just seems impossible.

Cem Kansu:

A scalable model I've seen is, when you keep holdouts, always close them out with a deadline. I've heard of companies that have done this successfully: when you start a holdout, at the end of the quarter it has to be shut down. That could work, I guess, when you're disciplined about how you run them and close them out.

Adam Lovallo:

That's smart, that's smart. Okay, I mean, that's the perfect amount of time, Cem. That was excellent. So let's say people want to find you. Do they go on LinkedIn, do you do Twitter, do you write blog posts? Like, what's your, what's your...?

Cem Kansu:

I'm a Twitter guy for the most part. So Twitter is probably the easiest.

Adam Lovallo:

What's your handle?

Cem Kansu:

My handle is my first name and last name: C-E-M K-A-N-S-U. I post some product insights or anything that comes to my mind, I post a lot of random stuff, so anybody can feel free to hit me up there.

Adam Lovallo:

Okay, awesome. Well, this was amazing, thank you. And I say this to everybody, but hopefully sometime in the not-too-distant future we might see each other in person at an event and hang out. Because this is great, but it's also very different. Anyway, thanks again.

Cem Kansu:

Thanks, man.

MAU[Talk]:

Thanks for joining us. You can find Cem's contact information in this podcast description or at mauvegas.com. Make sure to subscribe wherever you get your podcasts, and we'll catch you on the next episode of MAU [Talk].