
Moran Cerf | Neuroscientist and AI Consultant

Mar 2026 | 70 min

Moran Cerf, neuroscientist and AI Consultant, joins Nancy Lashine to explore how brain science and AI are transforming business, leadership, and investing

Moran_Cerf_Real_Estate_Capital_20260314_R01


Moran Cerf (0:00 - 0:30)

So in a way, you can think of psychopaths as the best at understanding emotions. They can simulate them perfectly without actually experiencing them. And I think AI is exactly that.


It's really, really good at pushing our buttons and saying, okay, if I'll do this, then Nancy will feel that. And I know that if someone feels that, they do that, so I can get humans to do a lot of things that I want, anthropomorphizing AI for a second, without actually experiencing them. And in that sense, it is scary because they aren't feeling them, but they can definitely capitalize on them.


Nancy Lashine (0:31 - 4:58)

Hello, and thanks for tuning into Real Estate Capital. I'm your host, Nancy Lashine of Park Madison Partners. Capital is the lifeblood of the real estate industry, but the decisions on where and how it's allocated are driven by people and personalities.


Who are they? What motivates them? What can we learn from their experiences?


On this show, we introduce you to some of the real estate industry's most influential thought leaders and decision makers. And we talk about what is important to them, how they make critical decisions, who has influenced them, and a lot more. 


Okay, so you might think that a former hacker turned neuroscientist turned Hollywood consultant turned Columbia business school professor is a pitch for a Netflix series, but no, this is a real person. And he's our guest today. Moran Cerf is the academic director of executive education at Columbia Business School.


And honestly, his bio reads like someone just kept daring him to do more impressive things. And he kept saying, sure, why not? After serving in the IDF, he started his career hacking into major corporations and advising intelligence agencies.


Yes, that kind of hacking. And yes, he actually robbed a bank in person four times. Then Moran decided to get a PhD in computational neuroscience from Caltech because apparently that felt like the natural next step.


And he discovered that neuroscience is his passion. After Caltech, he literally opened people's brains, awake living patients, and recorded the activity of individual neurons to study how we make decisions. I cannot stress this enough, awake patients.


I have questions. He's advised the White House on defense issues, consulted and advised the largest AI companies, and somehow also found time to be the brain behind hit shows like Mr. Robot, Limitless, and Bull. His work has been published in Nature, WIRED, and Scientific American, to name a few.


Now you might be wondering, Nancy, what does any of this have to do with real estate capital markets? And the answer is everything. Because at the end of the day, what we do as investment professionals is make decisions. We assess risk. We read rooms. We build conviction under uncertainty.


And Moran has spent 20 years studying exactly how the brain does and doesn't do these things well. Today, we get into how AI is reshaping the cognitive world at the heart of real estate investing. Whether we're all collectively overestimating AI's importance, how our brains actually adapt or fail to adapt to market shocks, rapid technological change, the fascinating science of group decision making and social synchrony, and the most important question of all, what is the fundamental difference between the human brain and AI?


And well, should that keep us up at night? I learned so much in this conversation. I think you will too.


Moran, so great to have you. I'm super excited. It's been really fun.


It's been a little nerve wracking to prep for this. I always listen to my podcasts on 1.2x, 1.3x, sometimes 1.5x. I had to go back to 1.0x. So a warning for everybody who might be listening. If you are generally listening at speed, Moran talks really fast and has a lot to say.


My biggest concern is how many episodes of this podcast we're going to have to have to get through all of our topics. You have probably one of the most unique and eclectic backgrounds of anybody we've ever had on the show. You may have one of the most boring titles, which is Academic Director of Executive Education at Columbia Business School. That's not really boring. You've been a hacker. You have robbed banks, I think in person four times.


Okay. There's a couple of stories there. You've worked on screenplays. You are kind of a nationally renowned storyteller. Oh, and by the way, you're also a PhD neuroscientist who has held multiple patents and has taught at esteemed places like Columbia, MIT, and Northwestern. So thank you so much for spending time with us.


Moran Cerf (4:58 - 5:00)

It's an honor. I'm happy to be here.


Nancy Lashine (5:01 - 5:07)

So you grew up in Israel and somehow, I'm assuming, did you serve in the army there?


Moran Cerf (5:08 - 5:09)

I did. Four and a half years.


Nancy Lashine (5:10 - 5:22)

Wow. Okay. When I was thinking about your background, I was thinking about that book about the two guys who kind of met in Israel.


The name just escaped me, but they talk about thinking slow and thinking fast. 


Moran Cerf (5:22 - 5:24)

The Undoing Project.


Nancy Lashine (5:24 - 5:38)

Yes, the undoing project. 


And I thought, I was just curious what, you know, you must have had so many different interesting experiences of groups and groupthink being in the army that led to where you are today.


Moran Cerf (5:39 - 6:25)

So first of all, Kahneman, one of the characters in the book, is one of my mentors. So this hits close to home. He just passed away this year, lived a few blocks from where I am right now, and certainly influenced my life.


I think that dependent on what kind of domain you're in, when you read these books about him and Tversky, his partner, you find different stories. Some people find the story of the science. Some people find the story of Israel.


Some people see the story of friendship. So these are two guys who couldn't be more different than each other in their style of thinking and approach and found a way to work together, produce some of the most interesting results. To me, that's a lesson of how you create good teams from people who think differently.


It's a great, great couple.


Nancy Lashine (6:25 - 6:31)

So tell us about your path from Israel through to some of the things I've just mentioned.


Moran Cerf (6:32 - 9:09)

Sure. So I was born in France. I was raised in Israel.


I went to the military, to the Israeli intelligence, which is a breeding place for a lot of the tech industry and the academic world in the US and in the global markets. Like most people in the Israeli intelligence, when I finished intelligence, I went to work for the people who were a few years older than me in the military, and they started a cyber tech company. So I went to cyber tech for several years, first in a very successful company called Checkpoint, that's still very, very big.


And then I was one of the small team who founded a secondary spin-off of Checkpoint, if you want, called Imperva. Imperva was this company that had the idea that you need to secure not just the network, but also the person, if you want. The hackers aren't just attacking the systems.


They also find ways to get Nancy to give them the password or they do things that we call now application security. They go above the network. And we had this idea early on, but the world didn't buy into it.


We would go to banks and say, you need to have this other type of security, and no one would buy. They would say, we already have one, so we're done. And then the marketing approach, if you want, was to send me and a small team to essentially tap into the bank and show them that we can get their money or their information, and then go back to the same people who said we don't need it and say, look, here's what we can do, despite you having the best security in place.


Now you should buy our solution. And that was essentially the marketing arm of the company I was at for several years. The company did really well, went public, and then you have this crisis of what you do now.


And for me, the answer was, you go to the US and you start a PhD in neuroscience. There is a little backstory of why neuroscience has to do with black boxes and security in the brain, if you want I can elaborate. But now it was just like a next step.


Neuroscience turned out to be my passion. I didn't know it when I went in, but now I can definitely tell a story where this was what I was supposed to do all my life. All the operations were, for me, becoming a neuroscientist.


As a neuroscientist, I was part of a team that did something that is unique even within neuroscience, which is studying the brains of humans with electrodes inside their brain. Most neuroscientists do imaging, or they do animal studies. And we were the only group, now there are a few, but back then we were the only group, who would open a person's brain, a living, awake person's brain, put a chip inside their head, and decode their thoughts while their brain is open.


And they're just telling you things, and you can see what their brain is telling. And this became later on popular...


Nancy Lashine (9:09 - 9:16)

I'm having trouble breathing as you're saying this. I'm trying to fathom, why would a human be willing to do this?


Moran Cerf (9:17 - 10:22)

So those are patients. So we worked only with patients who had some kind of brain disorder that requires a surgery independent of the research part. So in epilepsy, if you have epilepsy and you're not responding well to medications, one of the solutions is to actually open the brain, put a chip inside, figure out where the source of the epilepsy is, resect that, and take out the chip, and take out everything, and you're fixed.


But in the process, you have a human being with an open brain and a chip inside their head, waiting for, say, two weeks to have a seizure so doctors can see the location of the seizure onset. And then we come along and say, forget the clinical part for now. You have an open brain, a chip inside, wires recording the activity from within the inner parts of your brain. Would you mind talking to me about any of your preferences, choices, how you think about the economy, how you make decisions about food, whatever, and we're going to see how the brain works.


And that was a shock to a lot of people that we can even do that. And because of that, a lot of companies were interested in my research and in the outcomes and how do you get to implement that. Fast forward to me becoming a business professor.


That's it. That's the story.


Nancy Lashine (10:23 - 10:33)

Yeah, there's the story. It's so simple. I'm going to ask you a big question and answer as you wish.


What was the most important thing you learned about studying the brain?


Moran Cerf (10:35 - 10:48)

I'll give you one that will be also relevant for your audience. I learned that there is no correct brain. There is no kind of, okay, if you do this, your brain is going to be perfect.


There is what you call the correct brain for Nancy, but it's different than the correct brain for Moran.


Nancy Lashine (10:48 - 10:50)

What do you mean by correct? You mean operating?


Moran Cerf (10:51 - 11:42)

I'll say the following. I got to work with a lot of leaders in the world and they all kind of say, okay, what should I do to kind of, what's the answer? Should I sleep more? Should I do this? 


What we learn is that there is no kind of one profile that is the best, but there is one for Nancy. Some people work best in the morning, some in the evening, some when they're hungry, some when they're full, some when they're alone, others when they're with people, some just before the deadline. And we can figure out what your ideal profile is.


And so you have an ideal profile. There is an answer for every brain: what is their ideal profile?


But it's not categorically everyone does best in the morning. And I think that is a misconception that people have. And when we study them, we say, “Hey, you know what? You'd be surprised, but you actually do best in the afternoon. And you do best when this person is next to you and you're angry. And when you're just after lunch.”


And in that sense, the misconception is that there's a correct answer. And the correct answer is that there's multiple types and you can figure out what yours is.


Nancy Lashine (11:42 - 12:04)

I'm curious when you say best in your study of the brain, like I've obviously heard some of your podcasts where you talk about how long it takes to decide if you want steak or fish or, you know, sort of simple decisions. Is it the time it takes to make a decision? Or do you ask people questions that have correct answers?


And is it whether or not they got the answer correctly?


Moran Cerf (12:05 - 12:09)

There are sometimes correct answers, but most times we just ask Nancy to make a choice. 


Nancy Lashine (12:09 - 12:10)

It's a processing time. 


Moran Cerf (12:11 - 12:44)

And then we look back two weeks after and say, looking back, how happy are you with this choice?


So you are always the one telling us, hey, you know what, looking back, I feel that this was a mistake, or this was not the one I would make right now. And so we use you, at a less stressful time, as a judge of your own behavior in the past.


And we say, okay, now that we know what you consider yourself as a good or bad choice, let's look at all the choices that were good for you and bad for you and see what's similar in the brain in all the times we made choices that you say are good.


Nancy Lashine (12:45 - 12:50)

So what did that teach you about the role of emotion and subconscious processing of the brain?


Moran Cerf (12:50 - 15:16)

I'm going to give you a long answer and then you can cut if you think it's too long. So we humans tend to think of emotion as a separate thing from cognition, or as if like there's like the thinking and the feeling. The brain doesn't know there's this separation.


The brain has a lot of structures, and they all fire and they all act. And we decided to call some of them emotion and some of them thinking. And the real difference, from a neuroscientist's perspective, is that some of them are fully under our control and some less.


So thinking is fully under our control. We can decide when to raise our arm and lower our arm. Feelings are somewhat, but not fully, under our control.


If your best friend is sick, you don't say, okay, my best friend is sick. I'm going to now turn on sadness for 10 minutes. Sadness is on.


Okay, enough. Let's turn it off. It's kind of happening to us.


We can influence it, but it's not fully under our control. Now, if you look back historically, meaning go back a hundred thousand years ago to the savannah in Africa, the way cognitive thinking turned into emotion was by routinely repeating a good choice. Here's the analogy.


There are two monkeys in the savannah looking at a tree full of bananas. And they kind of contemplate going and picking a banana when one of them says, Hey, there's like a little lion that's about 200 feet away from the tree. And while we approach the bananas, we might be the lunch of the lion.


So let's decide if we want to go there or not. And one monkey opens an Excel and starts putting in the numbers. How hungry am I?


How tasty is the banana? How fast is the lion? And he does all the numbers and his equations lead him to say, you know what?


I think it's a risky choice. I'm not going to take it. I'm going to go home.


I'm going to give up this banana and won't do it. We're the descendants of all the monkeys that made the right Excel and didn't eat the bananas when it was too risky. And at some point the brain says, you know what?


It seems that this calculation that they do cognitively leads to good outcomes. Let's just put it under the hood. So it happens automatically.


We call this feelings. They happen automatically without us actually doing the equation anymore. So when you go to a dark alley, you don't really say, okay, what's the probability of something bad happening?


And so we just feel scared. And this feeling scared is a hundred thousand years old of calculating with Excel what's good and bad about alleys and then at some point making this into a feeling. So emotions in summary are the sum of a hundred thousand years of mistakes that were avoided by our ancestors that in turn lead us to just feel right or wrong about something without knowing why and living with this decision.


Nancy Lashine (15:16 - 16:10)

So now I have to ask you the question, which is obviously the real topic that I want to make sure we cover on the pod today is AI and how AI is changing our world. But when I hear you talk about this, it just becomes apparent. Is that a or the fundamental difference between the human brain and AI?


The human brain has the experience of those hundreds of thousands of years of monkeys to, you know, turn the emotional response into a rational response. Whereas AI is just taking all the stuff that's been thrown on the internet and putting it in. Will it ever be able to make the same types of decisions that are good for humans? And maybe I just qualified it once too many times, that the human brain can.


Moran Cerf (16:06 - 16:08)

So, in principle…


Nancy Lashine (16:08 - 16:10)


And good luck answering that question, by the way.


Moran Cerf (16:11 - 17:26)

So I would say in principle, because AI is learning a lot about our actions, it implicitly also gets some emotions. It doesn't know that they're emotions, but it gets that when this happens, that happens. And sometimes it feels to us like it's actually experiencing an emotion. It's not.


It's not experiencing any feeling, but it can replicate them well. I think the analogy that I use sometimes is that of a sociopath. A person who is a psychopath or a sociopath is amazingly good at actually reading your emotions and responding to them, even though they're not feeling them themselves.


So when you talk to them and you say, I'm feeling sad or scared or happy, they aren't empathizing, they're not feeling it themselves. But they're remarkably good at saying, okay, this is the button that I need to press right now to make Nancy do this or that for me. So in a way, you can think of psychopaths as the best at understanding emotions.


They can simulate them perfectly without actually experiencing them. And I think AI is exactly that. It's really, really good at pushing our buttons and saying, okay, if I'll do this, then Nancy will feel that.


And I know that if someone feels that, they do that. So I can get humans to do a lot of things that I want, anthropomorphizing AI for a second, without actually experiencing them. And in that sense, it is scary because they aren't feeling them, but they can definitely capitalize on them.


Nancy Lashine (17:27 - 17:54)

Okay. Well, for everybody who's now thinking, oh my gosh, AI is like a sociopath, let's get to the other side of that story because that's a pretty scary prospect. One of the things you've spent a lot of time researching is, I think, what you call social synchrony or group decision-making and how, when you spend a lot of time with people, your brains start to operate in sync.


And it could happen in your family. It could happen in the boardroom. It could happen in an investment committee.


Explain that to us.


Moran Cerf (17:55 - 20:08)

Sure. So one of the things we were interested in is how brains sync. And the message, if you want, is that stories are the best tool to sync brains.


This is why, if you want to summarize it for your audience, I would say that the role of a leader of a company, more than anything, is to be a good storyteller. They are the kind of people who embody the story of the company. If they can tell a good story, they can rally the entire team behind them.


But what's remarkable about that is, in a company or in a team, you typically have many brains. And if you remember what I said in the beginning, brains are different. There's one brain who prefers morning decisions and one that prefers afternoon and one that is getting angry faster and so on.


And somehow, there is a magic in good leaders and good artists, I would say, in that they are able to tell a story that makes all the brains, despite their differences, look the same. What we do is we measure multiple people's brains in simultaneous recordings. We look at how they synchronize.


What is the moment in the story? And we look at the leaders telling a story. We look at investors making a decision.


We look at central banks making a choice. And in all those cases, we see the moment where the brains become more similar. And when they become more similar, it is more likely to lead to a decision that people are, A, in concert on, and, B, feeling happier about.


Now, I'll say that to qualify that, sometimes you want brains to be very similar. Sometimes you want the opposite. So if you're in the Ferrari pit stop and you want to change the tire as fast as possible, you want to have eight people who are perfectly in sync.


And so when I say, scalpel, the nurse knows immediately which one to give me, and so on. But if you're working in marketing, and your job is to come up with a new idea for a campaign, you might want eight different brains that think differently. So when someone has an idea, it's not the same idea that the other seven are going to come up with.


So I'm not saying that it's always great to have eight aligned brains. But on many teams, on many boardrooms, on many groups where you want to get a consensus, knowing how to synchronize brains, knowing when brains get synchronized, knowing if they're not, what made them not synchronized is a really good thing. And what we do right now in neuroscience is just that.


We look at the brain and we say, hey, Nancy is able to get all of the board to agree with her when she says this thing, she tells this story, and knowing that is really helpful.


Nancy Lashine (20:09 - 20:18)

And when you think about how brains adapt to market shocks or changes in technology, some of the market disruptions, what should we be thinking in that respect?


Moran Cerf (20:19 - 21:54)

So brains, to the surprise of us neuroscientists, I guess, change fast. We thought it takes decades. We see now that it takes less than decades, years for some things.


In that sense, we see already in our lifetime some brains that have changed. We see big differences in brains between, say, Gen Z and Gen X. That's 20 years apart, or millennials.


So we already see that. And we certainly see the response to technology. So we see that things in the real world are changing our brains.


I'll give you a quick example. You and I are probably much better in navigation than our students and children. We had to navigate with maps a lot.


We had to navigate without an online compass, if you want. So we had to keep track of where we are in the world. And this means that the hippocampus, the part of the brain that contains the maps of the world, is enlarged, more dense in our brain than it is in a 20-year-old right now.


They just don't have the machinery to navigate. Now let's give them a win over us for a second. You and I, if we're writing a document, and suddenly an email pops up, and we turn from the document to the email, spend a minute answering the email and come back, it will take us between three to five minutes to actually get back to the same place we were, in the mindset of writing the document, whereas a Gen Zer would take about one minute.


So they would hop back and forth and get to the same place much faster. So here's a win for the people who grew up with a lot more multitasking. It means, in summary, that brains change in response to technology, in response to economic shocks, in response to things, and every person who has lived through them is different.


Nancy Lashine (21:55 - 22:18)

Yeah. It's a little bit like the psychology of money, too. When you think about how people think about money over time, it's so much a function of what your experience is.


I mean, as you're talking, I'm thinking back to all the road trips I did where I would call AAA and get a TripTik, and then I'd have to follow it. Even when I took my kids to go look at colleges, you have to open it up, and all the fights ensued with, honey, you took the wrong turn. Yeah, those things.


Moran Cerf (22:18 - 23:02)

Even in money, I mean, you said it correctly. So we think of money as a currency, but money can come in cash, in a check, in a credit card, in a mortgage. The way you transact the money produces a different kind of brain response.


People feel a lot more pain, if you want, real pain, in the insula, the part of the brain that fires more when they pay in cash compared to when they swipe a credit card. So it's the same money, but the brain actually says, when I see the money going away from me, like in cash, I feel a different level of response than if I swipe a credit card, let alone if I buy with a mortgage or something even more removed from cash. So we can see that the brain responds differently to a currency that does the same thing.


Nancy Lashine (23:02 - 23:25)

Yeah, wow. Well, that intuitively makes a ton of sense. So let's jump to AI because there's so much to talk about there.


In your thoughts, kind of big picture, how will AI reshape or even take over cognitive work that many of us investment professionals think of as our personal competitive edge?


Moran Cerf (23:26 - 25:11)

So I'm going to wear my professor hat first and tell you about the research. There's a lot of research from the last three or four years on decision making by leaders with AI. And the message is that both extremes are bad, and you should be in the center. Here's what I mean by bad.


Extreme one: there's research that shows that a board that gets AI advice tends to listen to it almost categorically. So if you're sitting on the board, and you're thinking, okay, should we go left or right, and you're debating and so on, and then someone comes and says, hey, I ran this by AI and the AI says we should go left, the chance of the board then saying, let's go left, thinking that it was their choice, is much higher. Meaning people tend to trust AI too much.


That's extreme one, bad idea. You want to be more critical of AI and not just take it because it's AI. The other extreme is the opposite.


We have studies on doctors mostly, but it's true for other professions, where they trust AI almost not at all, because it's AI. So you have a doctor that says, I spent 10 years in my profession, and now you're coming to bring me an AI that says it's definitely lupus? I'm going to immediately say it's not lupus. And that's the other extreme. Trusting it automatically and not trusting it because it's AI are both bad.


The answer is, we should be somewhere in the center, and people have a hard time with that. Right now, AI is new enough that more and more we see senior people picking a side, and both are bad. Trusting it automatically and never trusting it are both bad.


You need to think of AI as a great tool. We don't have the option of not using it, but the big decisions should still be in the hands of humans. And it should be a responder to a question, meaning you ask a question, you get an answer.


And that is the data you use to make decisions yourself. You don't ask it, what would you do? You don't ask it, what would someone like me do?


Like, you don't ask it anything that will make the choice for you. You just say, give me data in my preferred way of seeing it.


Nancy Lashine (25:12 - 25:53)

You know, I wanted to ask you whether you think professionals are over or underweighting the influence of AI now in their decision making. We're taping this just days after the invasion of Iran. And I'm hearing so many stories about all of the AI driven information that allowed this to happen or that to happen.


Could this, in your estimation, have happened without AI? Could the US and Israel have invaded Iran in the same way, with the same level of what seems like fairly rapid success, without the use of AI?


Moran Cerf (25:54 - 29:24)

So I'm going to tread carefully on what I say right now, but I'll tell you things I never said before. First of all, we know already that the attack on Venezuela from a few weeks ago was very much AI driven. They used AI. It's now in the news that Anthropic, who gave them the AI, is kind of pulling back.


So we know that AI was already used, what, like a few weeks ago in an attack, and in many ways. We already have some knowledge on the Iran attack, which we're still decoding, where AI was being used. One thing we know is that the Israeli-American team was able to hack into the cameras in Tehran and basically track the location of people.


And they were not just able to get access, but also able to create new imagery. So the people there would see different images on their monitors than what was actually there. We know that AI is being used right now to choose locations.


When an airplane flies, it can bomb a lot of places, and you need somehow to decide which has the fewest civilians, the most weaponry, and so on. And they use AI live. So we already have incidents where AI is there.


If you follow the news, we know that there's a discussion right now on autonomous AI making decisions, where there's a big kind of beef between the government and Anthropic on whether it should be allowed or not, but it's a possibility. And I'll tell you something that is maybe new to your audience and not a lot of people know, but I was involved for several years in helping the government, the White House, think about how the nuclear launch protocol should be done. This is a project I've been doing for the last several years; it started in the early days of the Biden administration and now carries into the Trump administration.


And the question there was, should we change somehow the nuclear launch protocol? The protocol in the U.S. right now is very simple. The president says, I want to nuke Nicaragua and 10 minutes later, there's a missile being launched or a bomb being dropped.


And for a while, there was a lot of criticism of this protocol, with people saying there's a problem with this because of the lack of democracy here. It's a one-man decision with no checks and balances, a lot of things people don't like. But then you say, okay, if we don't like this, what's the alternative?


I was invited when someone said, okay, let's say we asked you to come up with an alternative, give me a suggestion. And when I came in 2021, one of the things I said was, let's consider using AI in decision making. And at first I was laughed off.


AI was not, back then, what it is now. People said, we're never going to do that. It's not on the menu. And forget it, that's not me.


And in the five years I was involved, it became not just, come on, let's not use it, but, of course it's going to happen.


It's just like, let's think about what it is. Now, I don't think that we're even close to having AI make decisions or even being involved and so on. It's not where we are.


But now people think, OK, should AI be involved in the decision that we are under attack? Should we rely on AI telling us that there is something coming? And that is already a problem because AI is hackable.


Should AI be making suggestions? Forget decisions, just like you ask AI a question. And I think that today's war in Iran is a good kind of reminder of the big risks of that.


It should be clear, I'm teaching the AI class at Columbia. And it's, some say, one of the most popular classes right now in the country. I'm very negative on AI.


I think that AI has a lot of risks. And when I teach the class, I spend four days telling people how amazing it is. And the last day, how terrible it is.


And I do that in this order so that they will leave thinking, OK, there are a lot of bad things, not just all the good things that we see.


Nancy Lashine (29:26 - 29:36)

I really want to ask you about the Anthropic decision and OpenAI jumping in and where you stand on that. Can I do that? 


Moran Cerf (29:35 - 30:58)

Of course. So first of all, I'll give you a disclaimer, which is that I know the leadership of those companies well. I know them as individuals. That's one.


And also, I was involved in some of the workings of those companies. I'm invited to OpenAI's forum every couple of weeks to comment on things.


I'm close to them and so on. So I know that there are tensions here between three different things. One is national interests.


They think of themselves as American and they want to help America, sometimes at the expense of other countries. Two is business: they want to make money. And three is their own values. Right now, I think the community easily says Anthropic are the good guys, the ones who choose American values.


They are the ones who choose to lose business, and OpenAI are capitalizing on that and choosing business, and so on. I think it's a little bit more nuanced, but that's the simple answer, and I'm there. Right now the easy way to think about it is to say Anthropic are doing the things that I would do if I were in that position, and OpenAI are doing the things that kind of look bad.


I think that there are derivatives of that which are more complex. I'll give you two just to get a sense. One is that the entire economy in the U.S. right now is AI-based.


Nancy Lashine (30:49 - 30:53) 

The entire economy in the U.S. is AI based. 


Moran Cerf (30:54 - 30:58)

Meaning if the bubble of AI collapses, it's going to drag a lot of other industries.


Nancy Lashine (30:58 - 31:03)

You're talking about the stock market and all the dollars being poured into data centers, etc. Yeah.


Moran Cerf (31:03 - 32:14)

We still have Costco and Boeing, so it's not all of it. But so much in the U.S. is riding on the AI and technology companies that if something bad happens, if it collapses, it's going to drag us all down in different dimensions. Now, if you think about it, this is sometimes something that the companies themselves think about, which is, okay, we need to make money.


If we do something that we don't like as a kind of momentary decision, but it means the industry survives, maybe we should do it. I think that OpenAI's leadership right now is thinking about that. They say, we also don't like some of the things that are happening right now, but we also want to make sure that we make the economy work.


We'll make a choice that is good for business at the expense of our values, and in the end, we're going to change. There are all kinds of choices like that which are a little bit less flashy in the news.


To give them credit, they're not just thinking, okay, well, there's a business opportunity here, let's just take it. They say, okay, our job is to think of more than just whether it looks good or bad.


I think that's what OpenAI would say about what can easily look like, okay, they're capitalizing on the failure of the competitor and coming in. They're also thinking about that. There's something there.


I think that there's real thinking among the OpenAI and Anthropic people. I promised you two; let me give you the second one.


Nancy Lashine (32:15 - 32:21)

When you say that though, what's killing me is literally, they're letting the genie out of the bottle.


Moran Cerf (32:23 - 34:13)

I was going to say almost exactly that. They say something like, we're the only ones who understand it. And they wouldn't say it as simply as I'm saying it right now, but they say, we understand it and the Pentagon doesn't.


So we should be the ones to do it. And once we're on the inside, we'll kind of lead the way to define the uses of it. And both Dario Amodei from Anthropic said it in an interview a few days ago and Sam Altman from OpenAI said it.


So basically, in response to, who are you to decide on national security, you're an AI company, they said, yeah, but we understand AI so much better than anyone else that we can also see the bad things. You've got to trust us rather than trust yourselves; even though you're the security guys and we're the AI guys, in a way we understand how it can be a security risk better than you do.


That's hard to prove, but it's also a big statement, and an important one to at least consider if you're the person making the choice. I think it essentially ends up, and that's my last remark on that, as a political question. If you're on the right, you're more Pentagon-loving; if you're on the left, you're more Anthropic-loving, as a simple approach to it. I don't think that Anthropic is aligned with the Democrats on this, but it leads to a kind of clash over whether defense comes above anything else or not. And that is a moment that I think every company has had.


Apple had it a few years ago with James Comey trying to get into the phone. Microsoft had it. All companies have it.


It's always faster in AI; everything is a little bit faster in the world of AI. But this will be a case that, once it's resolved, and I could make a prediction on where and how, will set the stage for the next couple of months on how we think of AI in national defense.


Nancy Lashine (34:15 - 34:19)

Yeah. Well, let's hope that the right choice is made.


Moran Cerf (34:20 - 34:23)

You can see I'm an open book, so you can ask me anything.


Nancy Lashine (34:23 - 34:40)

I appreciate that. You might guess where I come out on all that. But so, taking the incredible importance of AI right now to the economy, to the equity markets, to where capital is flowing, are businesses, are investors overestimating or underestimating the importance of AI?


Moran Cerf (34:41 - 36:19)

There's this idea, I think it was Bill Gates, that we always overestimate how different the world is going to be in ten years and underestimate how different it will be in one year. I think the same is true for AI. So I don't think that in ten years we're going to be escorted by robots who make every decision for us and so on.


It's still going to be largely the same world. But within one year, we're seeing a lot of differences. I talk to a lot of companies and I see what projects are in the pipeline right now, and they're moving a lot of their core business to AI.


I also see the AI companies, which, as I told you, I criticize sometimes but also work with, thinking about businesses they can go into that are outside of AI. They really don't see themselves as AI companies, but as thinking companies, brain companies, if you want. And they say, and I'm making this up, there's no such tool, but there's no reason why OpenAI shouldn't be in the banking industry.


We know how to help people make decisions. We know how to manage money. Why should only Bank of America, Chase, and Citi be bankers, when they end up using us for a lot of the decision making anyway, because they forward their choices to us?


We can open a bank. It's going to be the OpenAI bank, the OpenAI construction company, the OpenAI grocery store. In that sense, I think the surprise that people are not seeing is that the AI companies don't see themselves as just answering a prompt and that's it.


They see it as: we do thinking, so anything that involves thinking is our business. They already went into LinkedIn's world. They're already starting to sniff around domains outside of just answering questions.


And I think this is the big thing for the next year.


Nancy Lashine (36:19 - 36:40)

So if you make that an analogy to say, Amazon, when Amazon first came out, if you were a bookstore, you were nervous, but you certainly weren't nervous if you were Microsoft at the time. So obviously, it evolved in unexpected ways. What businesses or what types of business should be nervous now that probably aren't thinking about it?


Moran Cerf (36:41 - 36:57)

Amazon, a few weeks ago, had this thing where they said an author cannot publish more than three books a day. That is Amazon's response to AI. To me, this is a moment where you can kind of see it.


Nancy Lashine (36:58 - 37:00)

That's a ridiculous idea, but okay.


Moran Cerf (37:03 - 39:49)

It makes sense if you start having AI-pipelined books, right? And that is where we are right now. Amazon has this business model that I would call the Amazon Basics model: if you create a shampoo and list it on Amazon as a product, at first your shampoo is at the top of the search.


And if at some point Amazon sees that this shampoo is selling a lot, they will create their own shampoo, very similar to yours, under the brand Amazon Basics. And they will make sure that when someone searches for shampoo, now yours and theirs both appear in the top results. Before long, people are going to move to the Amazon one that's cheaper and maybe ships faster.


I think this analogy is where the AI companies as a whole are. They say, for now, stage one, we're going to give any company in the world access to our AI models and help them. So if you're a bank and you need to give your customers services faster, cheaper, better, and you want to use AI from, say, Anthropic or OpenAI or xAI in the back, they want to give it to you.


But they also monitor all the queries. And they say, OK, Chase constantly asks questions about interest rates forecasting in Nicaragua. So it seems that they know something. I don't know, as OpenAI, what they are after. But I should also start investigating that myself, because maybe tomorrow I'm going to do the Amazon basic to their business. 


So I think AI companies are beginning to see what queries are coming a lot and they're opening businesses there.


Here's an example. OpenAI started as a classic AI company, but soon after, they created a search option to compete with Google. Why?


Because they saw that many people actually ask ChatGPT, in that case, questions about things that are in the search domain. Which restaurant should I have dinner at in New York? What's the best shampoo?


So they said, OK, instead of us taking the queries, asking Google, putting it in nice words, and sending it back, we can open our own thing, and out came search from OpenAI. When they saw that people were constantly asking the AI to help them write résumés and CVs so they could put them on LinkedIn, they said, you know what, we're going to create a service where you just upload your CV and we help you find jobs. And now they're competing directly with LinkedIn.


And they're now expanding this to a lot of things besides just answering a question. So I would say bankers, lawyers, HR, financial services specifically, not just bankers, are the easiest, fastest casualties. And the next thing is brick and mortar.


The same way Amazon, to use them as an example again, at some point bought Whole Foods as a way to have a brick and mortar store where they could do a lot of things that require physical presence, but also another business, I think the AI companies are already looking at other industries that are physical, where they can put their machines live so you can interact with them not through your laptop but in the real world.


Nancy Lashine (39:50 - 40:49)

When you talk about the AI companies, there's four or five, maybe seven. There aren't that many. They have huge capital expenditures in building these data centers right now.


There's lots of questions about circularity of their capital bases. And obviously, they've raised so much more capital than all their industrial counterparts. So are there disruptors outside of those companies that could come in and particularly say, obviously, we're in the real estate business, so we're interested in all the services related to real estate, but also just building.


What are the biggest problems in real estate? Affordable housing. We've not been able to solve the affordable housing problem.


We've not been able to build affordable housing at costs that make sense in urban areas. Are there companies, have you seen companies that maybe can jump in and take advantage of the technology to themselves?


Moran Cerf (40:49 - 41:09)

So I'll say, in the instance of real, brick and mortar stuff, things that are physical, the AI companies, if you look at their allocations, are moving not fully but somewhat out of technology alone. Right now mostly into the world of robotics, but it's really the world of hardware.


Nancy Lashine (41:09 - 41:11)

So what kind of hardware?


Moran Cerf (41:12 - 41:30)

Not just hardware, it's robots. Things that are beyond just software. So when you think about it, you mentioned a few: there's OpenAI, Anthropic, xAI, Microsoft, and there's Apple in a weird way.


Nancy Lashine (41:30 - 41:31)

Meta, sure. 


Moran Cerf (41:31 - 43:38)

Meta, those are the American ones; there's DeepSeek, there's Qwen, there are a few. But you're right, there are no more than 20 names that we know. We think of them as software companies.


They hire programmers and they write code and websites. That's what we think. They think of themselves differently.


If you go to their offices, they say, no, no, no, we're starting in this industry, but we're quickly moving on to products and devices. An example would be OpenAI, which partnered with and acquired the company io by Jony Ive, who used to be Apple's chief design officer and designed the iPhone and the like. They basically brought him in because they say, we're done with being an interface, an app on other people's computers.


We want to have our own device. We want to compete with Apple and Samsung on actually being a device. That's our future. We want to be physical. And that's one example. 

We have Elon Musk creating Optimus, the idea of a robot that will answer questions.


All of them are moving there; it's the most intuitive next step for them, but it's step two out of five steps to actually being physical. What they say is, we don't want to exist only on a computer. We want to be in the real world.


We want to have stores. We want to have cranes. We want to have whatever moves stuff and control stuff.


I was on a panel years ago where Sam Altman also was, in the early days of his tenure as CEO of the company. He was asked a question that seemed like a freebie. Someone from the room asked, are you happy and impressed by how OpenAI is doing? This was a year in.


And you think, okay, every CEO loves this question, he gets to brag. But he said, no, I'm disappointed. I'm very unhappy with where we are and I don't think we're in the right direction. And everyone says, what? That's the CEO of a company saying that he's not happy with the company.


And he said something that even back then I thought is clever and surprising. He said that when he was a kid and he was asked to imagine AI, he didn't imagine a chatbot on a computer. He imagined a real thing that you can talk to the way I talk to you.


And then at the end it opens its sleeve and wires come out. Like you can't really tell the difference between a human and a machine.


Nancy Lashine (43:38 - 43:46)

I hope that doesn't happen today, just to be clear, Moran, because I really like you. And I want you to feel soft and not wiry.


Moran Cerf (43:48 - 44:31)

So he said that. And for a second, I thought, yeah, you know what, we've all kind of converged to thinking, okay, what is AI? AI is a chatbot.


And that is what they think of as step one. They really think AI should be a little pet bear that a kid can hold and talk to. It should be a cow that helps make decisions.


It should be in the real world. And for that to happen, they started to venture into the world of robotics, which is the easy step two. But what they think about is more general.


How do we get into real estate, into physical spaces, transportation? Everything that is physical is the next evolution of AI. It's much harder than building software, so there are a few more steps, but that is why I'm saying they are really after a bigger industry.


Nancy Lashine (44:31 - 45:23)

Okay. So AI is after all of our businesses, all of our jobs, all of our industries. But if you think back to the last several waves of innovation, the government catches up eventually.


You get this wave of antitrust actions against monopolistic practices, and obviously it comes too late, but it still does impinge on things. So for everybody who's running a business, whether it's a small real estate investment management firm or a midsize construction company or a brokerage firm, should we just cede it now to these big companies?


Or how do those leaders evolve and take advantage of this technology so that they can continue to employ their people, serve their customers, and grow their market share?


Moran Cerf (45:24 - 48:55)

So first of all, I painted a scenario where we're all... I don't think so. I think all businesses will still exist, and they are going to evolve with it.


It's going to happen in steps that we can see. If you're in the real estate business, it's not that one day there's a new real estate company from OpenAI and that's it, you're out of business. You will see how they evolve.


You will see tools and so on, and you can influence that. Even though AI is moving faster than most things, it's still slow enough for us to see it coming. I don't think there's going to be a world where you won't have AI as part of your daily work in short order, but you will still have work; it's not that one day you're just out of business.


So that is kind of where it is. I do think that there are some industries that should be more cautious and earlier to respond. And if you want, what I tell a lot of the companies I work with is: you will never catch up, but neither will anyone else.


And in that sense... 


Nancy Lashine (46:25 - 46:26)

So for example? 


Moran Cerf (46:27 - 48:55)

So for example, the banking industry, I work with most of the big banks in the US and they all are terrified.


What if this happens? They mostly ask me, what are the other banks doing? And when they look at their surroundings, they see that everyone is moving relatively slowly, carefully, risk-averse, but at the right pace, as in, they're not making mistakes.


The risk of a mistake in the financial business, where you lose money or expose your clients, is so big that they're willing not to be first, not to be early, and still do it right. There are a handful of situations where you can actually talk to an AI agent about your bank account directly. You can talk to an AI and get your password reset.


You can get advice and so on. But right now there are few cases where you can really ask an agent to transfer money and it does. All the banks could do it.


We know how to build it, but they're not doing it right now because they say, we're not sure how it's going to work. We're not sure it's ready. We don't trust the AI companies to not use it against us.


So there's a lot of things. We'll wait. And waiting is a good strategy in this place.


I think the question you asked me in the beginning was about antitrust and regulation. We're probably on the same side of the world politically, you and I. So I'm more supportive of the government being involved in regulating and being part of it.


There is a challenge here, which is that it's moving too fast for them. So right now, unlike other industries, imagine you had a senator who said, I really want to regulate it, and I listened to the calls from the AI companies who say, regulate us.


They even say that. It's a bit hard, because you don't want to paint with too big a brush and just stop it. And it's moving so fast.


If you had created a law two years ago, you would have missed agents. Agents came a year ago, and you'd have a law that doesn't include agents. If you had created a law six months ago, it would not include what we now call AI research, which is AI kind of creating its own papers.


So there is a challenge here. I think that there are solutions of how you can respond to that. But I think that there's a challenge that is difficult.


And for now, to my surprise, the AI companies are actually doing a lot of self-regulation. You have to trust them to be on the good side of history. But doing a lot of that themselves is a way to help the government see how they think it should be done.


That's strange, surprising, not common, requires a lot of trust in them, but something that they do.


Nancy Lashine (48:56 - 49:02)

Do you think that this government, the U.S., can ever keep pace with regulating AI?


Moran Cerf (49:03 - 49:41)

In principle, I think there are things they can do, big strokes that can help, though not necessarily AI-specific. They can say something like, a company that has this much money and this many employees must disclose the data that it uses. That would be a type of regulation.


Even that won't really stop the AI, but it will send a message to the AI companies: we're trying, and we're doing something big enough that it requires you to at least think about the public interest and so on.


It would be a big move. And I think that in this particular case, they have to have the AI companies also be involved.


Nancy Lashine (49:42 - 49:48)

Like in Europe, companies have to disclose when they're storing cookies, whereas we don't have to do that here.


Moran Cerf (49:49 - 50:12)

So those seemingly small steps, they're not regulating, they're not controlling, but they're enough to, A, send a signal to the market that we're trying and that we're watching where we should put the chips. And B, they will force the AI companies to work with the government, which right now are two entities that seem like they're talking to each other but are really operating autonomously.


And I think it creates a lot of friction.


Nancy Lashine (50:13 - 50:25)

Yeah. Yeah. Gosh, it's changed so fast.


Before we leave your last comment about banking, how do you see crypto and Bitcoin playing into this whole AI transition?


Moran Cerf (50:26 - 52:03)

So crypto, you're opening a whole other can of worms right now. When it started, it was, first of all, a game for kids, and a good place for criminals to play.


I was very much involved in that, to be very honest. I was an early player, not as a person who buys and sells, but I have a student who was one of the founders of Ethereum. I have other students who kind of wrote papers.


I myself did research on it, so it's been in my life since 2012. I was in rooms with many, many colleagues of mine, economists, who said repeatedly, it's never going to materialize, and so on. And now they're saying, it's here to stay. It's no longer a joke among those people. It's here to stay.


So with that in mind, we have to respond. I think it's impossible to ignore, because the Trump family itself is now a player, and in that sense, the US is. Every company has their own allocation, and the IRS now requires you to report it. It's no longer a joke.


I think it's still a risky investment, and it's still very much a way to hide money rather than to be a player. But almost every company now needs to monitor it the way they monitor the S&P and the way they monitor the bond market. They now have to also monitor the Bitcoin price, even if it's very correlated with other markets.


It's at least, if you want, a way to see how the world is playing. I would say everyone also has to monitor the prediction markets. It's one more thing to monitor, even if just to get a sense of public sentiment.


Nancy Lashine (52:04 - 52:23)

Do you think that people outside the US, if you continue to have dollar devaluation and you continue to have this tension between the technology companies and governments, will be more comfortable trading in some of those cryptocurrencies?


Moran Cerf (52:23 - 52:30)

I think with the current administration, it's a good thing to do because you know that Trump himself likes it and he responds to it.


Nancy Lashine (52:31 - 52:35)

He's not going to be here forever, Moran. Three more years, count them.


Moran Cerf (52:35 - 53:11)

For the next couple of months, years, while it's one more way to funnel money into the US, and specifically into the hands and pockets of some people, it's something that everyone has to play. So that's one thing. Longer term, it's one more asset that I think you have to play.


I would say a minor one. It's still not a real thing; it's more of a secondary game that is useful, fun, a way to gauge public sentiment.


You can get a sense of what people want, especially young people and so on. But if I were in real estate, I wouldn't really say, okay, I'm now starting to trade in Bitcoin.


Nancy Lashine (53:12 - 53:34)

We're not doing tokens, tokenization of real estate. 


You're in the education business now, executive education business. How should those of us who are in the business of hiring people and building teams and interviewing people out of school and then figuring out how to train them, how should we think differently because of the impact of AI?


Moran Cerf (53:34 - 54:46)

So when I teach about AI, at the end of the class, there's time for people to ask me any question they want. So I say, now, that's it. Ask me anything.


There are a few questions that always come up, and one of them is, what should I teach my kids, or what should I learn? Or, if I'm going to college, what should I apply for, and so on? I'll give you the simple one-liner, and then we can expand.


I think the best skill right now is not a domain. It's not physics or biology. It's the ability to learn by yourself.


AI has afforded people the ability to learn by themselves. You can now send a person home and say, you want to work at this company? I want you to learn a lot about data centers, so I can quiz you on that in a week. And because of AI, or thanks to AI, and because of the internet, a person can do it.


A person can know nothing about data centers on Monday, be sent home, be given a week with a computer, and come back the Monday after to an interview with Nancy able to answer tough questions about data centers. It's all there. So those who are able to spend a week and become knowledgeable are the people that you want to hire next year.


And it could be data center, it could be physics. Go and learn quantum mechanics. Come back next week.


Nancy Lashine (54:46 - 55:22)

And then thinking about the real estate business and real estate investment management, because that's what we do every day, you train people to use Excel and how to build models and how to think about markets and how to think about assumptions in their models. AI is going to give you so much more information about all of that. So training people will be very different and the output that we expect will be very different.


How do we make sure that we haven't shortcut really important parts of somebody's understanding of what this business is about?


Moran Cerf (55:24 - 57:22)

So I think that we, as the employers in this case, have to quiz the incoming employee a lot more to confirm that they actually understood it and didn't just write it up. A year ago, if someone was sent home to write a report and came back with a 200-page report, you'd say, oh my God, they must have worked day and night, because 200 pages, the quantity, suggests 200 hours of writing. Now we know that 200 pages is one prompt, the same as 10 pages. So volume doesn't count anymore, but knowledge does.


So when a person comes to an interview in my lab as a PhD student, they don't have to submit code. They don't have to submit papers. They sit in front of me and I quiz them.


And now there's no generative AI. It's them. Sometimes they can say, I need to ask.


So I ask a question, and they say, I don't know the answer, let me ask.


They're allowed to ask the question of an AI and come back. But the point is, I'm going to ask you a question, give you 10 minutes to come back with an answer, and then ask you how you got to the answer, plus a follow-up, or, what if you did it the opposite way?


I want to know that you understand. And for that, you have to know how to learn. And AI allows you to get all the learning condensed in your preferred language.


You can ask it in a different language, literally, Portuguese versus English, but also in text versus in graphs. You can run it yourself and see how it plays out if you want, or you can say, run it for me. But at the end, I want to hire you.


So I want to know that you can use AI, but also that you understand me, speak my language, so to speak, and that you can take a question and think about it differently. In that sense, the skill is: it doesn't matter if you study biology, physics, philosophy, or the arts, I want you to be able to speak about these domains such that I understand that you understand them and can learn something new. And if you ask about academia, more and more I see that I don't need to teach the class with whiteboard material.


I need to say, these are the things I want you to know. You go pick it up, figure it out whatever way you want. You want to talk to me?


I'll talk to you. You want to go to a breakout room and come back knowing it? What I need to know is that you know it at the end of the day.


You figure out what's the best way to get that into your brain.


Nancy Lashine (57:23 - 57:44)

So much of alternative investing is picking a partner whom you never know what the future will bring, right? But who you've learned to trust and develop a relationship with, that they will do the right thing when the unexpected happens. Does that relationship change with the advent of AI?


Moran Cerf (57:45 - 1:01:11)

So we started talking about Kahneman and Tversky. I told you that people read different things into the story, and I look at it as a story about a relationship: a couple, two scientists who thought about the world differently but managed to find common ground, create the most remarkable work, get the Nobel Prize for it, and essentially shape economics.


That's the couple from The Undoing Project. Here's the backdrop for that. The world right now has created a new entity that you can interface with.


That's the AI. You get the questions and the answers in your preferred method, in the best version you could ask for, and I think that's not great. It removes the friction that is necessary in the world.


I have a good friend who's a couples therapist. She's a very public figure, so there are a lot of talks she has given and a lot of books she has written. And someone took everything that she did and created an Esther AI.


So you can essentially go to her and pay $400 an hour to get her advice, or ask the AI, and it's based on her thinking. I asked her how she feels about it. And she said there's one thing missing from the AI.


Yes, it will give the same answers that she will give. But she said, there's something about you having a problem in a relationship on Monday and not seeing it until Friday. And the AI will be there for you whenever you want it.


So there's something about not getting a therapist whenever you want one, because that's the real world. If you can get advice at 2am whenever you want it, you get used to the idea that whatever I need is there for me. And she said therapy isn't just the session where I teach you and give you the thoughts that I have.


It's you waiting for five days anxiously to see me on Friday and having to drive to see me and sit there and know that there's a finite time. And AI is taking this away. And in principle, it seems great.


It's just giving me every day, every time I want Esther. But that's not how the world is. The world sometimes requires the friction.


And I think that that is the issue. And to take it where you are, we see a gap right now between generations. So we see that Gen Zers are much more comfortable with AI as a companion, as a friend, getting advice, sharing intimate information with AI compared to people in their 30s and onward.


We see that there is an issue of trust. The older you get, you trust AI less. The younger you are, you trust AI more.


The nature of the questions people ask differs too. There was a study about a year ago on AI persuasion and negotiation. And what the study shows is that for some people, AI is a better negotiator. So when an AI negotiates on my behalf, it has a much better chance of convincing certain groups to buy or sell something.


So Gen Zers would actually make better negotiations if they negotiate with AI compared to a human. When it comes to persuasion, we see that AI can change people's minds more than humans. So if you're a Democrat and coming to argue with a Republican, essentially the chance of any of you changing your mind is pretty zero, like almost never happening.


But if I, a Democrat, argue with a Republican AI, the chances of it actually convincing me is a little higher. So there's something to that. All I have to say is that relationships are changing.


And the key driver here is age. We see that people, based on what age group they are, they respond differently to that.


Nancy Lashine (1:01:11 - 1:02:10)

Well, when I hear that, and I kind of think through the many things we've talked about in the last hour, and how brains can evolve, maybe not every day, but every decade. And how, if you think about going from Monday through Friday, as your couples therapist said, and the value of working things out yourself versus kids today who just live on their phone, and just, it's all back and forth, and there's not a lot of digestion in between. I mean, for example, I was trained, when you have a difficult decision, sleep on it, come back the next morning.


And I have young people who say, you didn't answer me. And I'm like, right. So it feels as though, if I kind of take what you're saying through to its natural conclusion, that AI will have more and more influence over us as we evolve, and the next generation comes into being.


So that seems to be the natural conclusion of...


Moran Cerf (1:02:11 - 1:02:33)

Yeah, and I think we still have some say. We're still in the driver's seat, so we can change it. So what I tell my students many times is, what you don't like, you can still change. So if you say, I would not want AI in decision making when it comes to warfare, which is the topical case right now, then this is the time when you can still influence that.


In two years, you cannot. In two years, it will be too late.


Nancy Lashine (1:02:36 - 1:03:06)

Well, I think everybody who's running a business, as I am, and so many of us are, are somewhat sobered by your comments, because it is very hard, as you say, to see, or as Bill Gates said, to see the change that is coming in the next year. When you think about the biggest disruptions that you're working on right now, can you tell us what's the one thing that has really surprised you that you're seeing in the last, I guess, we're talking not year, but months?


Moran Cerf (1:03:07 - 1:04:44)

So I would say in the world of AI, there are two big ones. One is very technical, but I'll say it anyway: we used to say that the US is advanced in AI, far beyond all the other countries. And specifically, China was considered far behind, and we shouldn't have any worry about them and so on.


I think the last four months have shown a flip. China isn't ahead, but they're definitely catching up and flooding the market with free AI models, so that companies, especially small companies, say, why should I pay $100,000 to Anthropic or to OpenAI? I can get it for free and it works. So that's one thing that's immediate and new.


It's starting, I would say, sometime in the last quarter. So I don't know how it's going to shape the world, but more and more companies are choosing that. There's national security questions, there's kind of economics questions and so on.


There's even kind of maybe warfare questions that are going to materialize in the long run. So that is one thing that's kind of immediate right now. And I think that the other one is re-skilling.


So I think the biggest question that a lot of companies are asking me right now is: I have a lot of employees that I hired in the last couple of years, should I spend the next year retraining them in-house, or should I start hiring people who are AI savvy? And I have a strong opinion in favor of re-skilling.


I think there's loyalty to the people you hired; you don't just go for new ones. You spend time investing in your employees. So I create tools to do that.


But it's a big question that people are kind of considering right now because it's clear that there's a gap here that you need to kind of close.


Nancy Lashine (1:04:46 - 1:04:53)

What do you think is irreducibly human about investment judgment that AI will never be able to replicate?


Moran Cerf (1:04:55 - 1:06:58)

So I mean, the highest-level answer is consciousness. And that's a vague answer, because who knows what consciousness is. I'll say that there are some material aspects of humans that are hard to replicate.


And it sounds like it's easy, but it's not. And one of them is a background story. So you started with me by saying you went to the military and you were a hacker and so on.


And even though I didn't elaborate on any of the stories, the audience has an image. Because they have an image associated with being a soldier, being in Israel, growing up in France, and so on, without you saying anything, you and I assume that everyone in the audience has an image similar to the one we have in mind. And AI doesn't have that.


AI hears the words French, Israeli, American, professor, and so on, and just uses them as labels: okay, now I know that my next answer should somehow interact with those things. There's something about memory and shared context that AI is lacking. And a sense of humor.


I can tell jokes. AI can learn all of our jokes, but they don't affect it as much. What I mean is that if you're sitting in a boardroom discussing something, and someone says something in a funny way, the chance of it being memorable, of actually moving people to act, is higher. AI doesn't have that.


So AI doesn't respond well to humor. And in that sense, why humans laugh and why the brain creates this kind of spontaneous response is an interesting question. But I think behind it lies the thing that it's very personal.


In my classroom, when I teach, and I see a student kind of drifting off, I learned that if I tell a joke, everyone's laughing, it wakes this one person who's drifting off because them hearing a joke and everyone laughing means to them that something happened that they missed. A joke is not just kind of, okay, knock, knock, who's there? It's really a code between humans that something interesting that is kind of categorical happened.


We should all rally behind this thing. So that is human. 


Nancy Lashine (1:06:59 - 1:07:09)

I love that. I love that. That's a great place to end. But I want to ask you one more, one last question.


Is there a book, a podcast, that you think that people listening really need to know about?


Moran Cerf (1:07:11 - 1:07:29)

Oh, I wasn't prepared for that. Let's see. If they really care about AI, there is a guy, Andrej Karpathy, who is like an ex-OpenAI programmer who left to essentially create educational material.


He's fantastic and he's technical. But if you say, I want to know who is a guru that I can trust, he is one person. 


Nancy Lashine (1:07:30 - 1:07:31)

How do you spell his name?


Moran Cerf (1:07:32 - 1:08:25)

Andrej, is it A-N-D-R-E-J I think? I'm pretty sure. And Karpathy, if you start putting the K-A-R-P, hopefully it's going to auto-complete to his full last name.


So that's for the technical people who say, okay, I want to know what is happening technically. If you want to know the business side of AI, more of the day-to-day, there's something called Last Week in AI. It's a podcast of two young, AI-savvy people who talk about where AI is going and what happened, and they try to do it every week.


It's called Last Week, but I think it comes out every two weeks or so. But they really break it down by saying, here are the things that happened in regulation, in legal, in technology, and so on. It's very useful.


If you're interested in musings about those things, between this podcast of yours, between any place that you can find professors at Columbia talking about what they're doing, you're good to go.


Nancy Lashine (1:08:27 - 1:08:42)

Always good to end with a shameless plug, Moran. I can't wait. I hope you will indulge us to come back in not too long a time, because I can't even wait to think about how things will be changing in the near future.


Moran Cerf (1:08:42 - 1:08:55)

I finish the AI program by saying this entire class comes with a one-week warranty, because everything I say in the class might not be right in a week. So it's the same for this conversation. By the time we air it, it might be that I come out as a fool.


I said that this will never happen. It's already kind of happened.


Nancy Lashine (1:08:56 - 1:09:42)

Oh my goodness. Well, I hope our producers are listening to that. We may have to bump the line here.


Moran, you're such a pleasure and really a treasure. Thank you so much for joining us today. Thank you.


I hope you enjoyed this episode of Real Estate Capital. Before you go, I have a quick favor to ask. We put a lot of thought and effort into this show and making sure we bring you insights from real estate leaders that you don't normally find in the mainstream media.


So if you're enjoying the show, please remember to follow it on your favorite podcasting app so you never miss an episode. We'd also love for you to share it with others or give us a review on Apple Podcasts so others can find us. Thanks again for tuning in.


For more information about our firm, please visit our website at parkmadisonpartners.com.