B. Earl, Corrales Cachola & Nolan Ether: Can A.I. Be Net Positive For Hollywood? – Storytelling, AI Data Sets, What Is Consciousness, Albert Einstein, Biological Systems, Trees As Community, Setting High Standards, Intelligence, How To Trust AI, Elon Musk, Ethics, What is IQ, Love, Favourite Books, Joseph Campbell And Much More…

"As a creative, as a writer, as whatever you are, the big thing comes down to 'do you have something to say and do we care, right?'"

"There is no culture, there's no community around AI, it's just how can I get mine, and how can I be better than the next guy?"

"It's not just about high intelligence; it's about the connections that were made and that process of communication between people that builds true meaning and value."

"What happens when ChatGPT evolves and people have, let's call it, a 140 IQ personal assistant whose entire existence is only to improve your life?"

Brought to you by Wripple, Marketing’s On-demand talent platform.

  • Get your company matched with vetted freelancers in real time.
  • If you’re talent, get qualified opportunities with top brands.
  • Now featuring AI, Web3 and blockchain talent across verticals.

Coca-Cola, Cox, Universal, AT&T and more. Join the growing number of companies partnering with Wripple to hire agency-experienced freelancers to help teams excel in the future of work.

AI and Hollywood: Quotes From The Show


B Earl:

“As a creative, as a writer, as whatever you are, you know, look, the big thing comes down to: do you have something to say and do we care, right?”

“I look at my 12-year-old, and I’m going, this is your world, it’s not mine.”

“We’re just so caught up in our own hubris and how awesome we are, and you have guys like Elon Musk who just say things to say things to disrupt them, and you’re just like, where are we right now as humans?”

“The one question we’re not getting into, and it’s called artificial intelligence, is: What is intelligence? What makes something intelligent?”

“We, as humans, have become the measure of intelligence. We’ve created these metrics, these numbers, and said, ‘This is what is smart, and then you’re all stupid.'”

“Nature has more for us to learn from than any of this other stuff. We think we’re brilliant because we created the internet, the atomic age.”

Corrales Cachola:

“There is no culture, there’s no community around AI, it’s just how can I get mine, and how can I be better than the next guy?”

“So AI comes along, and a lot of the decision-makers say, hey, we can basically replace you. Well, they’ve already replaced these people, they’re already using them, and I think that’s the unfair thing about AI.”

“That’s why I don’t like a lot of the AI implementation, even in the toolsets that we use, the SaaS tools that we see. It’s like, did I ask for this? No, no one asked for this, they just built it in, they baked it in.”

“When people talk about that, they forget the connections that were made to get to Einstein and then from Einstein to other people.”

“It’s not just about high intelligence; it’s about the connections that were made and that process of communication between people that builds true meaning and value.”

“Consciousness connects to love, community, and other things.”

Nolan Ether:

“I’m actually in the middle of it because it’s one of those podcasts where, you know, I listen to it like 15 minutes at a time because there’s so much in there that you want to digest.”

“What happens when ChatGPT evolves and people have, let’s call it, a 140 IQ personal assistant whose entire existence is only to improve your life?”

“I think there’s no doubt it’s potentially the most impactful technology of our lifetimes, or maybe in the history of humanity.”

“The way he defines it is if there’s something it’s like to be something, then it’s conscious.”

Who Are B Earl, Corrales and Nolan?


Ben Earl is a Marvel writer who has been working in the entertainment business for the past 20 years.

His work ranges from documentaries to graphic novels to feature films to community building. He currently has projects set up with Cartoon Network and Gaumont.

B Earl is also heavily involved in the blockchain space, using NFTs as storytelling nodes for both community and world-building. As well as Chiwawows (www.chiwawows.com), which is built on the Solana Blockchain, he has two other story-driven NFT projects in development called Realms of Avalon and Angryfoot.

His focus is to create multimedia experiences in entertainment using new and emerging technologies that engage communities in creative activity, and to tell stories that resonate.


Nolan Ether

Nolan is a brand and marketing leader, storyteller and content creator. He has worked with Sony, IBM, Sierra Club and more. He loves crafting authentic stories and immersive experiences that engage people and capture their imaginations.


Corrales Cachola

Corrales is Mr Culture and founder of Brand New Voices, an exploration community for emerging opportunities in Web3, Communities, Crypto, DAOs, NFTs, Blockchain, DeFi, Marketing, Diversity, and Social Impact. He has worked with Nike, Intel, the World Health Organization and many more.

“Corrales has been one of the greatest mentors I’ve had in web3. He is not just tremendous with marketing and storytelling, but also highly aware of being human, the importance of strategy that has its basis on higher purpose and inspiration. Brainstorming with him is definitely a flow sensation! Love working and sharing with him!” – Daniela Merlano

Please enjoy the show!

Never miss another trend. Receive expert insight direct to your inbox from your friends at the cutting edge of web3 and emerging tech.

The Web3 Titans On A.I. And Hollywood Transcript

Jeremy Gilbertson: To the disruptors and curious minds, welcome to another amazing episode of “Thinking on Paper.” My name is Jeremy Gilbertson. To my left, or depending on your perspective, your right, Mr. Mark Fielding is still on holiday with the kids. Mark, are we still bouncing around Europe? What’s happening?

Mark Fielding: Yeah, so, bouncing around Europe. I’m in England and very excited to be doing this.

Jeremy Gilbertson: Let’s jump right in. Today, we’ve been messing around with a couple of different formats with “Thinking on Paper.” At first, it was just Mark and I talking, and we quickly learned that we were not interesting enough to hold an audience for a long amount of time. So, we decided to bring on some really amazing guests that have been super kind to jump into our fun discussions. We’ve done some rabbit hole editions where we go a little bit deeper on certain topics. Today, we have what we’re calling the “Triple Threat Mystery Edition.” 

Our idea with this is to bring on some of your favorite previous guests and unpack a couple of things in more of a collective sense-making engine. Not only for Mark and I but hopefully for the people out there listening as well. Two things: one, we’re going to talk about some controversial opinions related to culture and emerging tech. Then, we’re also going to help each other answer our most baffling questions that our curiosity cannot let loose. How does that sound, Mark? Is that a good intro?

Mark Fielding: It sounds awesome. Let’s get into it.

Jeremy Gilbertson: So, the Willy Wonka meme that you posted in our pre-production thread is very apropos. The suspense is killing us.

Mark Fielding: It’s killing me. And as Willy Wonka said, “I hope it continues. I hope it lasts. Drag it out a few minutes longer.”

Jeremy Gilbertson: No, we’ve already dragged this out long enough. Let’s go. Come on. Alright, alright. So we’re gonna bring them one at a time. I won’t tell you who’s coming on particularly, but they might just randomly appear in our chat. Oh my gosh, there’s one of them. Nolan Ether, back in the house. Good friend of the show. Thanks for jumping into the mix. I’m going to bring in our other guests as we continue to go. Let’s see who’s popping in next. Oh my gosh, Ben, what’s happening? Good to see you again, sir. Happy morning time. Thanks for being here early.

Mark Fielding: The way you were bringing them in, I thought we were going to do it like Dallas.

Jeremy Gilbertson: Yeah, that’s it, right. Corrales, what’s happening, my guy?

Corrales Cachola: What’s up, what’s up.

Jeremy Gilbertson: You got that beautiful baritone voice kind of rocking and rolling with your mic setup, man. It always sounds good.

Corrales Cachola: Well, thank you. That’s because it’s 7:30 in the morning here. Same with B. Earl. I’m sure we got our coffees.

Jeremy Gilbertson: Yeah, that’s right. Let’s dive right in, guys. So again, I teed this up as this collective sense-making vehicle for not only our viewers but ourselves. And hopefully, you guys can kind of spin something out of it as well. But let’s start with the controversial thoughts. Every audience loves to listen to something that’s going to stir the pot, break things, all of that kind of stuff. 

Why don’t we, since we brought you on first, Nolan, hit us? There are no wrong answers to this, by the way, guys. Safe room, friendly room, all that fun stuff. But you know, hit us with a thought, Nolan, at the intersection of culture and emerging tech.

Nolan Ether: I know all you guys pretty well. Glad to be here with all my boys. So, I did craft some of these questions and thoughts specifically for the audience because I thought it might be controversial and would love to hear your opinions. I’m not fully convinced of my own ideas on this, so I’m just gonna throw it out there for us to debate. 

For the controversial one, I want to throw out that it’s possible that emerging tech, like generative AI and 3D avatars in real-time, you know, kind of 3D virtual beings and things like that, could actually wind up, after the strikes and after all these things with evolution in Hollywood and media production, being a net positive for creators, writers, actors, people in the creative field. Curious to hear what you guys think about that.

Jeremy Gilbertson: So, let me make sure I understand the thought the right way. So, the actual net result of what we’re seeing on technology’s influence in storytelling in Hollywood, and the strike, and all of that, will the outcome of all of this be a net positive to the creators?

Nolan Ether: Yeah, I think that if Hollywood long term stays kind of the way that it’s always been, then they can take advantage, they can cut costs, they can cut creatives out. But if it becomes more about democratization, about the ability for people who maybe haven’t had the opportunity or the ability to publish their own content or get distribution, or do things without living in Hollywood, some of these tools mean the next generation of IP or big media could come from three people in this room working together on something.

Mark Fielding: I think that is a nice cold-shower wake-up for B. Earl to go into that one.

Jeremy Gilbertson: Yeah, that’s why I’m asking the question. And I know Corrales probably has some thoughts too.

Ben Earl: Yeah, I mean, I can jump in on that. I agree with reservations. And I think the interesting thing is, what do we define content as? What do we define as meaningful, evergreen stuff? What do we care about? What’s gonna matter in 100 years? You look at the books that we’re still reading and being taught, and even the films. The film industry is less than 100 years old, really. And you look at Marvel, it’s less than 100 years old. You look at the gaming industry, it’s less than 100 years old. These are really young, baby industries for the most part. Obviously, books are the oldest. The written form is the oldest, and that’s been around for several hundred years where it’s been mass consumable. 

So, I think you have technologies that allow for mass consumption as well as opportunities for people to be creative. I look at my own career where I was in school for English. I wanted to make film. They didn’t have that, but I was able to take some of my money that was set aside for school, which my grandfather so graciously left me, and I was able to buy a camera and buy an editing system. This was when Final Cut One first came out. So, my career is very much predicated and based on the fact that I was able to purchase that technology and go off and make a documentary on comic books when I was 20 years old.

That was really an amazing calling card that got me into the comic book business. I was always loving comics, and here I am interviewing Neil Gaiman and Frank Miller and all the people I grew up with, John Byrne, and being a fan but also being able to come at it from a different direction and say, “I’m here to tell your stories with our team.” And the director was a good friend of mine. So, I mean, you know, I just look at that, right? 

In the past 20, 30 years, these new technologies have been allowing us to tell stories that otherwise weren’t really able to be told. We were able to tell the story of comic books, and then footage I shot of Frank Miller, PBS actually had to use, because they couldn’t get him on camera. They requested, “Hey, can we get that interview you guys shot of Frank, because we need some quotes from him.” So, I mean, that to me is like, we were doing this way back when, right? And you know, I’ve always been sort of tech-adjacent, being a film editor. And when I say film, I use that word loosely, because I’ve been working in non-linear editing systems since I was 20 years old, working with codecs and compression and all that, and in that really early version of Web 1, where I was working in the video production arm of a consulting company for Johnson & Johnson, doing compression for doctor streaming. So, you know, it’s all about what are you using it for and how is it going to be used?

Then, ultimately, to your point, it’s giving people opportunities that they otherwise wouldn’t have had. And I think the question is, what do you do with that opportunity? What do you do with this new technology? What do you do with these cameras? 

I’m also a musician. I guess people say that they’re a musician when they’re making money making music, I guess that’s like a writer. But, you know, correct me if I’m wrong, if you can play an instrument, if you play music, you’re a musician. I host a jam at the Rainbow. But for me, music was such a big part of all of it. And you know, even making electronic music and using things like ReBirth back in the day, and burning CDs on my computer and handing them out to friends in college, being like, “Yo, check out this cool dance track I made.”

And again, that’s technology that’s allowing us, instead of having to have the 808s and the SP 1500s and things like that, you now have these technologies literally on your phone. I have iMachine on my phone. I can bring it home, and I can use this micro to make beats. I mean, I look at all this as like, it’s a large sort of interwoven piece of opportunity for people to make creative cool shit. 

That’s the bottom line. And I think, you know, I’ve been working with AI, with large language models; we’ve been OpenAI developers for the past two years. We’ve been working very closely with it, been seeing how it’s all been changing, and the speed when ChatGPT came out. So, you know, I think there’s a lot of reservations here in Hollywood. I mean, I’ve been working in Hollywood for 20 plus years as well. And I think there’s a lot of fear of new technologies sort of shifting things. But at the end of the day, you know, writers, creators, I think there are people that are true artists that have visions and want to tell those stories. And then there are people that are good enough to copy, and they’ll only stay there.

And I think that’s where AI and things like that are going to start replacing people: where you are just the best version of the sort of amalgamation of everything else. And if you’re not rising above that, that’s where you’re going to get replaced. So, you know, cut your teeth, get your craft together. It’s cool. Like, honestly, as a writer, you know, I still look at guys like Neil Gaiman and Grant Morrison and Alan Moore as sort of my North Stars, even when I’m doing my writing.

You know, I’ve got this Daredevil series out right now, and I look at that, and I compare it back to what I was writing even three years ago, and I go, “Wow.” Like, I’ve been finding my voice. And I think we find our voice as we develop and grow. And new technologies and things like that can help you expedite your confidence and ability to find that voice, because a lot of times you don’t have that support network around you.

You don’t have those great editors. You don’t have someone like Taboo going, “Dude, you can do it. I believe in you, man. Like, you’re awesome.” And you know, and like, those kinds of voices, we need those communities. We need people that we can go, “Yeah, you’ve done it. I believe. I, yeah, why can’t I? Why am I not?” And I think those are new ways technologies can also connect us and allow us to have that confidence.

Jeremy Gilbertson: I think it’s great. I loved your point on the idea of, okay, so the general public will largely be able to produce some sort of story-based content at a relatively reasonable level, eventually, right? So it’s like, “Hey, creators, time to tighten it up a little bit. Let’s get even a little bit better to figure out how to work through that piece.” 

Another bit of dust I want to sprinkle on this before we throw it to Corrales: if you guys have ever read any of Tim Wu’s work, I can’t actually remember the name of the book, but he’s the guy that coined the phrase “net neutrality.” It’s this one book he wrote, I’ll have to dig it up while you guys are talking, about open and closed systems, how every technology oscillates between open and closed phases. The film industry did it for a while. Edison basically controlled all the tech, controlled all the content, and then that went a little bit open. Theaters were the same way. The Fox Theater and the distribution were controlled, and then it kind of got open. I think it’s really interesting that we keep seeing these cycles. He’s not wrong. It’s a cyclical pattern. It’s just like night and day. It’s just what happens. But Corrales, give us your thoughts.


Mark Fielding: Can I just pick up on that last bit before Corrales goes, about finding your voice? I’m thinking about making it possible for anybody to create. A lot of the Web3 platforms are giving that opportunity now, and a lot of the people who didn’t create are now creating there. And there’s a big discussion between using AI and it becoming a crutch, or not using AI, and nobody’s really saying that AI can help you find your voice. There’s a lot of people saying that it can weaken your voice because you rely on it so much, but it’s interesting to hear B saying that it can be used to quickly find your voice, or to help you find your voice. I find that very interesting.

B. Earl: I think it goes back to confidence, right? As a creative, as a writer, as whatever you are, you know, look, the big thing comes down to: do you have something to say and do we care, right? Like, there’s so much YouTube stuff out there. At the end of the day, it’s memetic, it has its moment, my son watches it, he has a laugh, and then okay, moving on, right? But for him, right now, basketball is everything, and anything he cares about, he will follow anyone that’s into that sport, whatever. So it’s like someone making that sort of content, someone researching that sort of stuff, someone really getting deep into the weeds of it, and using, let’s say, OpenAI or Llama or whatever large language model to really sort of put together that content. And someone that’s maybe like my son who loves basketball, what if he’s like, well, you know, let me use this to help do my research and make sure my research is right, and things like that.

So you start getting confidence now. The question is, has it been trained on good data? That’s a whole other conversation. You know, is the information you’re getting good? Is it helping you create good copy?

Another question too: is it helping you format things? I mean, you know, I’ve got it here, as a writer, you’ve got your Strunk and White, right? Am I in the right tense? You know, we have our little crutches, right? We have our books, we have our different things. If we’re using technology to basically bring that all together in one place and be able to have a conversation and engagement with those pieces of technology, you know, this is technology right here, this book, right?

It was written by someone and printed on something. Well, I need that information: how quickly can I get it, and make sure it’s giving me the confidence to know that I’m doing something correctly and getting the voice and vision that I need out? I think that’s where it can really be good, and I think that’s going to come down to training on the right data sets and, to that point, more enclosed systems, you know, a closed system of correct data sets that ultimately can really help creatives find that confidence and voice, to know what they’re putting out is within the norms and confines of what we are used to reading. And then once you’ve learned the rules, then you go Picasso, then you go break it, then you go figure out how to make it your thing and do some crazy weird shit, you know?

But learn the structure, learn the stuff that they teach you in school first, you know, and if you can’t afford to go to school, then this is where the internet and this is where new technologies give you those opportunities to start learning that kind of stuff that gives you that confidence to go and say, okay, I have something to say now. I’m going to go out and say it, and I know I’m going to do it in a way that’s going to fit what the parameters are that people are used to, and then I’ll be able to figure out my own way, and that’s where I can find my voice.

Jeremy Gilbertson: Corrales, hit us with Nolan’s hot take. Are we going to be better off eventually? Are creators gonna be better off eventually after this Tim Wu cycle?

Corrales Cachola: Yeah, it’s a good one. I think it’s interesting just hearing all the perspectives, and, you know, especially with B here in the mix. In keeping with the topic of the show, I’ll say I think AI is horrible, and I like it, I love it. I mean, I just wrote a post about it. What I mean by that is I think it’s horrible in the way it’s being implemented in Hollywood.

What we’re seeing, I think, is a bellwether case that is gonna, this is what the future is gonna hold for all industries, and that’s because it’s being used, it’s being weaponized against people. It’s a tool, it’s a cold tool for economics and for quick hit, quick wins. There is no culture, there’s no community around AI, it’s just how can I get mine, and how can I be better than the next guy? And that’s why I think Web3, blockchain, and an ethos of community, which AI simply doesn’t have right now, is going to benefit, aside from the technical implementations of putting checks and balances on AI. 

But that said, what I’ll bring into the conversation about Hollywood: I have lots of cousins who are in Hollywood, studio heads and so forth, I’ll just say they’re in the industry. But one thing that troubles me about this situation, and I think it’s going to come into all industries eventually, very soon, is the way AI is being used in Hollywood. A lot of these writers are underpaid to begin with. With a lot of streaming and that kind of stuff, these creators were making pennies on the dollar in a lot of ways, and not getting credit, and a lot of it falls along racial lines, a lot of it falls along gender lines, that kind of thing. So AI comes along, and a lot of the decision-makers say, hey, we can basically replace you.

Well, they’ve already replaced these people, they’re already using them, and I think that’s the unfair thing about AI. And now it’s like, oh well, you have to learn all the AI stuff, man, you’ve got to get on the bandwagon. I think that’s the big problem with AI building into legacy systems so quickly without thinking, and that’s for lack of vision, that’s for lack of culture, and that’s my big problem with AI. I love AI, I think it’s crucial for Web3, I think it’s crucial for blockchain, but eventually, I think we’re going to see some very, very bad situations across all industries. And the Hollywood thing, where people are rising up right now and protesting the situation, I don’t think they’re going against AI. I think what they’re going against is the unfair treatment that they were already faced with, and now AI is this easy thing to say, well, you know what, you guys are just behind the times, come on. So that’s why I don’t like a lot of the AI implementation, even in the toolsets that we use, the SaaS tools that we see. It’s like, did I ask for this? No, no one asked for this, they just built it in, they baked it in, and everyone is going crazy about it, like it’s the next revolution, look what Adobe did, Liquid did, you know. And being in Web3, a lot of us know we don’t get that enjoyment.

But now I look at it, I’m like, I don’t want that enjoyment. In fact, I think that’s the reason a lot of blockchain is here: to pull back on that stuff and say, wait a minute, there needs to be consensus about this, there need to be decisions made. You guys sitting there, decision-makers at Adobe, implementing AI into everything, that’s not the way this is going to work. And yet, if we continue to think like that, I think there are going to be very bad consequences.

Jeremy Gilbertson: What about this idea of, you know, pulling back the thread a little bit, getting back to a first-principles approach on this stuff? Ben, you referenced a book. I’ve got William Zinsser’s book that I go to all the time to figure out if I’m saying it in digestible frameworks and all of that. But here’s the thing: I trust the book, I trust the guy that wrote the book, because he’s proven himself to me as a resource over time. But with AI, this latest tech, there’s no trust validator, and humans love shortcuts. So how do we balance this trust validation with our desire to optimize every freaking thing in our lives?

B. Earl: That’s a good question, because, yeah, you know, we’ve grown up on systems like books, right? You have a series of gatekeepers, right? You have the publisher, you have editors. I mean, it’s like anything, right? Do you trust that brand? Do you trust that they’re going to give you something right? If you’re buying yogurt, do you go to Dannon or do you go to Brand X? You know, those are the questions that come back to you.

And maybe they’re both exactly the same, and maybe they’re both great, you know, but it always sort of comes down to how you are presenting the material, the information. And I think, for us, the internet is great, but at the same time you need great gatekeepers, like on Wikipedia, making sure that all that information is up to date. You know, there’s that one guy who literally lives on Wikipedia updating everything, making sure it’s all right. Those people, those are people that are dedicated to the craft, that are dedicated to information, that are dedicated to human integrity.

And I think we have yet to see that, because of how these large language models are being trained on so much data, and ultimately just trained to scale right now. It’s like this arms race of information, and I think history is going to look back on this and go, this was a different sort of arms race. You know, we were watching these big tech companies coming in and trying to win with the information. We’ve been in this information age, and we’ve kind of come to this flash point where information can be so quickly cultivated, created, resourced, pulled in, and you’re looking and going, holy shit, we’re using these neural networks to do it, and wow, like this is amazing.

And it’s like we’re just so caught up in our own hubris and how awesome we are, and you have guys like Elon Musk who just say things to say things to disrupt them, and you’re just like, where are we right now as humans? Like, what are we, what are we going towards? What’s our there? Are we really looking and saying, are we trying to go to the singularity? 

Do we just want to get out of these meat machines and put ourselves into a consciousness that exists throughout everything in the universe? It’s like, is that where we’re going? And are we trying to get there in 3,000 years? Or are we just trying to actually experience and enjoy life? And these are just experienced machines, and they’re not machines. I mean, I think, you know, the human brain is so much more fascinating than we even give it, you know, real reverence. And we think that we are gods, and we can replicate that, and I think that’s really very sort of arrogant to think that, you know, the brain that has been organically grown for some crazy amount of time, and who knows, maybe we can get into Ancient Aliens, and maybe there was, you know, the psychedelic side of it, or maybe some aliens came down and tampered with us to make us get that leap, who knows, we go into fiction, or maybe it’s real. 

But I think that’s the question: why are we doing it, and where are we going? Because if we’re not asking those questions right now, and we’re just doing it because we can, well, that’s stupid, you know. Like, how do we do it and say, yeah, is it making us better? Is it going to make our kids better? Because, I forget who said the quote, but it’s like we’re leasing this planet right now, especially us that are in that age of 30 to 50, you know, especially the Gen Xers. A friend of mine was telling me, he’s also born in 1980, like we’re these weird cusp kids, you know, we live between these worlds, we’re not really millennial, and we’re not really Gen X. And I think those of us in this sort of mix, in the Gen X generation, really do have sort of an obligation to help kind of craft where we’re going, but we’re handing it off. I mean, I look at my 12-year-old, and I’m going, this is your world, it’s not mine. Like, at this point, yeah, there’s things I want to do, there’s things I’m still looking to keep doing, but you’re the one who’s about to inherit it in the next, you know, eight to ten years.

Corrales Cachola: The point is, you know, what disturbs me about a lot of this is: what’s the first thing that happens when OpenAI and ChatGPT start exploding, and Bard comes out? They quietly eliminate all of their AI ethics teams. That’s telling. Microsoft did it, Google did it, and there’s no blowback. And Elon, I mean, as brilliant as he is, if anyone thinks Elon Musk has any ethics, they’re barking up the wrong tree. This guy might be a genius, but ethically, he’s not. So no one expects any ethics coming out of X or whatever he’s doing, right? It’s like, to B. Earl’s point, where are we going with this? Is it just to build the fastest engine, the biggest rocket ship? Okay, I guess that’s what’s happening. But yeah, you see this weird psychology of people literally fearing, and this is where I think we need to pay attention, people are fearing for their future. And this is not just relegated to quote-unquote low-end workers anymore. We’re talking doctors, we’re talking attorneys, not that they’re any more special than anyone else, but the point is that everyone is just sort of passing it off and going, let’s just see what happens. I don’t know where this is going. We have no control, really. We have no control over anything. It just goes, and you just ride on that rocket ship. What if it explodes? Well, oh well, you know, I don’t know.

Jeremy Gilbertson: I agree with you, Corrales. Where my head goes with this, and Nolan, I’d love your thoughts, is what we see with tech and innovation. It’s like, hey, get out of the way, we’re building stuff, we’re changing the world, we’ll figure it out later. We’re building, we’re building, and money gets thrown at people who build stuff. Nolan, how do you see us questioning that? Only a few people are even saying it.

Mark Fielding: “Only a few people are saying that. Most people are just jumping on the bandwagon and going, ‘Okay, they’re changing the world. I’m going with them because I don’t have any ideas of my own.'”

Nolan Ether: “I’m actually in the middle of it, because it’s one of those podcasts where I listen fifteen minutes at a time; there’s so much in there to digest. It’s a conversation between Marc Andreessen from Andreessen Horowitz and Lex Fridman, talking about AI. Those conversations are always great; Lex is a great interviewer. They talk about the gatekeepers. Marc sounds like a big believer in AI overall and in its potential for people. Information and intellect are what make people better and stronger: on every measure across humanity, if you’re more intelligent, you’re better equipped to handle issues. So, what happens when ChatGPT evolves and people have, let’s call it, a 140 IQ personal assistant whose entire existence is only to improve your life? They are your biggest cheerleader. They’re never going to tell you no. They’re going to be infinitely patient, answer any questions you have, and teach you things. If someone’s at 140 IQ already, then you’ve got a partner. If someone’s below 140 IQ, you’ve got someone potentially more intelligent than you to balance things off. And if you’re above 140 IQ, you have someone to hand off menial tasks to. But then the question becomes: who’s the gatekeeper that built that 140 IQ person? Do they share the same values as you? Deep in that large language model, are they going to influence you in some way? Who’s deciding all of those things? That’s where we get into regulation and the conversations about how we make sure this isn’t going to destroy humanity and is only going to help humanity. I think there’s no doubt it’s potentially the most impactful technology of our lifetimes, maybe in the history of humanity. But that could go both ways. If we get a 140 IQ assistant and, let’s say, three years from now it’s 180, well, that’s better than Einstein. So, if everybody has a personal assistant that’s more intelligent than Albert Einstein, what does that mean for humanity? It could be good or bad.”

Corrales Cachola: “When people talk about that, they forget the connections that were made to get to Einstein and then from Einstein to other people. It’s not just about high intelligence; it’s about the connections that were made and that process of communication between people that builds true meaning and value. That’s the secret. That’s the sauce. That’s the good stuff.”

Nolan Ether: “I think a lot of the guys in this room and many people in our community probably share a lot of the same values in some ways. I’ve had really deep conversations with many of you. But what happens if you don’t? What happens if the person that built that particular model doesn’t share those values? And even if they do, even if we all think our values are the right ones, there are people out there who disagree with us. They don’t want their large language models built on our values either. So, what happens?”

Jeremy Gilbertson: “Here’s an interesting thought. Ben, earlier you described the accessibility of the technology that we can now use to create. You got early access to cameras and editing suites. We talked about how we could do all these great things on our phone, like make a hit song. But just because I have access to that doesn’t necessarily mean I have the patience to learn, understand, and figure out how to interact with it. We’ve all been in rooms with wicked smart people, and in some of those conversations, I’m like, ‘Oh my God, I don’t even know where to begin the translation between where I’m at and where this particular individual is at.’ Ben, here’s my question: It sounds like we’re all going to have droids like R2-D2 and C-3PO as co-pilots for our existence, through our world and its handoff to our kids. If it was like a Tinder or dating app, what would you set as your requirements for your R2-D2 or C-3PO?”

B.Earl: “Well, here’s the thing. The one question we’re not getting into, and it’s called artificial intelligence, is: What is intelligence? What makes something intelligent? Then there’s consciousness, and I think we’re not really touching on the bigger idea of consciousness. Is a tree conscious? If I could tap a tree and ask, ‘Tell me all the stuff you’ve seen in the past 300 years,’ that’s a communication thing. Obviously, I can’t talk to a tree because I don’t have the same language. But a tree can talk to the forest through its roots. Those are the kinds of things where I start thinking about communication and consciousness. We, as humans, have become the measure of intelligence. We’ve created these metrics, these numbers, and said, ‘This is what is smart, and then you’re all stupid.’ That’s created problems, especially with what we’ve called special needs. Kids get put into remedial programs because they’re not ‘smart enough.’ But what if that kid is brilliant at art? What if a teacher can’t recognize that? That’s where the problem starts. We call these people experts, geniuses. Is Elon Musk a genius because he made a lot of money? Or because he got a lot of smart people around him to do amazing work that made him a lot of money? We then go into the metrics of finance. What makes someone a genius? Why is Elon Musk a genius? These are the bigger questions. Einstein saw things differently. A large language model is trained on everything that’s already existed. It hasn’t seen what’s coming. It can be predictive, but someone like Einstein saw things differently. There are people who are book smart, but how they use that information, that’s what makes them intelligent. And I guess that goes back to the question.”

Corrales Cachola: “Yeah, and we should not discount, kind of a nod to the movie Interstellar, things that we don’t know, like love. AI does not have it.”

Mark Fielding: “Does creativity come from intelligence or consciousness? You asked, ‘What’s intelligence? What’s consciousness?’ If you look at some animals in the animal kingdom, they’re creative. There’s no question they’re creative, but they don’t show consciousness. Does the creative act come from intelligence rather than just consciousness?”

B.Earl: “This is a really big question. I think everything is consciousness. Religions like Islam believe everything is conscious. Indigenous cultures believe things have consciousness. Myths say rocks have consciousness. The question of what consciousness is, is something we’re not even touching on. We just keep talking about intelligence. Consciousness connects to love, community, and other things. Trees have community. The forest is a community. The forest is the first worldwide web in its own right, with mushrooms and other things speaking to each other. That’s fascinating. Nature has more for us to learn from than any of this other stuff. We’re so focused on these little pockets. We think we’re brilliant because we created the internet, the atomic age. But in a thousand years, will we look back and think we were childish? What if in a thousand years we could resurrect consciousnesses? What if my consciousness exists in a thousand years in some computer somewhere because they liked what I said? They’ve created a version of me that’s living out there, based on this interview, this podcast we’re having. They’ve culled this information and said, ‘We like what these guys were saying. We’re going to recreate them and have them run iterations of this podcast thousands of years over and over.'”

Mark Fielding: “I like your optimism that we’re still here in a thousand years.”

Jeremy Gilbertson: “Ben, you’re hitting on a couple of really awesome things. The whole mycelium network, trees being able to talk, I’m such a nerd about that stuff. Trees can sense other trees who need help and send nutrients that way. It’s proven. Theoretical physicist Michio Kaku, if you’ve never read any of his stuff, talks about beaming consciousness across space-time. That’s old news. But isn’t everything we do as humans an effort to control and exert influence on our environment?”

Nolan Ether: “Let me just say, I’m so glad that this somehow pivoted to consciousness. That was going to be my other cerebral question. Mark, you’re talking about some animals not having consciousness. I’m a big fan of Sam Harris. He spends a lot of time thinking about consciousness. The way he defines it is if there’s something it’s like to be something, then it’s conscious. Consciousness might look different for a spider than for a squirrel or a person. But if for that spider, it’s like something to be a spider, then that being is conscious. Jeremy, can you rephrase your question?”

Jeremy Gilbertson: “I was thinking about wrapping it back up into the storytelling world. We have mental models of what we believe the world to be. How much of what we do as humans is related to that?”

Nolan Ether: “I think you’re trying to control your world. I agree with that. I’d just replace the word ‘environment’ with ‘consciousness.’ The only thing that is real is consciousness. The only thing that is real for you is your consciousness. Everything you’re doing is to impact your experience of the world in some way. You’re trying to impact your own experience and consciousness. I don’t know about ‘environment’ because that’s defined differently. But you’re ultimately trying to impact your experience.”

Jeremy Gilbertson: So, we’re getting close to time, and man, I’d love to keep riffing on all this stuff, but why don’t we end with this and go right around the horn. We’ve talked about a lot of deep stuff, right? Consciousness, the human tendency to build, tech and ethics, all of that. Let’s flip around the horn and give the audience one person, one writer, one thought leader, someone you trust as your go-to to help build your understanding of what’s happening right now at this intersection of tech and culture. Ben, you first.

B Earl: Well, I mean, I’m just going to go with Stephen West. I care less about tech because tech is constantly changing. You can read all the newsletters you want every day, but if you don’t have a lens to look at where we came from, to really understand the philosophy of mind and consciousness, you miss how much has been explored over the past several thousand years of human understanding, intellect, and creativity. I love Stephen West. He’s got a great podcast called “Philosophize This!”. Over the pandemic, I literally listened to every episode and then went back and listened to them all again. Now he’s putting out new content more frequently. He takes a lot of great information from philosophers. His last episode was on AI: he spoke about ChatGPT, he spoke about Noam Chomsky. He looks at it from a holistic point of view and explores it through the lens of philosophy. I recommend his podcast to everyone. Obviously, there are others like Sam Harris, but I think Stephen does a great job. It’s funny, bite-sized, and you get a deep dive into amazing topics. Then I go out and buy all the books he’s talking about to go deeper. That’s my choice.

Jeremy Gilbertson: Awesome, awesome. That’s great. Corrales, what about you?

Corrales Cachola: In keeping with the philosophy of brand new voices, I will say it’s us. It’s you and me. You don’t learn web3, you are it, you manifest it. I believe more in energy levels. There’s a connection with neuroscience, the stories we tell ourselves, the pathways in our brains. You will things into existence rather than just passively learning them. I just read a piece on that. The universe itself may very well be that. If anyone’s watched the movie “Everything Everywhere All at Once”, this is in line with that. You are the multiverse, the creator of it, through your family, heritage, culture. I talk about heritage and culture being connected to these things, things that we typically can’t explain but that are the creators of all of this, and perhaps the universe itself and beyond. So, go deeper in rather than always trying to learn outwardly.

Jeremy Gilbertson: Got it, got it. Before we go over to Nolan for your thoughts, I’ll tack on. We talked about things like everything existing as potential in the many-worlds interpretation. So, if no one has read anything from Richard Feynman, I would highly recommend him as a source for understanding quantum mechanics. Nolan?

Nolan Ether: I love what everybody said here. I don’t have a specific person, but I find that if you start thinking about the world and consciousness in terms of energy, go back to ancient wisdom. Strip away the political connections and you find it’s all the same stuff we’re talking about. I find myself going back to ancient wisdoms and religious texts. Look at it not from a secular or modern religious point of view but from what they were trying to get across. We’re all telling the same story. We’re all energy, we’re all the universe. God is us, we are God, we create our own reality, and consciousness is all there is.

Jeremy Gilbertson: I love it, guys. We went deep today, and I wish…

Mark Fielding: What about me? I want to go. I’ve been thinking about Einstein and his IQ. He didn’t get a 140 IQ just like that. Intelligence is a journey. If you’re just given an IQ of 140, what’s missing from that? I know you’re familiar with Joseph Campbell. I’m reading “The Power of Myth”. It ties in with what Nolan was saying: we are all essentially the same. I’m using AI to create a new framework, taking Joseph Campbell’s Hero’s Journey, Pixar’s 22 rules of storytelling, and Blake Snyder’s “Save the Cat” to create a new framework for ethical, historical, mystical storytelling. I know you like frameworks, Jeremy, so that’s for you. So, Joseph Campbell.

Nolan Ether: We need to talk, Mark, because I’m working on a similar project.

Mark Fielding: Wow, okay, definitely.

Corrales Cachola: Can you send me the prompt that you’re using for that? I want to do the same thing.

Mark Fielding: I don’t really use prompts, I like to converse.

B Earl: I think there’s a whole other podcast in this. I’m actually working with the Campbell Institute.

Corrales Cachola: I want to see Mark plugged into the AI.

Mark Fielding: Can I just say that R2-D2, C-3PO… Luke Skywalker turned it off?


Please share our show with your friends and colleagues.

This is where disruptors and curious minds connect

Improve your thinking together with the TOP community. Join the exclusive Book Club!