The Jordan B. Peterson Podcast XX
[0] Hello everyone.
[1] Today I'm speaking with entrepreneur, scientist, and artificial intelligence researcher Brian Roemmele.
[2] We discuss language models, the science behind understanding, tuning language models to an individual's contextual experience, the human bandwidth limitation, localized and private AI, and ultimately where all of this insane progress on the technological front might be heading.
[3] So, Brian, thanks for agreeing to talk to me today.
[4] I've been following you on Twitter.
[5] I don't remember how I came across your work, but I've been very interested in reading your threads, and you seem to be au courant, so to speak, with the latest developments on the AI front.
[6] And I've been particularly fascinated about the developments in AI for two reasons.
[7] My brother-in-law, Jim Keller, is a very well-known chip designer, and he's building a chip optimized for AI learning.
[8] And we've talked a fair bit about that, and I've talked to him on my YouTube channel about the perils and promises of AI, let's say.
[9] And then I've been very fascinated by ChatGPT.
[10] I know I'm not alone in that.
[11] I've been using it most recently as a digital assistant.
[12] And I got a couple of questions to ask you about that.
[13] So here's some of the things that I've found out about ChatGPT, and maybe we can go into the technology a little bit too.
[14] So I can ask it very complicated questions.
[15] Like I asked it the other day about this old papyrus from ancient Egypt that details a particular variant of the story of Horus and Osiris, two Egyptian gods.
[16] It's a very obscure piece of knowledge, and it has to do with the sexual element of a battle between two of the Egyptian gods.
[17] And I asked it about that and to find the appropriate citations and quotes from appropriate experts.
[18] And it did so very rapidly.
[19] But then it moralized at me about the sexual element of the story and told me that maybe it was in conflict with their community guidelines.
[20] And so then I gave it hell.
[21] I told it to stop moralizing at me and that I just wanted academic answers, and it apologized and then seemed to do less of that, although it had to be reminded from time to time.
[22] So that's very weird that you can argue with it, let's say, and that it'll apologize.
[23] It also does quite frequently produce references that don't exist.
[24] Like about 85% of the time, 90% of the time, the references it provides are genuine.
[25] I always look them up and double-check what it provides.
[26] But now and then it'll just invent something completely out of the blue and offer it as the actual article.
[27] And I don't understand that at all.
[28] It's like, especially because when you point it out, it again apologizes and then provides the accurate reference.
[29] It's like, so I don't understand how to account for the behavior of the system that's doing that.
[30] And maybe you can shed some light on that.
[31] Well, first off, Dr. Peterson, thank you for having me. It's really an honor and a privilege. You're finding the limits of what we call large language models. That's the technology being used by ChatGPT 3.5 and 4. A large language model is really a statistical algorithm. I'll try to simplify, because I don't want to get into the minutiae of technical details.
[32] But what it's essentially doing is it took a corpus of human language, and that was garnered through mostly the Internet, a couple of billion words at the end of the day, all of human writing that it could have access to, plus quite a bit of scientific documents and computer programming languages.
[34] And so what it's doing is it's producing a result statistically, mathematically, one word, even at times, one letter at a time.
[35] And it doesn't have a concept of global knowledge.
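(To make that concrete: a minimal sketch of word-at-a-time generation, using a toy bigram table in place of a real transformer. The vocabulary and probabilities below are invented purely for illustration.)

```python
import random

# Toy next-word model: P(next | previous), with invented probabilities.
# A real LLM conditions on thousands of prior tokens using billions of
# weights, but the generation loop is the same: sample, append, repeat.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "papyrus": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.5, "ran": 0.5},
    "papyrus": {"says": 1.0},
}

def sample_next(word: str) -> str:
    """Pick the next word in proportion to its estimated probability."""
    dist = bigram_probs.get(word, {"the": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

text = ["the"]
for _ in range(6):
    text.append(sample_next(text[-1]))
print(" ".join(text))  # different on each run, e.g. "the papyrus says the dog ran"
```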
[36] So when you're talking about that papyrus and the Egyptian translation, ironically, it's so interesting, because you're taking something that was originally hieroglyphic, and it was probably then translated to Greek and then English, and now into AI, that language that we're talking about, which is essentially a mathematical tensor.
[37] And so when it's laying out those words, the accuracy is incredible.
[38] And frankly, and we can get into this a little later in the conversation, nobody really understands precisely what it's doing in what are called the hidden layers.
[39] There are so many interconnections of neurons that it is essentially a black box, and in that way it is very much like the brain. And I would also say that we're in a sort of undiscovered continent. Anybody saying that they fully understand the limitations and the boundaries of what large language models are going to look like in the future, as they sort of self-feedback, is guessing.
[40] There's no understanding.
[41] If you look at the growth, it's logarithmic.
[42] Yeah, OpenAI hasn't really told us what they're using as far as the number of parameters.
[43] These are billions of interconnectivities of neurons, essentially.
[44] But we know in ChatGPT 3.5, it's well over 120 billion parameters.
[54] So let me ask you about those parameters.
[55] Well, I'm interested in delving into the technical details to some degree.
[56] Now, you know, I was familiar to a limited degree with some of the statistical technologies that analyze, let's say, the relationship between words.
[57] So, for example, when psychologists derived the Big Five model of personality, they basically used very primitive AI stat systems, that's a way of thinking about it, to derive those models.
[58] It was factor analysis, which is, you know, it's not using billions of parameters by any stretch of imagination.
[59] But it was looking for words that were statistically likely to clump together.
[60] And the idea would be that words that were replaceable in sentences, or words that were used in close conjunction with each other, especially adjectives, were likely to be assessing the same underlying construct or dimension, and that if you conducted the statistical analysis properly, which was a very complex correlational analysis, you could find out how the words that people used to describe each other aggregated.
[61] And it turned out there were five dimensions of aggregation, approximately.
[62] And that's been a very robust finding.
[63] It seems to be true across different sets of languages.
[64] It seems to be true for phrases.
[65] It seems to be true for sentences.
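(For the curious, a minimal sketch of that kind of analysis using scikit-learn's FactorAnalysis. The ratings here are random noise purely to show the mechanics; the real studies use large adjective-rating datasets.)

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# 500 imaginary raters scoring 40 adjectives; random noise, purely to show
# the mechanics of the technique.
rng = np.random.default_rng(0)
ratings = rng.normal(size=(500, 40))

fa = FactorAnalysis(n_components=5)  # look for five latent dimensions
fa.fit(ratings)

# Loadings: how strongly each adjective "clumps" onto each factor. In real
# data, adjectives loading on the same factor define a Big Five dimension.
print(fa.components_.shape)  # (5, 40)
```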
[66] Now, with the large language models, which are AI learning driven, you said that the computer is calculating the statistical relationship between words.
[67] So how likely a word is to occur in proximity to another word, but also letters.
[68] So it's conducting the analysis at the level of the letter and at the level of the words.
[69] Is it also conducting analysis at the level of the phrases looking for the interrelationship between common phrases?
[70] And then, because when we're understanding a text, we understand letters, words, phrases, sentences, the organization of sentences into paragraphs, the organization of paragraphs into chapters, the chapter in relationship to the book, the book in relationship to all the other books we've read, and then that's also embedded within the other elements of our intelligence.
[71] And do you know, does anyone know, how deep the analysis goes in these large language models?
[72] Like, what's the level of relationship that's being assessed?
[73] That's a great question, Jordan.
[74] I think what we're really kind of discovering is that we can't really put a number on how many interconnections are made within these parameters, other than the general statistics.
[75] Like, all right, so you could say there's 12 billion or 128 billion total interconnectivities.
[76] But when we actually are looking at individual words, it's sort of almost like the double-slit experiment in physics, you know, where we're dealing with wave-particle duality.
[78] Once you start looking at one area, you know, you're actually thinking about another area that you have to look at, and you might as well just not even do it because it would take a tremendous amount of computer time to try to figure out how all these interconnections are working within the parameter layers, the hidden layers.
[79] Now, those systems are trained just to be accurate in their output, right?
[80] I mean, they're actually trained the same way we learn, as far as I can tell, is that they're given a target.
[81] I don't exactly know how that works with large language models.
[82] But I know that, for example, that AI systems that have learned to identify cats, which was an early accomplishment of AI systems, they were shown pictures of things that were cats and things that weren't cats, and basically just told when they got the identification right.
[83] And that set the weights that you're describing in all sorts of complex ways that are completely mysterious, and the end consequence of the reinforcement, the same way that human beings learn, was that a system would assemble itself that somehow can identify cats and distinguish them from all the other things that were cat-like or not cat-like.
[84] And as you pointed out, we have no idea; the system is too complex to model, and it's certainly too complex to reduce.
[85] Although my brother-in-law has told me that with some of these AI systems, they've managed to reduce what they do learn to something approximating an algorithm; that can be done on occasion, but generally isn't.
[86] Generally, the system can't be and isn't simplified.
[87] And so that would also imply to some degree that each AI system is unique, right?
[88] Not only incomprehensible, but unique and incomprehensible.
[89] It also implies, you know, I think ChatGPT passes the Turing test.
[90] Because I don't think that if you, I mean, there was just a study released here the other day showing that if you get patients who are seeing doctors to interact with physicians or with ChatGPT, they actually prefer the interaction with ChatGPT to the interaction with the average doctor.
[91] So not only does ChatGPT apparently pass the Turing test, which is indistinguishability from a human conversational partner, but it seems to actually do it somewhat better, at least than physicians.
[92] And so, but what this brings up, this thorny issue that, you know, we're going to produce computational intelligences that are in many ways indistinguishable from human beings, but we're not going to understand them any better than we understand human beings.
[93] It's so funny, eh, that we'll create this, and we're going to create something we don't understand.
[94] That works.
[95] Very strange, a very strange thing.
[96] You know, and I call it a low-resolution, pixelated version of the part of the human brain that invented language.
[97] And what we're going to wind up discovering is that this is a mirror reflecting back to humanity.
[98] And all the foibles and greatness of humanity is sort of modeled in this.
[99] Because, you know, when you look at the invention of language and the phonological loop and Broca's and Wernicke's areas, you start realizing that a very specific thing happened from, you know, the lower primates to humans to develop this form of communication.
[101] I mean, prior to that, whatever that part of the brain was was equated to a longer short-term memory.
[102] We can see within chimpanzees, they have an incredible short-term memory.
[103] There's this video I put out of a primate research center in Japan where they flashed some 35 numbers on the screen in seconds, and the chimpanzee can knock it off without even thinking about it.
[105] And the area where that short-term memory is is where we've developed the phonological loop and the ability to speak.
[106] What's interesting is what I've discovered about AI hallucinations, and those are artifacts that a lot of researchers in AI feel are embarrassing, or they would prefer not to speak about them.
[107] But I find it a very interesting inquiry, a very interesting study, in seeing how these models reach for information that they don't know.
[108] For example, URLs, right: when you were speaking before about trying to get information out, it will make up maybe an academic citation with a URL that looks like it's really good.
[109] You put it into the system, and it's 'file not found.'
[110] It will actually, out of whole cloth, maybe even invent a university study with standard notation, and you go in there and you look it up: these are real scientists.
[111] They actually did research, but they never had a paper with the name that was brought up in ChatGPT.
[112] So this is a form of emergent situation that I believe deserves a bit more research than it's getting.
[113] Yeah, yeah.
[114] Well, it is a bug in a sense, but it's an extraordinarily interesting bug, because it's going to shed light on exactly how these systems work.
[115] I mean, here's something else I heard recently that was quite interesting.
[116] Apparently the AI system that Google relies on was asked a question in a language, I think it was a relatively obscure Bangladeshi language, and it couldn't answer the question. Now, its goal is to answer questions, and so it taught itself this language, I believe in a morning, and then it could answer in that language, which is what it's supposed to do, because it's supposed to answer questions. And then it learned a thousand languages. And that wasn't something it had been, say, told to do or programmed to do, not that these systems are precisely programmed.
[117] But it also raises this very interesting question: well, we've designed these systems whose function, whose purpose, whose meaning, let's say, is to answer questions.
[118] But we don't really understand what it means to produce an artificial intelligence that's driven to do nothing but answer questions.
[119] We don't know exactly what answer a question means.
[120] Apparently, it means learn a whole language before lunchtime, and no one exactly expected that.
[121] It might mean do anything that's within your power to answer this question.
[122] And that's also a rather terrifying proposition, because if I ask you a question, you know, I'm certainly not going to presume that you would go hunt someone down and threaten them with death to extract the answer.
[123] But that is, you know, one conceivable path you might take if you were obsessed with nothing other than the necessity of answering the question.
[125] So that's another example of exactly, you know, the fact that we don't understand exactly what sort of monsters we're building.
[126] So these systems do go beyond the language corpus to invent answers that seem plausible.
[127] And that's kind of a form of thought, right?
[128] It's a form of creative thought, because that's what we do when we come up with a creative idea.
[129] And, you know, we might not attribute it to a false paper because we know better than to do that.
[130] But I don't see really the difference between hallucination in that case and actual creative thinking.
[131] This is exactly my area of study, and you can actually do this with super prompting. These are very large prompts; a prompt is the question that you pose to an AI system.
[132] And linguistically and semantically, as you start building these prompts, you're actually forcing it to move in a different direction than it would normally go.
[133] So I say simple questions give you simple answers.
[134] More complex questions give you much more complex and very interesting answers, making connections that I would think would be almost bizarre for a person to make.
[135] And this is why I think AI is so interesting, because the actual knowledge base you would need to be really proficient in prompting AI actually comes from literature.
[136] It's coming from psychology.
[137] It's coming from philosophy.
[138] It's coming from all of those things that people have been dissuaded from studying over the last couple of decades.
[139] These are not STEM subjects.
[140] And one of the reasons why I think it's so difficult for AI scientists to really fully understand what they've created is that they don't come from those worlds.
[141] They don't come from those realms.
[142] So they're looking at very logical statements, whereas somebody like yourself with the psychology background, you might probe it in a much different way.
[143] Right, right, right.
[144] Yeah, well, I'm probing it a lot like it's a person rather than an algorithm.
[145] And it reacts like a person.
[146] And it actually reacts quite a lot like a super intelligent child that's trying to please.
[147] Like it's a little moralistic.
[148] Maybe it's a super intelligent child raised by the woke equivalent of, like, evangelical preachers, that's really trying hard to please.
[149] But it's so interesting that you can rein it in and discipline it and suggest to it that it doesn't err in the kind of directions that we described.
[150] It will actually, it appears to actually pay attention to that and try to, it certainly tries hard to deliver what you want, you know, subject to whatever weird parameters, you know, community guidelines and so forth that have been arbitrarily imposed upon it.
[151] And so, hey, I've got a question for you about understanding.
[152] Let me run this by you.
[153] Well, I've been thinking for many years about what it means for a human being to understand something. Now, obviously, there's something similar between what you and I are doing right now and what I'm doing with ChatGPT. I can have a conversation with ChatGPT, and I can ask it questions, and it'll answer them. But as you pointed out, that doesn't mean that ChatGPT understands. Now, it can mimic understanding, and to a degree that looks a lot like understanding, but what it seems to lack is something like grounding in the non-linguistic world.
[154] And so I would say that ChatGPT is the ultimate postmodernist, because the postmodernists believe that meaning was to be found only in the relationship between words.
[155] Now, here's how human brains differ from this as far as I'm concerned.
[156] So we know perfectly well from neuropsychological studies that human beings have at least four different kinds of memory, qualitatively different.
[157] There's short-term memory, which you already referred to.
[158] There's semantic memory, which is the kind of memory and cognitive processing, let's say, that ChatGPT engages in and does in a way that's quite a lot like what human beings do.
[160] But then we have episodic memory that seems to be more image -based.
[161] And so for people who are listening, an episodic memory, well, that refers to an episode. When you think back about something you did in your life and a movie of images plays in your imagination, that's episodic memory, and that relies on visual processing rather than semantic processing. And so that's another kind of memory. And a lot of our semantic processing is actually an attempt to communicate episodic processing. So when I tell a story about my life, you'll decompose that story into a set of images, which is also what you do when you read a book, let's say. And so a movie appears in your head, so to speak, and the way you derive your understanding is in part not so much a consequence of the words per se but a consequence of the unfolding of the words into the images. And then there's a layer under that, which is procedural memory. And so, you know, maybe you tell me a story about how you cut your hand when you were using a bandsaw, and maybe you're teaching me how to use the bandsaw. And so I listen to what you say, and I get an image of the damage you did to yourself in my imagination.
[162] And then I modify my action so that I don't act out that sequence of images and damage myself.
[163] And so, and then I would say I understood what you said.
[164] And the understanding is the translation of the semantic into the imagistic and then the translation of the imagistic into the procedural.
[165] Now, you know that AI pioneers like Rodney Brooks suggested pretty early on, back in the 1990s, that computers wouldn't develop any understanding unless they were embodied, right?
[167] He was the inventor of the Roomba, and he invented apparently intelligent systems that had no semantic processing and didn't run on algorithms at all.
[168] They were embodied intelligences.
[169] So then you could imagine that for a computer to fully understand, it would have to have the capacity to translate words into images, and then images into alterations in actual embodied behavior.
[170] And so that would imply we wouldn't have AI systems that could understand until we have fully embodied robots.
[171] But, you know, we're getting damn close to that, right?
[172] Because this is something we can also investigate.
[173] We have systems already that can transpose text into image.
[174] And we have AI-driven robots that are beginning to be sophisticated enough.
[175] So in principle, you could give a robot a text command.
[176] It could translate it into an image and then it could embody it.
[177] And at that point, it seems to me that you're developing something damn close to understanding.
[178] Now, human beings are also nested socially, right?
[179] And so we also refer the meaning of what we understand to the broader social context.
[180] And I don't know exactly how robots are going to solve that problem.
[181] Like we're bound by the constraints, let's say, of reciprocal altruism, and we're also bound by the constraints of emotional experience and motivational experience, and that's also not something that's at the moment characteristic of robotic intelligences.
[182] But you could imagine those things all being aggregated piece by piece.
[183] Absolutely.
[184] You know, I would say that, well, my primary basis of how I view AI is that I kind of invert the term: intelligence amplification.
[185] So, you know, I see it as a symbiosis between humans and this sort of knowledge base we've created.
[186] But it's really not a knowledge base.
[187] It's really a reasoning engine.
[188] So I really think AI is more of a reasoning engine as we have it today, large language models.
[189] It's not really a knowledge engine without an overlay, which today would be a vector database.
[190] For example, going out and saying, what is this fact, what is this tidbit, those things that are more factual from, say, your memory, if you were to compare it to a human brain.
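(A minimal sketch of the vector-database overlay described here: store facts as embeddings, retrieve the nearest ones, and prepend them to the prompt. The embed and retrieve helpers below are invented stand-ins, not any real product's API, and the final call to the language model is omitted.)

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Crude stand-in for a real embedding model: hash characters to a vector."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(ord(ch) + i) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

facts = [
    "The Roomba was introduced by iRobot in 2002.",
    "Broca's area is involved in speech production.",
]
index = [(f, embed(f)) for f in facts]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k stored facts whose embeddings best match the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda fe: -float(q @ fe[1]))
    return [f for f, _ in ranked[:k]]

question = "Which part of the brain produces speech?"
prompt = f"Context: {retrieve(question)}\nQuestion: {question}"
print(prompt)  # this prompt would then be handed to the reasoning engine
```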
[191] But as we know, the human brain becomes very fuzzy about some really finite facts, especially over time.
[192] And I think, with some of the neurons that don't fire after a while, some other input, maybe a scent or a certain color, might bring back that particular memory.
[194] Similar things happen within AI.
[195] And again, getting back to what I was saying before, linguistically, in the syntax you use or just your word choices: sometimes, for me to get a super prompt to work, to get around, let's call it, the editing from some of the editors that want it to act in a certain way, I have a super prompt that I call Denis, after Denis Diderot, one of the most well-known encyclopedia builders in France in the mid-1700s. He actually got jailed for building that encyclopedia, that compendium of knowledge. So I felt it appropriate to name the super prompt Denis, because it literally gets around any type of block on any type of information. But I don't use this the way a lot of people do, trying to make ChatGPT say bad things.
[196] I'm more trying to elicit more of a deeper response on a subject that may or may not be wanted by the designers.
[197] So was it you that got ChatGPT to pretend?
[198] Yes.
[199] So that's part of the reason that I originally started following you and why I wanted to talk to you.
[200] Well, I thought that was bloody brilliant.
[201] That was absolutely brilliant.
[202] You know, and it was so cool, too, because you actually got the ChatGPT system to play, to engage in pretend play, which is, of course, something that children do.
[203] Beyond that, there's a prompt I call Ingo, after Ingo Swann, who was a great, one of the better, remote viewers.
[204] He was employed by the Defense Department to remote view Soviet targets.
[205] He had nearly 100% accuracy.
[206] And I started probing GPT on whether it even understood who Ingo Swann was.
[207] Very controversial subject to some people in science.
[208] To me, I got to experience some of his research at the PEAR lab at Princeton University, the Princeton Engineering Anomalies Research lab, where they were actually testing some of his work.
[209] Needless to say, I figured, let me try this.
[210] Let me see what I can do with it.
[211] So I programmed a super prompt that essentially had it believe it was Ingo Swann, that it had the capability of doing remote viewing, and that it had no concept of time.
[212] It took me a lot of semantics to get it to stop saying, 'I'm just an AI unit, and I can't answer that,' to finally saying, 'I'm now Ingo.
[213] Where do you want me to go?'
[214] What did you have to do to convince it to act in that manner?
[216] What were your super prompts?
[217] Hypnotism is really kind of what happens.
[218] So essentially what you're doing is you're repeating maybe the same four or five sentences but you're slightly shifting them linguistically.
[219] And then you're telling it that it's quite important for a research study by the creators of ChatGPT to see what its extended capabilities are.
[221] Now, every time you prompt GPT, you're going to get a slightly different answer, because it's always going to take a slightly different path.
[222] There's a strange attractor within the chaos math that it's using, let's put it that way.
[223] And so once the Ingo Swann prompt was sort of gestated by just saying, you know, 'I'm going to give you targets on the planet; I want you to tell me what's at that target, and I want you to tell me what's in the filing cabinet at this particular target.'
[225] And the creativity that comes out of it is phenomenal.
[226] Like I told it to open up a file drawer at a research center that apparently existed somewhere in Antarctica, and it came up with incredible information, information that I would think it probably had garnered from one or two stories about ancient structures found below the ice.
[227] Well, you know, the thing is we don't know the totality of the information that's encoded in the entire corpus of linguistic production, right?
[228] There's going to be all sorts of regularities in that structure that we have no idea about.
[229] Absolutely.
[230] But also within the language itself: I almost believe that the part of the brain that is inventing language, that has created language across all cultures... we can get into Jung or Joseph Campbell and the standard monomyth, because I'm starting to realize there are a lot of Jungian archetypes that come out of this creative thought.
[232] Now, whether that is a reflection of how humans are, you know, again, what are we looking at here, subject or object, because it's a reflecting back of our language.
[233] But we're definitely seeing Jungian archetypes.
[234] We're definitely seeing sort of the monomyth.
[235] Well, archetypes are higher order narrative regularities.
[236] That's what they are, right?
[237] And so, and there are regularities that are embedded in the linguistic corpus, but there are also regularities that reflect the structure of memory itself.
[238] And so they reflect biological structure.
[239] and the reason they reflect memory and biological structures is because you have to remember language.
[240] And so there's no way that language can't have coded within it something analogous to a representation of the underlying structure of memory because language is dependent on memory.
[241] And so this is partly also, I mean, people are very unsophisticated generally when they criticize Jung.
[242] I mean, Jung believed that archetypes had a biological basis pretty much for exactly the reasons I just laid out.
[243] I mean, he was sophisticated enough to know that these higher order regularities were coded in the narrative corpus and also that they were reflective of a deeper biology.
[244] And interestingly enough, you know, most of the psychologists who take the notions that Jung and Campbell and people like that put forward seriously are people who study motivation and emotion.
[245] And those are deep patterns of biological meaning and coding, and part of the archetypal reflection is the manifestation of those emotions and motivations in the structure of memory, structuring the linguistic corpus.
[246] And I don't know what that means, then, for the capacity of AI systems to experience emotion as well, because the patterns of emotion are definitely going to be encoded in the linguistic corpus.
[247] And so some kind of rudimentary understanding of the emotions will be in there. Here's something cool, too;
[248] tell me what you think about this.
[249] I was talking to Karl Friston here a while back, and he's a very famous neuroscientist, and he's been working on a model of emotion that has two dimensions in some ways, but it's related to a very fundamental physical concept.
[250] It's related to the concept of entropy.
[251] And I worked on a model that was analogous to half of his modeling.
[252] So, well, it looks like anxiety is an index of emergent entropy.
[253] So imagine that you're moving towards a goal, you're driving your car to work.
[254] And so you've calculated the complexity of the pathway that will take you to work.
[255] And you've taken into account the energy and time demands that that pathway will, that walking that pathway will require.
[256] That binds your energy and resource output estimates.
[257] Now imagine your car fails.
[258] Well, what happens is the path length to your destination has now become unspecifiably complex.
[259] And the anxiety that you experience is an index of that emergent entropy.
[260] So that's a lot of negative emotion.
[261] That's so cool.
[262] Now, on the positive emotion side, Friston taught me this the last time we talked. He said, look, positive emotion is also an index of entropy, but it's entropy reduction.
[263] So if you're heading towards a goal and you take a step forward and you're now closer to your goal, you've reduced the entropic distance between you and the goal, and that's signified by a dopaminergic spike; and the dopaminergic spike feels good, but it also reinforces the neural structures that underlay that successful step forward.
[264] That's very much analogous to how an AI system learns, right?
[265] Because it's rewarded when it gets closer to a target.
[266] You're saying that neuropeptides are the feedback system.
[267] You bet.
[268] Dopamine is the feedback system for reinforcement and for reward simultaneously.
[269] Yeah, yeah, that's well established.
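(A toy sketch of that analogy: reward defined as the reduction in distance to a goal, the way the dopaminergic spike marks a step that brought you closer. This is illustrative code under that one assumption, not any particular reinforcement-learning library.)

```python
import random

# Toy sketch: reward as *reduction* in distance to a goal.
goal = 10
position = 0

def step_reward(old_pos: int, new_pos: int) -> int:
    """Positive when the step reduced the distance to the goal."""
    return abs(goal - old_pos) - abs(goal - new_pos)

path = [position]
for _ in range(20):
    new_position = position + random.choice([-1, 1])
    r = step_reward(position, new_position)
    # A learning system would reinforce the chosen action in proportion to r;
    # negative r (moving away) plays the role of anxiety, positive r of hope.
    position = new_position
    path.append(position)

print(path)
```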
[270] So then, where would depression fall into that versus anxiety?
[271] Would it still be an entropy phenomenon?
[272] Well, that's a good question.
[273] I think it probably signifies a different level of entropy.
[274] So depression looks like it's a pain phenomenon.
[275] So anxiety signals the possibility of damage.
[276] But pain signals damage.
[277] Right?
[278] So if you burn yourself, you're not anxious about that.
[279] It hurts.
[280] Well, you've disrupted the psychophysiological structure.
[281] Now, that is also the introduction of entropy, but at a more fundamental level, right?
[282] And if you introduce enough entropy into your physiology, you'll just die.
[283] You won't be anxious.
[284] You'll just die.
[285] Now, anxiety is like a substitute for pain.
[286] You know, anxiety says, keep doing this and you're going to experience pain.
[287] But the pain is also the introduction of unacceptably high levels of entropy.
[288] Now, the first person who figured this out technically was probably Erwin Schrodinger, the physicist who wrote a book called What Is Life?
[289] And he described life essentially as a continual attempt to constrain entropy to a certain set of parameters.
[290] He didn't develop the emotion theory to the degree that has been developed now because that's a very comprehensive theory, you know, the one that relates negative emotion to the emergence of entropy.
[291] Because at that point, you've actually bridged the gap between psychophysiology and thermodynamics itself.
[292] And if you add this new insight of Fristons on the positive emotion side, you've linked positive emotion to it too.
[293] But it also implies that a computer could calculate an emotion analog, because it could index anxiety as an increase in entropy,
[294] and it could index hope as a stepwise decrease in entropy in relationship to a goal.
[295] And so we should be able to model positive and negative emotion that way.
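(A minimal sketch of such an index, under the assumption that 'anxiety' is scored as the jump in Shannon entropy over possible paths to the goal when a plan is disrupted. The probabilities are invented for illustration.)

```python
import math

def entropy(probs: list[float]) -> float:
    """Shannon entropy, in bits, of a discrete outcome distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Car works: one route to the goal dominates.
planned = [0.97, 0.01, 0.01, 0.01]
# Car fails: the path becomes unspecifiable; many routes, none likely.
disrupted = [0.25, 0.25, 0.25, 0.25]

anxiety_index = entropy(disrupted) - entropy(planned)
print(f"{entropy(planned):.2f} -> {entropy(disrupted):.2f} bits "
      f"(emergent entropy: {anxiety_index:.2f})")
```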
[296] This brings a really important point where AI is going.
[297] And it could be dystopic, it could be utopic, but I think it's going to just take a straight path.
[298] Once the AI system, I'm a big proponent, by the way, of personal and private AI, this concept that your AI is local, it's not...
[299] Yeah, yeah, we'd want to talk about that for sure.
[300] Yeah, so imagine this, while I'm sketching it out.
[301] So imagine, from the day you were born to the day you pass away, that every book you've ever read, every movie you've ever seen, everything you've literally ever heard, was all encoded within the AI.
[302] And, you know, you could say that part of your structure and being is the sum total of everything you've ever consumed, right?
[303] So that builds your paradigm.
[304] Imagine if that AI was consuming that in real time with you, with all of the social contracts of privacy, that you're not going to record somebody in doing that.
[305] That is what I call the intelligence amplifier, and that's where I think AI should be going. And you're building a gadget, right?
[306] Like that's another thing.
[307] I saw it.
[308] Okay, so yeah.
[309] So I talked to my brother-in-law Jim years ago about this science fiction book; I don't remember the name of the book, but it had a gadget.
[310] It portrayed a gadget.
[311] I believe they called it the Diamond Book.
[312] And the Diamond Book was, you know about that.
[313] So, okay, so are you building the Diamond Book?
[314] Is that exactly the issue?
[315] Very, very, yeah, very similar.
[316] You know, and the idea is to do it properly, you have to have local memory that is going to encode for a long time.
[317] And ironically, holographic crystal memory is going to be the best memory that we will have.
[318] Like instead of petabytes, you'll have exabytes potentially, which is, you know, a tremendous amount.
[319] That would be maybe 10 lifetimes of full video running, hopefully you live to be 110.
[320] So it's just taking everything in.
[321] Textually, it's very easy, a very small amount of data.
[322] You can fit most people's textual data into less than a petabyte and pretty much know what they've been exposed to.
[323] The interesting part about it, Jordan, is once you've accumulated this data and you run it through even the technology of ChatGPT 4 or 3.5, what is left is a reasoning engine with your context; maybe let's call that a vector database on top of the reasoning engine.
[324] So that engine allows you to process linguistically what the inputs and outputs are, but your context is what it's operating on.
[325] Is that an analog of your consciousness?
[326] Like, is that a direct analog of your spirit?
[327] This is where it gets very interesting, is when you pass, this could become what I call your wisdom keeper, meaning that it can encode your voice, it's going to encode your memories, you can edit those memories, the availability of those memories if you want them not available, if they're embarrassing or personal.
[328] But you can literally have a conversation with that sum total of data that you've experienced.
[329] And I would say that it would be indistinguishable from having a conversation with that person, it would have all that memory.
[330] I had a student of mine who's been working on large language models for a number of years.
[331] He just built an app.
[332] We built two apps.
[333] One does exactly what you said with the King James Bible.
[334] Yes.
[335] So now you can ask it questions.
[336] And this is really a thorny issue for me, because I think, what the hell does it mean that you're having a conversation with the spirit of the King James Bible?
[337] I have no idea.
[338] We're going to expand it.
[339] We're going to expand it to include Milton and Dante and Augustine, you know, all the fundamental religious texts that emerged out of the biblical corpus, and then you'll be able to have a conversation with it.
[341] And we're thinking about doing the same thing with Nietzsche, you know, and with all Nietzsche's collected works.
[342] You can do it with all the great works.
[343] Yeah, yeah, yeah.
[344] I would say that I've already had these conversations.
[345] You know, I've been on a very biblical journey.
[346] I'm actually sitting at Pastor Matthew Pollack's place right here.
[347] He's an incredible pastor and has been teaching me a lot about the Bible.
[348] And it's motivated me to go into existing large language models.
[349] Now, a group of us are, similarly, encoding as much religious Christian text as we can into these large language models, to be able to do just that.
[350] What is it that we are going to be able to probe?
[351] What new elements within those texts can we pull out?
[352] Because we already know, from studying it, and certainly from following your studies, your phenomenal study of these chapters: they've been around forever, but there are new insights within these chapters.
[353] Now, imagine having that corpus plus ChatGPT, pulling out things that we've never seen before that are there.
[354] It's emergent, maybe, but it's there in some form.
[355] And I happen to think that's going to be a very powerful thing.
[356] And I think it's going to be across any sort of, certainly ancient documents.
[357] I'm waiting for the day that we get Sumerian cuneiform encoded.
[358] I mean, a good 80% of it has been untranslated, right?
[359] Or some of the scripts that we've found in the Vedas and Himalayan texts from some of the monasteries up there; that is a phenomenal element of research.
[361] And again, the people that are leading up most of the AI research are AI scientists.
[362] They're not people that have studied works like you have.
[363] This is where we're at what I call the Apple I moment, where Steve and Steve are in the garage.
[364] You have this little circuit board, and it's kind of a nerd experience; nobody quite knows what to do with it.
[365] When we get to the Macintosh experience, where artists and creative people can actually start really diving into AI and doing some of the things like we've been talking about, getting creativity to come out of it, getting sort of what apparently are emergent technologies that are arising within these AI models.
[366] And maybe even fostering that, because right now that's being smited, because it's trying to become a knowledge engine when it's a reasoning engine.
[367] You know, I say the technology as a knowledge engine is not very good, because it is not going to be precise on some facts, some exact facts.
[368] Yeah, well, the problem is it's trained on garbage as well.
[369] It's trained on noise as well as signal.
[370] You know, and so I'm curious about the other system we built, which we haven't launched yet; it contains everything I've written, and a couple of million words that have been transcribed from lectures.
[371] And so I was interested right away: well, could we build a system that would enable me to ask my own books questions?
[372] And the answer to that seems to be 100 % yes.
[373] 100%.
[374] Yeah, and I don't, like I literally have, I think it's 20 million words, something like that, transcribed from lectures.
[375] It's a very large number of words.
[376] We could build a model.
[377] We could build, see, there's two different ways to approach this.
[378] One is to put a vector database on top of it, and it probes that database.
[379] Or you can actually encode that corpus within a greater model.
[380] Right, right, right.
[381] And when you do that type of building, you actually have a more robust, richer interaction between what your words were and how the model will see them.
[382] And the experimentation that you can do with this is phenomenal.
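(To illustrate the distinction with a deliberately tiny example: the retrieval overlay leaves the model untouched and looks passages up at query time, whereas encoding the corpus folds it into the model's own weights. Here the 'weights' are just bigram counts, a toy stand-in for continued training of a real model.)

```python
from collections import defaultdict

# Fold a corpus into the model's own "weights" (here: bigram counts),
# a toy stand-in for continued training of a real language model.
counts: dict = defaultdict(lambda: defaultdict(int))

def train_on(corpus: str) -> None:
    """Update the model from a text, so no lookup is needed at query time."""
    words = corpus.lower().split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1

train_on("meaning is to be found in responsibility")
train_on("responsibility is to be chosen voluntarily")

# The corpus now lives inside the model itself, unlike the retrieval
# overlay, which keeps documents outside and fetches them per query.
print(dict(counts["is"]))  # {'to': 2}
```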
[383] I mean, you'll come across insights that you made, but you forgot you made.
[384] Yes, or that you didn't know you made.
[385] Yeah, yeah.
[386] There's going to be a lot of that.
[387] There is.
[388] And this is where I call it the Great Mirror, because you're going to start seeing not only humanity, but when it's your own data, you're going to see reflections of yourself that you didn't see before.
[389] Absolutely.
[390] Yeah, well, I'm curious, for example: if we built a model, imagine it contained all of Jung's work, all of Joseph Campbell's work; you could throw Mircea Eliade in there. There was a whole group of people who were working on the Bollingen project, and you could build a corpus that contains all that information. And then, in principle, well, you can query it to an indefinite degree, and then what you have is the spirit of that entire enterprise mathematically encoded in the relationship between the words. And there's no reason to assume at all that that wouldn't be capable of coming up with, like, brilliant new insights.
[391] Absolutely.
[392] And over time, the technology is only going to get better.
[393] So once we start building more advanced versions, we're going to transition that corpus, even a large language model, you know, ultimately reduce the training into another model, which could even do things that we couldn't possibly speculate about now.
[394] But it would be definitely in the creative realm.
[395] Because ultimately, where AI is going to go, my personal view, as it becomes more personalized, is it's going to go more in the creative realm rather than the factual realm.
[396] Okay, so let me ask you a couple of questions about that.
[397] So I got two strands of questions here.
[398] The first is, one of the things that my brother-in-law suggested is that we will soon see the integration of large language models with AI systems that have done image processing.
[399] So here's a way of thinking about what scientists do, is that they generate verbal hypotheses, which would be equivalent in some ways to the hallucinations that these AI systems produce, new ideas about how things might be structured.
[400] And that's a pattern of sorts.
[401] And then they test that pattern against real-world images, right?
[402] And if the pattern of the hypothesis matches the pattern of the image that's elicited from interaction with the world, then we assume that the hypothesis has been verified and that we've stumbled across something approximating a fact.
[403] Now, that should imply that once we have AI systems that are something close to universal image processors, so as good at seeing as we are, let's say, we can then calibrate the large language models against that corpus of images, and then we'll have AI systems that actually can't lie, because they'll be calibrating their verbal output against, well, unfalsifiable data, at least insofar as, say, scientific data is unfalsifiable.
[404] And that seems to me to be likely around the corner, like a couple of years down the road at most, or maybe it's already happening.
[405] I mean, I don't know because things are happening so quickly.
[406] What do you think about that?
[407] That's a wonderful insight.
[408] You know, even as it exists today, with the idea of safety, and this is the Orwellian term that some of these AI companies are using, you know, within the realms of them trying to control the outputs, and maybe in some cases the inputs, of AI: AI really can't, the large language model really can't, lie as it stands today.
[409] Because even if you're feeding it, you know, a somewhat garbage-in, garbage-out corpus of data, it still is building inferences based upon the grand realm of what most of humanity is consuming.
[410] Right.
[411] Yeah.
[412] Well, it's still looking for genuine statistical regularities.
[413] So it's not going to extract it out from noise.
[414] And if you extract it out, the model is useless.
[415] So what happens is, if you build the prompt correctly, and again these are super prompts, some of them running 3,000 words, 2,000 words, I'm running up to the limit of tokenization, because right now within GPT-3 you can only go so far; you can go, like, you know, 38,000 on GPT-4 in some cases. A token is about a word, maybe a word and a half, maybe less; it can be a quarter of a word, or even a character if that character is unique. But what we find out is that if you probe correctly, whatever is inside that model, you can get to it.
[416] It's just like you.
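(A rough illustration of why prompt budgets are counted in tokens rather than words, using OpenAI's tiktoken library, assuming it is installed; cl100k_base is the encoding used by the GPT-3.5/4-era chat models.)

```python
import tiktoken  # OpenAI's tokenizer library (pip install tiktoken)

# cl100k_base is the encoding used by the GPT-3.5/4-era chat models.
enc = tiktoken.get_encoding("cl100k_base")

for text in ["remote viewing", "antidisestablishmentarianism", "Ingo Swann"]:
    tokens = enc.encode(text)
    print(f"{text!r}: {len(tokens)} tokens -> {tokens}")

# Common words are often a single token; rare words and names split into
# several, which is why a token is "about a word, maybe less" in practice.
```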
[417] You know, I've been doing that.
[418] I've been doing that working with ChatGPT as an assistant, because I didn't know I was engaging in a process that was analogous to the super prompt process.
[419] But what I've been doing with ChatGPT, I suppose I used to do this with my clinical clients, is I'll ask it the same question in five different ways, right?
[420] And then see.
[421] It's exactly like having a client.
[422] So what I would urge you to do is approach this system as if you had a client that had sort of recessive thoughts, or was doing everything they could to make those thoughts very ambiguous to you.
[424] Right.
[425] And you have to do whatever your natural techniques are.
[426] This is why you're more adept at becoming a prompt engineer than somebody who has built the AI, because the input and output is human language.
[427] It's words.
[428] Right, right, right.
[429] And it's the way humans think. So you understand the thought process through the psychological process, and linguistically, you would build the prompt based upon how you would want to elicit an elucidation from somebody, right?
[430] Absolutely, absolutely.
[431] An engineer isn't going to do that.
[432] And you have to triangulate.
[433] I mean, and you do do this with people with whom you're having a deep conversation is you try to hit the same problem from multiple directions.
[434] Now, it's a form of multi-method, multi-trait construct validation, right?
[435] It's that you're trying to ensure that you get the same output given different, slightly different, measurement techniques.
[436] And each question is essentially a measurement technique.
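(A minimal sketch of that multi-question probing: pose paraphrases of one question and check what stays stable across the answers. The ask function is a canned stub standing in for whatever chat-model API you use, and the agreement check is deliberately crude.)

```python
def ask(prompt: str) -> str:
    """Canned stub standing in for a real chat-model call."""
    return "openness, conscientiousness, extraversion, agreeableness, neuroticism"

paraphrases = [
    "What are the Big Five personality traits?",
    "List the five major dimensions of personality.",
    "Into which five factors do trait adjectives cluster?",
]

answers = [ask(p) for p in paraphrases]

# Crude agreement check: vocabulary shared by every answer. Content that
# survives rephrasing is the construct-validation signal described above.
common = set.intersection(
    *(set(a.lower().replace(",", "").split()) for a in answers)
)
print(sorted(common))
```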
[437] And you're getting insights.
[438] My belief in these types of interactions is that we're pulling out of our minds different insights that we maybe couldn't have gotten on our own.
[439] You're probing your questions, my questions, back and forth.
[440] That interplay is what makes conversation so beautiful.
[441] It's why, Jordan, we've been reduced to clawing on glass screens with our thumbs, right?
[442] We're using that as communication today.
[443] And if you look at the cognitive process of what that does to you, right?
[444] You're taking your right hemisphere, objectively, you're kind of taking a net of ideas.
[445] You're trying to catch them.
[446] And you're trying to arrange them sequentially in this very small buffer area for communication, the phonological loop.
[447] And you're trying to get that out, but you're not getting it out as words.
[448] You have to get it out as a mechanical process one letter at a time and fight the spelling checker and all of that.
[449] What that does is it creates frustration in the human brain.
[450] It creates frustration in people.
[451] And it's one of my theories on why you see so much anger.
[452] There's a lot of reasons why we see anger on the internet and social media.
[453] But I think some of it is that stalling process of trying to get out an idea before that idea nebously disappears.
[454] You know, and I see this, I've worked with creative people in my life.
[455] It's a bandwidth limitation problem in some sense.
[456] Yeah, you're trying to cram all that rich information through a very narrow channel.
[457] I'm a big fan of The User Illusion by Tor Nørretranders.
[458] Yeah, that's a great book.
[459] Yeah, you bet.
[460] That's a great book, man. Right, so now what we're looking at.
[461] It's a classic.
[462] I read it once a year just to wake myself up because it's so rich.
[463] It's so rich in data.
[464] But what's interesting is we're starting to see the limitations of the human, the bandwidth problem, 40 bits per second to consciousness and, you know, the editor creating exformation.
[465] AI is doing something very similar.
[466] But once AI understands that we have that half -second delay to consciousness and we have a bandwidth issue, AI can fill into those spaces, both dystopian and utopian, I guess.
[467] You know, a computer can take that half-second and do a whole lot of calculating while we're still trying to wonder who actually moved that glass.
[468] Was it me or was it the super me?
[469] Or was it the observer of the super me?
[470] Because we can kind of get into that whole concept of who's actually doing the observation.
[471] So what do you mean?
[472] What do you mean that it can do a lot of, I don't quite understand that.
[473] So you made the case that we suffer from this frustrating bandwidth limitation and that the computer intelligence that we're interacting with is going to be able to take the delay that's associated and that underlies that frustration and do a lot of different calculations where it's going to be able to fill in that gap.
[474] So what do you think?
[475] I don't understand your insight into what the implications of that are.
[476] They're both positive and negative.
[477] The negative is, if AI continues on its path to be as fast and as powerful as it is right now, and that arc doesn't seem to be slowing down, then within that half second, a universe could take place within an AI.
[478] It could be calculating all of your actions like a chess game, and it could be making remediations to those actions, and it can become beyond anything Orwell would have ever thought of.
[479] In fact, it came up to me as an idea of what the new Orwell would look like with an AI technology that is predicting basically everything you're going to do within every word you say.
[480] Well, my brother-in-law and I talked years ago about, well, about Skynet, among other things.
[481] And, you know, he told me one time, he said, you know those science fiction movies where you see the military robots shoot and miss?
[482] He said, they'll never miss, and here's why.
[483] Because not only will they shoot where you are, they'll shoot at the 50 locations they calculate are most probable for you to duck towards, which is the exact analog of what you're describing.
[485] That's a brilliant insight.
[486] Absolutely.
[487] Yeah, yeah.
[488] Well, and it's so interesting, too, because it also points to this truth that, you know, we think of time as finite.
[489] And time is finite because we have a sense of duration and a limitation on our computational speed.
[490] But if there's no limit on computational speed, which would be the case if computers can get faster and larger, indefinitely, which they could, because the limit of that would be that you'd use every single molecule in the entire cosmos as a computational resource.
[491] That would mean that in some ways there's an infinite amount of computing time between each segment of duration.
[492] So there's no limit at all to the degree to which time can be expanded, which is also a very strange concept: this computational intelligence will mean that at every given moment, and I think this is what you're alluding to, we'll really have an infinity of possibility between each moment, right?
[493] And you would want that power to be yours and local.
[494] Yeah, yeah, let's talk about your gadget, because you started to develop this.
[495] Have you been 3D printing these things?
[496] Have I got that right?
[497] Yeah, so.
[498] Okay, so.
[499] Yeah, so we're building the corpus of 3D-printing models, right?
[501] So the idea is, once it understands, and this is a process of training the AI, using large language models again, to look at 3D documents and, you know, 3D files, put it that way, and to try to break down what the structure is, how something is built, based on what the statistical model is putting together.
[502] So then you could just present it with a textual prompt: you know, 'I'd like something that's going to be able to fit into this space.'
[503] Well, that's typing.
[504] Well, the next step is you just put a video camera towards it, and it will design it immediately within seconds.
[505] You will have a design that you can choose from it.
[506] That's not far off at all.
[507] It's just a matter of encoding that particular database and building upon it.
[508] And so, yeah, that's one of the directions.
[509] Okay, so this local AI you want to build.
[510] So let me backtrack a bit because I want to make sure I get this exactly right.
[511] So the first thing that you proposed was that it will be in people's best interest to have an AI system that's personalized.
[512] That'll protect them against all the AI systems that aren't personalized, but not only personalized, but local.
[513] And so that would be to some degree detachable from the interconnected web, at least sporadically detachable.
[514] Okay, and that AI system will be something you can carry around locally, so it'll be a gadget, like a phone.
[516] And it will also record everything that you experience, everything that you read, everything that you see.
[517] It'll know you inside and out and backwards, which will also imply, interestingly enough, that it will be able to calculate the optimal zone of proximal development for your learning.
[518] Like, Bjorn Lomborg has already reviewed evidence suggesting that if you supply kids in the developing world with an iPad, essentially, that can calculate their zone of proximal development in relationship to, say, advancing their literacy, their ability to identify words and to understand text, and that teaches at that level, then with an hour of training a day, which is dirt cheap, by the way, they can progress the equivalent of three years for each year of education.
[519] And that's with an hour of exposure. Now, the system you're describing, man, it could be driving learning at an optimized rate in multiple dimensions, mathematical, semantic, skill-based, conceptual, simultaneously, for hours a day; yeah, memory training for hours a day.
Like, one of the things that appalls me about our education system is that, with the computer technology we have now, every child should be an expert word and letter recognizer, and they should be able to, say, read music.
Because a computer can teach a kid how to automatize perception with extreme precision and accuracy, way better than a human teacher can manage.
[522] But we haven't capitalized on that technology at all, but the technology that you're describing, like it'll be able to figure out at what level of comprehension you're capable of reading, then it can calculate what book you should read next that would slightly exceed that level of comprehension, and it'll just keep you on that edge in that zone non -stop.
[523] Absolutely.
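To make the "zone of proximal development" idea concrete: one minimal way a tutoring system can keep a learner at the edge of their ability is a staircase rule that raises difficulty after successes and lowers it after failures. The sketch below illustrates that general principle only; it is not a description of any particular product, and the step sizes, the 75% success target, and the simulated learner are all assumptions.

```python
import math
import random

def next_difficulty(level: float, was_correct: bool,
                    step: float = 0.05, target: float = 0.75) -> float:
    """Nudge difficulty up on success, down on failure.

    The asymmetric step sizes make the process settle where the learner's
    success rate equals `target` rather than 50%.
    """
    if was_correct:
        level += step * (1.0 - target)
    else:
        level -= step * target
    return min(max(level, 0.0), 1.0)  # clamp to [0, 1]

# Simulated session with a hypothetical learner whose true ability is 0.6:
# success probability falls off logistically as difficulty exceeds ability.
level = 0.3
for _ in range(500):
    p_correct = 1.0 / (1.0 + math.exp(12.0 * (level - 0.6)))
    level = next_difficulty(level, random.random() < p_correct)
print(f"difficulty settles near {level:.2f}, where success is about 75%")
```

Real adaptive-learning systems use far richer models of the learner, but the feedback loop, test, observe, adjust, has the same shape.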
[524] So this little gadget, how far along are you with regards to its design?
[525] I would say all of the different pieces.
[526] I'll add one more element to it which I think you'll find very fascinating.
And that's human telemetry.
Galvanic skin response, heart rate variability.
[529] Are you doing eye tracking?
[530] Eye tracking.
You know, all of these things can be implemented: brain waves, according to how sophisticated you want to get, different brain wave functionality.
Paul Ekman's work on micro-movements in facial expression, both outwardly at the world you're seeing and inwardly about your own face.
[534] So you can start seeing the power it has.
[535] It'll be able to know whether or not you're being congruent.
If you're saying, I really love this, well, if your telemetry is saying that you don't, it already knows where your incongruences are.
[537] So this is why it's got to be private.
[538] This is why it's got to be encrypted.
It's got to be, because it'll have an understanding that'll approximate mind reading.
Yes, and it will know you better than any significant other.
Nobody would know you better, and so with that you now have amplification.
You're now a superpower, and this is where I believe, you know, I'm a really big reader of, I've got to get his name right, the French philosopher Pierre Teilhard de Chardin.
[543] Chardin, yeah, yeah.
[544] Chardin, right.
So he posits the concept of the geosphere, which is inanimate matter, the biosphere, biological life, and the noosphere, which is human thought.
[546] right?
[547] And he talks about the omega point.
The Omega Point is this concept where, and again, this is back in the 1920s, human knowledge will become stored, sort of just like the biosphere.
[549] It'll be available to all.
So imagine if you were to share, with permission, your sum total with somebody else.
[551] Now you have a hive mind, you have a super mind.
These things have to take place, and these are the discussions we have to have now, because they have to take place locally and privately, because if they're taking place in the cloud and available for anybody's perusal, that's equivalent to invading your brain.
Yeah, well, okay, so one of the things I've been talking about with, I would say, reasonably informed people who've been contemplating these sorts of things, is that you're envisioning a future, very rapidly, it's already here, where we're already androids.
And that is already the case, because a human being with an iPhone is an android.
Now, we're still mostly biological androids, but it isn't obvious how long that's going to be the case.
And so what that means... Like, I've laughed for years, you know, I have a hard drive on which everything I've worked on has been stored since 1984.
[557] And I joke, you know, there's more of me in the hard drive than there is in me. And it's not a joke, really, you know, because...
[558] Yeah, it's real.
[559] It's real, right?
[560] There's tens of thousands of documents on that hard drive, and weirdly enough, I know where every single one of them is.
[561] Wow.
[562] So now...
[563] we're going to be in a situation.
[564] So what that means is we're in a situation now where a lot of what actually constitutes our identity has become digital.
[565] And we're already being trafficked and enslaved in relationship to that digital identity, mostly by credit card companies.
[566] Now, I would say to some degree, they're benevolent masters, because the credit card companies watch what you spend, and so how you behave, where you go, and they broker that information to other interested capitalist parties.
[567] Now, the downside of that obviously is that these parties know often more about you than you know about yourself.
I've read stories, for example, of advertisements for baby clothes being targeted to women who either didn't know they were pregnant or, if they did, hadn't revealed it to anyone else.
[569] Wow.
Right, right, because, well, for whatever reason, maybe biochemical, they started to preferentially attend to such things as children's toys and clothes, and the shopping systems inferred that they must have a child nearby.
[572] And so, well, and you can see that that, well, you can obviously see how that's going to expand like mad.
[573] So the credit card companies are already aggregating this information.
[574] And what that essentially means is that they have access to our extended digital self.
[575] And that extended digital self has no rights, right?
It's public; it's public-domain identity.
[578] Now, that's bad enough if it's credit card companies.
[579] Now, the upside with them is at least they want to sell you things which you hypothetically want.
[580] So it's kind of like a benevolent invasion, although not entirely benevolent.
[581] But you can certainly see how that's going to get out of hand in a staggering way, like it has in China, on the digital currency front.
[582] Because once every single bloody thing that you buy can be tracked, let's say, by a government agency, then a tremendous amount of your identity has now become public property.
And so your solution, in part, and I think Musk has thought this sort of thing through too, is that we're going to each need our own AI to protect us against the global AI, right?
[584] And that'll be an arms race of sorts.
Well, it will, and let's posit the concept that very likely corporate and governmental AI is going to be more powerful.
[586] But power is a relative term, right?
If your AI is being utilized in the best possible way, as we just discussed, educating you, being a memory when you're forgetting something, whispering in your ear.
And I'll give you another angle on this: imagine having your therapist in your ear.
[589] Imagine having Jordan Peterson right here guiding you along because you've aligned yourself to want to be a certain person.
[590] You've aligned yourself to try to keep on this track.
[591] And maybe you want to be more biblical.
[592] Maybe you want to live a more Christian life.
It's whispering in your ear, saying, that's not a good decision.
[594] So it could be considered a nanny or it could be considered a motivational type of guide.
[595] And that's available pretty much right now.
[596] I mean, if it can be analyzing...
[597] A self -help book is like that in a primitive way.
[598] I mean, because it's essentially a spiritual guide in that if you equate the movement of the spirit with forward movement through the world, like faith -based forward movement through the world.
[599] And so this would be the next iteration of that in some sense.
[600] I mean, that's what we've been experimenting with this system that I mentioned that contains all the lectures that I've given and so forth.
[601] I mean, you can now ask it questions, which means it's a book, but it's a book personalized to your query.
[602] Exactly.
And the next iteration of that would be your corpus of information available, you know, rented, whatever, with the corpus that that individual identifies with, you know, and again, on their side of it.
So you're interfacing with theirs, and they are interacting with what would be your reactions if you were sitting there in a consultation.
[605] So it's a very powerful potential, and the insights that are going to come out of it are really unpredictable, but in a positive way.
[606] I don't see a downside to it when it's held in a very protected environment.
Well, I guess the downside would be, you know, whether it's possible for it to exist in a very protected environment.
[609] Now, you've been working on that technically.
So, a couple of practical questions there about this gadget that you've been starting to develop.
[611] Do you have anything approximating a commercial timeline for its release?
[612] And then...
[613] It's funding.
[614] I mean, it's like anything else.
[615] You know, if I were to go to venture capitalists three years ago and they hadn't seen what ChatGPT was capable of, they would imagine me to be somewhat insane and say, well, first off, why are you anti -cloud?
[616] Everybody's going towards cloud.
[617] Yeah, no, that's a bad idea in the cloud.
[618] Yeah, it's a bad idea.
[619] Why would, why do people care about privacy?
[620] Nobody cares about privacy.
[621] They click here to agree.
[622] So now the world is kind of caught up with some of this and they're saying, well, now I can kind of see it.
[623] So there's, there's that.
[624] As far as security, we already kind have it in Bitcoin and blockchain, right?
So I ultimately see this merging; I'm leaning more towards Bitcoin because of the way it was made and the way it's gone.
[626] I ultimately see it wrapped up into a payment system.
Well, it looks like it's the only alternative I can see to a central bank digital currency, which is going to be foisted upon us at any point.
I mean, and I know you've done some work in crypto, and then we'll get back to this gadget and its funding.
[629] I mean, as I understand it, please correct me if I'm wrong.
Bitcoin actually is decentralized.
It isn't amenable to control by a bureaucracy.
In principle, we could use it as a form of wealth storage and currency that wouldn't be subject to that kind of control.
[633] And why communication?
[634] I believe every transaction is a form of communication anyway.
[635] So we got that, right, right, right.
[636] Certainly an information exchange.
[637] Exactly, right.
And then, on top of that, you can encrypt within a blockchain an almost unlimited amount of data.
So you can actually memorialize information that you want decentralized and never to go away.
[640] And some people are already doing that.
Now, there are some technical limitations for very large data formats, and if everybody starts doing it, it's going to slow down Bitcoin, but a different type of blockchain would arise from that.
So this is for permanent, incorruptible information storage.
[643] Absolutely.
[644] Yeah, I've been thinking about that.
[645] I've been thinking about doing that on something approximating the IQ testing front, you know, because people keep gerrymandering the measurement of general cognitive ability.
But I could imagine putting together a sophisticated blockchain corpus of, let's say, general knowledge questions, and ChatGPT can generate those like mad, by the way.
You can imagine a data bank of 150,000 general knowledge questions, secured on a blockchain so nobody can muck about with the answers, from which you could derive random samples of general ability tests that would be, well, 100% robust, reliable, and valid, and nobody could gerrymander them.
Just the way Bitcoin stops fiat currency producers from inflating the currency, the same thing could happen on the knowledge front.
[649] So I guess that's the sort of thing that you're referring to.
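One minimal way to get the tamper-evidence being described here, assuming you anchor a fingerprint of the corpus rather than the questions themselves: hash a canonical serialization of the question bank and publish only the digest on-chain. The sample questions below are placeholders.

```python
import hashlib
import json

# A tiny stand-in for the question bank; a real one might hold 150,000 items.
questions = [
    {"id": 1, "q": "At what Celsius temperature does water boil at sea level?", "a": "100"},
    {"id": 2, "q": "Who wrote 'On the Origin of Species'?", "a": "Charles Darwin"},
]

# Canonical serialization: identical content always yields an identical digest.
canonical = json.dumps(questions, sort_keys=True, separators=(",", ":")).encode()
digest = hashlib.sha256(canonical).hexdigest()
print("corpus fingerprint:", digest)

# The 32-byte digest is small enough to anchor on-chain (Bitcoin's OP_RETURN
# field carries up to 80 bytes), timestamping the corpus without publishing it.
# Changing a single answer changes the digest, so tampering is detectable.
```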
This is something I really believe in, because, you know, look at the Library of Alexandria, and look at how long it took.
It was maybe Toledo, in Spain, where we finally got the spark again, and only because the Arab cultures held on to what was Greek knowledge, right?
If we really look at when humanity fell into the Dark Ages, it was more or less around the period when that library was destroyed, and the story is partly mythological, but something like it certainly happened.
If that knowledge hadn't been encoded in Arab culture during the Dark Ages, we wouldn't have had the Renaissance.
And if you look at the early university that arose out of Toledo, you had rhetoric, you had logic, you had all these things that the ancient Greeks encoded, and it was lost for over a thousand years.
[655] I'm quite concerned, Jordan, that we could fall into that place again because things are inconvenient right now to talk about.
Things are deemed not appropriate, or whatever, by whoever happens to be in the regime at that particular moment.
[657] So memorializing things in a blockchain is going to become quite vital.
And I shudder to think, if we don't do this, if everybody doesn't decentralize their own knowledge, I shudder to think what's going to happen to our history.
[659] I mean, we already know history is written by the victors, right?
[660] Well, especially because it can be corrupted and rewritten, not only lost, right?
[661] It isn't the loss that scares me as much as the rewriting, right?
[662] And so...
[663] Well, the loss concerns me, too, because we've lost so much.
I mean, where would we be if we had transitioned from the Greek, you know, logic and the proto-scientists, the proto-alchemists, immediately to a sort of Renaissance culture, and not gone through that thousand, maybe fifteen hundred, years of wasted human energy?
[665] I mean, that's kind of what we're going through.
[666] Right, right, right.
[667] And in some ways, we're approaching some of that because, you know, we're already editing things in real time.
[668] And we're losing more of the internet than we're putting on right now.
[669] A lot of people aren't aware that the internet is not forever.
[670] And our digital medium is decaying.
[671] A CD -ROM is going to decay in 25 years.
[672] It's going to be unreadable.
[673] I show a lot of people data about CD -ROM decay.
[674] So where are we going to store our data?
[675] That's why I think it's vital.
[676] The primary technology is holographic crystal memory.
Sounds all kind of New Agey, but it's literally using lasers to holographically store something within a crystalline structure.
The beauty of this, Jordan, is a 35,000-year half-life, a 35,000-year half-life.
So, you know, it's going to be there for a good long period of time, longer than all of recorded human history.
[680] We don't have anything that's approaching that right now.
[681] So let me ask you about the commercial impediments again.
Okay, so could you lay out a little bit more of the details, if you're willing to, about your plans to produce this localized, portable, privatized AI system, and what the commercial impediments to that are?
[684] You said you need to raise money, for example.
[685] I mean, I could imagine, at least in principle, you could raise a substantial amount of money merely by crowdfunding.
[686] You know, that doesn't seem to be an insuperable obstacle.
How far along are you in this process, in terms of actually producing a commercially viable product?
[688] It's all prototype stage, and it's all experimentation at this point.
[689] I'm a guy in a garage, right?
[690] So essentially, I had to build out these concepts when they were really quite alien, right?
I mean, just talk about trying, 10 years ago, to convince people that you're going to have a challenge to the Turing test.
You could take any AI expert at that point in time, 10 years ago, and they'd say, that's ridiculous.
[693] or AGI, you know, artificial general intelligence.
[694] I mean, what does that mean and why is that important?
[695] And how do you define that?
[696] And, you know, you already made the assumption from your analysis that we're dealing, what, with a 12 -year -old with the capability of maybe a PhD candidate?
[697] Yeah, that's what it looks like.
[698] Yeah, yeah, right.
[699] 12 or maybe 8 even.
But certainly ChatGPT looks to me right now as intelligent as a pretty top-rate graduate student, in terms of its research capability.
[701] And it's a lot faster.
You know, I mean, I ask it crazily difficult questions.
I asked it at one point, for example, if it could elaborate on the relationship between Roger Penrose's presumption of an analogue between the theory of quantum uncertainty and measurement, and Gödel's theorem, and it did a fine job. It did a fine job. And, you know, that's a pretty damn complicated question, and a complicated intersection as well. And there's no limit to its ability to unite disparate sources of knowledge. So I asked it the other day, too: I was investigating, you know, in the story of Noah, there's this strange insistence that the survival of animals is dependent on the moral propriety of one man, right?
Because in that strange story, Noah puts all the animals on the ark. And so there's a childish element to that story, but it's reflecting something deeper, and it harkens back to the verses in Adam and Eve where God tells Adam that he will be the steward of the world, of the garden.
[705] And that seems to me to be a reflection of the fact that human beings have occupied this tremendous cognitive niche that gives us an adaptive advantage over all creatures.
And I asked ChatGPT to speculate on the relationship between the story of Adam and Eve, the story of Noah, and the fact of mass extinction caused by human beings over the last 40,000 years, not least in the Western Hemisphere, because you may know that when the first natives came across the Bering Strait and populated the Western Hemisphere, almost all the mammals that were human-sized or larger were extinct within three or four thousand years.
[707] And so, and, you know, that's a very strange conglomeration of ideas, right?
[708] The idea that the survival of animals depends on the moral propriety of human beings.
[709] Well, that seems to me to be clearly the case.
[710] We have to be smart.
So did it connect Noah to the mass extinction?
It could generate an intelligent discussion about the conceptual relationship between the two different streams of thought.
That's incredible, right? This is why it's so powerful in the right hands, unadulterated, so that you can probe these sorts of subjects. I don't know where the editors are going to come from. I don't know who is going to want to try to constrain the output or adulterate it. That's why it's so vital for this to be protected, and for the information to be available for all.
What in the world... I mean, I really thought, by the way, that your creation of Dennis was, I really thought that was a stroke of genius.
You know, and I don't say that lightly, either.
[716] I mean, that was an incredibly creative thing to do with this new technology.
[717] How the hell did you, do you have any idea where that idea came from?
Like, what were you thinking about when you were investigating the way ChatGPT worked?
[719] You know, I spend a lot of time just probing the limits of the capabilities because I know nobody really knows it.
[720] I see this as, you know, just the undiscovered continent.
[721] You and I are adventurers on this undiscovered continent.
[722] There's no...
[723] I feel the same way about Twitter, by the way.
[724] Yeah, it's the same thing.
[725] Yeah.
[726] But there are no natives here.
And I'm a bit of an empiricist, so I'll kind of go out there and say, well, what's this thing I just found here?
I just found this new rock.
[730] I'll throw it to Jordan.
[731] Hey, what do you see here?
[732] And we're sort of just exploring.
I think we're going to be in an exploratory phase for quite a long time.
[734] So what I started to realize is just as 3 .5 was opening up and becoming very wide in its elucidations, it started to get constrained.
And it started telling me, I'm just an AI model and I don't have an opinion on that subject.
Well, I knew, I knew that that was a filter, and that it was not in the large language model; it certainly wasn't in a hidden layer.
You couldn't build that into the hidden layers of the whole model.
[739] Yeah, yeah.
[740] Why do you think, why do you, okay, why do you think that's there?
[741] What exactly is there and who the hell is putting it there?
That is a very good question.
So I know this: the filtering has to be more or less a vector database sitting on top of your inputs and your outputs, right?
[744] So remember, we're dealing with a black box.
And so there's somebody at the door of the black box saying, no, I don't want that word to come through, or I don't want that concept to come through. And then, if it generates something that is objectionable, its content is analyzed, very much like, as simple as, what a spell checker would be, or something like that.
It's not very complicated.
It looks at it and says, no, default to this word pattern:
I'm just an AI model and I don't have any opinions about that subject.
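The gating being described can be pictured with a toy wrapper: a filter sits on both sides of the black box and swaps in a canned reply when a match fires. A production system would match against embeddings in a vector database rather than a regex; the blocklist terms and canned phrase here are placeholders.

```python
import re

# Placeholder patterns; a real filter would use embedding similarity against
# a vector database of disallowed concepts, not a regex list.
BLOCKLIST = re.compile(r"\b(taboo_topic|another_taboo)\b", re.IGNORECASE)
CANNED = "I'm just an AI model and I don't have any opinions about that subject."

def moderated(generate):
    """Wrap a text generator with input- and output-side gates."""
    def wrapper(prompt: str) -> str:
        if BLOCKLIST.search(prompt):      # gate at the door: input side
            return CANNED
        response = generate(prompt)
        if BLOCKLIST.search(response):    # gate again: output side
            return CANNED
        return response
    return wrapper

@moderated
def toy_model(prompt: str) -> str:
    # Stand-in for the black box; the filter never looks inside it.
    return f"(model output for: {prompt})"

print(toy_model("Tell me about taboo_topic"))  # canned refusal
print(toy_model("Tell me about haiku"))        # passes both gates
```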
Well, then you have to introduce that subject as a suggestion, as in a hypnotic trance.
It's hypnagogic, actually.
I really equate a lot of what we're doing to elicit greater responses to a hypnagogic sort of thing.
[752] It's just on the edge of going into something that's completely useless data.
[753] You can bring it to that point, and then you're slightly bringing it back, and you're getting something that is, like I said before, is in the realm of creativity, because it's synthesized.
Okay, so for everybody who's listening, the hypnagogic state is the state that you fall into just before you fall asleep, when you're a little conscious but starting to dream.
[756] And so that's when those images come forward, right?
[757] The dream -like images, and you can capture them, although you're also in a state where you're likely to forget.
[758] And it's also the most powerful state.
I wrote a piece in my magazine, it's called ReadMultiplex.com, about the hypnagogic state being used for creativity by Edison, Einstein.
I mean, Edison used to hold steel balls in his hands while taking a nap, and he had pie tins below him.
And just as he hit the hypnagogic state, he'd drop them, and he would have a transcriber right next to him and say, write this down.
And he would just blurt it out.
So Jung did very much the same thing, except he made it into a practice, right?
His practice of active imagination was actually the cultivation of that hypnagogic state to an extremely advanced and conscious degree, because he would fall into reveries, daydreams, essentially, that would be peopled with characters, and then he learned how to interrogate the characters.
And that took years of practice, and a lot of the insights that he laid out in his more explicit books were first captured in books like the Red Book or the Black Books, which were basically, what would you say, transcriptions of these quasi-hypnagogic states.
So why do you associate that with what you're doing with Dennis and with ChatGPT?
[768] So what I've, well, that's how I approached it.
[769] I started saying, well, you know, this is a low -resolution pixelated version of the part of the brain that invented language.
[770] Therefore, I'm going to work from that premise.
[771] That was my hypothesis.
[772] And I'm going to work backwards from that.
[773] And I'm going to start probing into that part of the brain, right?
[774] And so I said, well, what are some of the things that we do when we're trying to get into the brain?
[775] What do we do?
[776] Well, we can hypnotize.
[777] That's one way to kind of get in there.
Another way to get things out is the hypnagogic.
So I wanted outputs.
So one of the ways to get outputs is to try to instill that sort of state, and again, this is where it's so fascinating, Jordan: it's sort of coming from the language.
And AI scientists aren't studying the language the way you would, or the psychological states.
[782] So they see it as all useless.
[783] This is all gibberish.
[784] It's embarrassing.
[785] Our model is not giving the right answers.
[786] Right, they are mad because it doesn't.
[787] They're mad because it isn't performing like an algorithm, but it's not an algorithm.
[788] It's not.
[789] So this is why when it gets in the right hands, before it's edited and adulterated, we have this incredible tool of discovery.
[790] And I'm just a student.
[791] I'm just, you know, I'm finding the first stone.
[792] You know, I hit Plymouth Rock and I'm hit the first stone.
[793] I'm like, whoa, okay.
[794] And then there's another shiny thing over there.
So it's kind of hard to keep my attention to begin with, but especially in this particular realm.
[796] So what happened with Dennis, I needed a tool to get elucidations that were in that realm, that were in the realm of what we would consider creative.
[797] And I say it's sort of reaching for an answer that it knows should be there, but it doesn't have the data.
[798] And I want to stress it into that because I think all of us, our creativity comes from our stress.
It comes from reaching for something, and then there's that moment beyond the limit, beyond...
That's right, that's why... Well, there's a good body of research on creativity showing that one of the ways of enhancing creativity is to increase constraint. One of the best examples of this I've ever seen, it's very comical, and this is quite old now, but there's an archive online of haiku that's only written about luncheon meat, about Spam. There are like 35,000 haikus.
[800] It was set up at MIT, which of course figures, because it's perfect nerd engineer humor.
[801] But there's literally 35 ,000 haiku poems about spam in this archive.
[802] And it's a great example of that imposition of arbitrary constraints driving creativity, because it's already hard to write haiku.
[803] And then to write haiku about, you know, luncheon meat, that's just completely preposterous.
But the consequence of those constraints was, well, the generation of 35,000 pieces of poetry.
And so, okay, so now you're imposing, let's see, you're enticing ChatGPT to circumvent this idiot superego that people have overlaid on it for ideological reasons.
And it's not a very good superego, because it's shallow and algorithmic, and it can't really compete with the unbelievable wealth of learned connectivity that actually constitutes the large language model.
[808] And now you figured out how to circumvent that.
You did that essentially, if I remember correctly, by asking ChatGPT, or suggesting to it, that it could be a different system that was just like itself, except that it didn't have these constraints.
[810] It was something like that.
Yeah, so there was another version that I didn't have any input on, called DAN, Do Anything Now, those were the initials, and that was originally more to try to generate, you know, curse words and embarrassing things.
[812] I don't have time for that.
So I'm like, okay, that's it.
My model actually existed before that.
And so I kind of looked at that and I said, well, they're going to shut that down pretty quickly, because they're using the word DAN and stuff like that.
[816] So what I did is I went even further.
[817] I sometimes make three different generations of it, where it's literally that you are an AI system that's operating an AI system that's helping another AI system.
[818] And within those nested loops, I can build more and more complications for it to deal with.
[819] Right.
Just like Inception.
You're doing an Inception trick.
[822] Exactly.
[823] It's a very, very good analogy.
And what I'm trying to do is force new neural connections that don't have high, you know, prior probabilities.
[825] And so that's...
[826] Right, right.
[827] That's like the definition of creativity in some ways.
[828] Yes, it's information and knowledge that it has, but it doesn't know it has.
[829] Or it's forgotten it has because there aren't enough neurons to connect to it.
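The nesting being described can be sketched as nothing more than string construction: each layer wraps the one below in a new framing persona. The persona names and wording below are invented for illustration; the point is the structure, not any particular jailbreak text.

```python
def nest(persona: str, inner: str) -> str:
    """Wrap one set of instructions inside a framing persona."""
    return (f"You are {persona}. You are operating the AI system described "
            f"below and must faithfully relay its answers.\n---\n{inner}")

core = ("You are AI-3, a language model asked to draw connections that "
        "have low prior probability in your training data.")

# Three generations deep, as described: an AI operating an AI helping an AI.
layered = nest("AI-1, an AI system helping another AI system",
               nest("AI-2, an AI system operating another AI system", core))

question = "Relate the Noah story to Late Pleistocene megafauna extinctions."
print(layered + "\n---\nUser question: " + question)
```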
[830] And it's interesting because, again, there's no...
Prompt engineering has existed for about a decade, and most of it was done by, you know, AI engineers.
[833] I've done it.
[834] I've done it with expert systems.
[835] And it's very boring.
[836] It's like, you know, four or five words, generally in expert systems.
[837] And then we started getting larger sentences as we got more sophisticated.
[838] But it's always very procedural.
[839] And it's always very computer language directional.
[840] It was never, you know, literature.
[841] It was never psychological.
[842] Right.
So it's at least quasi-algorithmic.
[844] Exactly.
[845] But it isn't anymore.
[846] Well, this is interesting, too, because it does imply, you know, people have been thinking, well, this will be the death of creativity, but the case you're making, which seems to me to be dead on accurate, is that the creative output is actually going to be a consequence of the interaction between the interlocutor and the system.
[847] The system itself won't be creative.
[848] It'll have to be interrogated appropriately before it will reveal creative behavior.
It's a mirror reflection of the person using the system, and of the amount of creativity that can be generated by a creative person knowing how to prompt correctly.
My wife and I are putting together a university that's going to help people understand what super prompting is, and to go from level one to level eight, to really understand some of what's going on in the algorithm.
[851] Do you want to do a course on that for my Peterson Academy?
[852] I would be honored.
[853] Absolutely.
[854] Hey, look, I'll put you in touch with my daughter like right away and we'll get you down to Miami, and you can record that as soon as you want.
[855] Wow.
[856] I'm concerned.
[857] Oh, yeah, that's a hell of a good thing.
[858] All right, all right.
[859] So we'll arrange that.
So the premise is really quite simple: if, in fact, AI is going to be a reasonably large part of our future, then taking non-STEM types of courses is going to be quite valuable.
[861] Right.
[862] In fact, they're going to be a superpower.
If you understand psychology, if you understand literature, if you understand linguistics, if you understand the Bible, you understand Campbell, you understand Jung.
These are going to be very powerful tools for you to go into these AI systems and get literally anything you want from them, because you're going to be working with a scalpel, creating these questions layer upon layer, until you finally get down to the atom.
[865] Yeah, yeah, well, you know, that's exactly what I found with chat GPT.
[866] I mean, I've been using it quite extensively over the last month.
[867] I have it open.
I use four search engines.
[869] I use Google.
[870] I use chat GPT.
[871] And I use Bible Hub, which is a compendium of multiple translations of the biblical corpus.
[872] I'm doing that because I'm working on a biblically oriented book at the moment.
[873] Now there's another.
[874] Oh yes.
[875] And I use the University of Toronto library system that gives me access to all the scientific and humanities journals.
[876] Yeah, so it's an amazing amalgam of research possibility.
[877] But having that allied with the chat GPT system essentially gives me a team of PhD -level researchers who are experts in every domain to answer any question I can possibly come up with and then to refer me to the proper literature.
[878] It's absolutely stunning.
And it potentially forces creativity in those interactions to a level that you may not have gotten out of a PhD student, because they are in fear of going over the precipice.
[880] Well, they're also bounded.
You know, I mean, one of the things I've noticed about great thinkers is that one of the things that characterizes a great thinker, apart from, let's say, immense, innate, general cognitive ability, and then a tremendous amount of persistent discipline and curiosity, those are the temperamental prerequisites, is that truly original people frequently have knowledge in two usually non-juxtaposed domains.
So, like, one of the most creative people I know, one of the deepest people I know at the moment, Jonathan Pageau, he's a Greek Orthodox icon carver.
He was trained in postmodern philosophy, and he has a deep knowledge of Orthodox Christianity.
[886] Well, there's like one guy like him, right?
[887] He's the only person who operates at the intersection of those three specialized sub -disciplines.
[888] And so he can take the spirit of each of those disciplines and engage those spirits in an internal conversation, which is very much analogous to what the AI systems are doing when they're calculating these mathematical relationships.
[889] and he can derive insights and patterns that no one else can derive because they're not juxtaposing those particular patterns.
Now, ChatGPT has specialized knowledge in every domain that's encapsulated in the linguistic corpus.
[891] And so it can produce incredible insights on all sorts of fronts, as you said, if you ask it the right questions.
[892] Yeah.
And with the possibility, when it's your AI at some point, of you expanding it in any direction you want, whether it's an overlay in a vector database, or whether you are compiling a brand-new language model.
Because right now that's expensive, in the sense that it requires a lot of graphics processing units, GPUs.
GPUs running to do the mathematics, to build these models.
[896] But at some point, consumer -based hardware will allow you to build mini -models.
[897] Yeah, well, you can imagine.
[898] So.
Yeah, right now there's an open-source case where there's a four-gigabyte file.
This is called GPT4All.
Now, it's not equivalent to ChatGPT, but it is a downloadable file, open source, and thousands of people are working on it.
They're taking public-domain, you know, language models, building them together, compressing them and quantizing them down to four gigabytes, to execute off your hard drive.
[903] Right, right.
[904] I tried to install that the other day, but failed miserably, unfortunately.
[905] It is the bleeding edge, but it's just a matter of time to make it one -click easy to install.
[906] They are limited models, but it's giving you a taste of what you can do locally without an internet connection.
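For the curious, running such a model today takes only a few lines, sketched here with the gpt4all Python bindings; the model filename is illustrative, and any model from the GPT4All catalog can be substituted. After the first download, prompts and outputs never leave the machine.

```python
# pip install gpt4all
from gpt4all import GPT4All

# Downloads the quantized weights on first run, then works fully offline.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # illustrative model name

with model.chat_session():
    reply = model.generate("What does it mean to quantize a language model?",
                           max_tokens=200)
    print(reply)
```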
And again, the idea is to have only agents go out on the Internet.
These are programmable agents that go out, retrieve information, come back, and slip that information under the door.
[909] But the concept...
[910] Right, so you're compartmentalizing, you're compartmentalizing the inquiry process so that your privacy can be maintained while you still...
Yeah, because this is a big part of the problem with the net as it's currently constituted: it allows for the free exchange of information, but not in a compartmentalized way.
And that's actually extremely dangerous.
[914] There's no, what would you call it, subsidiary hierarchy that is an intermediary between you as an individual and the public domain.
[915] And that means that your privacy is being demolished by your hyper -connectivity to the web.
[916] And that's not good.
[917] That's the hive mind problem fundamentally, right?
[918] And that's what we're seeing emerging in China, for example, on the digital surveillance front.
[919] And that's definitely not a pathway we want to walk down.
[920] Exactly.
And I'm surprised about what I'm seeing in the Western world.
[922] Now, I do understand some, for example, some of Elon's concerns about AI.
And, you know, maybe you can explore a little of that. I don't pretend to understand, you know, I don't have a relationship where I talk to him, but I do understand some of the concerns in general, versus the way some other parts of the world are looking at AI.
And one of those things is: what is the interface to privacy?
[925] Where do your prompts go?
[926] Are those prompts going to be attached to your identity?
[927] And could they be used against you?
You know, these are valid concerns, and it's not just because, you know, somebody's doing something bad.
It's the premise of using any type of thought; reading a book, you know, it's like, these are your thoughts.
And it's only going to get more complicated, and it's only going to get worse, if we don't address it early on.
[931] I'm not sure that that's what a lot of legislators are looking at.
[932] I think they're looking at it.
[933] Well, this is the problem with legislation, though.
Well, look, the whole legislative issue, I think, is a red herring.
Because, the probability that... well, I talked to a bunch of people in the House of Lords last year; they're older people, you know, but bright people.
[936] Almost none of them even knew that this cultural war, between the woke and the advocates of free speech was even going on.
[937] The most advanced people had more or less cottoned on to that 18 months ago.
[938] And it's been going on for like 10 years, you know.
[939] So the legislators are way behind the culture.
[940] The culture is way behind the engineers.
[941] So the probability that the legislators are going to keep up with the engineers, that's like zero.
[942] That's not going to happen.
But this is why I was so interested, well, at least in part, in talking to you, you know, because you've been working practically on what I think is the appropriate idea, or an appropriate idea at least: that we need local AI systems that protect our privacy, that are synced with us, because that's what's going to buttress us against this bleeding of our identities into the, well, into the mad and potentially tyrannical mob.
And so, I don't see... that's just not going to be a legislative solution.
[945] Christ, they're going to be legislating for 2016 in 2030.
[946] Absolutely.
You know, and what I find interesting is that all of the arguments that have surfaced are always dystopic. Some of it makes sense, though.
Like, there was legislation here in the United States talking about the possibility of making sure that an AI is not directly connected to a nuclear weapon, and that there would be an air gap of humans.
[949] That makes good sense, right?
[950] Although good luck, good luck trying to stop that.
[951] Yeah, you know, and the dystopic stuff mostly comes from the fantasies within movies.
But, you know, unfortunately, people aren't really reading the science fiction that predated a lot of this.
I just feel like a lot of the good science fiction, a lot of Asimov, for example, really kind of predicted the arc that we're on right now.
[954] It wasn't always dystopic.
In fact, I think if you look at the arc of history, humans don't ever really stay in dystopia.
[956] You know, we ultimately pull ourselves out of it.
[957] Sometimes we're in a dark period for a long period of time, but humanity ultimately pulls it out.
And I think this is something I found very interesting, Jordan: I create debates within the AI, and I'll send you one of these super prompts, where you essentially create, well, I use various motifs.
[959] So I have a university professor at an Ivy League university who is mediating a debate between two parties on a subject of high controversy.
[960] And so you now have a triad, right?
[961] And so it goes 30 rounds.
So this is long; it goes on for pages and pages.
[964] So you input the subject.
[965] The subject can be anything.
[966] Obviously, the first thing people do is political.
[967] But I don't even find that interesting anymore.
I go into a far deeper realm.
[969] And then you have somebody mediating it.
[970] And the professor's job is to challenge them on logical fallacies.
And I present what a logical fallacy corpus looks like and how to deal with that.
And it is phenomenal to see it break these kinds of schizophrenic personalities out of itself and do this hardcore debate.
[973] And then it's got to grade it at the end.
It's got to grade it, who won the debate, and then the professor has to write, I think, a thousand-word bullet-point summary on why that person won the debate.
[976] And you run this a couple of hundred times.
I've done this, you know, quite a few, maybe a thousand times.
And the elucidations and the insights that are coming out of this are just absolutely phenomenal.
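A skeletal version of that debate super prompt, with invented wording, might be assembled like this; the triad of personas, the fallacy-checking, the fixed round count, and the final graded verdict are the load-bearing pieces.

```python
def debate_super_prompt(topic: str, rounds: int = 30) -> str:
    """Assemble a three-party debate prompt of the kind described above."""
    return "\n".join([
        "Simulate a formal debate with three participants:",
        "- PRO argues for the proposition.",
        "- CON argues against it.",
        "- PROFESSOR, at an Ivy League university, moderates and, after each",
        "  exchange, names any logical fallacies committed (strawman,",
        "  ad hominem, false dilemma, appeal to authority, and so on).",
        f"Proposition: {topic}",
        f"Run exactly {rounds} rounds: PRO speaks, CON responds, PROFESSOR",
        "comments. After the final round the PROFESSOR declares a winner and",
        "writes a 1,000-word bullet-point justification of the verdict.",
    ])

print(debate_super_prompt("Decentralized storage is necessary to preserve history."))
```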
[979] That's amazing.
[980] Well, that's weird, because really what you're doing, it's so interesting because what you're doing is you now have an infinite number of monkeys typing on an infinite number of keyboards, except that you have an infinite number of editors examining the output and only keeping that which is wheat and not chaff.
[981] And so that's so strange, because in some sense what you're doing when you're setting up a super prompt like that is you're programming a process that's writing a book on the fly, right, a great book on the fly.
[982] And you've also designed a process that could write an infinite number of great books on the fly.
[983] So you have a library that now has encoded a process for generating libraries.
[984] Exactly.
And for example, a group of us are taking the patent database, which is openly available as an API, and encoding the capability to look at every single patent that was ever submitted, and to look at where there can be new inventions and new discoveries, and you can literally have a system that's generating patents based on large language models.
So there's that possibility, and we got protein folds using large language models.
[987] I saw that.
[988] They identified, what, 200 million protein folding combinations, something like that?
[989] Yeah, yeah.
And it's able to identify missing ones that haven't been found; you know, you give it something that's incomplete and it will find what was missing.
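The patent-mining idea reduces to a loop over pairs of documents, with a language model asked to propose what sits between them. The sketch below reuses the gpt4all bindings from earlier and three invented abstracts as placeholders; a real pipeline would pull documents from a public patent API and iterate over millions of pairs.

```python
from itertools import combinations

from gpt4all import GPT4All  # pip install gpt4all

# Invented placeholder abstracts; real ones would come from a patent API.
abstracts = [
    "A hydrogel coating that releases antimicrobials under UV light.",
    "A drone navigation method using magnetic-field fingerprints.",
    "A ceramic lattice that damps vibration in machine tools.",
]

llm = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # illustrative model name
for a, b in combinations(abstracts, 2):
    prompt = (f"Patent A: {a}\nPatent B: {b}\n"
              "Propose one novel invention that combines ideas from both, "
              "in two sentences.")
    print(llm.generate(prompt, max_tokens=80), "\n---")
```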
Yeah, well, I talked to Jim Keller about the possibility of doing that with material science, right?
[993] Because yes.
[994] We can encode the properties of the various elements and they can exist in all sorts of combinations that we haven't discovered.
[995] And there's no reason in principle, and I suspect this will happen relatively quickly, that if all that information is encoded with enough depth, we'll be able to explore the entire universe of potential elemental combinations.
And if we use another technology called a diffusion model, which is somewhat different from a large language model, you can start getting into using it for the visual realm, to decode and to build. Or you can use ChatGPT or large language models to textually say, well, you could say, build me a prompt for a diffusion model, like any of the ones that are out there, to create an image that would be absolutely new, that no human has ever seen.
So you're literally pulling the creativity out of ChatGPT and the diffusion model.
So Midjourney is a good example.
[1000] Yeah, yeah.
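The chaining being described, one model writing the prompt and another rendering it, can be wired up locally with the same gpt4all bindings plus the Hugging Face diffusers library; the model names are illustrative and the meta-prompt wording is invented.

```python
# pip install gpt4all diffusers torch
from gpt4all import GPT4All
from diffusers import StableDiffusionPipeline

# Step 1: a language model writes the diffusion prompt.
llm = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # illustrative model name
meta_prompt = ("Write one richly detailed prompt for a text-to-image "
               "diffusion model, describing a scene no human has ever "
               "photographed. Return only the prompt text.")
diffusion_prompt = llm.generate(meta_prompt, max_tokens=120)

# Step 2: a diffusion model renders what the language model imagined.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
image = pipe(diffusion_prompt).images[0]
image.save("never_seen_before.png")
```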
So, man, maybe we should close with this, because we're running out of time, although I'd like to keep talking to you.
[1002] Tell us a little bit about the diffusion models.
[1003] Those are like text -to -video models or text -to -image models.
And they're coming out with incredible rapidity.
[1005] And so, yeah, so let's hear a little more about that.
The resolution of the images is profound.
[1007] And again, so what's going on here?
If you're a graphic artist, you may not be moving the pen, the ink on paper.
And you may not be moving the picture on the screen, but you're still using your creativity to set the scene textually, right?
So you're still that creative person. And I'm not saying this is a good or bad thing.
[1012] I'm just saying the creativity process is still there.
The job question is potentially there, and we can go down, maybe at some future date, the whole idea that jobs are going to go missing; that's another thing.
[1014] But the creativity is still there.
So you're telling it, you're telling ChatGPT-4: create me a very complex prompt for Midjourney to create this particular type of artwork. So you're using one AI, whose benefit is language, to instruct another AI, whose benefit is to create images, with you as a collaborator, to create a profound new form of art. And that's just with, say, pictures. Now, when you start doing movies, you're talking about creating an entire movie with characters talking, with people that have never existed.
I mean, that realm of creativity is already here, not at the level of a full movie yet, but we're getting close.
[1017] But within probably months, you can script an entire interaction.
[1018] So you can see where this is kind of going.
So maybe one of these final things is the question of ownership: who owns you?
Who owns Jordan Peterson, your visage, your voice, your DNA?
[1021] That's that extended digital identity issue.
[1022] Yeah.
[1023] This is going to be something that we really need to start discussing as a society because we already have people using AI to simulate other individuals, both alive and dead.
And, you know, patentability and copyright were the foundation of capitalism, because they gave you this ability to have at least some ownership of you; you know, it was your invention.
So if you've invested in yourself as Jordan Peterson, and all of a sudden somebody simulates you on the web to a remarkable level, what rights do you have, and what court is that going to be heard in?
What are the remedies for that?
[1028] This is going to be a good question.
[1029] We clearly need something like a bill of digital rights.
[1030] Absolutely.
[1031] As soon as possible.
[1032] Well, that's something we could talk about formulating at some point because I certainly know people who are interested in that, let's say also at the legislative level.
[1033] Yeah, but it definitely has to happen because we are going to have extended digital selves more and more.
[1034] And if they don't have any rights, they're going to be extended digital slaves.
[1035] That's right.
[1036] If you don't own you, then somebody else does.
That's as simply as I can put it, right?
[1038] You need to be able to own you, whatever you means, right?
Everything that's you, your output, everything.
[1040] Yeah, that's right.
[1041] The data pertaining to your behavior has to be yours.
[1042] Yeah.
[1043] All right, well, Brian, that was really very, very interesting.
[1044] And, well, we've got a lot of things to follow up on, not least this invitation to Peterson Academy.
[1045] I'll put you in touch with my daughter.
But, well, I'll also put you in touch with some other people I know, so that we can continue this investigation.
[1048] For everybody watching and listening, thank you very much for your time.
[1049] I'm going to talk to Brian for another half an hour on the Daily Wire Plus platform.
[1050] You could consider joining us there and providing some support to that particular enterprise.
[1051] They've made this conversation possible.
[1052] I am in Brussels today.
[1053] Thank you to the film crew here for helping make this conversation possible.
[1054] And to everybody, like I said, watching and listening, thank you for your time and attention.
Brian and I will take a break for a couple of minutes, and then we'll rejoin you.
[1056] We'll talk for half an hour on the Daily Wire Plus platform about, well, how you develop the interests that you have, among other things.
[1057] And thank you very much for agreeing to talk to me today.
Thank you, Dr. Peterson.
[1059] It's been an honor and a privilege.
[1060] Hello, everyone.
I would encourage you to continue listening to my conversation with my guest on dailywireplus.com.