Podcast Episode 10

What Fascinates Ron Chrisley of Stanford University?

Pondering machine learning versus symbolic artificial intelligence, Frank Herbert’s most underrated book and what it means to be a “committed connectionist” trying to reconcile science and religion. Episode 10 of “What Fascinates You?”

https://feeds.soundcloud.com/stream/748168135-bobby-mukherjee-563753774-loka-podcast-with-ron-chrisley-visiting-professor-of-symbolic-systems-stanford-university.mp3

On this episode, Bobby talks to Ron Chrisley, Visiting Professor of Symbolic Systems at Stanford University and Director of the Centre for Cognitive Science (COGS) at the University of Sussex. They cover a lot of interesting topics in human-centered AI and its interplay with creativity. We also find out that Ron and LinkedIn co-founder Reid Hoffman know each other, and how their friendship set the stage for Ron to return to Stanford. For some fun trivia, stick around until the end to find out Ron's favorite science-fiction book.

Transcript

Bobby

Welcome to the Loka podcast. I'm your host, Bobby Mukherjee. On this episode I talk to Ron Chrisley, Visiting Professor of Symbolic Systems at Stanford and Director of the Center for Cognitive Science, better known as COGS, at the University of Sussex. We cover a lot of interesting topics in human-centered AI and hit some thought-provoking ideas around the interplay of creativity and AI. We also find out how Ron and LinkedIn co-founder Reid Hoffman know each other, and how that set the stage for Ron to return to Stanford.

For some fun trivia, stick around until the end, and find out Ron's favorite science-fiction book.

I'm here in downtown Palo Alto recording our next exciting episode. Let me start off by allowing my guest to introduce himself.

Ron

Hello, my name is Ron Chrisley, and I'm currently a visiting professor in Symbolic Systems at Stanford University. But my permanent job is in England at the University of Sussex, where I'm Director of the Centre for Cognitive Science and faculty at the Sackler Centre for Consciousness Science.

Bobby

Terrific. So Ron, thanks so much for making the time. I thought we could start at the beginning of your journey to this point, because of all my guests, you've had maybe one of the more interesting globe-trotting experiences. I'd love to learn a little bit about where you started off and where you traveled from there.

Ron

Well, despite what your ears might be telling you, I was actually born in the United States, and I spent my first years here. But when I was a teenager, my family moved to England. We were on the West Coast of the US and had to get across to the East Coast before travelling on to England, and it was on that long journey, sitting in the back seat of the car for all those hours, reading science fiction and thinking hard about things, that I first started writing down ideas in a little spiral-bound notebook. They were questions that are pretty cliché, I'm sure lots of people come across them, but they were the beginning of my journey toward trying to understand the mind. Things like: when I look at the color red, is the color I see the same as what you see when you look at the color red? So I moved to England, but I eventually moved back to the United States to finish up high school and do an undergraduate degree here at Stanford, a long time ago.

Bobby

Wow. And we're just a stone's throw from there now, right? So truly, truly full circle. You and I chatted a little bit before this episode, and I was trying to understand some things about your childhood a little better. One of the things you said to me that really struck me was that you grew up in a household where you were trying to reconcile, shall we say, theology and machine learning and artificial intelligence. I just thought that was really fascinating, because I see those two worlds as so different, and I was curious if you could talk a little bit about that.

Ron

You're right, they are two different ways of looking at the world, and they were that different for me too. So if I had to point to any one thing, I would say that trying to make sense of that difference in a young mind is what got me to start asking intellectual questions. My family was religious. My father was originally in the US military, but then got out as a conscientious objector to war. He was a career officer at the time of Vietnam, and it was unheard of for a career officer to do that. He then became a minister, a clergyman and pastor. My mother also studied to be a pastor and was ordained as clergy, and then she went into the US military as a chaplain. So even people in the military need clergy, too.

But my parents' worldviews were very pro rational scientific inquiry, pro education. I remember seeing books on cybernetics lying around that my father was reading. So he had both of these aspects to himself: a scientific mind, but also a religious mind. And my mother shared those values. So I think my trying to reconcile the scientific worldview with the kinds of things I was learning about or experiencing in church led to some deep philosophical discussions at the dinner table, or in the car, with my parents or my sisters. I think that was the foundation for who I am today.

Bobby

That's really, really fascinating. So tell us a little bit more about that early journey. When did you get your first taste or introduction to AI?

Ron

Well, it was really at Stanford. Stanford in the '80s was, well, even now it's an epicenter for AI and machine learning, but even then it was one of the main places in the world to do AI research. Now, when I applied to Stanford and when I first started there, I hadn't thought I was going to study AI; I'd never even heard the term cognitive science, for example. It was really one of my friends, Avery Wang, who gave me the book Gödel, Escher, Bach by Douglas Hofstadter, and that book really changed my way of thinking. It talks a lot about artificial intelligence and music and logic and philosophy and mathematics and art, and those themes have stayed with me ever since. So that book was very important to me.

So I started taking classes in programming and getting jobs helping out some of the AI researchers who were there at the time. For instance, I got a job as a programmer on an intelligent tutoring system at Stanford. Later I worked on expert systems for air traffic control at NASA Ames, and I was taking courses and working on computer music at CCRMA. But at some point, influenced also by Hofstadter, and maybe by Terry Winograd, one of my tutors, I realized that there were some fundamental problems with the symbolic AI approach of the time. So I began to turn to machine learning and neural networks, systems that could learn their own ways of representing the world rather than being spoon-fed a particular way of representing it, and then everything changed. I started working on neural network models of animal learning under Marc Bloch. Eventually, after I left Stanford, I started working on speech recognition, both in Finland on a Fulbright grant and at ATR in Japan. Then I found myself an intern at Xerox PARC, working on simple recurrent networks for robot navigation. So that's when my interest in machine learning really flourished and developed, and I was definitely a committed connectionist, as we called ourselves at that time, a neural networks, machine-learning person rather than a symbolic, good-old-fashioned-AI researcher.

Bobby

Yeah. And for some of the folks listening who may not know, do you want to give a quick explanation of what symbolic AI is versus neural nets, sort of the basic background for both of those?

Ron

Well, there are a number of ways you can characterize the distinction, but the way I look at it, the symbolic AI approach focused on taking the ways of looking at the world that we have as adult humans, as concept users, and trying to write rules that would give a system the same knowledge that we, or various experts, have in different fields. So you interview a doctor who's good at diagnosing blood diseases, you ask that doctor how she diagnoses the diseases, and you write down her knowledge as a set of rules. Then you give those rules to a theorem-proving kind of expert system, and hopefully the expert system can use those rules to diagnose a blood disease. But the problem with that, I felt, was that the only systems that could really be fluidly, dynamically intelligent, and especially creative, would be systems that learn for themselves and aren't spoon-fed a way of looking at the world, of registering the world, of representing the world, through some set of rules and symbolic representations.
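
To make that contrast concrete, here is a minimal, hypothetical Python sketch; it isn't anything Ron or Stanford built, and the feature names, thresholds and toy data are invented purely for illustration. The first function hard-codes a rule elicited from an expert, in the expert-system style; the second fits a model that learns its own decision boundary from labeled examples.

    # Symbolic / expert-system style: a human writes the diagnostic rule.
    def diagnose_by_rule(fever_c, white_cell_count):
        if fever_c > 38.5 and white_cell_count > 11000:
            return "infection suspected"
        return "no infection suspected"

    # Machine-learning style: the system learns its own decision boundary
    # from labeled examples instead of being handed rules.
    from sklearn.tree import DecisionTreeClassifier

    X = [[37.0, 8000], [39.2, 14000], [38.9, 12500], [36.8, 7200]]  # toy cases
    y = [0, 1, 1, 0]                                                # 0 = healthy, 1 = infection

    model = DecisionTreeClassifier().fit(X, y)
    print(diagnose_by_rule(39.0, 13000))
    print(model.predict([[39.0, 13000]]))  # learned decision, no hand-written rule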

Bobby

Now, that's definitely how I see those two schools of thought. So I believe at that point, after Stanford, you had a fork in the road: one direction could take you east, and the other could take you even further, over the Atlantic. Right? So talk to me about that.

Ron

Well, there were several options, actually. There were already companies and research institutes doing neural network research and looking for people who knew something about it, and there weren't that many people at that time. There were job offers in Japan, for example, that one of my close friends took, but I turned that down because I knew there were academic issues I wanted to sort out first. The offers I had to choose between were cognitive science at MIT under Michael Jordan, which would have been heavily machine-learning based, or a more philosophical approach to understanding these representations that neural networks develop and use before they achieve the level of conceptual mastery people have. How can we even talk about those kinds of representations, and how could you make them better?

That kind of more theoretical, semantic, philosophical work was being done by one particular researcher I thought had some very good insights in this area: Adrian Cussins, a philosopher who was at Stanford when I was there but was returning to Oxford to finish up a fellowship he had there. So I followed him to Oxford, basically, and he guided me in my early years there. Being in the British academic ecosystem naturally led to me looking at jobs in Britain as well, and when a job became available at the University of Sussex, in a centre for research in cognitive science called COGS, I knew I really wanted to apply for it.

I remember seeing a paper by somebody who was at COGS at the time and has now returned to COGS: Andy Clark, a philosopher I really admire and who is quite famous in this area. As a postgraduate student I saw a paper written by him, and at the end of the paper it said, Andy Clark, School of Cognitive and Computing Sciences, University of Sussex. I said, "That's what I want to do, I want to do what he's doing, and that must be a place where you can do it. So I'm going to keep an eye open for any jobs being offered there."

So sure enough, that's where I ended up, thanks to the support of Margaret Boden, who really pioneered cognitive science in Britain, and later with the assistance of Aaron Sloman, who also pioneered cognitive science and AI in Britain. Sussex was, and is, a great place to look at these issues. I was interested in general issues in artificial intelligence, and in machine consciousness: can you make an artificial conscious system? How could you? Why would you want to? And also this particular issue I had been studying at Oxford, of how we understand how the mind works before you get to the level of adult human concepts, like the ways infants represent the world, or animals represent the world, the way we represent the world when we're not fully logical and rational and using language. That pivot toward more philosophical issues took me to Oxford and took me to Sussex, and that's where I have been for a couple of decades, at least…

Bobby

Until the beginning of 2019.

Ron

That's right. It turns out that at Stanford, at Xerox PARC and at Oxford there was a friend of mine who was with me at each of those phases of my life, and that's Reid Hoffman, whom your listeners have probably heard of before. He's a very influential figure in Silicon Valley now, but even then I knew him as a very creative thinker, a very exciting intellect to interact with, extremely honest, and someone who wanted to get to the truth. He was a great sparring partner, and we had a lot of similar positions on things. We stayed in contact over the years, though not too much. But he contacted me in 2018 and said, hey, you know this AI thing that's been going on for the last few years? We used to do work that's very related to that. Do you think there's any relevance of the kind of approach we were pursuing, and that I've continued to pursue? And I said, yeah, sure, let's talk about it. So we talked about it, and he quite generously decided to help me out and support this time away from Sussex, to return to my Silicon Valley machine learning roots and see how the experience, ideas and insights I've developed over the last few decades could be brought to bear on practical issues concerning, say, the design of machine learning architectures, or the governance of AI systems and other technologies. So thanks to him, I'm here, and I'll still be here through a good part of 2020 as well.

Bobby

Yeah, definitely a small world. I'm sure there are a lot of stories about what that friendship has been like over the years. Something that's near and dear to my heart in your work, starting at a high level, is the world of human-centered AI. Before we dive into that in more depth, maybe you could give our listeners an overview of the kinds of topics that fall under human-centered AI, and then we can dive into some examples.

Ron

Well, there's a kind of standard Stanford line on this, which I find really useful. It helps in organizing my thoughts, and it also helps explain what's going on at this new center, HAI, which was just founded earlier this year; March is when it was inaugurated. By the way, Reid is one of the people behind the founding of HAI at Stanford. The way HAI looks at human-centered AI, it has three parts to it. One focuses on doing AI in a way that takes seriously what the impact is going to be on humans, individuals, society and so on. In broad terms you can call this the ethics of doing AI, doing AI in an ethical way, but it's more than just ethics; it's actually understanding how to build better systems by taking into account the fact that they are going to be embedded in a structure with people, and asking what concerns we need to think about when designing and implementing such systems. These are discussed a lot, and rightly so, in the popular press.

I'm sure many of your listeners have heard of these issues: AI fairness; the security of the massive amounts of data that modern machine learning systems use; privacy; and explainability, making the decisions these systems reach transparent to users. There's the whole question of who's responsible when something goes wrong and an AI system is involved, and the impacts on the workforce: is AI going to put people out of jobs, and if so, who bears the cost of that, and what do we do about it? How do we make sure the benefits of AI are shared by all, and not just by a few people who have lots of money, or a few governments? That's one aspect of HAI, and a lot of smart people are working on it. The great thing about HAI at Stanford is that it's not just the AI people, not just the machine learning people; colleagues across the university are really taking this seriously. People from the business school, people from the law school, people in philosophy and ethics, people in psychology, linguists, they're all thinking about what intellectual issues arise because of this new technology and how we stay on top of it. That's the first thing people might think of when they hear the term human-centered AI, but there are a couple of other aspects covered by the Stanford notion too. One of them is that in human-centered AI, AI isn't at the center; humans are. So the point is to augment human capacities whenever possible, rather than replace humans.

So how can we get more out of people, make people more productive, and, in my case, make people more creative, by designing systems that complement their abilities or interact with them in stimulating ways, rather than focusing on the idea of, well, how can we take what people do and create a system that does the same thing so we can replace the human? Machine learning architectures are incredible, and they can do some things better than people can. But humans have their strengths too, and there are no known AI systems that can do those things: exercise judgment, see the big picture, exercise creativity. Machine learning systems, AI systems, aren't good at that, at least not yet. So it makes more sense to see how AI systems and humans can work together as a team, though "team" suggests the AI system is a fellow human being, which it's not. The point is how AI systems can support human activities by letting the humans do what they're good at, and letting the AI systems do the data crunching, the number crunching, the sifting through large amounts of data that humans aren't so good at.

Bobby

I'm definitely seeing examples of exactly that, more and more. In one of my earlier episodes, with Dr. Anthony Chang, we were talking about applications of AI and ML solutions in the world of medicine. He said the pockets where he's seeing success are where it's not a question of AI replacing the doctor; it's the doctor plus AI coming up with a much better set of outcomes for patients than the doctor alone. But that's just one area, I'm sure. Maybe you could give our listeners a few other examples, since you probably have them at hand much quicker than I do, of where human plus AI has done much better than the human alone.

Ron

There's a lot of work in the Stanford medical community on how to support doctors' decisions. That's an area that's been studied a lot independently of AI, so they've got a pretty good handle on what kinds of factors go into medical decision making, what counts as a good outcome, and how that relates to ethics. So it's a great testbed for different approaches to upping the technological power of assisting doctors' decision making. Off the top of my head: we know from many studies, including the famous studies by Kahneman and Tversky, that humans are irrational in many cases and make mistakes. One of the most famous mistakes is called base rate neglect. It's a bit technical, but humans have a tendency to make a particular kind of statistical mistake: forgetting that, say, a tropical blood disease is very, very rare. When doctors see symptoms that are very typical of that rare disease, even though the symptoms are also consistent with some much more common disease, they will tend to make the mistake of diagnosing the rare disease because of the good fit, when actually you have to take into account the priors, the prior probabilities, the base rate. So that's an almost trivial example of how an AI system could monitor doctors' diagnoses and say: actually, the Bayesian, optimal recommendation would be this; did you take into account the fact that the disease in your diagnosis is very, very rare? Just to check that the doctor has done that. That's an example of how a machine learning system, or even a traditional symbolic AI system, an expert system, could assist in medical decision making. The idea here is that the more we understand humans, human cognition and human intelligence, the better we'll know how it should, or can, be supplemented by some type of artificial system that compensates for known frailties in human intellect.
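
To spell out the base-rate point with numbers, here is a tiny worked Python example; all of the probabilities are invented purely for illustration and don't come from any real diagnostic data.

    # Toy Bayes'-rule check of the kind an assistant system might run.
    def posterior_rare(prior_rare, p_symptoms_given_rare, p_symptoms_given_common):
        """P(rare disease | symptoms) when the only alternative is a common disease."""
        prior_common = 1.0 - prior_rare
        p_symptoms = (prior_rare * p_symptoms_given_rare
                      + prior_common * p_symptoms_given_common)
        return prior_rare * p_symptoms_given_rare / p_symptoms

    # Symptoms fit the rare disease almost perfectly (95%) and the common
    # disease only weakly (10%), yet the rare disease affects 1 in 10,000 people.
    p = posterior_rare(0.0001, 0.95, 0.10)
    print(f"P(rare disease | symptoms) = {p:.4%}")  # about 0.09%, despite the good fit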

Bobby

Terrific. So let's summarize. My takeaway for the three pillars of human-centered AI: design for positive impacts on humans; design to augment rather than replace human intelligence; and the third, which you just touched on, the more you know about how human intelligence works, the better you can architect artificial systems to augment it.

Ron

That's right, I kind of jumped to the third pillar of the Stanford HAI approach without making it explicit. So yes, the third pillar is that we should build artificial intelligence systems in the light of an understanding of natural intelligence. That doesn't just mean copying human intelligence. Sometimes it might: you might say, well, this is how humans are creative, so if we want to build an AI system that's creative, maybe we'll go through similar steps. But as I just pointed out, it can also be a complementary relationship: given that humans are like this and have these weaknesses, we want to shore up those weak areas in decision making with artificial systems. So it has both of those aspects; it's not necessarily about copying the brain. We might learn a lot about the neurophysiology of human intelligence, and that might provide some good insights into how to design artificial learning systems, but currently there's a mismatch. The kinds of AI systems we have now, even the neural-network-style systems, are so simple compared to the complexities of the mammalian brain, the human brain, that we wouldn't really know what to do with those detailed neurophysiological studies. At this point they can only be inspirational and give some general guidance. I think the insights we're going to take from cognitive science and apply to AI now are at a more abstract, more psychological level: we can study what kinds of information humans do or do not take into account when making a decision, and use that directly in the design of our systems. It doesn't matter whether the circuits implementing the machine learning system are the same at the hardware level as the neural circuits implementing human cognition. So I think most of the transfer from understanding the human mind to designing artificial systems is going to be at a psychological or even philosophical level rather than a neurophysiological level, at the current moment.

Bobby

Yep, that's super interesting. So I'm looking through my notes here, and I was wondering if you could dive a bit deeper into some examples of areas you're working on. One area to start with, which I know is of great interest to a lot of people, is the notion of creativity.

Ron

Yes. So that's an area I've been interested in for a long time. My colleague at the University of Sussex, I mentioned her before, Margaret Boden, really pioneered the study of computational creativity and addressed philosophical questions like: how could you even speak of something as rigid, mechanical and rule-following as a computer being creative? Normally when we think of creativity, we think of breaking the rules and not being bounded by mechanical thinking, and she has spent her career explaining how these two perspectives are not at odds with each other, how you can get suppleness and fluidity and insight, or at least model insight, in computational systems. I think that's most evident in machine learning systems, systems that acquire their own way of representing the world rather than, as I mentioned before, being given a fixed way of registering or representing the world by a human. If these systems can develop their own way of looking at the world, they can also ditch it and develop another way of representing the world when that becomes necessary. And you can think of a problem, whether it's a traditional problem to be solved, like a design problem, or the problem of what musical piece to compose, as a challenge to your current way of representing the world. If you have the ability to let that representational scheme go and adopt a new one, you might be able to do something radically novel, something that looked impossible from your previous way of representing the domain. This is a great example of the second point of the HAI, human-centered AI, approach at Stanford. Yes, you could design AI systems that try to generate new things autonomously, new designs or melodies or songs or videos; those would be generatively creative systems. Or you can reconfigure the task and say: what we want to do is build systems that make humans more creative. On this approach humans remain the main locus of creativity, which is what humans are really good at. But by modeling the creative human, the AI system, the machine learning system, might be able to suggest options the human hadn't considered. The AI system doesn't really understand why those are interesting, but given the way the human has been interacting with the domain, it can suggest raw materials, or new problems, or new challenges, or new possible configurations, and the human might say: no, that's rubbish, that's rubbish, that's irrelevant, oh, now that's something interesting. So the human uses their ability to detect something of value in the set of possibilities the AI system throws at them. It's a little more detailed than that, but that's the general structure of the kinds of machine learning systems for creativity I've been looking at…

Bobby

How do generative cooperative networks fit into that?

Ron

So, as I guess many of your listeners will have heard, there are generative adversarial networks, GANs. These are the networks that are now so good at creating, say, lifelike, realistic-looking images of people's faces that don't actually correspond to any real person. It works because of an adversarial relationship between one network that's generating a face and another that's trying to categorize the face as having been seen before; it's kind of an arms race between them. By speaking of generative cooperative networks rather than adversarial networks, I'm talking about an architecture that's similar at a superficial, gross level: you have two networks, one trying to generate something creative, and the other evaluating whether it is creative or satisfying. In some sense, it's surprising how many attempts have been made at generative creativity, at building an AI system that's going to create some symphony or some work of art, where the system is nevertheless incapable of appreciating works of art. Most generative music systems can't listen to music; all they can do is produce music. So what are the chances they'll actually produce something really good if they don't even like music themselves? What I'm trying to do is build systems that actually evaluate music, or art or designs, created by others, that can appreciate them and say, oh, right, I see what you did there, that's very satisfying, and then apply that evaluative capacity to their own outputs. In the generative process the system is, in effect, asking: what move could I make here that would please my evaluator component more? So they're trying to please each other: the generative system is trying to satisfy the needs of the evaluating system. But every time the evaluating system encounters something new that it likes, that thing becomes less interesting to it the next time around. In the systems I'm exploring, part of what makes something satisfying to experience, or to evaluate as a work of art or a design, is having a little challenge in understanding how it works. The first time you try to understand it, there's a little challenge, but you get a payoff, because eventually, yes, you can grasp it, and that's very satisfying. But the next time you try to understand the very same thing, you've encountered it before, so it's not as hard, and you don't get that multiplicative effect of it being both something you can understand and something that took some effort to understand. On the approach I'm taking, you need both of those; to the extent both are present, you get the maximum satisfaction score out of the evaluator. So the generator has to up its game, constantly producing things that will challenge the evaluator. If it just produces the same thing over and over again, it becomes too easy for the evaluator to evaluate and understand, so the evaluator won't get as much of a kick out of it, and the generative system won't get rewarded as much, because it's trying to please the evaluator. So you get an arms race, but it's a kind of cooperative arms race: the evaluator needs more and more to be satisfied, and the generator tries more and more to satisfy the evaluator.
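
Purely as a rough sketch of that dynamic, and not Ron's actual architecture, here is a toy Python loop in which the "networks" are replaced by trivial stand-ins: the evaluator rewards pieces it can grasp but that took some effort, discounts anything it has already seen, and the generator keeps whichever candidate pleased the evaluator most. Every name and scoring rule here is invented for illustration.

    import random

    class Evaluator:
        """Stand-in for the evaluating network: rewards pieces that are
        graspable but took some effort, and tires of repeats."""
        def __init__(self):
            self.times_seen = {}

        def score(self, piece):
            effort = len(set(piece)) / len(piece)        # internal variety ~ effort to grasp
            graspable = 1.0 if len(piece) <= 8 else 0.3  # too long to take in at once
            seen = self.times_seen.get(tuple(piece), 0)
            self.times_seen[tuple(piece)] = seen + 1
            return effort * graspable / (1 + seen)       # repeats please it less each time

    class Generator:
        """Stand-in for the generative network: proposes candidate pieces."""
        def propose(self, n=5):
            return [[random.randint(0, 4) for _ in range(random.randint(3, 10))]
                    for _ in range(n)]

    evaluator, generator = Evaluator(), Generator()
    for step in range(3):
        candidates = generator.propose()
        best = max(candidates, key=evaluator.score)  # generator keeps what pleases the evaluator
        print(f"step {step}: kept {best}")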

Bobby

So do you think there's a way to draw a line of sight from the dynamic between the generator and the evaluator to, say, Spotify, with the trove of data they have, being able to create a hit song, or Netflix, with their trove of data, being able to create a hit movie?

Ron

In theory, yes, but what I've been focusing on is much more individualistic. Each individual takes a path of their own, and the way I think of it, the generative system is modeling this particular evaluator, at this particular stage in its development, and honing and tuning its recommendations just for that evaluator. So the best analogy with what Spotify could do would be not writing a hit song, but writing a song that you in particular will like, a song that hits the right spots for you right now. Chances are you're not the only one who will like it, but it wouldn't be a matter of calculating in the abstract what would be a great song, putting it out there and hoping people like it. It would be more like: for this particular person, this would work; now let's see if other people like it as well, or maybe someone knows how to take that song and make it appeal more generally.

Bobby

Yeah, it's funny, in a different field, healthcare, they talk about the future of personalized medicine, where you have drugs custom-built for your ailments based on your genome, your DNA. With what you're talking about, you can imagine a future where Spotify, Netflix or their successors aren't making one-size-fits-all hits; it's more about making a hit movie that's Ron's favorite hit movie, one you find more compelling than The Godfather or something.

Ron

And if you can do that reliably for each person, that's pretty powerful. I want to add one thing: with a slight twist, you can change this from creating artworks or songs or videos or films, from being about entertainment or aesthetic appreciation, to being about education. If the generative system is generating problem sets, say, for a learner, and it's good at modeling where the sweet spot is, not for an evaluator now but for a learner, then it can look for problems that are challenging enough to push you and require some effort, so you find them satisfying when you do solve them, but not so challenging that they're too hard, and not so simple that you say, yes, I can solve that, but it's not interesting because it takes no effort, or the effort involved isn't the interesting kind, the effort of understanding, but just going through mechanical motions. If this works for creative design, it could also perhaps be used for designing tailored learning systems that figure out where you are, what would be too challenging for you, what would be too simple, and push you just a little bit, like a personal trainer for cognition, or for learning. You can imagine how powerful that would be when you think about where online courses and education are going. So it's an obvious kind of add-on.
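
Along the same lines, here is only a hypothetical Python sketch of that "personal trainer" idea, not a system described in the episode: it estimates a learner's level from recent results and then picks the problem sitting just above that level. The helper names, step sizes and difficulty numbers are all made up.

    def estimate_level(history, start=1.0, step=0.2):
        """Nudge the estimated ability up after each solve, down after each miss."""
        level = start
        for solved in history:
            level += step if solved else -step
        return level

    def pick_next(problems, level, stretch=0.3):
        """Choose the problem whose difficulty sits closest to 'just above' the learner."""
        target = level + stretch
        return min(problems, key=lambda p: abs(p["difficulty"] - target))

    problems = [{"name": "warm-up", "difficulty": 0.8},
                {"name": "core", "difficulty": 1.5},
                {"name": "stretch", "difficulty": 2.4}]

    level = estimate_level([True, True, False, True])   # recent results for one learner
    print(pick_next(problems, level)["name"])           # -> "core" for this toy history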

Bobby

Alright. I know we're almost out of time, but there's one question that's been gnawing away at me that I think might be fun. Early on, you mentioned reading science fiction. I know it's a very personal and subjective thing, and everyone has their favorite science-fiction book. But what's yours?

Ron

That's a difficult question; I'm sure it's a tough one for a lot of science fiction fans. But I will say, especially given my research of late, that it's by Frank Herbert, though it's not the book you might think. It's not Dune. I mean, I think Dune is amazing, I think all the Dune books are fantastic, and I was reading Dune in the back of the car when we were moving to England from the US and traveling across the country. It was certainly one of the things that got me thinking about what the mind is, and it was just a perfect thing for a 14-year-old boy to read at that time. But I have in mind a different book by Frank Herbert. It's not as well known, for sure, and that's probably for good reason; I would say it's not as masterfully written as Dune. But it's uncanny how this book anticipates the field of machine consciousness, the field that's been so interested in trying to understand how artificial systems could be conscious. It really does a great job of anticipating that whole field. And it's called Destination: Void.

What's brilliant about it is that Frank Herbert doesn't imagine some engineers just saying, "Okay, let's get down to the problem of designing a robot that's conscious" or something. No, the people trying to develop machine consciousness in this world do it by creating an environment where the people placed in it are designed, cloned, in such a way that their abilities complement each other, so that they will come up with a solution to the problem of machine consciousness by making the spaceship they're in conscious. The spaceship has a kind of brain in it, and the crew members have to solve a crisis that the people on Earth have deliberately engineered into the situation, such that the only way to solve it is to make the ship conscious. The people on Earth figure that if they do this often enough, over and over again, maybe eventually they'll invent machine consciousness. And the brilliance of the book is that Frank Herbert's main interest wasn't in machine consciousness; he was using it as a metaphor for our own journey through life. Just like that spaceship, we're traveling through life without any particular existential destination, encountering crisis after crisis, and hopefully, along the way, becoming aware of ourselves. He thought that by encountering our crises, maybe some of us will become aware and move to a different level of consciousness or experience.

Bobby

That's fascinating. I'm gonna have to download the free sample on Kindle and see if I can get a copy.

Ron

It's not an easy read, I would say; Dune is much more fun. This book is a labor of love for people who are obsessed with machine consciousness.

Bobby

Well, I think it could be kind of interesting. So thank you so much, Ron.

Ron

Thanks. You know, it's been a pleasure and thanks for giving me this opportunity to talk about my background and work. It's perfect.

Ron Chrisley
Visiting Professor of Symbolic Systems, Stanford University
