Center for Cyber-Social Dynamics Podcast

Center for Cyber-Social Dynamics Podcast Episode 5: ChatGPT and its Socio-Ethical Implications with Sam Arbesman and Dr. Ramon Alvarado

Institute for Information Sciences | I2S Season 1 Episode 5


Since the release of the first iteration of ChatGPT in November of 2022, questions and concerns have been raised about how disruptive it might be to the workforce, the ethical issues it raises about privacy and copyright infringement, and on the more optimistic side, what applications it might have as it develops. Further, responses have often been at the extreme ends of a broad spectrum, with many voicing varying degrees of optimism and pessimism.

 

To help us think through these responses and ethical questions, we were able to put in conversation two researchers with significant scholarship in data ethics, design, and their applications. We have Sam Arbesman, who is a scientist in residence at Lux Capital and a research fellow at the Long Now Foundation. Joining Sam and me is Dr. Ramon Alvarado, who is a professor of philosophy and digital ethics at the University of Oregon.

David:

Since the release of the first iteration of ChatGPT in November of 2022, questions and concerns have been raised about how disruptive it might be for the workforce, the ethical issues it raises about privacy and copyright infringement and, on the more optimistic side, what applications it might have as it develops. Further, responses have often been at the extreme ends of a broad spectrum, with many voicing varying degrees of optimism and pessimism. To help us think through these responses and ethical questions, we were able to put in conversation two researchers with significant scholarship in data ethics, design and their applications. We have Sam Arbesman, who is a scientist in residence at Lux Capital and a research fellow at the Long Now Foundation. Joining Sam and me is Dr. Ramon Alvarado, who is a professor of philosophy and digital ethics at the University of Oregon.

David:

You can find this and other episodes of our podcast on Spotify or other podcast hosting platforms. Thank you for listening, and enjoy. Okay, Sam, Ramon, thank you again for joining me for this follow-up conversation to our original event that we held earlier in May about ChatGPT. So thank you both again for joining me.

Ramon:

Thank you, thanks for making the time to capture it again.

David:

Yeah, well, yeah, capture it again and actually capture it this time. And so, as we began our conversation that day, I wanted to give some time for those who may not know what's going on in this conversation and may not be privy to the public discourse about ChatGPT, to know exactly what it is. And I think Ramon put it in a very nice way when we chatted about it: in a way of sort of demystifying some of the things that are being said about ChatGPT and giving people a decent idea of what we're talking about when we're talking about ChatGPT. I was wondering if we could just kind of start there: what exactly is ChatGPT? What are we talking about when we use that term?

Sam:

That's a very good question. I think ultimately ChatGPT is part of these large language models, and it's basically part of what are called transformer models. But ultimately these models are premised on the idea of prediction, like predicting the next word or token, and there's many ways of doing this kind of thing. I mean, there's very simple ways where you kind of look at the statistical properties of language, or the statistical properties of individual words, and say, okay, these letters occur more or less often. And so therefore, when I'm generating a letter, I'm just going to say, okay, if there's a 50% chance it's going to be an E or an A or whatever it is, I'll just kind of roll a weighted die and then that letter will come up next. Now, of course, that assumes nothing about what has come before it, and so you can make this more and more sophisticated. You can say, okay, when an E occurs, there's X percentage chance that a T will be after that, or an A after that, or whatever, and you kind of then use the letter before it. And then you can do the same thing with, okay, what would happen if I had three letters before, or four letters, or this chunk of words. Now, at a certain point, though, you don't have enough data. You can't say, okay, what is the probability for "once upon a", like, what comes next? And of course, I mean, we know that it's going to be "once upon a time", but there are other probabilities, and there are many situations where you might have a specific phrase that's so rare that you don't actually even have it in the data set.
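
As a concrete illustration of the letter-frequency idea Sam describes, here is a minimal sketch; the tiny corpus and the character-level setup are purely illustrative and are not how GPT models are actually built or trained.

```python
# A minimal sketch of letter-level prediction: count which character tends to
# follow each character in a small corpus, then sample the next character in
# proportion to those counts ("rolling a weighted die").
import random
from collections import Counter, defaultdict

corpus = "once upon a time there was a model that predicted the next letter"

# For each character, how often does each next character follow it?
follow_counts = defaultdict(Counter)
for current_char, next_char in zip(corpus, corpus[1:]):
    follow_counts[current_char][next_char] += 1

def sample_next(char: str) -> str:
    """Pick the next character in proportion to how often it followed `char`."""
    counts = follow_counts[char]
    chars, weights = zip(*counts.items())
    return random.choices(chars, weights=weights)[0]

# Generate a short string one character at a time.
text = "o"
for _ in range(40):
    text += sample_next(text[-1])
print(text)
```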

Sam:

And so what transformers then do is they kind of create this representation of language that, based on that, is able to do this statistical analysis of what would come out, what would allow you to predict the next word.

Sam:

And then what you do is you say, okay, I've now predicted the next word; let's now take the new stuff, kind of shifted over one word or one token, and then pour that into the model again and then figure out what's going to come next. And you keep on doing this over and over, and there's lots of parameters and details and things like that, but based on that, it is predicting language. And as a result of that, it ends up outputting things that are very language-like, that feel very natural, that have certain stylistic properties. So if you start writing something in a certain style, or ask it for a certain style, it will kind of know where in its representation to draw from. But ultimately all it is doing is predicting the next word or token.
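
A minimal sketch of that predict-append-repeat loop might look like the following; `predict_next_token` here is a hypothetical stand-in for a real trained model, used only to show the control flow of autoregressive generation.

```python
# A sketch of the autoregressive loop: predict one token, append it to the
# context, slide the window forward, and feed the whole thing back in.
from typing import List

def predict_next_token(context: List[str]) -> str:
    """Placeholder for a real model: here we just return a canned continuation."""
    canned = {"once upon a": "time", "upon a time": "there", "a time there": "was"}
    return canned.get(" ".join(context), "...")

def generate(prompt: List[str], steps: int = 3, window: int = 3) -> List[str]:
    tokens = list(prompt)
    for _ in range(steps):
        context = tokens[-window:]                   # only the most recent tokens fit in the window
        tokens.append(predict_next_token(context))   # predict, append, repeat
    return tokens

print(" ".join(generate(["once", "upon", "a"])))  # -> once upon a time there was
```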

Ramon:

Yeah, you know, I want to add a little bit to that, I guess. Like Sam mentioned, of course they're called generative pre-trained transformers, right? And so the generative part is super interesting, of course. The transformer part is super interesting as a machine learning methodology, and, of course, the pre-trained part, we can talk a little bit about that. But I want to talk about the generative, because this is the key aspect of these new models: they don't just kind of predict the way that old machine learning used to predict, telling you, like, oh, this is more likely to be the hypothesis or this is more likely to be the case. They actually, based upon a prediction, then generate the next set, like Sam was saying, the next set of possible words or the next set of possible pixels around an image, and then from there they keep generating, right? They look at what they just generated and then they generate some more based on what they just generated. And so it's really interesting to see that these actually started a little bit around.

Ramon:

You know, 10 years ago, when Netflix was trying to complete some data sets, they had some rows in some spreadsheets that say, look, a lot of people are watching these movies and we need to predict what the next possible movie is that they would like. And so they had these contests, right? Can somebody fill in some rows that we have that are empty, to tell us what's the most likely movie that the person will click on? And so they ran this huge contest, and what came out of this contest was this method called matrix completion, in which people, you know, just had sets of data, tried to predict what's the next set of data associated with that data, but then filled it in, right? They completed that matrix by putting in some synthetic prediction and then working from there, right? And so recommender systems became so much better after that. A lot of entertainment became so invested in, you know, putting money into these machine learning mechanisms, and from there we start this generative landslide, right, that gets us to GPT in 2022. So 10 years, right, after that. So this generative part is really interesting.
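
As a rough sketch of the matrix-completion idea Ramon mentions, the snippet below factorizes a small user-by-movie ratings matrix with missing entries into two low-rank matrices so that their product fills in the blanks. The ratings, rank, and learning rate are made up for illustration; this is not the actual Netflix Prize method, just the general low-rank completion idea.

```python
# Low-rank matrix completion: approximate an incomplete ratings matrix by U @ V.T,
# fitting only the observed entries; the product then "predicts" the missing ones.
import numpy as np

ratings = np.array([
    [5.0, 4.0, np.nan, 1.0],
    [4.0, np.nan, 1.0, 1.0],
    [1.0, 1.0, 5.0, np.nan],
])
observed = ~np.isnan(ratings)

n_users, n_movies, rank = ratings.shape[0], ratings.shape[1], 2
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, rank))   # user factors
V = rng.normal(scale=0.1, size=(n_movies, rank))  # movie factors

lr = 0.01
for _ in range(5000):  # gradient descent on the observed entries only
    pred = U @ V.T
    err = np.where(observed, ratings - pred, 0.0)  # ignore missing cells
    U += lr * err @ V
    V += lr * err.T @ U

print(np.round(U @ V.T, 1))  # missing cells now hold predicted ratings
```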

Ramon:

And the other thing that I wanted to say, and I think we talked about this, Sam, last time we saw each other, was this idea that when we're talking about ChatGPT, of course we're talking about GPT. GPT is the kind of technology that is behind the chat version that most of us are interacting with, and ultimately, ChatGPT is more of a user interface to interact with these models in, you know, one dimension, or maybe two dimensions. And what do I mean by dimension? Well, the chat version, right?

Ramon:

So these models can do all kinds of things, but they can also chat with you, and that's the one that we've been using a lot, and, of course, it's super interesting for a lot of us to have these conversations with this machine.

Ramon:

It seems even problematic for a lot of academics that this machine can put together sentences and paragraphs that seem very legible and very smart. But at the end of the day, the GPT part is a lot deeper than just this chat version that we're seeing, and so I just want to remind the listeners that the ChatGPT version is just a little user interface in which we can communicate through text and receive prompts, or, sorry, receive feedback from prompts through text, with GPT models. And I just want to, as usual, you know, push a little bit further and say: if you're worried about ChatGPT, what you should be worried about is more the GPT part rather than the chat part. But also, if you're looking for promises, not just perils and harms, with ChatGPT, you should also be looking at GPT, because the generative pre-trained transformers are what's really going to drive this technology forth in all kinds of dimensions in the next few years. Not just the chat part, but yeah.

Sam:

Yeah, and I would add a couple of things, one with ChatGPT. So ChatGPT, when it was first released, essentially under the hood it had GPT-3.5, which was an advance on GPT-3. But when it was first announced, I had been playing with the GPT-3 version, and I thought, okay, this is just the same thing, it's kind of this fun little wrapper for it, and I kind of discounted it, not realizing the extent to which a user interface, in this case the chat interface, really did make a difference in terms of lowering the barrier to allow people to actually use this kind of thing. Now, of course, there's many other ways of using it and interacting with it, but I have now realized the extent to which a chat interface was sort of, not necessarily the killer app, but this very clear way of allowing non-technical, or far less technical, users to actually play with the GPT tool and product, and that was very impressive.

Sam:

And the other thing I would say, though, going back to kind of the way in which these models are trained and I think we discussed this in our last conversation that these things they're not being trained on the world.

Sam:

Sometimes the world is encapsulated in text, but they're ultimately being trained on the world of text.

Sam:

They're kind of being trained on like story land of like stories and text and information, and there is no reason why text need, I guess, coincide with reality.

Sam:

Sometimes it does and sometimes it doesn't, but we have to recognize that ultimately these things are simulating, not, sorry, physical reality, but they're simulating kind of story world and story reality and how we set up text. And so there's a number of people who have talked about how we need to understand not just the physics of these systems but the semiotic physics, the way in which they represent things, and recognizing that narrative tropes and techniques and narrative devices, these are sort of the other kind of primitives that these models operate on, because they're used to only predicting text and modifying text, and so they have certain things like Chekhov's gun or other sorts of narrative devices. And so that's another thing that I think people need to be aware of: it oftentimes feels like these systems have some sort of connection to the ground of physical reality, and they don't. They are ultimately connected to the world of text, and we have to just be very aware of that.

Ramon:

I really like that, Sam, because one of the things that I keep telling people is this idea that these GPT models don't really understand language the way we do, such as in concepts and ideas and terms. What they understand is the mathematical relationships between words, the statistical relationships between sentences. They understand the math behind the language, and therefore they can deploy it so well. And of course, you know, from that math emerges this possibility of interpreting concepts in a very rich manner. But at the end of the day, it's working through math to deploy language, whereas we work with language to deploy language. It's not really the case that we learn the statistical parameters and relationships between "boy" and "toy", or "love" and "you", before we can use them. And so it's a very interesting way of seeing the world, because it does see it through text, but it mainly sees it through the mathematical relationships of text.
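
To make the "mathematical relationships between words" point concrete, here is a toy sketch that represents each word by the counts of its neighbouring words and compares words by cosine similarity. The sentences are invented for illustration, and real models learn dense embeddings rather than raw counts, but the underlying geometric idea is similar.

```python
# Represent words as vectors of co-occurrence counts, then compare them by the
# cosine of the angle between those vectors.
from collections import Counter, defaultdict
import math

sentences = [
    "the boy plays with the toy",
    "the girl plays with the toy",
    "i love you",
    "you love the boy",
]

window = 2
cooc = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if i != j:
                cooc[w][words[j]] += 1

def cosine(a: str, b: str) -> float:
    va, vb = cooc[a], cooc[b]
    dot = sum(va[k] * vb[k] for k in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

print(cosine("boy", "girl"))  # higher: similar neighbouring words
print(cosine("boy", "love"))  # lower: different contexts
```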

David:

Some of the comments that you've both made remind me of what Wittgenstein states in the Tractatus, that the limits of our language mark the limits of our world. And to sort of press on that point that you both bring up, can you say a bit more: what do we lose, or what does ChatGPT lose, given the way that it relates to language versus the way that human beings relate to language?

Sam:

And I think, I mean, there's many cases where humans play word games and we kind of just enjoy the beauty of the language; the answer, I think, is what it is connected to. And certainly, I think, in these models there is a certain degree to which you can say, okay, there is a semantics there, and like a meaning, but at the same time, though, it's not connected in any sort of explicit way to anything either out in the world or to any sort of larger concepts that we might have societally. And I think that limitation means that it can be, in this way, a very fluent confabulator, where it's talking with a great deal of certainty about certain things, or it's arguing certain things, or it's citing references that don't exist. And if all you care about is language, in this case predicting things that seem to make sense based on statistical properties and based on the math, then that's all that matters, and we don't just care about that.

Sam:

We obviously want things to be grammatically correct, but we also want it to actually have some meaning and be connected to either the physical world or the world of information and the kind of preexisting papers and things like that, and I think that's one of the things that's missing. There are plugins now for ChatGPT that will connect it with features of the real world, so you can connect it to Wolfram Alpha, or I think you can actually connect it to Kayak and search for plane tickets and things like that. There are ways of having it connect to the real world, but as just the model itself, it's kind of this really sophisticated language game that is being played, which I think should make us recognize, okay, we have to be aware of certain things, but also recognize that the fact that we can be fooled as humans by just a language game that's being played says something interesting about ourselves. I'm not entirely sure exactly how to articulate what it does say, but I think it's something worth noting.

Ramon:

Yeah, I think that, as we saw earlier with, what's it, Galactica? That was the first sort of academic large language model, and it failed right after, I don't know, less than a week of being open for us to try. I think one of the things that came up with that, and of course we can see it when we use ChatGPT for academic purposes, is that, again, it works through the mathematics of language, and what it really gets and what it really understands is the use of language, and the use of language through mathematical relations. And so when it goes through this corpus of academic papers on a subject, it really doesn't start working on theoretical, fundamental principles. It doesn't work with the physics, it doesn't work with the equations. I mean, people are working on making it work with those things, but so far it works with the written texts and then the relationships of that kind of text with each other, the statistical correlations between the sentences and the topics and the kinds of things you would say if you were writing a paper on this topic. And that's what it reproduces, right? That's what it generates. It's still the case that, up until now, even though Wolfram and other people are working on this idea, it hasn't been grounded on the science itself, for example. It hasn't been grounded on the physics itself, but it can produce a text that sounds like it, that reproduces the use of the language in such a context as physics or other very formal inquiry disciplines.

Ramon:

And then the other thing that I think large language models miss so far, and I just pulled up a paper that I read a couple of weeks ago, it's called "We're Afraid Language Models Aren't Modeling Ambiguity", right? And of course they're using the word "afraid" on purpose here, right? So "we're afraid language models aren't modeling ambiguity", because the large language model cannot tell the difference between "we're afraid" as in we're concerned, "we're afraid" as in we're scared, or "we're afraid" as in, oh, we've noticed that, or we're disappointed by that, right? And language models cannot do that. And this actually just came out the 27th of April, so like a week ago, sorry, a month ago, by Alisa Liu and 30 other engineers, probably.

Ramon:

Anyway, so this is one of the things that it doesn't get, again, right? Because the mathematical relationships of texts sometimes do not tell you how to disambiguate these words that may have multiple meanings, and large language models are still sort of missing out on something that, for us humans, of course, we're not born with this capacity, but it certainly becomes a little bit intuitive depending on all kinds of other inputs: oh, I can see your body language, oh, I can hear your tone, oh, I can see that the other things that you're gonna say after and the things that you said before make it so that you're being sarcastic, et cetera. Right? So it's still struggling a little bit with that, unless you're explicitly stating it in your prompt.

Sam:

And another thing that's also interesting is that oftentimes the people who have best been able to figure out the bounds of these systems and the edge cases, or figure out prompt injection, who end up doing some weird thing that makes the system fail.

Sam:

They are often people who have a very deep understanding of language, and I don't know if these people are formally trained in the humanities or not, but it's like they really understand language and how stories are told and how people use words, and that is a deep understanding that is required to actually understand how these systems are going to fail.

Sam:

Because if you don't do that and you kind of just use them naively, you might think they work until of course they don't. But if you have this deep understanding of kind of the way in which humans use words and convey information and the grooves within which stories are told, you're much more able to create prompts that cause the system to fail in some unexpected way. But actually it's quite expected if you understand the ways in which we use words and language, and so I just found that very interesting. That kind of like the failure mode often requires a deep understanding of the ways in which we use words and the ways in which, like, semantics and ambiguity and all these different things kind of play together.

Ramon:

That's sort of like what John Symons at the University of Kansas said: this is like the revenge of the humanities, to a certain extent, right? Yeah, it is a lot of historical, humanistic knowledge and context that brings about these nuances that seem to not be there in these generative processes of GPT.

David:

One thing that I've come to be somewhat wary of in pointing out some of these limitations in ChatGPT is, I guess, asking the question: are these limitations fundamental to these programs, in the sense that they could never develop these abilities, or is it only a matter of time before they're able to sort of portray ambiguity, to have that as part of their mechanisms? Or is it something that they can never do, and that's gonna be something that's gonna stay human, in a sense?

Sam:

My perspective is, I'm not sure. I feel like anytime people have made predictions about these kinds of things, I would say the one thing that I feel most confident saying is that those predictions are generally wrong, because we just don't know. Like, people didn't realize that just scaling up the amount of input data and the systems would actually yield qualitatively different behaviors. I think that has been a constant sort of pleasant surprise, that these systems, as they grow in sophistication, don't just become better at the edges, but they actually seem to change their qualitative behaviors. And so I would say, I don't know, but it could be true that we just scale these things up and, yeah, we get everything that we're currently complaining about, and then these things happen. And this is the kind of thing, more broadly, you see this, whether it's in anthropology or psychology, there's this traditional march of people always saying, I think it was the psychologist Dan Gilbert who refers to the sentence "humans are the only species that can do X", and of course, what is X? Maybe it was tool use, maybe it was language, maybe it was certain things around cultural transmission. And then of course we found animals, in the animal kingdom, that can do all of these different things. And of course the sphere of what is specifically human gets narrower and narrower and narrower. And now, and actually the writer Brian Christian has written about this kind of thing, it's not just the animals who are taking away this sort of uniqueness, but AI as well. And we see this with ChatGPT: now it's not just other humans who can write kind of mediocre poetry, but computers can do this as well. And we're gonna see this constant march.

Sam:

And so for me, I like to think about it less in terms of what is uniquely human, because I feel like maybe eventually these computer programs will be able to do all the different things that we think of as the specific domain of the human, and more in terms of what is quintessentially human, what are the things that we view as really important. I mean, there's many things I do that are human activities, like walking around or doing my taxes; maybe animals can't do their taxes, but that's not a quintessentially human activity. I don't view that as the quintessence of what it means to be human. But maybe gardening, or spending time with my family, or thinking deeply about ideas, these are the things that I value, and of course it's different for every single person. But just because a machine might be able to do those things doesn't necessarily mean that therefore it has less meaning for me. I mean, just because I can read a book, the fact that all the other humans who are able to read can also do that doesn't diminish it; there's many other people who can do these things too, and the same goes for a computer program that can do it. And so maybe I've zoomed out too far.

Sam:

But I think we have to think about both, or sorry, the uniqueness of what is human versus what is quintessentially human, and that, I think, will hopefully allow us to handle whatever comes out, whether or not there's going to be this plateau. Maybe these systems will keep getting better for a long time, and there's always these things that we can trick them with, and maybe that'll be true for the next several decades.

Sam:

It could also be that within a year or so we'll discover that all the things we thought could not be done easily can be. I mean, in truth, like the Turing test, we kind of blew past that, and I feel like occasionally there should be these parades in the street, and people should be like, what an accomplishment, this is amazing, or at least just acknowledging it. But we just blew past it, and now we're just worrying about the next thing, which is fine, we should continue worrying. But it's kind of crazy that we hit all these milestones and we're then constantly either moving the goalposts or thinking about how else it can fail, without realizing, oh, we've done all this stuff amazingly well so far.

Ramon:

Yeah, you know, I mean, that question, David, is very interesting to me because, of course, going back to Turing, he anticipated this question in his paper on machinery and intelligence. It's a very interesting paper. I'll tell you a little bit about the Turing test later if you guys want to, but just talking about the kinds of objections that he anticipates to the possibility of machines doing human things, he's like, look, every time I hear an argument that says machines cannot do X, it's basically based on sort of an induction from the past, and that's just.

Ramon:

I mean, it's illegitimate to say that just because something hasn't been done, it won't be able to be done. And it turns out that a lot of the things that people thought couldn't be done just get done, right, sooner or later. And so when you put X there, machines cannot do X, usually it depends on something very fundamental, and I think, you know, Turing got it, but a lot of people haven't gotten it yet, and it's the following. This is something that I'm preparing to teach in philosophy of computing: the idea of a function, and whatever is able to be put into a function is also computable. And if it's computable, then the machine can do it, right? What does it mean to be put into a function? Well, it means to have an input, a transformation and an output, and whenever you have the capacity to describe anything in the world through that lens, then you can rest assured that it can be computable, if not right now, maybe somewhere in the future where we get more computing power. And so that's just something very interesting, right? Like Sam, I don't like to predict what can't be done by computers, and of course there are some things that I found recently, in the last couple of years, that I just thought very thought-provoking, and I don't wanna go too much into the AGI sort of debate, right, but there's this paper by, I think it's McDermott, I don't know.

Ramon:

Early in the 2000s, AI ethics wasn't really called AI ethics, it was called machine ethics, and there's a whole literature on what it takes for a machine to be an ethical agent and what it means to put ethical frameworks into a machine, what matters to a machine, all kinds of things about machines and ethics. But anyway, there's this one paper, and I forget the name, but I think it's by McDermott, where he says: if there's one thing that is gonna keep us from saying an artifact can be a full moral agent, it's the following, right? So there are three lines. The first two can easily be achieved by an agent: you can put some sort of utilitarian ethics into an artifact, you can put some deontology into an artifact, and it would just go through the rules and solve it, right, solve for the outcome. But one thing, he says, is specifically human. He says the one thing that we won't be able to do, and to make a machine do, is to be tempted. And so temptation becomes the key of that line, right? What does it mean to be tempted? Imagine that you put a moral framework into a machine, let's say deontology or utilitarianism, right, and the machine goes through the calculus: should I do A or should I do B? Which one is best? If you do utilitarian calculus, it's like, well, maximizing utility, minimizing harm, et cetera, that's the calculus. And then it arrives at the right thing to do, right? Most of them, if it's just a second-layer moral agent, would just do what the right thing is according to the framework.

Ramon:

According to McDermott, there's this point at which one thing it won't do is, once it arrives at the right thing to do, to doubt that it wants to do it, like, well, I know that's the right thing, but I don't really want to do it. And there's this ability where, now that you have several, let's say, moral frameworks, maybe you want to compare between them: well, I know deontology wants me to do this, but that's lame, so maybe I'm going to go and consult utilitarianism, see what utilitarianism tells me. And then, when what it tells me is also something that is not preferable for me, maybe I'm going to be tempted, like, yeah, no, I'm still going to eat that ice cream, or I'm still going to sort of, and anyway, I'm sorry for the digression here, but I know that at the end we touched on this idea of what's uniquely human. There's at least some consideration on what that is, and the most feasible theory that I found, or the most beautiful one that I found, has to do with temptation, not with cognition, not with the capacity to deliberate morally, but with being tempted morally to do something else. Anyway, so the last thing that I wanted to say, just about the Turing test.

Ramon:

I think you're right, Sam, in that we passed the Turing test a long time ago. We started with Eliza in the 70s, which would fool most of us, even people that were involved in creating it, into thinking that you were having an actual conversation. Even if you knew it was a machine, it was still a meaningful conversation. Therefore, in that sense, it already crossed the line with the Turing test. But, of course, I think that one of the things that John Symons and myself have been working on is: what does the Turing test really test? And at the end of the day, we have seen that it's not really a test about intelligence and machines at all. Because if you look at the positive arguments in that paper, they're actually quite weak. He basically just spends most of his time attacking possible objections, but the main drive, or positive drive, of the paper has not much to do with intelligence. It's just the possibility of playing a game.

Ramon:

And so we came up with this idea that, hey, you know what, maybe what the Turing test really tests is our really crappy exclusion criteria as humans, and how we're able to deploy these very bad frameworks for discriminating against things that are not like us. And so, at the end of the day, the Turing test is actually telling us: why do you keep excluding all of these agents with this bad reasoning? Is it just because you want to be a bigot? Or is it just because you don't like machines? Because, other than that, here are seven reasons why your arguments fail. Anyway, so that's just a little digression. Some of us think that the Turing test is actually testing us humans and our moral community exclusion guidelines, rather than testing the intelligence of a machine, but that's for another conversation.

David:

So I want to do a little bit of assessing of, more or less, the Twitter discourse, and in a broader sense the public discourse, about ChatGPT. I think, as I mentioned to Ramon in an earlier conversation, we have these extremes: people who are worried about, say, the promises or the potential of ChatGPT, and not just ChatGPT but the other GPTs, which we can get into and which go beyond the scope of this conversation a little bit, worried about, I guess, the social implications, the workforce implications of these technologies. And then there are those who are saying, and we can try to make sense of this, that this is just software, and we shouldn't be as worried as maybe some of the doom and gloom folks are being right now about ChatGPT. Given all that, the current extremes of the public discourse right now, what's a conservative forecast of what ChatGPT may actually do? And then what should we actually be worried about?

Ramon:

So let me just say something really quickly and then leave it up to Sam. I just want to point out that, for me, of course, being an ethicist of this technology and being a philosopher of this technology comes with a little bit of slowness, in that I'm kind of slow in catching the panic and I'm really kind of slow in catching the hype, both. And part of that is because I can just cite one book that will hopefully illustrate my point, and it's this book from 1974 by Langdon Winner, called Autonomous Technology. This is 1974, 1978.

Ramon:

I forget which it is, but it goes through the whole history of our fears. It's not the history of autonomous technology, it's the history of our fears of autonomous technology, dating all the way back to the Romans, right, all the way back to the Enlightenment period, all the way back to automata in the Renaissance and after. And one of the things that Langdon does very well is that, at the very least, he problematizes our fears, right? So he pushes us towards consistency, saying, look, if you're worried about this, you're forgetting that you should have been worried about this a long, long time ago. So anyway, I just want to point to this idea that this fear of automation has a really sort of strong and broader context. It is not new, and it's not about, sorry about that, it's not about novel technologies. So, yeah, sorry, go ahead.

Sam:

No, I would add to that. I mean, similar to the way you mentioned Turing's paper about machines thinking, people were thinking about these kinds of concerns and ideas basically at the outset of these technologies. So it was almost like within these technologies lay the seeds for all these things. If you look at Norbert Wiener, in the 60s he was talking about basically the alignment problem, which is obviously the idea of how do you make sure that what we want machines to do actually is done, and he laid this out, and he talked about the classic story of the monkey's paw and the be-careful-what-you-wish-for kind of thing. And he said this very explicitly, and this was very early on in the days of artificial intelligence. So I agree that not only are these concerns not new in the sense that we've always thought these kinds of things about other technologies, but even the new concerns that we have are actually not that new. We have been thinking about these things for a long time, and we kind of need to recognize that. And so, and again, maybe I'll use a cop-out of, I really am not good at predicting what is to come.

Sam:

I do think, though, that there will be a certain amount of disruption from what these things can do, but I also think there will be a certain amount of regulation, whether it's society as a whole saying, okay, we don't want to use these things in a certain way, or government regulating them. I think that will happen. What form that will take, I don't know. There's already early work being done around creating kind of mathematical watermarks for things that are generated by ChatGPT and similar kinds of systems, so that, even if you modify the text in your own way, you can still basically ascertain with a high degree of certainty whether or not a text was generated originally by these systems. And so we are figuring out ways of handling what these tools are outputting.
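
For a sense of how such statistical watermarks can work, here is a toy sketch in the spirit of the "green list" scheme described in the research literature (for example Kirchenbauer et al., 2023); it is not any vendor's actual method. The previous token seeds a hash that marks part of the vocabulary as "green", generation is biased toward green tokens, and a detector counts how often tokens land on their predecessor's green list.

```python
# Toy green-list watermark: watermarked text lands on the green list far more
# often than chance, which a detector can measure without seeing the model.
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "tree"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Deterministically mark a fraction of the vocabulary as 'green',
    seeded by a hash of the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(length: int = 20, start: str = "the") -> list:
    """Stand-in 'model': picks uniformly, but only from the green list."""
    out = [start]
    for _ in range(length):
        out.append(random.choice(sorted(green_list(out[-1]))))
    return out

def green_fraction(tokens: list) -> float:
    """Detector: share of tokens on their predecessor's green list.
    Roughly 0.5 for ordinary text, close to 1.0 for watermarked text."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(1, len(tokens) - 1)

if __name__ == "__main__":
    marked = generate_watermarked()
    unmarked = [random.choice(VOCAB) for _ in range(21)]
    print("watermarked green fraction:", green_fraction(marked))   # near 1.0
    print("unmarked green fraction:  ", green_fraction(unmarked))  # near 0.5
```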

Sam:

Now, in terms of how it changes jobs, I don't know, but I agree with what Ramon was saying. These are not new concerns. Yeah, even if you take the newest point as the 1970s, there were a lot of concerns going back, and even so, with the advent of the internet and the web, how people thought about digitizing the workplace and the office, I'm sure there were similar kinds of concerns of, as we automate certain of the things we do, how does that change things? I would say one area that is going to have a lot of change, and already is being changed by these kinds of systems, is the world of software development and computer programming. Because, going back to what ChatGPT does in terms of predicting output, that is really, really useful for generating computer code. Rather than having people have to search online on Stack Overflow or one of these kinds of fora about what is the right API call, or how do I implement some little thing around modifying a list or an array or whatever it is, you can basically just ask ChatGPT, or begin the code and have it auto-complete it and fill it out for you, and it will actually write whole chunks of code.

Sam:

I've played with this before, and it's written little scripts for me, and it's enormously powerful, and many programmers have begun using it in their daily workflow. And I think there was some estimate, because GitHub has GitHub Copilot, which is based on, I think, some version of GPT, that estimated some fairly high percentage of all code is being created either by these tools or in concert with them.

Sam:

It's like 30, 40% of it, I don't remember the exact details, but yeah, it's already being used that way. Which is not to say that therefore the need to understand computer code will vanish. Like, when ChatGPT generates a script for me, I still have to understand how it works to be able to gauge whether it is correct or totally wrong, and you can run it and see how it works. You still need that skill, but you're going to be able to work in concert with some of these tools. So I would say that's one area where it seems like it's already becoming fairly tightly integrated into people's workflows.
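
As a hedged sketch of that workflow, the snippet below asks a chat model to draft a small function using the openai Python client (version 1.x); the model name is a placeholder, an API key is assumed to be set in the environment, and, as Sam notes, the returned code is a draft you still have to read and test.

```python
# Ask a chat model to draft a small helper function instead of searching online.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": "You are a careful Python programmer."},
        {"role": "user", "content": "Write a function that deduplicates a list while preserving order."},
    ],
)

draft_code = response.choices[0].message.content
print(draft_code)  # review and run this before relying on it
```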

Ramon:

You know, I have this idea that one of the things, of course, that I like to think about concerning these technologies is that they are unlike any other technology before, in the following sense: I call them epistemic technologies. Why? Well, because they're related to knowledge creation, knowledge acquisition and all kinds of things. Whereas a bulldozer, whereas a steam engine, was mechanizing labor of the physical kind, these new things that we're putting together, and that we've been putting together for a century and a half, are mechanizing knowledge creation and information generation and sharing. And so it's a very different kind of labor that they're automating, right, and it hits a different segment of the population, but a segment of the population that in the 21st century is a huge segment, right, mainly knowledge workers, information workers and things like that. But one of the things that I really like to think about in terms of epistemic technologies and the ethics of them is that ethics is not just about the harms, and it shouldn't just be about us worrying about the possible harms. Of course those are important, but there's something very interesting about the promises of a novel technology and what it can do, what it may do, and I've been seeing some promising promises in that field. One is, of course, I like to take seriously, I'm not saying I buy it yet, but I like to take seriously this claim. And of course there's all kinds of problems dating back to Marx and Russell in the early 20th century and, of course, the late 19th century, about the promises of automation and the replacement of labor that still play out today. But here's the thing somebody at Microsoft told me. They said, look, one of the things that this is definitely gonna help us with is clerical liberation. Right? It turns out that we've turned half of humanity into clerks, and some of them really just don't want to be doing that work and resent it, but they're being enculturated into thinking that that's the highest possible job you can have. And guess what? Now we'll be able to liberate them from that sort of cubicle hell that some contemporary societies have become. Now, I'm not saying I buy it, but I wanna take it seriously and I wanna think about it seriously. Can it provide clerical liberation? Because to a certain extent, some of us want it. Right? We don't wanna write that stupid report that is meaningless, that nobody really reads, in which words are just sort of check boxes, and it's empty of meaningful interaction. And so, guess what, these machines can do that for us, because we still require records, we still require bureaucracy and forms and stuff. And so here's a tangible example of what I'm talking about.

Ramon:

Two weeks ago I had a panel here at the University of Oregon, with a computer scientist, a law professor, a colleague of mine in philosophy, and myself, about large language models. And the person that teaches law said, look, in Oregon we have a backlog of people that have been prosecuted without defense, because it just needs to be done, right? And there's some sort of loophole that says, if you cannot find somebody to defend them, then sooner rather than later we have to process this thing anyway. One of the reasons why they failed to find enough public defenders is because of the paperwork associated with it, not the defense itself, not the actual court trials; it's the paperwork that it takes for it to happen. And she says, guess what, we're using large language models already for that paperwork, so that the process gets faster. And then she said, in that sense, we're almost using this for legal equity, right, to bring the justice system up to the standards of the quantity of requests that it gets.

Ramon:

So again, I'm not saying I buy it, but I think there's something to be taken seriously concerning these promises, right? I think ethics, and the ethics of these things should look further than just the possible harms. And if somebody came and told me, look, I think we're gonna replace miners from going into mines and living in darkness for eight hours a day, five days a week, blah, blah, I would say please do so. Of course, do it carefully and don't do it by tomorrow, but I'm pretty sure a lot of people would choose another lifestyle if they could, rather than polluting their lungs and dying at 60 or 55 because they were living in a cave for the majority of their adult life, and so what I'm trying to say here is that there's some jobs that are worth replacing and that, if these machines can do it for us, perhaps it's worth considering.

David:

I wanna go back a little bit to something that was mentioned by both of you. So one of the ways that governments are thinking about regulating these technologies is thinking about it in terms of alignment, alignment with certain values and issues like that, and we can briefly discuss that, maybe speak a little bit more about what that is. But at the same time, and I think, Ramon, some of your work gets into this too, it's not just about aligning the technology to these values. Sort of similar to the comments you made about the Turing test saying something about us, maybe it's just as important to align our social structures to those values in terms of how they use these technologies. Is that also part of the conversation about alignment? Because it doesn't seem to be enough to align the technology itself to certain values and norms that are important to society; companies and corporations and our social institutions also have to go through some sort of assessment of their alignment.

Ramon:

Yeah, I mean, there's a lot of dimensions to alignment that I have issues with, and the first one, of course, is the easy one is, when people talk about alignment recently, it's really easy to question whose alignment, or what do you mean by human values? Which values are you talking about in such a diverse world as ours, where populations are gigantic, in a different part of the world than the one we live in? You know, whose values are we gonna take into consideration? That's gonna be a huge question for ethicists going forth, and of course, that problematizes this whole idea of alignment, because it already assumed that there's some sort of universal human value that machines have to be aligned with. The other dimension that concerns me is I also don't wanna say just that hey, guess what? There's these new machines in town. They're pretty powerful and we might as well just align ourselves to their functioning, right?

Ramon:

That was sort of Norbert Wiener's idea with cybernetics, right? He said, look, at every point where I'm building a machine-human interface, I'm faced with two issues. Well, with a single issue: should I make the machine more human-like, or should I make the human more machine-like? And guess what? The path of least resistance is the latter, right? So I'm gonna try to make humans more machine-like so that we can function together, blah, blah.

Ramon:

So I don't wanna go there and I actually don't agree with that. But I do think that, given the prowess of this novel technology, then we can envision ourselves and what we want with it. Right, so not this by it, not against it, but with it, and so such that we still continue to have this sort of alignment with ourselves. And then we just see this novel, new tool that can enable us to fulfill that sort of I don't wanna mention the word, but Aristotelian kind of the flourishing or something like that, right. And so in that sense I think that alignment, first of all it's complicated, but at the same time we don't just want to sort of accept technological determinism and say, guess what? Now we have this clockwork that dictates our lives, but we also wanna imagine a world that's ours that incorporates those new super powerful machines in a way that enables us to still further our own goals. I don't know.

Sam:

No, I think this is really good. I think recognizing both the human and the societal aspect, as well as the machine aspect, of alignment is important. But I think the main question is how can we do this? Like, what is the right speed, such that it's done in this sort of deliberate, iterative way? Because I think we're not gonna get it right to begin with, and we kind of need to recognize that there needs to be a space for iterative feedback and tinkering with this process, both at the societal level as well as the machine level.

Ramon:

You know, I was just talking to a friend today, he's in Germany, and he sent me a message saying, hey, Ramon, can you send me that chapter where you're talking about telescopes and how telescopes were adopted, you know, in the times of Galileo? And he was saying, yeah, because I wanna know how they got around the fact that this technology was opaque and still very useful, and how they were able to overcome opacity by just using the technology anyway. And then he joked, he's like, did they have a moratorium of six months on telescopes, right? Did they write an open letter to stop telescope technology from developing? And I told him, you know what, actually it was the complete opposite. They had almost the equivalent of an open letter to speed up the process by which you test the technology, right?

Ramon:

So at the time, it took like 30 years for people to really start using it for scientific purposes.

Ramon:

Of course, some people, like Galileo and Kepler, were already using it for scientific purposes, but the people around them were very skeptical.

Ramon:

And then, you know, of course, for military uses it was excellent, but for discerning things out in the sky, the first telescopes were just not that good.

Ramon:

And so somebody came up and said, which one is the best telescope, and can you tell us any truths about the world? And in Florence, somebody with lots of money decided, you know what, instead of me deciding which is the best telescope and whether it tells us truths about the world, I'm gonna establish an academy to test the crap out of not just the telescope but the tests with which we test the telescope, right? And so the Accademia del Cimento was established almost just to do that. And so here's me just trying to say it's not so much about, you know, taking it slower or being hyper cautious. Maybe what we should do is get an open letter to test it out as much as we can, and test it, of course, rigorously, not just sending it out to the public to see what people come up with and keep retraining our model based on internet behavior, but, you know, actually invest in the tests that could test the reliability of this novel technology.

Sam:

I love that. So it's less about the speed and more about, yeah, this kind of feedback on the process. It's almost like we need Consumer Reports to do these kinds of very exhaustive tests of these kinds of things to understand them. You know, that's really provocative and interesting.

David:

Well, it seems to follow what we do now. When the new product comes out, we try to have an initial test group that tries it out a few times, and then we gather the stats about what worked, what that group felt that worked the most, and then-.

Sam:

Yeah, with GPT-4, I mean, it was clearly tested for a long time before it was publicly unveiled, and my sense was it was half a year or even more. And so in the long paper, the second half was basically: we gave it to these people, they tried to test it, they tried to break it in these different ways, and we used that. So, yeah, we need that, but even more so.

Ramon:

And a key thing here, I think, is that we shouldn't just, well, three things. We shouldn't just test the instrument, we should test the tests with which we test the instrument. But also, we don't just want to test outputs, right? So reliability assessments are not just success stories and/or ratios of error. Like, oh, it works 95% of the time, okay, so it's reliable? No, we also have to go into the nature of the error in that 5%, and the source and the nature, and that's something that's missing. Because what you're describing, David, a lot of the time, is how industry works. But industry is worried about commercial success and consumer interactions, right? In science, and/or for knowledge purposes, it should be a little bit different, where it's not just testing the output, and it's not just testing whether people like it and subscribe to it; it's a little bit more invested in understanding its nature and understanding its inner workings. To a certain extent, I don't know if you've seen this, Sam, but that Wolfram lecture, the three-hour lecture on how large language models work.

Sam:

Well, and is it a video or just like a really long article? I heard the long article.

Ramon:

He published a little booklet. He published a little blog post and then the booklet, and then he made a video of it that's like three hours. Anyway, super thorough, super informative, if you have the patience for three hours of Wolfram talking at you. But I recommend it to a lot of my students because, at the very least, it tells us you shouldn't just be content with the success stories of these things and how they work. You should go down, open the hood and see conceptually, even if you don't understand the math, that's not necessary, or the coding, you can at least conceptually see how things connect with one another and, in a very sort of global way, understand the principles. And so that's just for me to emphasize that, for me, these reliability assessments of novel technology should also be really worried about the nature and source of error, not just the rate of error.

David:

One more question, sort of, maybe the second to last.

David:

I wanted to have some time to consider some new things that have developed that we weren't able to discuss at the event, and you both sort of mentioned that these sorts of technologies can sometimes tell us a little bit about ourselves, about humanity. One thing that's popped up, not just with the recent writers' strikes in Hollywood: I think recently the Supreme Court made a decision about a case involving Andy Warhol and the use of a photograph of Prince, and the decision, or the case itself, didn't involve large language models or the use of large language models to reproduce images, but they are capable of doing these things.

David:

But partly what came out of both the strikes and this case is this idea of creativity: how we define or how we think about what it means to create something novel, what it means to create something new, and whether or not we're splitting hairs when we say humans are capable of creating something new, whereas ChatGPT, or GPT and large language models and machines generally, are always having to rely on the past in a way that human creators don't.

Sam:

Yeah, and certainly when it comes to creativity, I think we need to acknowledge the fact that no one is creating something new in a vacuum. Everything that people do is based on what they know and what has come before them, and so there's this highly combinatorial and remixing process that allows something that is still viewed as truly new to come about. And so I do think right now, what these tools generate still feels different than the true space of creative possibilities that humans can make, but it need not always be so. And I do feel comfortable putting what these systems create, or what they generate, in the realm of creative output. It might be a subset, for now, of what is possible in terms of all creative output, but it certainly feels reasonably creative. And that could be the kind of thing where it's like, okay, whatever these outputs are, they're interpolations between all the things in some high-dimensional space of what has come before. But the truth is, that's a lot of what human creativity is too. Now, things that are outside of that space of possibilities, that are truly novel, or not truly novel but feel different, rather than more of the same or more of combining other things.

Sam:

I'm not sure the systems we have can quite do that without being perturbed to do that kind of thing, in which case maybe it's the humans that are doing the creative part, I don't know. We're not quite there yet, but I'm not sure we're never going to be there. But I do feel comfortable thinking that, yeah, what these systems are creating does feel creative. And going along with tool use: tools have always been part of the creative process. When there's a new tool, then artists and creators use it, they co-opt that tool to make something new, whether it's using new types of paints, or new types of optics to image something, like the camera, or whether it's actually building art out of HTML and CSS or whatever it is. People have been using different technologies as the raw material for other things, and so I feel like we're kind of in this proud tradition of that. But even so, I do think the outputs have a certain level of creativity, for some level or for some definition of creativity.

Ramon:

Yeah, you know, again, I hate to play the historian, but if you read that 1951 paper by Turing, he kind of tackles this, right? He says there's no reason to think that machines won't be able to create; all you have to do is program them to do it and they'll do it. Of course, at the time he had hardly seen a computer, he was largely inventing it mathematically, but he could foresee the possibility of functionally putting together something that could create. But what I want to say, and I usually say this to my students, is that if you use creativity and novelty to define what's uniquely human, then you're going to exclude 99% of humans, because most of us are not that creative or novel when we create. Of course, what Sam was saying is true: even those who are geniuses were never working in a vacuum; they were borrowing from others. Even Bach was borrowing from people who came before him, even from some Italian composers. So it's really hard, right? You don't want to use that criterion to exclude creative agents, because most of us would fail it. I keep telling my students that for a while I fancied myself a musician, but everything I played just sounded like Nirvana. A bad version of it, to say the least. So nothing new there.

Ramon:

The other thing I want to say is that there's this gigantic problem that we've had for at least half a century, if not almost a century, and it's what Walter Benjamin was wrestling with when he wrote that paper on the work of art in the age of mechanical reproduction, right, where he was asking: where's the original, where's the aura of the original, are we losing it, and so on. Now fast forward to the digital world, and that's a really different kind of problem. But I want to bring up two problems here when it comes to creativity and generative AI. The first is that, for legal purposes and for ideas of ownership of art and creativity, it's going to be really difficult to regulate this, or even to say exactly what it is these things are doing. Consider what happened with Midjourney. Midjourney, especially version 5, was trained with what, maybe two to three billion images, out of a dataset of five billion images, the LAION dataset. That's a crazy number of images.

Ramon:

And what it does with those images is really interesting. It doesn't look at the images the way we do; it doesn't look at the concepts in the images. It studies the mathematical relationships between pixel values: how far is one pixel from another in mathematical terms, how far apart are they in coloration, what's the texture?

Ramon:

And then it mixes all of that up and comes up with a novel, beautiful image from the prompt you gave it, and so it's really hard to say what it stole from an artist or what it was copying. Asking what its influence or inspiration was is going to be really problematic, because it operates at such a granular level of detail. Most of us, even when we copy other humans, go by human terms: oh, that looks like a rabbit; I like the way he used the brush. Some of us get a little nerdy and think about the pigment and the brushwork and all that, but that's where we stop. We don't go and analyze the mathematical differences between one millimeter-sized patch of the canvas and the next two. So it's working in a completely different way than we do: it's working through the mathematical relationships of pixels.
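
Ramon's point about pixel-level relationships can be illustrated, very loosely, in a few lines of Python. This is only a toy sketch, assuming NumPy and Pillow and a hypothetical file named example.jpg; real generative models learn vastly richer statistics than a single neighboring-pixel distance, but it shows what it means to treat an image as numbers rather than as concepts.

```python
import numpy as np
from PIL import Image

# Treat the picture purely as an array of numbers, with no notion
# of the concepts it depicts. "example.jpg" is a placeholder file name.
img = np.asarray(Image.open("example.jpg").convert("RGB"), dtype=np.float32)

# "How far is one pixel from another?" -- Euclidean distance in RGB space
# between each pixel and its right-hand neighbour.
neighbour_dist = np.linalg.norm(img[:, 1:] - img[:, :-1], axis=-1)

# A crude texture statistic: the average local variation across the image.
print(f"mean neighbouring-pixel distance (rough 'texture'): {neighbour_dist.mean():.2f}")
```
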

Ramon:

And then here's the other thing that's going to happen with LAION and databases like it, when you're thinking about whether these systems are stealing, or borrowing, or being influenced by other art. LAION, for example, is not a database of images. I don't know if you know this, but the LAION database is a database of URL addresses for images, and so it's not as though they can be sued over the images, because they don't even have the images. What they have is a gigantic list of URLs and a little program that can retrieve all the images from those addresses for you to use to train Midjourney or your own generative AI. So, again, another layer of complication: it's not just the mathematical relationships between pixels, which already make it hard to say "you stole from me" or "you stole from these artists"; it's also not even the image itself, because what you're really taking is the address of the image. Anyway, the last thing I want to say about this is that when people worry about generative AI moving into art or creative enterprises, I worry that we're missing the point about these creative practices, because, of course, I think AI can produce way better illustrations than I can, way better illustrations than 99% of humans can, but they're illustrations.
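
To make Ramon's point about LAION concrete: a URL-only dataset is essentially a long list of addresses plus captions, and the pixels themselves are fetched from the open web at training time. Here is a minimal sketch of that retrieval step, assuming Python with the requests and Pillow libraries and a hypothetical urls.txt file of image links; the tooling actually used for LAION-scale downloads (e.g. img2dataset) is far more robust than this.

```python
import requests
from io import BytesIO
from PIL import Image

def fetch_images(url_file: str, timeout: float = 10.0):
    """Download the images listed in a plain-text file of URLs.

    The dataset itself only stores addresses; the pixels live elsewhere
    on the web, which is exactly the point about what is actually "held".
    """
    images = []
    with open(url_file) as f:
        for line in f:
            url = line.strip()
            if not url:
                continue
            try:
                resp = requests.get(url, timeout=timeout)
                resp.raise_for_status()
                images.append(Image.open(BytesIO(resp.content)).convert("RGB"))
            except Exception as exc:  # broken links are common in URL-only datasets
                print(f"skipping {url}: {exc}")
    return images

# Hypothetical usage: 'urls.txt' stands in for a slice of a URL-only dataset.
training_images = fetch_images("urls.txt")
print(f"retrieved {len(training_images)} images")
```
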

Ramon:

To call it art is a little bit of a misnomer, because I think art is a practice. It's a strange kind of practice that's more related to prayer or expression than to mere production. We live in a world in which we really focus on the product, and then we try to mimic the product, and if we get the product we get the function, and if we get the function we can make money with it. That's it. But with art and these other generative practices of humans, the product is almost the least important part. Of course, for the market it's not, but for the person who created it, the product is just the last little thing. So for me it's like inventing a machine to pray for me, or inventing a machine to exercise for me. It misses the point.

David:

What I gather from what you just said, Ramon, is that the process of creating art, the process of how we come up with inspiration, what goes into creating art, is perhaps, and I don't know if you would put it this way, if not just as important then maybe more important than what is actually produced from that process. Because, as you mentioned, right now we equate art with the product, with the output, and put not enough weight on the process in between.

Ramon:

And that's just for the creator, right? Of course, most of us benefit from the product far more, and that's why it's valued at such levels. But I just wanted to point out that this idea of automating creativity is going to be full of really interesting problems and dimensions, from the legal questions, to the borrowing of inspiration, to whether these are really inspired machines at all, or whether, like Sam mentioned, they may still need some prompting at some point, which is what we humans are good at. I think we've talked about this, Sam and David: you need somebody to ask the questions, and so far I've seen a lot of these machines producing answers, but I haven't seen them asking the questions. So maybe that's where creativity really is at the end of the day: in the yearning for a question to start with, and then, of course, in pursuing the answer once we have it.

David:

Well, out of respect for both of your time and the insights you've already contributed to this conversation, and as I realized coming into this, I tend to run these sessions a bit like a courtroom, so I'm going to end with closing statements. These are statements that summarize not just what we've discussed so far; also, unlike in a courtroom, I'll allow you to introduce new evidence into your statements. So let's end with these closing statements, along with any other questions that maybe we should be raising at the same time.

Sam:

Yeah, so in terms of introducing new evidence: thus far the most magical experience I've had with ChatGPT relates to its ability with language. I've mentioned how good it is at language and at understanding the story world, and I think that's one of the places where its strengths really lie. My most magical experience was in relation to storytelling, specifically Dungeons and Dragons. Both of my kids are really into fantasy, and they'd asked me about playing Dungeons and Dragons, but I didn't know how to play it; I couldn't be a Dungeon Master, run the game, and create the story of this virtual world for them to inhabit. Then, a month or two ago, I read an article where someone said, okay, here's a prompt for, I think, ChatGPT Plus, the GPT-4 version: throw it into ChatGPT and it will act as a Dungeon Master, it will create an incredibly rich imagined world, you can create your characters and play, and it'll tell you when to roll the dice and give you your options. We tried it, and my kids and I had the most amazing experience. It was fantastic, it created this rich world, and we lost track of time trying different things. It was still a little rough around the edges at certain points, but it really drove home to me that this ability to imbibe all of the language and story we have as a society, and then use that as the raw material for creating new worlds and new kinds of stories, is something really amazing. It created an interactive story unlike anything I'd really experienced before. It was wonderful.
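
For readers curious what "throw a prompt into ChatGPT and it acts as a Dungeon Master" could look like programmatically, here is a minimal sketch assuming the openai Python client and an API key in the environment; the prompt wording is invented for illustration and is not the prompt from the article Sam mentions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical Dungeon Master instructions, kept deliberately short.
dm_prompt = (
    "You are the Dungeon Master for a Dungeons & Dragons style adventure. "
    "Describe the world vividly, offer the players numbered choices, and "
    "tell them when to roll dice and what the results mean."
)

history = [{"role": "system", "content": dm_prompt}]

while True:
    player = input("> ")  # e.g. "We enter the cave"
    if player.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": player})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    print(text)
```
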

Sam:

And going back to what I said about these tools and their ability to be creative: however we describe or think about that creativity, separate from that, these systems are generating things that really do feel creative, and creative in a very collaborative way. I don't know what the future holds, and I don't know exactly how it's going to change society, but I do think it is going to create new possibilities for a human-machine partnership, and that goes back to what we were saying about how to think about alignment.

Sam:

It's not enough to think only about humans, what we want from machines and what we want them to do. It's about how we work together, and ultimately I think that is the question we'll take away from wherever these systems lead us: what do we want this human-machine partnership to be? Do we want it to be the best versions of us and the best versions of the machines combined, making us the most creative and best versions of ourselves? That's a conversation we need to have as a society. But as long as we focus on the partnership between humans and machines, and on how we can be the best versions of ourselves, the most creative, the most flourishing, whatever we want, I think that's a really important direction to think about.

Ramon:

You know, I'm going to go along with Sam on this idea. There's a term that is used in several different disciplines, but I like the way one philosopher uses it. Sabina Leonelli has this concept of data imaginaries in a paper of hers. She was talking about COVID when she wrote it, but the idea is: how do we imagine, how do we project, the future with these novel technologies? And she says there are going to be three or four possible imaginaries for each of these technologies, and we ought to be able to choose among those imaginaries to see which one best suits us and our purposes going forward. So here's a moment in which we should be positing these imaginaries. These technologies are going to be really hard to contain; you can't put the cat back in the bag, they're already out there. So what we need to start doing is imagining how we can live with them going forward and what we can use them for.

Ramon:

That really beautiful and sweet story of Sam's is enlightening in that respect. You could have just told your children, look, prompt GPT and go play with it, bye. But no, you chose: let's prompt GPT together, and let's play together with the output. And in that, he chose an imaginary in which he continued to be central to the experience of his children. Again, it's one of those places where it was a choice, not something inevitable. The imaginary could have gone elsewhere, but the imaginary you chose still had you as part of their human development experience.

Ramon:

And so I'm thinking that with AI we're going to have to imagine how we want to incorporate it into our lives, not just how we want to keep it away from our lives or how we want to restrain it, but how we can flourish along with it. I'm thinking again of the telescope, because I'm finishing up my book and that's a central part of it. Actually, it's not even a central part; the whole book is on computer simulations, but I have the telescope story as an introduction and then again as a conclusion, because it's this overarching imaginary: what kind of civilization do we want to be with computer simulations? Do we want to be like the telescope people, who imagined a future in which they understood the technology in order to better use it for their purposes, or do we want to be like oracle believers, who just find the rock that can predict things and then use it to thrive without really knowing how the rock works?

Ramon:

And so I pose that same imaginary for your listeners, David: we're going to have to choose, and we shouldn't just let things run their course. It's up to us to ultimately decide how to incorporate these novel methods and machines into our projects. I don't have any new evidence; that was just the moral of the story.

David:

Yeah, that's perfect, and it's a beautiful way to end the conversation, in part because it fits thematically with the other conversations we've had with guests of the Center for Cyber-Social Dynamics, which is the center producing this conversation. Part of the direction of these conversations is geared towards just that topic: building out and defining this relationship between humans and technology. What does that dynamic look like? That's the dynamics part of our name, and since that is one of our central questions and topics, to be able to end on that brief exchange, it couldn't have ended any better. So thank you both again, Ramon and Sam, for coming back and joining me for this redo of a conversation we had.

Ramon:

The original was better. The original was better.

David:

And that's how it goes. The unrecorded, freestyle version will sometimes be better than the more structured, formal version.

Ramon:

Is that the truth? I don't think so, but I'm just putting it out there in case the listeners find this one falls short. It's like, oh, we did have this conversation before, and it sounded better than it does on the album.

David:

Yeah, we'll cover all our bases and say this may or may not have been better than the original conversation.

David:

If there is anything that made the original conversation better, it's the live aspect of it all, the feel of the room, which we were able to fold into our conversation about what machines aren't able to assess from the real world. Part of how we could gauge the difference between this one and the original conversation is the number of hands that went up at the end; I thought that was great, and I'd like to believe this conversation would have produced just as many hands. Obviously, in this conversation more questions can be raised; I think we alluded to them, and hopefully the follow-up conversation that you and I had, Ramon, will answer some of them, particularly the epistemic aspects of the questions we've had. So look forward to that follow-up as well, and with that, thank you both.

Ramon:

Thank you, thanks, David. Thanks again for the opportunity and for the effort in making it happen. Thanks, Sam, I really appreciate your time, and it's lovely to have such enriching conversations. You always leave me with lots of things to think about.

Sam:

Likewise. This was fantastic. Thank you.

David:

And with that, we look forward to you, the audience, listening, and to future conversations that will take up this topic of the relationship between humans and technology. So look out for all those additional and follow-up conversations, and we'll see you next time, in the next episodes.
