
Center for Cyber-Social Dynamics Podcast
This podcast consists of recorded conversations and workshops held by the Center for Cyber-Social Dynamics of the Institute for Information Sciences at the University of Kansas. The Center focuses, broadly, on the effects emerging technologies have on our social, physical, and digital worlds.
Center for Cyber-Social Dynamics Podcast Episode 3: Exploring the Future of Human-AI Partnerships in Knowledge Production with Federica Russo
Humans and technologies have a long-standing relationship of love and hate, fear and hope, trust and distrust. Within digital technologies, AI and generative AI are but the latest (and certainly not the last) episode in this sequence. Can we think of human-technology relations in terms of partnership, rather than opposition? How? In this talk, Professor Federica Russo joins the Center for Cyber-Social Dynamics Workshop to discuss the concept of poiesis, which she argues captures the idea and possibility of humans forming more of a partnership with technology. Further, with this concept fully expressed, Russo hopes that it can prove useful in science and technology education.
On this episode of the Center for Cyber-Social Dynamics podcast, we were joined by Federica Russo, a philosopher of science and chair of Philosophy and Ethics of Techno-Science at the Freudenthal Institute. Dr. Russo discusses the concept of poiesis as a way of thinking about the human-technology relationship as a partnership rather than one of mere opposition. As always, the Center for Cyber-Social Dynamics podcast is hosted by our center director, John Symons, and is brought to you by the Institute for Information Sciences at the University of Kansas. We ask that you help us grow our podcast by liking, subscribing, and sharing. With that, thank you for listening and enjoy.
John Symons:Well, welcome everyone. Today we have Federica Russo joining us. It's a great honor to have you, Federica. As you all know, she's a distinguished philosopher of science and technology. In addition to being a professor at the University of Amsterdam, she's also associated with UCL in London in STS, so in science and technology studies. She's at the ILLC in Amsterdam, which of course is a very distinguished place, needs no introduction, and she's working within the language and cognition in argumentation group there.
John Symons:Federica has been at the University of Kent, she's been at Pittsburgh, she's been at Louvain, and her research centers on epistemology, methodology, and normative aspects that arise in both the health sciences and the social sciences.
John Symons:I think she's known for the intersection between, let's say, policy questions and questions in epistemology broadly. So now, as we all grapple with our current changing technological context, Federica's writing a great deal of fascinating work on computation and technology generally, modeling, causal modeling, evidence in scientific inference, and her latest book is called "Techno-Scientific Practices: An Informational Approach," if I'm getting that correct. As you probably know, Federica is editor-in-chief of Digital Society and has been the co-editor-in-chief of the European Journal for Philosophy of Science. She's sitting on the management team of the Institute for Advanced Study at the University of Amsterdam and she's a member of the steering committee of the European Philosophy of Science Association.
John Symons:So Federica has all kinds of service to the profession and is a key player in developments in philosophy of science, and she is one of the most interesting thinkers at the intersection between traditional philosophy of science and epistemology and these new questions that are emerging with the technology that we're all interested in studying in this group. So that was a long introduction, but I'm really delighted to have you, Federica, and I look forward to your presentation and the inevitably fascinating, undoubtedly fascinating conversation we'll have afterwards. So thank you so much. The floor is yours.
Federica Russo:Thank you very much, John. This was a very generous introduction and I feel honored that you asked me to present my work in your group. I've been looking at what you do and it sounds really great, so I'm very excited to discuss some ideas with you. Let me first share the screen, let's see whether the technology is on our side, and then we continue. So what I want to talk about today really is how to think about and design human-AI systems. But I want to start this story a few steps back.
Federica Russo:I still consider myself a philosopher of science primarily, and in fact, the root of this way of thinking about human-AI systems is in reflections that I started in philosophy of science and, more precisely, in trying to rethink the scientific process, and more specifically the scientific process of knowledge production, for the way in which instruments and technologies are really part of this process. So what I will do is really go a few steps back, starting from this division between philosophy of science and philosophy of technology, and try to restart a conversation about the role that instruments have in this process. I did this in part because it has been my personal intellectual struggle to be in this divide: either just talking about science and the typical concepts that philosophers of science use, such as theory, experiment, explanation, but nothing about the instruments, or, at the other end of the spectrum, talking about the artifacts, the instruments, as philosophy of technology typically does. But the intersection was not there yet. And I was interested in this because I was interested in what would happen to the concept of knowledge, and to the way in which we generate knowledge in techno-scientific contexts, if we change the framing of the question.
Federica Russo:And the central part of the book that came out a few months ago is this concept of poiesis that I borrow from the Greek tradition, so it has to do with the production of artifacts, but I have also expanded it with some ideas that come from the philosophy of information. Now, this is going to be a pretty long introduction before I get into artificial intelligence and digital technologies. But it has also been interesting for me to note that it was not my intention to use this notion of poiesis in research on AI and the philosophy of AI; yet as things are unfolding, I'm discovering that the way in which I'm trying to characterize the relation between instruments and us, while not intentional, has unfolded in that direction, also in collaborations that I have and in this project that just started two months ago. In practice, this idea of poiesis is really proving to be useful in thinking of these interactions between us, human epistemic agents, and the artificial epistemic agents. And so this is how I will try and set up the discourse, and I will also add a few reflections on why this may matter at the level of teaching. And I got partly inspired by that, or at least it was an interesting coincidence that, as I was discussing these things with some colleagues in Utrecht, it turned out that I listened to your podcast, John, on ChatGPT, and I thought that there was a lot of concern. So I thought I should share some of these ideas with you as well, because we seem to be very much on the same page. Ok, so, oops, sorry, that's it.
Federica Russo:So let me begin with the divide between philosophy of science and philosophy of technology, and here I am making a very long story pretty short. What we have inherited is this division between science on the one hand and technology on the other, something that has roots in Greek thinking: putting episteme on the one side and techne on the other, and also putting episteme as hierarchically superior to techne, and so science as superior to technology. As things evolved over time, and especially in the last century, this came to correspond to a division between philosophy of science and philosophy of technology, the two fields being by and large disconnected, something that I tried to document in the book by looking at the way the fields are organized: think of the conferences, think of the publications, think of job advertisements. It is very rare to have a profile in which we look for a philosopher of science and technology; if we look for a philosopher of technology, it is associated with ethics, and then again philosophy of science is not much concerned with normative questions. So I think this is something that we can see out there that has sedimented in a certain way.
Federica Russo:The publications are also pretty different. Core authors in philosophy of science, and you can think here of Nancy Cartwright, are rarely cited and discussed in philosophy of technology and, conversely, thinkers that have been highly influential in philosophy of technology, such as Don Ihde, are not thoroughly read in philosophy of science. So there is clearly a division there, and it is there, although I think we have to be careful in making this an absolute distinction, because things are gradually changing, at least from the philosophy of science side. The philosophy of science in practice movement has been bridging this gap to some extent, and also this claim of a division should be nuanced in historical perspective, because there exist fields, such as French epistemology, that went largely forgotten in Anglo-American circles but never had this sharp divide between science and technology. So there is clearly an interesting discourse to set up there, and also something to learn from other traditions that probably did not make it into the mainstream we are used to; and definitely the type of mainstream in which I have been trained, Anglo-American philosophy of science, massively overlooked questions of technology, delegating them either to the kind of analytic philosophy of technology or to the discourse done by sociologists of knowledge and the STS community. So clearly something to be investigated there.
Federica Russo:With this background, what happened to me is that I've always been interested in how we generate knowledge, looking primarily at the biomedical sciences and the social sciences, and understanding how, through certain modeling strategies, we get to establish knowledge of this and of that. But over time something became really clear to me, kind of the big pink elephant in the room: we were not considering the role of instruments in this discourse about knowledge generation. They were clearly not part of the picture. You can say that new experimentalists have been looking at the experimental setup, and that is true. In new experimentalism there is in part this attention, but ultimately what they have been trying to show is that we have to look at experiments, and the experimental setup, not just for the role they have in confirming theories, but also on their own terms. So the question for me is still there: if we consider the proper epistemic role of instruments in this techno-scientific context, how does this discourse of knowledge change? And to illustrate briefly what I have in mind here, I'm going to give you two episodes of techno-science that I have been studying in quite some detail.
Federica Russo:One is molecular epidemiology. What I found very, very interesting to study through and through in molecular epidemiology is that technologies are essential at all stages, from data generation to interpretation. Molecular epidemiology marks an important change in epidemiology because, unlike traditional epidemiology, it changes the scale of measurement dramatically. They have started measuring exposure and aspects of disease at the molecular level, and this entailed a lot of analysis of biosamples but also generating data in new ways. So it is not that they have been doing the same thing, just at a more granular level.
Federica Russo:It is, I think, fair to say that none of what they do in exposome research in molecular epidemiology could be done without the technology.
Federica Russo:They could not generate these types of data, they could not run the analysis of biosamples in the way they do, and because the data sets became so big, they would not be able to perform the analysis at all.
Federica Russo:So here it is not just a matter of understanding more, it is really a matter of creating part of the scientific object that they are studying, and this is of course very common throughout the natural sciences and the biomedical sciences. Of course, when one starts to talk about the role of technologies, the Large Hadron Collider comes to mind, and all the sophisticated technological equipment that we may have in astrophysics, in high-energy physics, you name it. As a contrast, I like to consider computational history of ideas as the other case, because the humanities too are concerned with this shift in focus. Computational history of ideas introduces the use of software for the digitization of texts and later also for the analysis of large corpora. Historians of ideas have been relying essentially on qualitative approaches such as close reading. And now the temptation would be to say that, thanks to the technology, we can investigate just more of these texts. But in fact that is, I think, a superficial understanding of what happens.
Federica Russo:Anyway, in computational history of ideas they also build up ontologies, they conceptualize the very concept of ideas in different ways, and through this they then use the technology to find out things that would not be visible otherwise, just with the method of close reading. And what happens is not that there is a replacement of humans. There is really a combination of the close reading that the human reader does together with the type of analysis that you can perform thanks to certain types of software, digitization of texts, and other techniques. So what has become very, very interesting for me in episodes like this is to understand what the proper role of instruments is in this process of knowledge production. But why do I want to put so much emphasis on the instrument? On the one hand, I would like to say that instruments do more than just mediate between us and the world. This idea of mediation has become pretty dominant in philosophy of technology, and especially in the post-phenomenological tradition, and I want to be able to say, from a philosophy of science perspective, that they do more than mediating. Also, some people, especially in philosophy of science, have been thinking that the instruments augment our capacities to see the smaller or the bigger, and I want to be able to say that there is more than just augmenting, more than just enhancing the ability to analyze more data; in this sense philosophers of science have been pretty naive in just assuming that with the technology you see more. So I want to be able to articulate what the proper role of the instruments is here. I'm now getting to the important part of how I analyzed the role of the instruments, but I'm going to give you the solution first. What I am interested in is how we can come to reconceptualize knowledge in these techno-scientific contexts, and clearly the critical target here, coming from an Anglo-American and analytic philosophy of science tradition, is knowledge as justified true belief. We cannot just analyze knowledge as justified true belief. There is much more that knowledge is in techno-scientific contexts, and that I encapsulate with the acronym READEM, which stands for relational, distributed, embodied, and material. So what I'm going to give you is how I try to characterize knowledge, and it is really the result I arrive at. Let me read this in full: "Knowledge is a product of technoscientific activities carried out by epistemic agents.
Federica Russo:It is often expressed in propositional form in natural language. It is also encapsulated in material objects and is situated with respect to a number of social, cultural, or material aspects." So, when I give you a characterization like this, what I'm trying to do is to grasp the many aspects that are relevant to understanding what knowledge is in techno-scientific contexts. I'm not trying to give you a definition. I'm trying to highlight things we may be interested in at different moments of different types of investigations. So there are elements of relation, for instance, in the way we relate concepts to each other, but also in the way in which we relate data to theory, the ways in which we relate experimental results to some background. There are elements of distribution across human epistemic agents, across communities, also across agents that are artificial and human. There are elements of embodiment, because the way in which we know the world depends also on our bodies and how we know with our body, and this is something that we can learn from cognitive science to some extent.
Federica Russo:But I'm also trying to bring into this discussion new materialism and the discourse developed, for instance, by Karen Barad, and how much we interact with the instruments in certain environments, and this is part of how we generate knowledge. There are important elements of materiality because the instruments are material, but we cannot reduce everything to this materiality, and clearly there are elements of propositionality, and vernacularity has a large role in this, because, after all, we express results in natural language, in our academic publications, in presentations; but we cannot just flatten what knowledge is to propositional contents. All these elements are interrelated rather than isolated, and what I want to be able to say is that one element may become more prominent depending on the specific question at hand. In this way, I want to give room for a philosophy of science perspective proper, or a philosophy of technology perspective proper, or a more STS one, for those who are more interested in the power structures in science, analyzing also these elements of distribution across institutions and funding agencies and aspects like that. Now, if I try to reformulate, my question is: how can I cash out the partnership of human and artificial agents in the process of knowledge production? That's why I had to make a long story about how to reframe knowledge, because I want to be able to say that instruments have a proper epistemic role alongside us in the generation of knowledge in this context. And here is where I come to the idea of poiesis, or how I try to express the fact that we co-produce knowledge together with the instruments. So, remember, I'm not just looking at mediation. I don't want to flatten this all onto the instrument; I really want to be able to understand how we do this together.
Federica Russo:The legacy that I have here traces back really to Greek thinking, because poiesis, in Greek philosophy, is about the production of artifacts; it is about techne, it is not about episteme, and, as I said, it is the root of the alleged superiority of episteme over techne. So this is clearly a legacy that I have to discuss in part. But there is also something important that comes from the philosophy of information, because poiesis has been introduced in the philosophy of information to explain how moral agents are also producers of the environment that they are in, and this has to be taken into account when we try to understand or perform some ethical assessment of a certain situation, because much of traditional ethics is about what is right and what is wrong. From the perspective of the philosophy of information, the idea is to be able to say how we come to be in a given situation, and so moral agents are poietic agents in an important way. What I'm trying to do with poiesis here, against these two big legacies, one further away in time from Greek philosophy and one more recent from the philosophy of information, is to substantially enlarge the semantic space of poiesis in the following way: human epistemic agents have a poietic character, so clearly we are involved in the production of artifacts.
Federica Russo:This is a topos in Greek philosophy and also in the philosophy of technology. It is not my main interest here, but clearly it is something that is at stake. We are the producers of all the artifacts that we use in techno-scientific contexts and in everyday life. But I want to be able to say as well that the production of knowledge is something that has to do with poiesis. So here I am expanding the idea of homo poieticus as moral agent to include human epistemic agents qua techno-scientists and also qua philosophers.
Federica Russo:And here is where it may feel striking, because a Greek philosopher would never accept that we produce knowledge, because what we produce is something that is external to us, and for the Greek thinkers knowledge is something that is really internal. So here you may have a tension, but I want to be able to say that knowledge is something that we generate, that we produce. And one reason why I want to be able to make this argument is that knowledge does not fall from the sky, and we have to be able to express how much effort it takes to establish what knowledge is. I also had objections from those who would be more realist and not at ease with this idea that we produce knowledge, because it may open the door to the relativistic accounts typical of sociology of science.
Federica Russo:But I don't think I necessarily go down that road. I just want to be able to emphasize that we have to consider very seriously how much it is an act of production from our side, not just an intellectual intuition, and that in this the instruments have a proper role. So this gets me to the poietic character of artificial epistemic agents. On the one hand, it would be easy to make the argument that this is the case because of digital technologies, and especially because of those technologies, such as generative AI nowadays, that are able to modify the environment. But I think this would be too quick an argument, and in the book I've engaged with, again, French epistemology, and especially with the philosophy of Gilbert Simondon, to try and explain the way in which analog technologies too have the power, in degrees of course, to modify the environment together with us. So we have to be able to understand how much together we and the technologies have this power to create, to generate the data, to analyze the data, and also to interact with the environment in different ways.
Federica Russo:Now, what I think becomes crucial is that we don't have to jump to the quick solution that the instruments do everything and we do nothing, or the other way around. It is really to be able, for every single episode of techno-science that we want to investigate, to modulate how much we are involved and how much the instruments are involved. But there are also questions at the normative level that easily and quickly kick in, because there are responsibilities that then become involved, and these are both epistemic and moral. And when I say this, what I have in mind is, for instance, all the debate about algorithms and how much agency and autonomy they acquire, and then the argument very quickly becomes, oh, but there isn't much we can do. No, there is a lot that we can do, because we are also the designers of these algorithms, and so we can decide how much autonomy and how much agency we may want to give to these technologies.
Federica Russo:So that was really kind of a long story, but for me it was very important to rethink that in many of the scientific cases that I and my colleagues, philosophers of science, have been investigating, the role of technologies and of instruments has not figured prominently, and here I've been trying to say how it plays a role, because the technologies are there and they have a proper epistemic role in this process. Now you may think this is perhaps an intriguing story, maybe a way in which we may bridge philosophy of science and philosophy of technology. But what is this for? What is this for? That's really not what I had in mind when I started working on this, but as I started collaborating with other people, it seemed to me that it may be one way to look at AI and some of the challenges ahead of us because of generative AI and the many digital technologies that we use. So let me try and say what the point of adopting this stance in research is. One project that started just two months ago, its acronym is SOLARIS, and it has a very long title, but basically it is about understanding deep fakes. When we wrote the proposal, it was largely about generative adversarial networks: understanding how these deep fakes are generated, how they spread on social media, how they can threaten democratic processes.
Federica Russo:That was one of the main objectives of the call of the EU funding scheme that we applied to. What we are interested in, and I'm going to give you some elements of how we are going to run these analyses, is seeing how users trust deep fakes from a system perspective, and I'm going to say a bit more about the system perspective in a moment. Of course, part of the project will also be about what the appropriate level of intervention is to diminish the effects of infodemics, but also what the good uses of deep fakes are, because what has become clear in just the past few weeks of carrying out the research is that this has literally exploded. From the time we wrote the proposal, just about a year ago, until now, it is a totally different landscape. When we started, it was mainly generative adversarial networks that were used to produce deep fakes, and now there is a whole array of technologies able to produce deep fakes of pictures and videos, even reproducing voices, and this in a matter of minutes, just three clicks away from spreading deep fakes on social media.
Federica Russo:But what I would like to show you is how this system perspective, and so also this way of analyzing the role of humans and instruments within the same system, may change things. Because what we are trying to say here about deep fakes is not that everything depends on the technology, or that everything depends on the user, as in, oh, we are very naive and we trust whatever. We are trying to set up an analysis in which we identify several elements or aspects that may play a role. This may depend on visual content, for instance, whether disclosure of authenticity or inauthenticity is made or not. What is actually the function of the deep fake content? What happens with the representation of the target? And here we will work a lot with colleagues in visual semiotics to understand how changing the target, and parts of the content, may change in turn the perception of trust, and also the integration of inauthentic content into something that was originally authentic.
Federica Russo:But at the same time, we are also interested in understanding how viewing the content may make a difference: the medium of communication, the audience reception, and also the relationality and framing. So, for instance, it may make a difference whether deep fakes are shared, say, on social media, where you then get a lot of suggestions, "you may also like this, this, and this," or whether they get to you via WhatsApp messaging and come from one of your relatives or your trusted friends. You see, we are also trying to set up an analysis of paratextual knowledge: how degrees of truthfulness come to be established, how much content comes across as being true, which may or may not be intentional, but also what the perceived identity of the target is and how this deep fake discourse is carried out at different levels. What I find interesting in the type of analysis that we are setting up is that it inherits some of the elements of actor-network theory, because basically we are putting a number of elements on the same level, just as ANT would do, without a pre-ordered priority: the instrument, the generated content, specific elements of the generated content, parts of the environment, parts of the technologies. But then, unlike ANT, which does not really take a strong stance about production, here we want to be more in line with this idea of poiesis and say that the instruments also have a poietic character, not only generating the content but also generating the trust at the same time. The idea of this system-level analysis is also to say that it's not just about the technology and the generated content; it is also about how we human epistemic agents interact with this and with an environment. This is still in the early stages, so I'm sure you will have questions, and especially doubts that this is going to work out, but this is exactly what I was discussing just this morning with my colleagues: the analysis of this has to be very complex and include very many different levels of analysis, and this is what we will be trying to do in the coming months.
Federica Russo:Let me now turn to something that is not usual for us philosophers of science to discuss, but people in education may be more used to it, especially when they have an interest in how technology changes the landscape of education. And again, as I said, I was very pleased to hear from John that he was thinking along the same lines about ChatGPT, so I'm using this as an excuse to continue and pick up on that chain of thoughts. Think of how the writing experience with very old-fashioned pen and ink changed the moment we introduced more modern rollerball pens, and how it changed again the moment we introduced pens that can be erased so you could rewrite on the same page. This happened when I was a young kid at school, actually. And likewise you can start thinking: why do we still need to learn handwriting the moment we have computers? Why do we still need to learn to make calculations the moment we have calculators, and very sophisticated ones, on our phones and our computers? And this is clearly at the top of the agenda of people in education: how much technology do you want to introduce in educational settings, right?
Federica Russo:So these are clearly important questions, and you can ask how the technologies have changed, or are changing, our learning and thinking processes, a question that is interesting to ask also from the perspective of cognitive science. I'm sure that each one of us can also reflect on our own way of learning, thinking, and writing, because if we are old enough, we have also changed technologies in different ways, and on how to make a balanced use of these technologies in teaching at different levels. That's why I was saying there is still something to be asked about the value of teaching handwriting or calculation, and I have some intuitions from a philosophy of science and technology perspective; at some point I would be very much interested in hearing the perspective of teachers, which may be very different, and they may have different reasons for keeping these low-key technological interventions like handwriting, although effectively now we type all the time, right? And here I hope you can see that there is a deep connection with the idea of poiesis. We tend to think of the learning process as something highly intellectual, but in fact it is not just mediated by the instrument, it is also co-constructed with the instrument. So this is something that I would like at some point to be able to articulate in a context that is not techno-scientific in the way molecular epidemiology or computational history of ideas are, but that ultimately has an important role in our academic environment.
Federica Russo:Now, the questions that I'm interested in, as I say: how do technologies stretch our thinking, writing, and speaking? How do we educate and train advanced students, or early-career researchers, to be self-reflective about the role, the use, of technologies? But also: how much technology am I allowed to use, or do I want to use? These things may seem highly disconnected again, but I hope I will be able to show you that the connection is there. So, the animation has crumbled entirely here. Think, for instance, of how you can structure your thinking differently if you use basic Microsoft Word or write in LaTeX, where you are much more forced to think of the structure of a document, since it is effectively a piece of software that then translates into text. And now that we have very simple ways of generating text, such as ChatGPT, you really want to ask the question: why should I bother writing my own paper when there is an artifact that can write it for me? Now here there is something to be said about our role as intellectuals, as thinkers, and this also has to be done in connection with regulation.
Federica Russo:It was quite interesting for me that, a couple of weeks after I took over the editorship of Digital Society, I received an email from Springer saying: watch out, because these AI systems cannot be listed as co-authors under our regulations, but we encourage authors to document, in the methodology section or anywhere else in the paper, whether they have used these technologies for the writing of the paper. You see, it is not an outright "no, you cannot use it"; it is "watch out, from a legal perspective these are not authors at the moment, yet if you are using them you have to tell us how you are using them." So this, for me, and now I put on my hat as a supervisor, is a conversation that I need to have with my PhD students, with the early careers, and also with my master's students writing their theses.
Federica Russo:Okay, because the point there is not just that I'm going to run plagiarism software, or I'm going to use this detection software to see whether the content has been generated by AI. The question is why it is important that we generate content, or why it is okay that this content is partly generated by an AI. Right, I don't have an answer to that. I'm just putting this on the table as something that we may want to discuss. And this is exactly the idea of poiesis: not to demonize the technology that is part and parcel of my writing process, but to think of how I want the technology to be part of my thinking and writing process.
Federica Russo:And, likewise, giving presentations, which is something we do all the time, is something that has changed. In the past, we used to make no use whatsoever of visual support. Then we had the transparencies that we had to print out, then we got PowerPoint, and now we even have an assistant in PowerPoint giving us suggestions on how to set up a beautiful slide, right? There again, I think there is something very, very interesting to reflect upon, and clearly you may consider it just from a legal perspective. This is what most people will do, kind of jump on it and say, hey, wait a minute, we have to regulate authorship. Or you may, as I'm trying to do, think a bit more deeply about how this partnership happens, and also be able to make decisions for myself, on the one hand, but also help the more junior generation see how they want to go about these interactions with the technologies. Okay, so I'm coming really to the conclusion.
Federica Russo:Technology has always been there with us, not breaking news, and it's not going to go away at all. If anything, how we have to negotiate our relationship with technology is going to become even more complicated. Clearly, there have been very many changes and transformations, and there will be many more in the future. The big challenge that we are forced to face now is with these generative AI technologies. Deep fakes are the case that I am investigating with colleagues just now, but I also gave you examples of how this may be present in our daily life as academics.
Federica Russo:And now I'm kind of going back to where I started. I don't want to start just with "oh, we have a problem, panic, what do we do?" I think there is a discourse to be set up from a proper philosophy of science perspective, and this is what I was trying to show. I would like to think in terms of partnership, not in terms of opposition, and so to think of how this may influence our research ahead, our teaching, our learning, writing, and speaking, and other types of activities. And I sense that if we can phrase this in terms of partnership, and also make active decisions about how much we want the technology to be present in our research, in our writing activities, et cetera, then I think we can set up a discourse for the future, also at the normative level. And with this I conclude, and I would also like to thank the PowerPoint designer for giving me ideas, just to show that I am tinkering with these technologies myself. Thank you.
John Symons:Thank you, Federica. So, lots of deep and important questions on the table. I noticed we have Ramon Alvarado, who popped in; he's been writing about instruments and AI as a sort of cognitive instrument. We've got David Tamez on the call, who's been leading the group on privacy and deep fakes here at KU. We have Antonio Fonseca on the call, who's also deeply interested in these questions. Denisa is here. So why don't I solicit your questions? If you raise your hand, I'm gonna put the thing up here, and I'll try to keep track and make a list. To begin with, Joe and Ramon; let's begin with Joe. So, Joe Bernal.
Joe Bernal:Great, thank you. Thank you for the presentation, really excellent. I really enjoyed it because there's a lot of stuff that I think I thought about in very much the same way. I think there's a lot of shared interest there. I have a bunch of questions, but in particular I'm trying to get at, I was thinking about alternative forms of knowledge, non-propositional forms of knowledge. That came to mind because I wrote a paper about Ian Hacking's entity realism in philosophy of science and his instrumentalist appeals, and what I thought was interesting was that Hacking appealed to instrumentalism. What he didn't explicitly say, but what I was wondering about, is the role of the experimenter, how that acquaintance and the know-how knowledge they acquire through the use of the instrument cannot quite be represented propositionally. It's that experimental knowledge, of the person who knows how to work the equipment, who knows how to make those fine adjustments. So I was curious about your thoughts on, yeah, non-propositional forms of knowledge.
Federica Russo:So for me, the interest in broadening significantly the characterization of knowledge would be exactly to be able to include what you say, I mean this non-propositional aspect, in a fundamental way, which you cannot do if you keep working with JTB implicitly or explicitly. So, definitely, I don't want to just get rid of JTB and of propositional knowledge, because we do express knowledge in propositional terms, but I want to be able to say how much knowledge also happens at a level that is material: material in the instruments, or material in the sense of embodied, or in the sense of an environment, and the environment, again, can have materiality but also institutional aspects, you see. So what you are saying is exactly right, and that's exactly what I'm trying to bring in as a legitimate part of a philosophy of science discourse.
John Symons:Ramon.
Ramon Alvarado:Hi, thank you very much for your talk. I really appreciate it. Right now I'm actually finishing revisions on my paper on AI as an epistemic technology, so this is very relevant, and I've been writing on the topic. What you say really resonates with three figures that I can think of in the philosophy of computing and the philosophy of science, right. So Herbert Simon, when he talks about the sciences of the artificial.
Ramon Alvarado:Then you have Davis Baird and the attempt to make instruments a source of knowledge independent from theory and experimentation, right? And then, of course, Paul Humphreys, who says we have this symbiosis with instruments such that they enhance our epistemic capacities towards the future in three different ways. But one of the things that I see with these three figures is this sort of externalism: they are acknowledging that with these instruments and with these new technologies, we're not just creating knowledge, we are discovering knowledgeable things. And in that discovering there's something happening, according to Humphreys: as we make these artifacts more and more capable of enhancing our epistemic capacities, we also start moving them a little bit further and further away from what we can understand or do. So in fact, the tension comes because we're developing these technologies so that they do stuff that we cannot do, and so that they do stuff in ways that we cannot do.
Ramon Alvarado:And in that sense, for Humphreys, we're kind of offloading epistemic agency onto the objects themselves, right. And so I was wondering: if the more capable these objects are, the more we offload these epistemic capacities onto them, and the more they do these epistemic tasks farther away from the way we do them, does this do anything to your idea that we're actually ultimately able to cooperate with these instruments? Because it seems that if this is the case, ultimately, according to Humphreys, for example, rest in peace, we're being left behind in the epistemic projects and processes by these machines that were supposed to help us enhance our epistemic agency, and so in some sense we're actually reducing our own epistemic agency, and so it's a sort of zero-sum game here, right. I don't know if you agree with that, or if it does something to your idea of epistemic cooperation.
Federica Russo:Thank you, Ramon. That's a fantastic question. That's exactly the kind of thing I'm trying to problematize with this idea of poiesis, and I don't have a definite stance on the things that you raised, but that's exactly what I would want to be able to investigate in depth, you see. So I'm trying now to give you a few ideas of how I would go about it. So, yes, there is a danger that we have a big gap, kind of a distance, between us and the instruments, because the instruments do things that we cannot do. That's exactly what I was trying to say with molecular epidemiology, right? I mean, it's not just kind of looking through a microscope and analyzing a biosample; what a mass spectrometer does is something I would not be able to do in any possible way.
Federica Russo:So are we introducing that gap? I think it is possible that we are introducing a gap, but maybe we have to reintroduce a consideration of how much of us, of our thinking and of our design, goes into the machine. Then you might say there will be cases in which, no matter how much I am able to track this process from theory to the construction of the artifact, the machine will do its thing. That's the problem with these nested algorithms in climate science, right, and that's probably one of the things that you have in the back of your mind.
Federica Russo:So this is where my intuition is that maybe we have to become a bit more normative, right? And instead of accepting that things can go that way and then we introduce this big gap, we have to say: no, the gap is too big, we don't want that gap, you know, precisely for the reasons that you mentioned, because being partners also means being in some kind of dialogue. If I lose total control over what the machine does, maybe that's not a partnership anymore, right? So I guess that with this idea of poiesis, I could navigate a more descriptive level and a more normative level and analyze some of these contentious cases. So I think you are putting your finger exactly on what I hope one would be able to do with this concept.
John Symons:Good, good. Do you want to follow up, Ramon, or should we go on to?
Ramon Alvarado:Yeah, I mean I can follow up, but again, it's a worry that I have, and I'm afraid that I'm just going to repeat myself, because if you read the conclusion, which is just one paragraph, to "Extending Ourselves" by Paul Humphreys, right, the conclusion there is that the cat is out of the bag. We've done this. We've been doing this since we came up with computational methods. So he's not talking about AI or large language models. He's talking about simulations in the 40s, 50s, 60s, 70s, and 80s, right? And so he's saying, look, now we've found a better method for the acquisition of knowledge. How can we go back? It just so happens that that method doesn't involve us anymore, right? And I know with the techno-science approach that you're trying to bring back, I'm afraid that what you're doing is not so much bringing the instrument back. I think what you're trying to do is bring the human back into the loop.
Federica Russo:Oh as well. Yes, yes.
Ramon Alvarado:Right. But that's because, if we accept Humphreys' ideas, we are already far behind the machines that we've evolved or come up with for knowledge acquisition and creation. And I think the last sentence, I have it right here, says: "if this is true, the philosophy of science, or at least that part of it which deals with epistemology, no longer belongs to humans or the humanities," right? And so again, I don't know if I'm repeating myself, but it is a fear that is very complex, and it's not just part of the paranoia that's going on right now; it's still very much there.
Federica Russo:That's an interesting thought, Ramon, really, because, yes, I mean, there have been places in the book where I say the humans have to be in this process. That's why I was talking about the responsibilities. I guess it is because this project for me started from philosophy of science, where the instruments are absent, that I was trying to put the instrument into the typical philosophy of science discourse. But I think you are totally right that if you look at it from Humphreys' perspective, then, yes, the project is to put the humans back, and we have to be back. And so, two things you said, Ramon, that I would like to comment on briefly. One is that Humphreys said it is a better way of generating knowledge. I don't know, I wouldn't take this for granted. I think this needs to be discussed, precisely because of the considerations that you made. If it is true that we are introducing this big gap and we don't even understand what the machine does, then I don't know if it is better. But because there is this presence of both, I want to discuss the presence of both and the role of both. The other aspect that you said, let me try, because now the thought escaped me; one was the better knowledge. Ah, yes, the other was that now it is kind of too late.
Federica Russo:I don't think it is too late, and I've been thinking about that in discussions with students, who are super quick in catching the stakes of this, and this is where I realized that I've been also a philosopher of medicine for a long time.
Federica Russo:And all these regulations in medicine, say for drug approval, did not happen overnight, right? So maybe AI should learn from medicine here, because they introduced the protocols and the regulation of pharmacovigilance and all these things, also gradually, precisely because at some point it was too late: they had the scandals, things were going badly. So now that we are realizing this, maybe we should think about how much of this technology we want, how much autonomy we want to give it, right? So, yes, there is a lot in climate science and elsewhere that escapes us, that we don't have a grip on. But then you can say: from tomorrow on, you do this only if we establish the rules. Why is this not possible? It is possible. So that's how I would go. I love your questions, and I hope that this helps us have this conversation not in the mode of "it is too late, there's nothing we can do," but really to regain the role of responsibility that we have.
John Symons:So we're sort of verging into the territory of governance, and I'm happy to see that Denisa has her hand up. So.
Denisa Kera:So, a fascinating discussion. Thank you so much, Federica, for this, and Ramon, I like the provocation. But my question will be: who is this "we" that will do this poiesis with the technology? Because right now, when you look at what is happening just today, there's this call for a moratorium, this idea that for six months all the companies will just stop developing models. I mean, it's so ridiculous. And it's on a website of an organization tied to, what is it, longtermism? Like some transhumanist "we" that is formed around these demands that we need to catch up with AI and so on. But I do like, I do like what you,
Denisa Kera:So, first of all, I have a deeper problem. I think it's not just epistemic agency that is taken away from us. What I see as more disturbing is that moral and ethical agency is also taken, because when you look at all these ideas of alignment and embedding ethics and these standards that they are trying to create with AI, it's almost like they'll just make some code that will rule the behavior of the users, and in some sense they were trying to do that even before AI, with these community guidelines and all this nonsense. So almost even that form of agency, for us to decide what should be the rules of the game, is taken from us, with this prospect of "with synthetic data we will create better policies," and I don't know. So I have an even deeper issue than Ramon: I see that even this type of agency is taken from us.
Denisa Kera:And then, maybe where we need to go back, because we are having some type of a Phaedrus discussion here, maybe we really need to go back to that moment when we divided epistemic questions from these moral questions. So we need to go back to episteme and techne. And where is the moral thing in techne? I honestly don't know. But I feel like we really need to stop thinking like Plato in some of the dialogues, that first we will have an insight and then we will know what to do. Maybe it's not about the epistemic agency, after all, that should have priority over our actions. So I'm not certain where I'm going with this. I kind of agree with both of you, and a lot of this, thank you, will take a lot of time for me to process.
Federica Russo:Thank you for this question, this is totally spot on. So, as a kind of disclaimer, the book really originated as a philosophy of science book, you see. So the primary question was really about knowledge and how we come to establish knowledge in these traditional contexts. But obviously the step from epistemic to moral is very, very short. You are totally right, and I try to anticipate some of these questions in the book by saying: hey, next on the agenda is all this normative dimension that I have not analyzed in the book, but that doesn't mean it isn't important; if anything, I hope to be able to address it with that kind of approach. And I hope, again, that we can think, and you say it very well, that we cannot go back to Plato. I think that's exactly the point in setting up the question about moral agency: we cannot go back to Plato anymore. These instruments are there, and then we have to think of how the moral question is to be asked differently because of these partnerships. So, 100 percent, we do that. We have a huge issue and we have to tackle it. And it is not just that
Federica Russo:the ethical questions arise because of the use of technology. You see, because that's the other wrong step that people have made in the past. You know, philosophy of science thinking in terms of the value-neutrality ideal, and then the problems arise because you use the technology. With this, I'm trying to say that the moral question arises already at the level of knowledge production. That's why I'm raising the question of responsibility, which is epistemic and moral. I'm just going to give you a quick example: fantastic research on biomarkers across the health sciences. What are the consequences of that? No, it's not a neutral concept from a moral perspective, okay? So the fact that we can technologically study, identify, and validate biomarkers has to go hand in hand with: what am I going to do with this knowledge of biomarkers? You see, and probably this goes to the problem of the epistemic gap. So this is how I'm trying to put the question of morality right next to the question of epistemology. Totally with you.
Federica Russo:Then you started with: who is the "we"? Yes, who is the "we"? There are different "we"s in this discourse. That's why I think it is important to have this element of distribution, not just so you can say, well, it is distributed and then nobody knows where it is distributed, but precisely because, for every given case, we have to be able to say across which agents this is distributed. So who are the communities that are relevant?
Federica Russo:Now, in the simplest case, when I'm typing my book, it is distributed between me and the computer, but also among my colleagues that have been commenting on this. But you are right, the dark community is there too, and we have to understand who these communities are. Maybe now the analogy with medicine and philosophy of medicine becomes relevant again, because, you know, one thing is big pharma, one thing is the research done in a university hospital, one thing is the meta-analysis commissioned by the ministry of health or whatever, you know, and so these are different epistemic communities and the regulation has to be different, and so we have to consider these things. So, agreed, 100 percent. Again, thank you for bringing this up.
John Symons:You want to follow up, Denisa, or should we move to Maria? Ok, Maria, you're up.
Maria:Let's, let's try. Can you hear me now?
John Symons:Yes, perfect.
Maria:Good. Thank you, Federica, for your time. I have a question regarding, maybe, the notion of trust that underlies this partnership, because it seems to me that one of the issues with this epistemic gap, and the ignorance that surrounds the incorporation of these technologies into our epistemology, is that maybe the notion of trust is not as robust, epistemically speaking, as what we have in mind when we have discussions about reliability, epistemic reliability, especially in philosophy of science.
Maria:Maybe you have a weaker idea of what the trust is that makes this relation reliable, even if we acknowledge that this epistemic gap exists and that it's quite problematic for us as philosophers. But maybe there is something in the background that helps us to understand.
Federica Russo:Thanks, Maria, nice to see you, and excellent question. I haven't developed a lot on the notion of trust, to be honest, in part because this is a notion that is now popular because of AI, but it is not part of the traditional baggage of philosophers of science. But what I had tried to do in the book was to rethink the notion of validity, which is instead a notion that is used more by philosophers of science, especially in the variants internal validity, external validity, etc. And I was trying to say that validity is something more holistic, that has to do with your model, your background knowledge, who you are, the instruments. So I was trying to embed how much we trust the instruments in the fact that they contribute to establishing whether or not the results of a given study are valid.
Federica Russo:I think one could actually set up a parallel, not a parallel, actually a complementary line of research, to understand the sense in which we trust the instruments. And I totally, totally agree. I haven't done that, but I think it should be up next, to enrich what I was trying to do. I think the reason why I hadn't used the notion is because it is not part of this philosophy of science background; it comes from another one, but maybe we have to somehow incorporate it in the way that you are suggesting. If you have intuitions about how to go about it, I would be very, very happy.
Maria:I'll send you an email and maybe we can meet or something. Thanks for everything, thank you.
John Symons:Okay, so we have Ramon's shameless self-promotion in the chat here, duly noted. Ramon is drawing our attention to his recent paper, "What kind of trust does AI deserve, if any?" Good, so David Tamez is up next. David, the floor is yours.
David Tamez:Great. Thank you, Federica, for your presentation. I'm especially grateful for the many questions that you raise and that have been raised so far by other interlocutors. As John mentioned, we currently have a group broadly exploring emerging technologies and privacy. We have recently been discussing deepfake technologies, and in my own individual work I'm primarily concerned with the question of how these technologies interact with the legal domain, especially between lawyers and judges.
David Tamez:In the recent literature regarding deepfake technology there's been this question about just how much can be known about these technologies, to the point where the only people who can really speak to it are the developers, and even then they can't really speak to the inner workings of deepfake technologies, how they produce what they produce.
David Tamez:And I personally view deep fake technologies as being a bit more dubious than, I guess, Chat GPT, especially as an instrument to in our sort of epistemic enterprises and so on. But I mean, I could be persuaded, but those are just my intuitions. But so currently in the legal literature there's this question about whether and how much evidential policy should be changed in light of deep fake technologies, and right now it appears practitioners are optimistic about that the system, legal systems, can adapt without any too much additional changes or need for response from legislatures. A I'm wondering if you have any sort of notions or thoughts about whether deep fake technologies are so different from compare, from analogous technologies like photographs, videos, and so forth that it would require some additional understanding from our legislatures or judges to fully respond to in some, in response to their continued use.
Federica Russo:Yeah, thanks, David. That's exactly what we were discussing this morning with my colleagues in this SOLARIS project. We have experts from the legal domain, and we are trying to learn from them. We didn't go very deep yet, but that's exactly the question I asked them, and they haven't answered yet; it will be an object for a future meeting, so I may be in touch with you about this. That was exactly what the legal scholar in the group was trying to explain this morning: why we need this cross-legal approach, because with deepfakes it can be about private law, public law, competition law, and all these things. So the first point today was just to understand what these different legal perspectives might be. The question that I asked, and that we are going to investigate next, is: does it make a difference that these are digital technologies, and generative AI specifically, and what is the difference with respect to analog technologies?
Federica Russo:So, short answer is I don't know today, but that's exactly one thing that we are going to investigate and, because it looks like that you are on similar issues, I made a note to get in touch with you because we clearly have to exchange views. For me, it would be very important to understand exactly what it is that makes the difference from saying a comics that disseminate false or a kind of funny content on a public persona kind of thing and a fabricated content and deep fake that is generated, like the Pope wearing the white coat that was circulating yesterday. I think it is important that we understand exactly what is the difference in that. Answer coming soon. I will be in touch. And, as a disclaimer, I made a note about your paper, Ramon. I think now that I am more in these projects with people on AI stuff, then it will really become crucial that the notion of trust is explicitly discussed and you are on the list of things that we have to study.
Ramon Alvarado:Thank you, Federica. The one that I am working on right now is called "AI as an Epistemic Technology," and it is also very similar to what you are trying to do: bringing in this instrument as a creator of knowledge and acknowledging that it has an active participation in the inquiry process.
Federica Russo:Super. I would love to see the draft.
John Symons:Great. So we have a question from Joe.
Joe Bernal:Yeah, I just thought of this idea, maybe going in the other direction from what Ramon proposed: instead of asking what AI can do and how much it can do, asking what it can't do. What it can't do is something I am curious whether you have thought about. In particular, you were talking about posted information or something editorial that looks like it is news. I was scrolling through social media
Joe Bernal:the other day and I came by this, what is supposed to be kind of an archival post of news from MTV in the 90s or something right, and they had the reporter there that everybody recognized. But what I found curious and really caught me and I looked through the comments, the responses was that people could immediately tell, but not propositionally, they could not quite articulate proposition. But there is just something about the way it looked, something about the performance that immediately triggered people and they were able to identify "oh wait, this is not right, even though everything else was very genuine and it is archival sort of presentation.
Federica Russo:Yes, that is exactly what we will be trying to articulate in what we call the scale of trustworthiness, and that is why we want to identify these elements: what it is that makes you trust, distrust, or become doubtful. It is not just the quality of the image; it may be a combination of things, where it is posted, where you see it, who is sharing it, all these elements, and so it is really socio-technical. That is exactly the thing we are trying to set up. Then we will have a pilot with participants, and we will test our scale and see whether we are right that this is how we come to trust these deepfakes, exactly along the elements that you mentioned. We hope to have something more substantial in a few months. I am so happy that you say these things, because it looks like we are on the right track in analyzing these deepfakes.
John Symons:Great, okay. There are obviously many connections between what we are doing here in the Center and what you are up to, Federica. Let's hope we can continue these discussions in various ways, offline or by email, and I would like to thank you for a really stimulating conversation and a great presentation. Obviously these are topics that you have mapped out for future work; it looks like we are going to stay in business as philosophers for quite a while. So at this point we are coming to the end of this session, and I would just like to thank you again. Everyone, please thank Federica for a great presentation.
Federica Russo:Thank you, and thank you also for your patience with the disruptions.
John Symons:You did have a co-presenter. You know, you have to give credit, and just as you gave the AI credit, you have to give your daughter credit.
Federica Russo:Yeah, they have to start early.
John Symons:Yes, indeed, she can get her Google Scholar account going at this point.
Federica Russo:Exactly, and thank you all for your great questions. Thank you.
John Symons:Thanks everyone, bye.