Center for Cyber-Social Dynamics Podcast

Center for Cyber-Social Dynamics Podcast Episode 6: A Breakdown in Trust in the Open-Source Community with Dr. Perry Alexander

Institute for Information Sciences | I2S Season 1 Episode 6

Unlock the secrets of cybersecurity and trust in the digital age with Dr. Perry Alexander, who joins us to dissect the chilling XZ Utils attack on Linux systems. This episode is a wake-up call for the interconnected world of open-source software, where code meets community and trust is the currency of security. Listen as we journey into the belly of the beast, examining an attack that ingeniously combines technical acumen with social engineering to breach digital defenses.

Dr. Alexander, Director of the Institute for Information Sciences, lends his expertise to illuminate the delicate balance of trust within the open source community. Our conversation traverses the careful scrutiny of contributions in Git-managed projects, the strengths and vulnerabilities of decentralized trust models, and the human element that is both the fort and the front line against cyber threats. This episode is an eye-opener for anyone invested in the integrity of digital infrastructures, as we reveal how the guardians of our cyber world are both its greatest strength and its potential Achilles' heel.

As the episode unfolds, we're not just talking code; we're talking about people – the maintainers, developers, and communities that make open source software a bulwark of the internet. We explore the challenges they face, from social engineering to the sustainability of their work, and we discuss strategies that go beyond technology to foster resilience in the face of evolving threats. This isn't just a conversation; it's a call to action for collaboration and dialogue that strengthens our digital ecosystems against the cunning of cyber adversaries. Join us for an engrossing exploration of the intersection where human touch meets digital trust.

Speaker 1:

In the digital age, a lot of the processes that shape the way we interact online occur without our being all that aware of them. This means that both malicious and benevolent agents are working diligently to counter one another without online users even knowing that such activity is going on. In this episode of the Center for Cyber-Social Dynamics podcast, director John Symons and I sit down with I2S director Perry Alexander to discuss a troubling event within the cybersecurity community involving a program he describes as being used ubiquitously throughout the Internet. Although an expert in cybersecurity, Dr. Alexander argues that, because this event involved an attack that was social in nature, we may need social solutions rather than just technical ones. We discuss this and much more. As always, you can find our podcast on Spotify and online at i2s.ku.edu. We thank you for listening and hope you enjoy.

Speaker 1:

So yeah, we're here for the Center for Cyber-Social Dynamics podcast, gathered in a sort of emergency session to respond to an occurrence within the cybersecurity community involving a utility that is described as being used ubiquitously in Linux operating systems and, broadly, in operations that occur online. John Symons and I are here with the director of I2S, Dr. Perry Alexander, to get a sense of what this occurrence means: not just the potential impacts it could have had for people who use these systems, but also what it means for the practices and norms followed in creating and introducing these programs to online protocols. And with that, Perry, first talk a bit about who you are, your interests, and your expertise.

Speaker 2:

Sure. As David said, I am the director of the Institute for Information Sciences, and my work is at the intersection of high-assurance systems and secure systems: high assurance being systems whose failure causes significant economic or personal cost, where a crash is very bad, and secure, of course, being protection of information. As I said, my work has mainly been in establishing trust in those systems, and doing so with, I guess, proofs that they are correct.

Speaker 1:

And so you gave us the behind-the-scenes of how we came to this: you introduced this story to us and pointed to the issues covered in the reporting, because there's an article about XZ Utils. Can you describe what that is, or give us a sense of it?

Speaker 2:

As I understand it, the XZ utils are used for information compression during communication. SSH and other transmission protocols all use them, because we don't need to reinvent data compression every single time we communicate; we use a common utility like XZ to compress information in flight. And it's so ubiquitous and so innocent, right? When you look at SSH and you start thinking about issues with it, you don't think about XZ. In fact, I had heard the term XZ maybe once or twice in my entire career. Never thought about it, not for a minute. So there's the fact that it's ubiquitous and the fact that it's kind of just there.
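For a concrete sense of what a compression utility like this does, here is a minimal sketch using Python's standard lzma module, which implements the same .xz container format. This is purely illustrative of the compress-on-send, decompress-on-receive pattern; it is not the actual code path SSH or liblzma uses.

```python
import lzma

# Sender compresses the data "in flight"; receiver decompresses it.
payload = b"example protocol data " * 1000

compressed = lzma.compress(payload)      # produces the .xz container format
restored = lzma.decompress(compressed)

assert restored == payload
print(f"{len(payload)} bytes compressed to {len(compressed)} bytes")
```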

Speaker 1:

And give us a sense of what happened, because it occurred recently and, from what I gather, hasn't been all that widely reported.

Speaker 2:

So there are two aspects. In my world we call this an attack: a nefarious individual or individuals trying to gain access to systems that they shouldn't otherwise have access to. That's what we're looking at. The reason this one is so interesting is that it's a combination of both social and technical. You can look at this as a technical attack, what the attacker did, what code they introduced and how they introduced it, but you also need to look at the social attack, which was, in my mind, much more difficult and took much more time than the technical attack. Now, we don't know whether this was an individual or an organization, and I won't speculate on that, but my feeling is that it was definitely a nation state, because those are the only groups I know of that can pull this thing off. What these individuals did to gain access is, as I said, harder and equally scary.

Speaker 1:

Yeah, from the description, or the way it's laid out in the article, it sounds like this obviously took time to take place: this person gradually infiltrated the community and gained what appears to be enough trust to be someone who loaded or added new protocols, or updates to existing protocols. What is it exactly that they introduced?

Speaker 2:

So I think it's important to understand how they did the introduction, because it plays into what they actually introduced. What they introduced into the software was the ability to run any code they wanted to. The attacker, by knowing an encryption key that they defined, could introduce any code they wanted into anybody's computer and run it at high privilege. And that's the crown jewels: when you can run any software you want at high privilege, that effectively means you can do anything you want. What was particularly interesting about this attack is that the software was introduced in a way that resisted detection. It wasn't just that it elevated privilege; it was that it understood how to hide itself. Okay, so let me try to explain what that means. As I said, the software introduced a capability in XZ that would allow you to introduce and execute any piece of code you wanted at high privilege. But it's the way that it did it that matters.

Speaker 2:

When you think about a system, when you download software and run it, there is typically an archive that you download the software from. Developers and users of Linux in particular are constantly downloading from GitHub or Git repositories. What the bad guy did was insert their code into the Git repository. But it was done in a way that if you started the code interactively, if I'm sitting at the terminal and I type ssh or whatever I type to start that piece of software that was infected, the attack would not start. So if I'm typing at the keyboard and debugging, which is what you do when you debug, the attack would not start.

Speaker 2:

I would see the good software. The only way the attack would start is if it was started by the system, so if it was started during boot. So when you look at the source code, when you look at what sits in GitHub, what you see is fine, it's good. That meant the attack came from a fairly sophisticated entity. The only way this attack was going to be discovered was by observing an actual running system. You weren't going to discover it by traditional debugging techniques because, as I said, the code hid itself.
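To make the hiding behavior concrete, here is a toy sketch of the general technique of environment-gated activation. This is an assumption-laden illustration in Python, not the actual mechanism; the real backdoor performed different, far more elaborate checks in native code inside liblzma.

```python
import os
import sys

def started_interactively() -> bool:
    """Crude checks of the kind a stealthy payload might make.

    If a human at a keyboard (or a debugger in a terminal) started the
    process, stay dormant; only activate when launched by the system.
    """
    return sys.stdin.isatty() or bool(os.environ.get("TERM"))

def main() -> None:
    if started_interactively():
        print("clean path: a developer debugging sees only correct behavior")
    else:
        print("payload path: reached only when started by the system at boot")

if __name__ == "__main__":
    main()
```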

Speaker 1:

And is that a particularly new feature, where it didn't have to start with some sort of human telling it for the attack to start?

Speaker 2:

I guess the best way to say it is that I've not seen this before in the wild, but I'm sure it's something people have worked on for quite a while: the idea that, in effect, if I'm looking for the attack, I'm not going to see it unless I look at a system that's up and running.

Speaker 1:

So part of your concern wasn't just the potential attacks that could take place with the introduction of this backdoor, but also how it would affect the community going forward and how it goes about doing its business.

Speaker 2:

And so this particular attack never made it into production. It was done in a pre-production release and discovered in a pre-production release; we'll talk about that later. But had it made it out, we would have never found it, and effectively bad guys could execute whatever they wanted on anybody's computer they wanted to. It's that bad. And I think that, as worrisome as the attack was, the social part of the attack is more worrisome.

So basically, it's important to understand how open source software works. It isn't that there's a Boston Common where everyone just dumps their software in public and everyone has access to it and can change everything. That's not how it works at all. If you're unfamiliar with the way software is distributed, there is a very popular repository management system called Git, and Git allows you to incrementally patch software. If you see a problem with software that's stored in Git, you can contact the developer and say, hey, here's a patch for the problem that I see. Will you add it? And the developer, or the maintainer or owner, as they're sometimes called, will then decide: yes, I'll add that patch, or no, I won't.

So the idea here is that any patch added to an existing repository is viewed by more than one pair of eyes. It's very common in software development these days that when you add to a software system, you issue what's called a pull request, and that pull request is saying: hey, here's my patch, here's the thing I've added, will you consider adding it? A second person looks at it and approves or disapproves the addition, and then the addition is either made or not made. But the key here is that no single person updates the repository. Even the maintainers have to get their pull requests approved. So if I'm a maintainer of XZ, in order to get my changes in, I still have to go through this process of making a pull request. And again, the idea is that nothing goes into your code base without multiple pairs of eyes seeing it, and you can put whatever restriction on pull request approval that you want. You could have ten people look at it if you wanted to. In the case of Linux, for example, if you modify the Linux kernel, I believe Linus Torvalds still looks at every patch and every update. It's a very disciplined process. We say open source, and it doesn't seem like it would be, but it is.

But they created an identity, and they were able to defeat all of the social systems in place to become a trusted authority, to become one of the maintainers of the XZ library.

Speaker 2:

And it's also important to understand how trust works. There are two basic models that I think about when I think about establishing trust in a system. One is the notion of roots of trust, and the other is the notion of a web of trust. The way roots of trust work is that I have some trusted thing, a trusted entity, that serves as the root of a trust tree, and that entity will establish that the things it knows about are trusted. So when you get an encryption key from KU, that key is blessed, or signed, by a trusted authority. That trusted authority was in turn signed by a trusted authority, and the chain continues until you get to the root, which is usually a certificate company, or the government, or maybe your own company. If I want to communicate with someone whose key I don't have, I go out and ask to add the root for that key to my key chain. So you have this tree-like structure, and there are good things about it: the only way you can get added is that someone trusted from the root adds you, or creates a certificate or key for you that's trusted in that way.

Speaker 2:

Governments love this. This is the way governments work. If you think about it, your driver's license is effectively signed by the state of Kansas; there's something about your driver's license that only the state of Kansas can add. Governments are really very good at this. In fact, it's one of their primary functions. The problem is, if you ever compromise any node in that tree, everything under it is also compromised. So it's great until it's not: someone compromises a root key, and then everything that key has generated is also problematic.
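A minimal sketch of that tree-shaped model, with hypothetical names, might look like the following. The point it illustrates is the one just made: trust flows down from a root, and compromising one interior node silently invalidates everything beneath it.

```python
# Toy root-of-trust hierarchy: each entry names its issuer; a key is
# trusted only if the whole chain up to a trusted root is intact.
issuer_of = {
    "root-ca": None,          # the self-trusted anchor
    "state-ca": "root-ca",
    "campus-ca": "state-ca",
    "alice-key": "campus-ca",
}
trusted_roots = {"root-ca"}
revoked = set()

def is_trusted(name):
    while name is not None:
        if name in revoked:
            return False          # a broken link poisons the whole chain
        if name in trusted_roots:
            return True
        name = issuer_of[name]
    return False

print(is_trusted("alice-key"))    # True: every link up to the root holds
revoked.add("state-ca")           # compromise one interior authority...
print(is_trusted("alice-key"))    # False: everything beneath it falls
```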

Speaker 2:

And if you're a social scientist, this kind of tree-like structure lends itself to centralized control. It lends itself to, basically, dictatorships and systems where information has a central root. So it's good in many ways, but it does have its problems. That's not the way open source works. The way open source works is that it uses something that I'll call a web of trust, and in a web of trust there's no root, no central authority from which all trust is derived. It's, as the name implies, a web.

Speaker 2:

So if John and I want to communicate securely, we would establish keys and, typically in one another's presence, we would in effect sign one another's keys. My signature announces to the world, in effect, that I trust that this key belongs to John, and John's signing of my key indicates the same thing. So now we have a really weak web of trust, and anyone who trusts John should, in turn, trust me. Now David joins our group and he signs our keys and we sign his key. And then Pam, who's next door, joins our group, and the three of us sign her key and she signs ours.

Speaker 2:

And what you're doing is building out a collection of trusters, so to speak, a collection of certifiers that are saying: hey, I know who this is. When I get a key, I can look at the collection of people who trust it, and that gives me an idea of how trusted it might be. This is really interesting in that I've always been very suspect of webs of trust, until recently, actually, but it works pretty well. So now let's say David's key is compromised. John and I learn this, and we revoke our signatures of David's key; effectively, we pull back our assertions of trust, and David's key becomes untrusted, but the rest of the web is not impacted. And the fact that David's now-untrusted key was used to sign our keys? Well, we've got lots of other signatures that say, hey, this key is okay. The trick, though, is to make sure that you're only signing keys for people who you can be sure own those keys. I kind of chuckle, because when we used to use the web of trust to establish trust in email certificates, we would have a signing party when we started a project. Everyone would sit down at a table, or we'd go out to a bar or something like that, and everyone would sign everyone else's keys, because you're supposed to be physically in their presence, or somehow know for certain that that key belongs to them.
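Here is a comparably small sketch of the web-of-trust model just described, using the names from the example. Again a hedged toy: real systems like PGP attach cryptographic signatures rather than counting entries in a dictionary.

```python
from collections import defaultdict

signatures = defaultdict(set)   # key -> the set of people vouching for it

def sign(signer, key):
    signatures[key].add(signer)

def revoke(signer, key):
    signatures[key].discard(signer)

def vouchers(key):
    return signatures[key]

# John and Perry sign in each other's presence; David and Pam join later.
sign("perry", "john"); sign("john", "perry")
sign("perry", "david"); sign("john", "david")
sign("perry", "pam"); sign("john", "pam"); sign("david", "pam")

print(vouchers("pam"))            # three people vouch for Pam's key

# David's key is compromised: only his edges are cut; the web survives.
revoke("perry", "david"); revoke("john", "david")
print(vouchers("david"))          # empty: David's key is now untrusted
print(vouchers("pam"))            # unaffected, though a careful verifier
                                  # would now discount David's signature
```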

Speaker 2:

What this particular bad guy did that, I'll be quite honest, frightens me much more than the technical attack is this: they successfully attacked the web of trust, and they used what I would call psychological operations to increase their trust and to increase, what's the best way to say it, the necessity of them working on this piece of software. What happened was an identity was established, and over a significant period of time, I think years, this identity started making pull requests, started saying: hey, here's something I would like you to add. And they were legitimate; each one was a fine, perfectly good thing to add. It was reviewed by the community. Everything is working as it's supposed to. And as this entity did more and more good work, that entity became increasingly trusted. Now, as you get closer and closer to being a maintainer, the scrutiny over what you do increases. It's not like you can walk in the door and say: hey, I'm awesome, I have a degree from a top university, make me a maintainer. That's not how it works at all. You have to participate in the community over time and be accepted, and this is the key, no pun intended: you have to be accepted by the community of maintainers. In other words, you have to build up your trust network so that the people who are maintaining the software trust you.

Speaker 2:

And what happened in this case is that this particular individual or entity established trust, but not enough to really impact the code. So what they did was attack the maintainer. It got to the point where there was really one maintainer of XZ, and that maintainer was having some health issues and wasn't getting to the pull requests fast enough. So this entity is generating pull requests and basically saying: hey, you need to let me help you, right? It wasn't an attack in the sense of somebody showing up at your door with a sledgehammer. It was more like somebody showing up at your door with a bouquet of flowers: I want to help you. At the same time, different identities, presumably created by the same organization, went to, I guess, the Linux maintenance organization and said: hey, this guy's not keeping up, this guy's not keeping up, you need to do something. And: oh, I'll help. So the maintainer finally said: okay, I'll let you help out. Now we had, effectively, a trusted party that shouldn't have been trusted maintaining pull requests, and they could then introduce their malware into the code base. But this took years.

Speaker 2:

This was not a simple thing in any way. And when I think about what I do as trying to set up defenses, what I'm always thinking about is making the job of my adversary harder. You cannot eliminate risk, you can't make things perfect, but you can make the job of your adversary increasingly difficult. This was hard. What this entity did was very, very hard to do. Hard enough that most people I know are very confident this was a nation state. This was done over years.

Speaker 3:

Yeah, let's talk about that. I mean, you've said an awful lot, Perry, and there's a lot to unpack, so let's think about the actual structure of the social hack. Right, so a vulnerability was detected: the maintainer himself was a vulnerability, right, as we all are. Then the attacker posted pull requests at a rate sufficient to sort of grind down the maintainer, right? And you think that's the first step. So: identify the vulnerable party, overwhelm him with work, then create a sort of community consensus that something's got to change, then introduce yourself with a bouquet of flowers and say, I'm here to help.

Speaker 3:

So that was the structure of the social attack. Right, okay. So when we're thinking about the ecosystem that maintains Linux and other critical infrastructure, we are in a situation where our adversaries are looking for vulnerabilities at the individual level, right, and then using these social tricks, community coercion. Well, not deliberately coercive, right? None of the associated members of the Linux community were being deliberately coercive; they were just saying, look, we need to change things. But it's the force of community opinion that is going to be the thing that allows a Jia Tan to get in with a bouquet of flowers. Yeah. So how do you think we should, so let's just think constructively. There are good reasons why the Linux ecosystem is the way it is. It's widely regarded as, in many ways, the most secure set of practices for maintaining an operating system. Right? I mean, you question that.

Speaker 2:

I'm not sure that most secure is the way I would describe it. I would describe it as the best we can do, given what Linux is and where it's used. There are more secure ways to develop software, no question about that.

Speaker 3:

Okay, yeah, but I mean, we're not talking about relatively small projects with formally provable verifiability, et cetera. We're talking about a giant ecosystem of code that's in everything: it's in IoT, in all kinds of critical and high-risk applications, and those systems are diverse, complex, et cetera.

Speaker 2:

I think you just said an important word: everything. Open-source software is in everything, and I'm not exaggerating at all. Even software systems that are developed by companies and are closed will usually have some kind of open source code in them somewhere. So shutting down the open source community, or somehow migrating it, is not doable; it's not possible. Now, if you say, okay, no open source software will go into certain kinds of systems, that's actually done. But if you said that about all systems, it wouldn't be.

Speaker 3:

It's not feasible.

Speaker 2:

Innovation would end.

Speaker 3:

Right. So we're not going to talk about the space of the possible here. We're talking about the space of the feasible. It's not feasible to lock down, and that's probably OK.

Speaker 2:

That's probably a good thing.

Speaker 3:

I think there are all kinds of reasons why; these are, you know, well-resourced, super-smart adversaries who have plenty of time. What do you think about the feasibility, for example, of thinking carefully about human vulnerabilities? So if we're looking for insider threats, or insider vulnerabilities, how do we as a community think about, you know, taking care of, let's say, this maintainer, who was subject to personal, I don't know, he was sick, I guess, or...

Speaker 1:

I don't know what the story was. An illness.

Speaker 3:

Yeah. So I mean, the key here would be that we would need to somehow be aware, as a community, of critical folks and their vulnerabilities, and address those in some humane and generous way that recognized their centrality to the project but also recognized that they're human beings, they need support, et cetera. Give me some sense of how we would even begin to think about that, given that you and I, if we're part of this ecosystem, we are the vulnerabilities, we are the potential sites of attack, and the support we need is social support. The maintainer was critical to this social project, and that's what opened the door to this vulnerability; what would have been great is if the community had recognized his vulnerability and then made space for him to address it himself, or helped him. So I'm not sure how that would, I mean, can we even start thinking in those terms about human social infrastructure and how we maintain it in a generous and humane way?

Speaker 2:

So this is an exceptionally difficult question for me, because it's outside what I do. I don't really think about human systems that much. But if we're going to address security, we have no choice: we have to think about human systems. One very simple thing would be to make sure that the size of the maintainer community for any piece of software is big enough to handle the load. So if there had been a way for the individual who was, in effect, attacked to rely on some other entities to help out, that would have helped. But in a way they did, right? They just depended on somebody who was not trustworthy. He was trusted, but he was not trustworthy.

Speaker 3:

Very good. So even if we were to do that, the person with the bouquet of flowers who's come to help you out could be Jia Tan. Yes, and then he's a bad actor and he's coming to mess you up. So then the issue is: are there other ways of thinking about creating trust that would be more reliable, that would ensure that you're not trusting an office in Shanghai or St. Petersburg, I don't know. In the early days, it would have been personal contact, as you said earlier.

Speaker 2:

Yes. So I think it's really important to understand that the system worked, okay? The code hack did not make it into production code; it was caught before it made it into production code. And the whole notion of open source, which is that you've got thousands of eyes looking at software and those thousands of eyes can suggest changes, can suggest fixes, caused this attack to be detected before it made it into production. So, in effect, the system did work. It just got tenuously close to failing. It was very close; we were very lucky. And you know, I also look at, let's say, the recent missile attack on Israel.

Speaker 2:

We could say the same thing about them. It's not that those missiles didn't get through because Israel was very lucky; the system that they set up actually did work. Now, that's not to say it shouldn't be improved, not at all. But I think it's also very important to keep in mind, and I believe this, and I have friends who know more than I do, shall we say, that if this wasn't open source software, if it wasn't maintained the way it was, the attack would not have been caught and it would not have been fixed.

Speaker 3:

That's a really important point, so we should give credit to the developer at Microsoft, andres Freund, who discovered the….

Speaker 2:

Absolutely, yes, absolutely. But I think when we look at this, we have to get away from declaring it a win or a loss, and instead sit in the middle somewhere and analyze what happened and, as you said, ask whether there are ways to improve it. I don't know. My background doesn't give me the kind of information I need to talk about the social system, other than to say you're absolutely correct: years ago it would have been physical contact, one-on-one contact. You would actually see people. That would help. But I don't know if it's feasible anymore.

Speaker 1:

As for Andres Freund, his troubleshooting: was that prompted by what was introduced by Jia Tan, or was that a separate issue, and he just happened to notice in the middle of his troubleshooting that this attack was in the code?

Speaker 2:

My understanding is that it was triggered by Jia Tan's actions, that it wasn't serendipitous that he found it. He found the attack because the attack caused things to not look the way they should look.

Speaker 1:

Okay, because I just want to make sure, further to the point, that it was not totally a matter of luck that this came up or that he was able to find it, but that he was prompted by something that had already happened.

Speaker 2:

I think the element of luck was the fact that he was looking at it at the right time. He himself? Not lucky at all. Incredible skills, mad skills. And not only mad skills, but an attention to detail that most people just don't have. He noticed things: that the SSH connections weren't running as fast as he thought they should, and there was also, I forget the name of the software tool that was monitoring memory, it was throwing errors that it shouldn't. And instead of saying, excuse me...

Speaker 3:

Valgrind. Yes, yeah. So reporting on this said that he was troubleshooting problems that a Debian system was experiencing with SSH, right, and that the SSH logins were consuming too many CPU cycles. So that was how, at least according to public reporting, he found this. Yeah, I mean, obviously he's a highly skilled and brilliant software engineer, but it does seem like luck.
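The anomaly described here, logins suddenly costing noticeably more CPU than expected, is the kind of thing a simple latency regression check can surface. Below is a hedged, illustrative sketch with a stand-in workload and a made-up baseline; Freund's actual investigation involved profiling sshd itself and chasing Valgrind errors, not a script like this.

```python
import statistics
import time

def median_runtime(fn, runs=20):
    """Median wall-clock time of fn over several runs."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

def check_for_regression(fn, baseline_s, tolerance=1.5):
    """Flag operations that suddenly run far slower than their known baseline."""
    observed = median_runtime(fn)
    if observed > baseline_s * tolerance:
        print(f"anomaly: {observed:.4f}s vs. baseline {baseline_s:.4f}s")
    else:
        print(f"ok: {observed:.4f}s")

# Stand-in workload; a real check would time the operation under suspicion.
check_for_regression(lambda: sum(i * i for i in range(100_000)), baseline_s=0.005)
```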

Speaker 2:

Yes, certainly, absolutely. I mean, debugging, wow, it can be very much luck. Do you look in the right place at the right time? Do you find the needle in the haystack the first time you sit down? But the other thing that I find really interesting here is that because he had access to the source, because he could see the source, he could actually go in and debug the issue he was looking at. I don't think when he started that he was thinking his software had been attacked; he was just trying to debug his system. But if he doesn't have source, if it's closed source, if it's a library that he purchased from someone... Okay, that's a really important point.

Speaker 2:

Then he can't diagnose the problem. Got it, yeah. I should say he could still see the problem, and he could contact the company, but he's at the mercy of the company then to actually respond to his request, go into their software, and make fixes. And on the other side, I don't want to diss the company in any way. Wow, if you're producing software and that software is publicly consumed, you're busy, and responding to every single request for change is just infeasible.

Speaker 3:

Great, great. So in a kind of closed ecosystem, of course, there is this resource constraint, namely, you know, manpower; you don't have enough. So let's talk about the reasons why people participate in the open source community beyond just altruism. Let's talk about the motivations and incentive structure that people have, how they build their reputations on these projects and platforms, and maybe say a little bit about why the open source system actually works, how the incentive structure works for developers.

Speaker 2:

It is. I'm hoping that people study it, because, to me, why it works is fascinating. There are social things at play that I think are worthy of deep study, because you're exactly right: you have this army, more than an army, of people who are not paid, for the most part, maintaining software for the community. So there's a lot going on here.

Speaker 2:

One is that if you're a software maintainer for an important piece of software, there's some real positive, good ego-building going on. That's a great thing. It makes you feel wonderful. You're Superman, right? You're at the center of this community and you're doing a great thing for society. That's pretty cool; that's a pretty good thing.

Speaker 2:

And lots of people do things for those reasons. There's also the social, and by social I mean the community: you're respected amongst a group of peers. That's really positive as well. I should also say, though, that for a lot of open source maintainers it's actually part of their job with their company to help maintain an open source library, because the company uses it. So you have a lot of companies, a lot of entities, that depend on a piece of software, and one of the really interesting things is that, in a capitalistic environment, these companies have recognized that working together on an open piece of software that they can all use is a better way to go than having their own copies of everything.

Speaker 2:

Yeah, that's really important, and maybe I'm fascinated by this because I know so little about it. But I think that structure is worth studying. I'm sure it exists in other areas, but it seems to be outsized, bigger in the software community than in other instances of it. Great.

Speaker 1:

Could you end by talking about, and this is at least what I'm curious to hear you discuss, what might be an overreaction within the community?

Speaker 2:

So the overreactions I'm hearing all surround the open source software community. I think people hear the words open source or community and they think, again, the Boston Common, the tragedy of the commons, and I think people need to sit down and understand how the community actually works. It's my concern that very well-intended and yet mostly uninformed people will try to pass legislation, or somebody's going to get sued somewhere. That kind of thing concerns me, and those kinds of things will do more damage than good. I'm a believer that there are three components to handling security issues: one is technical, one is legal, and the other is policy, or social. There may be some legal things that we need to adjust, there may very well be, but I think the real answer is looking at the social policy. Maybe there are some policies that are worth changing.

Speaker 2:

I think sitting down with the leaders of the open source community and talking with them, not in front of a congressional committee, but talking with them to understand what they do and possibly make adjustments, I think that would be a really wise thing to do. Getting the people who make this thing run together and talking is going to be much more effective than any kind of legal remedy. And I don't think there's a technical solution. I'm the techie, as my friends would say, and I don't think there's a technical solution to this problem. There might be; I might be completely misinformed. But I think that if we're going to solve a problem here, it's a social problem.

Speaker 1:

Well, perfect. With that, Perry, thank you for sitting down with us and talking about this news and this occurrence.

Speaker 2:

My pleasure.

Speaker 1:

Yeah.

Speaker 2:

That was fun, thank you, thank you.
