Center for Cyber-Social Dynamics Podcast

Center for Cyber-Social Dynamics Podcast Episode 4: Navigating the Convergence of Education, AI, and the Future of Work with Paul LeBlanc

Institute for Information Sciences | I2S Season 1 Episode 4


Discover the transformative insights of Paul LeBlanc, the former president of Southern New Hampshire University, as we explore the cutting edge of competency-based higher education in our latest podcast. Paul, a trailblazer in online education, shares the intricacies of SNHU's model, which prioritizes competencies and aligns closely with the evolving job market. This episode promises to peel back the layers of traditional education and reveal how a focus on real-world skills over coursework is preparing students for success in a competitive workforce.

The conversation takes a turn towards the future as we unpack the implications of AI on the job landscape. With technologies like AI and ChatGPT reshaping industries, we're forced to confront the reality of '10x' professionals who amplify their productivity through these tools. Paul and I wade into the ethical waters of AI's role in job displacement and the urgent need for innovative retraining. We discuss how educational institutions might adapt, not only to the technology itself but to the new kinds of questions and problem-solving skills that will become essential in an AI-driven world.

Wrapping up, we cast our gaze forward into the intersection of work, education, and AI integration, discussing how our valuation of jobs and skillsets must adapt. We ponder over the redefinition of knowledge creation, the necessity of interdisciplinary approaches in education, and the potential for AI to democratize educational relationships. Join us for an episode that's less about the 'what' of AI and education and more about the 'how' and 'why' as we navigate the shifting tides of the digital age with Paul LeBlanc.

David:

For this recording of the Center for Cyber-Social Dynamics podcast, we were joined by the former president of Southern New Hampshire University, Paul LeBlanc. During this workshop, Dr. LeBlanc discusses his insights into how universities can respond to the proliferation of emerging technologies, especially those that impact the development and teaching of writing skills. Here, LeBlanc argues that universities ought to focus on a competency approach over granular skill development, and examines just what this means for curricula going forward. As always, this podcast is brought to you by the Center for Cyber-Social Dynamics at the University of Kansas. You can find us online at i2s.ku.edu and on Spotify. Thank you for listening and enjoy.

John Symons:

This workshop is part of the workshop series that we're running out of the Center for Cyber-Social Dynamics here at the University of Kansas. This is our first semester really in operation as a center. We were launched in November, and now we're up and running. The center has six main research topics at this point; you can learn more about the center if you go to our website. We aim to be a kind of interdisciplinary hub for the study of the interplay between society and technology here at the University of Kansas. We're a cross-college unit: we bridge the engineers, the social scientists, the humanists, and so on. That's what we're hoping to do. So I'll introduce Paul. Again, Paul, thank you for joining us.

John Symons:

Paul LeBlanc is president of Southern New Hampshire University, which is actually one of the largest universities in America. He's been leading the university since 2003, and during his tenure as president the university's enrollment has grown many times over, which is quite extraordinary. I first learned of Paul when I read an editorial he wrote about AI that raised very clear and important questions about the future of higher ed, the future of work, and the role of education, in ways that I found really refreshing. I wanted to learn more, so I reached out to Paul, and he was gracious enough to agree to join us for this conversation. So let me say a little bit about SNHU and then we can jump right into the discussion.

John Symons:

SNHU is an interesting operation, and it's certainly the largest provider of online education in the Northeast. Western Governors is also a very big operation; maybe SNHU is as big as Western Governors, I'm not sure. The other striking thing about Paul's leadership is that he's created a university where competence is central, as opposed to coursework. Maybe you can say a little bit more about it. The idea is to have a competency-based degree, so that when you graduate from SNHU you've demonstrated a level of competence that is the target of the education there, rather than just having finished a certain number of credit hours and so on. So maybe with that, Paul, before we jump into AI and the future of the university, you could say a little bit about the features of SNHU and the operation you have there.

Paul:

Let me just give you a broad-stroke thumbnail to ground the conversation. As you said, I've been here 20 years this July, and it goes by very fast. When I came, it was 2,500 students, so it was very, very small: almost entirely residential, with a very tiny online operation. Today it's 185,000 students, so massive, massive growth. In 2012 we were the 50th largest nonprofit provider of online degrees (Babson does an annual listing); just three years later we were number four, and two years after that we were number one or two, depending on how you count. Western Governors and we are about exactly the same size: we grant more undergraduate degrees, they grant more graduate degrees. We'd be the two largest in the US, though not in North America; UNAM in Mexico City, for example, is close to 400,000, and there are some other schools, but in the US we're certainly the two largest. There was a time when there was a larger one, of course, which was Phoenix, which really kind of sullied the reputation of online. They had about 500,000 at their height; they're down to about 85,000, and the for-profit providers are kind of in a nosedive, being closed down by the Department of Education, et cetera. As you said, we have a bucolic, beautiful little campus in Manchester, New Hampshire, with about 4,000 students, and 181,000 online.

Paul:

I want to say something about who we serve, because it's very particular. We really think about the 45% of Americans who say they would struggle to come up with $400 for an unexpected car repair. Our typical learner is a 30-year-old who has done two tours in Afghanistan or Iraq (even Vietnam: some Vietnam-era vets graduate this weekend, but more typically it's Iraq or Afghanistan). They maybe have a couple of kids; 86% of them are working. They're stuck in a dead-end job, and they need a post-secondary credential to unlock economic opportunity for themselves. On a residential undergraduate campus like the University of Kansas, you offer a sterling academic experience, but you also offer a coming-of-age experience that is quintessentially American and quintessentially residential.

Paul:

This is where you go to find out what you're all about: late-night conversations in the dorms, playing on teams, drinking too much beer on Saturday, falling in love, meeting your future spouse, travel abroad. All of that, collectively, I would put in a bucket called coming-of-age. It's a very American, traditional-age, higher-ed world. My students have had all the coming-of-age they can handle. They've got kids, jobs, economic challenges, et cetera. So we get very, very focused on cost, making it within reach, and linking it to jobs and job opportunities.

Paul:

If you take a look at our students, it used to be that 80% of them came to us with credits from before: they tried college, life got in the way, they weren't ready, whatever happened was 10 years ago, and now they're coming back to complete. That percentage continues to go down as we do more and more work with very large employers. We're the largest academic partner for Walmart. We're the largest academic partner for Amazon. They're training frontline workers; they're upskilling. Interestingly, they don't say much about this, and we are not allowed to say much, but I'll share it in our conversation: these are companies that see the coming automation, and partly what they're trying to do is get their employees to a place where they won't be displaced, because those frontline jobs in warehouses are increasingly automated, to take just one example. The result of doing more and more with frontline workers and large employers is that the percentage of our students who come with credits is down to 60%.

Paul:

Those are the most challenging students; that's the highest risk of attrition. Because they have no college under their belt, they tend to overestimate the amount of time they have, underestimate the time that is required, and it's been a long time since they've written a paper; their high school math has long passed them. So it's a big challenge. We have 30,000 students of color, more than the largest HBCU. We have more Native American students than the largest tribal college. And, interestingly, we see a new demographic coming out of the pandemic: about 30,000 traditional-age students, 17- to 22-year-olds. Those were not typically online learners in the past, but something happened, and I think that something is a combination we're still trying to unpack. During the pandemic they had online learning, whether they wanted it or not, and a lot of them discovered that it kind of works for them. A lot of those 30,000 traditional-age students have the same problems as our adults: they've got family responsibilities, they're working. All of a sudden they realize the convenience of online works much better for them.

Paul:

You may be familiar with the work of Clay Christensen, the famous Harvard Business School professor who coined the phrase "disruptive innovation" and did all this research on it. Clay was a dear friend of mine; he passed away a couple of years ago, was on my board, and was a trustee emeritus. We very much use his playbook for how we think about innovation. So in 2014 we created College for America, which was a competency-based education program, as John mentioned to you. The bulk of what we do is still traditional and credit-based, but we have thousands and thousands of students in our CBE programs, and they are very different in that they're untethered from the credit hour.

Paul:

In a book I did in 2021 called Students First, I argued that the credit hour is a faulty foundation for trying to understand learning. If you think about it, in the traditional model time is fixed (the 15-week semester, the three-credit-hour course) and the variable is student academic performance: A, B, C, D, or F. The competency-based model flips it. What we were able to do is make time the variable and learning fixed. You don't progress without demonstrating mastery, mastery is defined by very well-articulated rubrics, and you don't have grades; you have "mastery" and "not yet."

Paul:

This is very powerful. For one, it actually brings genuine rigor and integrity to the learning process. I talk to CEOs all the time, and I can ask them: raise your hand if you've hired somebody from a really reputable four-year institution who doesn't write very well. You know how many hands go up in the room? Every hand goes up. I can do this with quantitative skills; I can do this with almost anything we typically associate with a college-educated person. So employers really love competency-based education. They speak in terms of competencies and skills; they think about what their people can and can't do. I want to come back to this, because I think in an age of AI this is going to be critically important.

Paul:

If you're familiar with the Gartner hype cycle, it argues that when new innovations come out we get irrational exuberance, and then they disappoint and drop into a trough of disillusionment, where we say, oh, that was just another fad. And then they kind of make their way back slowly, organically, and in a more substantial way. The easy example there was MOOCs. When MOOCs were created (George Siemens, whose name will come up again later, created the first MOOC in Canada), they became very popular. People said, oh, it's going to change the world; edX and Coursera are going to change the whole world; higher ed will be disrupted.

Paul:

And of course they didn't, and their completion rates were poor, and the only people who did finish their courses were people who already had college degrees. While they could still find a Mongolian sheepherder to talk about mastering physics without going to Princeton, that was the edge case, the exception, though it was a great story. So then everyone wrote them off: oh, MOOCs were a disaster, another higher ed fad. But in reality MOOCs quietly found their way: edX was just sold for $800 million, and Coursera is educating lots and lots of people. They didn't change all of higher ed, but they became part of the ecosystem. I think the same is happening with CBE today. So that's kind of our story. And now ChatGPT happens.

John Symons:

Great. So this brings us to the way AI is going to change this landscape again. Maybe you could say a little bit about how you think AI is going to force us to reevaluate what we're doing in the classroom, and the character of institutional design in higher ed. There's a whole range of questions here that I know you've given a lot of thought to. So maybe we'll just let you take it from there.

Paul:

The question is what altitude we want to take the question at. At the sort of 10,000-foot level, you're getting lots of campus discussions about academic integrity and cheating, and about how we think about writing in the future, and I'd love to talk about that. My dissertation research long ago was on literacy, behaviors, and technology, so this is something I've thought about forever, and I think it's just fascinating to think about this question of writing and AI. What is its role? People are now thinking: wait a minute, what are the tools that my students have to master in their repertoire as a fill-in-the-blank major?

Paul:

So if you're in the creative fields, what are the new tools? If you work in the world of film, there are certain tools you're expected to know. Let me give you a quick example. I was talking to someone who runs a two-year program, and one of their programs is for paralegals. They've now quickly integrated the use of ChatGPT as one of the tools a paralegal needs to master, and they're really thinking about not just how to use it but how it changes the work. What they're basically saying to their graduates is: you have to enter the market now as a 10x paralegal. What is a 10x paralegal? It's a paralegal who can do the work of 10 paralegals. And that raises another question we should get into, which is: what happens to the other nine? I am in the camp that thinks we're going to see massive job displacement, at super-threatening-to-society levels. So that's the second level. What tools do you have? If I'm producing mathematicians, what tools? If I'm producing accountants, what tools? PwC announced last week that they're putting billions of dollars into the use of AI, and while they talk about giving their accountants more powerful tools and training them to use them, I would bet hard money that in the future we'll have 15% of the accountants we have today. I think PwC is going to automate all those low-level and mid-level knowledge jobs, and maybe even some high-level knowledge jobs. By the way, this is happening in field after field. What are the tools that our students need to master? Next, the workforce: what jobs are going away? Should you, in good conscience, be training accountants? Should you counsel a friend's son or daughter to become a software programmer at this point? An example: I was talking to someone who heads up a cybersecurity team at Oracle.

Paul:

His team just used ChatGPT to write what would have taken, as he described it, hundreds of hours of contract programmers' time. ChatGPT did it at lightning speed. First point: a lot of people didn't get work from Oracle on that project. Second point: I asked, was it good? He said 90 percent of it was really good; my team looked at about 10 percent that they wanted to change. I said, so what happened then?

Paul:

He said, well, they started changing it. Then they paused and said: why are we doing this?

Paul:

Let's prompt ChatGPT to fix its problems. Which they did, and which actually raises questions about their own jobs. On another level, you start thinking about the workforce: what majors? How do we change majors? There's a way of thinking about this. I just did an essay with a colleague in Paris that is getting published, I think, next week.

Paul:

We went back to look at the Industrial Revolution. If you look historically, you can put jobs into four categories when a paradigm-shifting technology happens: jobs that absolutely don't get touched; jobs that completely go away (those are the easy ones); jobs that persist but are substantially augmented and changed by the new technology; and jobs that are created, that didn't exist before, because of the technology.

Paul:

The interesting question in this case is that if the jobs AI creates are other knowledge-work jobs, then there's a good chance that AI can do those jobs instead of humans too. Now I'm going to go up to the highest level, and I'm going to borrow from George Siemens who, as I mentioned earlier, is a great thinker in this space. George argues that if you think about the Industrial Revolution, with the advent of steam, all of a sudden physical labor was much less valuable. As steam, power tools, and other technologies took hold, you didn't pay very much for physical labor anymore; those became the lowest-paid jobs, because physical labor wasn't scarce. There was lots of it out there, and employers could use the tools instead.

Paul:

Now think about knowledge. Knowledge is much less scarce; it's abundant. I can get it all right here with a ChatGPT prompt (and we can talk about accuracy and hallucinations, et cetera). If that's true, knowledge just got a lot less valuable. Then we ask the question: what becomes valuable instead? What George has argued is that higher education, education generally, has always been built on epistemology: ways of knowing, what knowledge is, what it is to know. Now we'll shift over to questions of ontology, of being. We need to ask really serious questions about what it means to be human in this world, what it means to be in community, to be in work, to be in relationship. That's a flip of the switch for higher education.

Paul:

Go back to that coming-of-age bucket of things I described to you. What we do is spend all our time on epistemology questions. We can spend weeks in a curriculum committee meeting debating a program; all of you who are academics and have sat through them know this. We really focus on that as our primary function, properly so, and we generally believe that the ontological questions will get addressed in that coming-of-age bucket: you're gonna live in the dorm, you're gonna be in a community of people, you're gonna figure yourself out, you're gonna travel abroad, you're gonna experience things, you're gonna grow and evolve, and you're gonna be a different human being than you were when you came in four years earlier. George might argue that's gonna flip: the knowledge-making piece becomes way less valuable and important than these ontological questions. So that's an interesting question to work through. I mean, it would be a massive existential change for our universities.

John Symons:

Let me jump in. (Paul: Sorry, I'm giving you a lot, but those are the big ones.) No worries, no worries.

John Symons:

This is really good, this is really interesting. So maybe you could say a little bit, very concretely, about what happens if this change takes place as you've predicted, and knowledge work per se (the kind of specific expertise we can now look up on Google, as we've always been able to do, or via ChatGPT) recedes as these other questions come to the fore. How does the actual institutional structure of a university change? How, for example, would a place like SNHU rethink its pursuit of meaningfulness, or these kinds of what you're calling ontological questions, given, for example, that you don't really have a philosophy department? If suddenly this is the revenge of the humanities, where we're kind of coming back and displacing a traditionally vocational model of education, how does an institution like yours respond?

Paul:

We'll do what everyone will do, which is start to deploy AI in point solutions. We'll find places where we can deploy it; we'll have those conversations about what our majors need to know. Those conversations aren't easy, but they're accessible to people, and we'll do all of those things. Our people are using it already across the university. Our instructional design people are using it. I'll give you some quick examples. They used ChatGPT to draft an instructional unit on phlebotomy for our nursing program.

Paul:

It did it in about 30 seconds. They looked at it and tweaked a couple of things using ChatGPT with different prompts: you know, "explain this concept." They brought it to the nursing department and said, hey, take a look at this. The department said, yeah, that's pretty good, but where'd you get it? And they said, we just used AI to do it in a minute. We're using it in marketing. Our folks on the creative team said: write a 60-second commercial for higher ed with the two following value propositions. Same conversation with our CMO: was it good? She said it was really good, like 90%; we tweaked it. And then, when they had tweaked it, they said to ChatGPT: we're gonna shoot this commercial; give us the shot selection, no more than 10 shots. How was that? Eight of them were the ones we would have chosen; we tweaked a couple. HR was using it. Someone just did a 360 review: they sent an email out to eight people on someone's team and said, you know John has been working on these things; he'd love your feedback on them. Rather than going through and distilling the responses, they put them all into one document, loaded it into ChatGPT, and said: distill this, give me the three main takeaways, and give me two suggestions for this employee to improve their performance. I looked at it. Was it good? Yeah, it was really good. I tweaked it.

Paul:

So there's all this low- and mid-level knowledge work where the AI does the draft and you apply your judgment and your personalization. We and other universities will do that, I think. At the very same time, we will not unlock the full potential of AI if we only do point solutions. This is an argument that comes out of a really very good book that I recommend to everybody; it came out about a month before ChatGPT became popular. It's called Power and Prediction, by three economists at the University of Toronto, and what they argue is that you only unlock the full potential of something like AI when you do full system design. So we're launching a dual effort at SNHU.

Paul:

To go back to your question: we're encouraging everyone to play with it, use it. The best way to learn is to play, figure out how to share our learning, and do that updating on a regular basis. That's gonna change things, and some of it is gonna be difficult. I have one department of 200 people who do transfer credit evaluations; remember, I said all these people come to us with credits. Theoretically, AI could displace those 200 jobs. I don't wanna have 200 people lose their jobs, so we're beginning a conversation with HR: how do we get ahead of that and start retraining them as advisors and for other roles that are more relational and harder to automate? So we're gonna work hard not to displace people, but that's just one of many examples. While we have all of that point-solution work going on, we're also having a special sort of skunkworks, if you will, do a clean-sheet-of-paper redesign of the university, as if we were starting it tomorrow using AI, with no assumptions about what we do and what we don't do.

John Symons:

If, for example, we become more invested in this kind of interpersonal relationship-based service model, is that going to cause you to redesign or rethink SNHU?

Paul:

I did a book that came out last fall called Broken; the subtitle is How Our Social Systems Are Failing Us and How We Can Fix Them. I have long been looking at this question of automation. I didn't know it was gonna happen just two months later in the profound way that it did, but the book anticipates it. I've been in the camp for a while that thinks we will see massive job displacement. There's one study out of the University of Oxford, pre-ChatGPT, that predicted something like 55 to 60% of American jobs get wiped out by automation: a much, much higher percentage, by the way, than happens with free trade agreements, for example.

Paul:

What I argue is that massive sectors of our society are really relational; they are systems of care: healthcare, mental health treatment, K-12, higher ed. These are all systems of human transformation. The book argues (and this really comes out of spending a lot of time with people like Jessica Benjamin, the psychologist at NYU, Matt Steinfeld at Yale, and sociologists like Greg Elliott at Brown) that to really transform a person's life, you have to do it in relationship. It only happens in relationship. If you know the work of Donna Beegle, who looks at extreme poverty, she would say no one who has escaped extreme poverty has ever cited a program, a platform, a policy, a system, or a government program. You know what they cite? A human being: someone who believed in them, sponsored them, mentored them, lifted them up. I think most of us, when we talk about the truly transformational aspects of our educational experience, always identify people: a teacher in grade school or high school, a mentor in college, a colleague or a coach. It's always a relationship with someone. And yet, through the use of technology and efficiency and late-capitalist, post-Reagan small government, we have tried to drive the human relationship out of our systems of care. We minimize how much time you get with your doctor or your caregiver. We overload classes. I see some nods, so you all get this.

Paul:

So what I argue in the book is that as we lose knowledge-work jobs, what we need to do is flood our systems of care with people. There is no lack of jobs that need to get done; to your point, John, we just don't like to pay for them. We should flood our K-12 system with great teachers and social workers and counselors. Take a look at the state of mental health in America: that troubling CDC report six weeks ago said that 25% of adolescent girls in the United States have planned their suicide in the last 12 months, and 13% have attempted it. This is a crisis. So we need to flood our K-12 system; we need to flood our inner cities with social workers. We have built a broken mental health system, which at this point is essentially our prison system; we don't have a functioning mental health system in the United States. Counselors, psychologists, psychiatrists, caregivers. And we need to rebuild a broken criminal justice system.

Paul:

I went through and read the mission statement of almost every correctional system in America. They all talk about redeeming people, bringing them back into society, educating them, getting them prepared for reentry, and yet a state like California spends about 1.5% of its prison budget on those efforts. It's a joke. And if you look at a single county, like Cook County in Chicago, 70,000 people, mostly young men, go in and out of Cook County Jail every year. It's just a revolving door.

Paul:

We don't have a lack of jobs that need to be done; we just don't like paying for them, and we don't give them much status. As knowledge work starts to be displaced, I think there's an invitation, an opportunity, to radically rethink the nature of work in America. And I think we'll have to, because of the civil unrest that will come when people don't have jobs. I'll give you just one example. It's not even a knowledge-work job, but it's a job that AI will displace: truck driving.

Paul:

Truck driving is the number one middle-class job for white men without degrees in 34 states. So in 34 states, if you don't have a college or post-secondary degree, your most likely road to a middle-class wage has been to be a truck driver. Ten years from now, those jobs will be largely gone, and there will be many, many other examples of that. So we need to rethink this, and of course the question also comes up of how we pay for it. Bill Gates would say: tax the robots. If you are a large corporation making a ton of money and you've automated, you'll be taxed for it.

John Symons:

So the question of how we value those kinds of jobs and how we incentivize people to move into that kind of work, that's one question. But let's think about education and return to the core topic here. How do we rethink education to prepare people to live meaningful lives that are deeply relational, that involve caring for others?

Paul:

So there are a number of things. One is we're gonna take a look at majors; you asked how we're thinking about this. Do we need to expand much more quickly in areas like education and healthcare, which are systems of care that aren't easily automated? They'll all be AI-enhanced, but they won't be AI-replaced, so we can start moving into more and more of those jobs. We're also thinking about phasing out areas. We have a coding bootcamp; should we hold on to a coding bootcamp? Probably not. How do we rethink it, or do we discontinue it? These are hard questions, because real people's jobs and real people's lives are at stake, and if we don't answer them, they will be answered for us. I'd rather answer them for ourselves. Those are some of the conversations we're beginning to have about those AI-enhanced jobs.

Paul:

I'm going to go back to something we said earlier. I would argue that in some ways all curricula have been rendered instantly obsolete, or at least been given an expiration date. Every department needs to be looking at its curriculum and asking how it changes. I was a writing teacher, so I'll talk about writing.

Paul:

So writing has always been itself a technology, which we forget, and with every change in communication literacy certain cognitive skills went down or disappeared off the scale of the top five most important, and others came up. If you think about what an oral, pre-writing culture looked like, memorization was your number one cognitive skill. Where would you put memorization today? You wouldn't even put it in the top 10 of cognitive skills you want our students to possess. Once the printing press came along and there was lots of knowledge, probably more than you could memorize, the ability to think in terms of taxonomy and classification became more important: I can't know everything in the library; I need to know where to find the things I need when I have a question. With each of these changes, our cognitive priorities, the way we think, shifted, driven by changes in our literacy technologies. Writing was a technology; the printing press was a technology.

Paul:

With each of those there were critics who railed: oh my God, the world as we know it is ending. That includes Socrates, who in the Phaedrus sort of rails against writing. It's going to make you stupid. There is no dialogic in writing: once it's written, if you say a stupid thing, it will be stupid for as long as it exists. So even with writing they had arguments about what it was going to do to the quality of our thinking, and we're having one of those moments now. I don't think epistemology has given way, at least not entirely, to ontology, but I do think how we think about knowledge and knowledge making and knowledge management will evolve pretty dramatically.

Paul:

If you think about writing, I think it's going to be a question of: do I have the frameworks that allow me to put my algorithmic coworker to work? Am I giving it the right questions, the right problems? Prompt engineering is the hot phrase these days, right? So do I have those frameworks? And then, do I have the frameworks to judge the quality of what it produces for me? To go back to that Oracle example: those engineers had to know how to shape the work for the coding they needed, and then (remember, my question was, was it any good?) they needed a framework that allowed them to make that assessment, and then they could rewrite the prompts. Right? But they weren't doing that coding work themselves.

John Symons:

I think Bert, Ramon, Syed. Syed has his hand up, Bert doesn't, Ramon does. So let's start with Syed.

Speaker 7:

Hello, thank you for the talk. It's been super interesting. I'm Syed, a fourth-year PhD student, and I work with John. My dissertation is on large language models, but my specific connection with writing is that I'm guest editing a journal issue for Teaching Philosophy, and it's, of course, heavily focused on writing. So a quick clarification question: are you suggesting, at least for a more online university like yours, that we think about how to assess people outside of writing?

Paul:

Yeah, absolutely. I think writing will be on its way out as an assessment device. I said I would eventually circle back to competency-based education. Competency-based education essentially has only two questions. When you start to unpack them they're pretty complicated, but the two basic questions are, first: what can you do with what you know? I think that's a really pertinent question in an AI world, because what you can know is a prompt away; what you can do with it becomes the critical question. And the second question is: how do we know? What's your assessment? The assessments in this case become performance-based, right? So we now start to think about other ways of assessing. I would argue that writing has long been a relatively poor assessment vehicle, except for assessing writing itself: can you write well? For lots of other things, I'm not so sure. So my hope is that the influence of ChatGPT and AI on education actually accelerates our movement toward competency-based education, which I'm a big believer in.

Speaker 8:

Yes, hi Paul, thank you very much for this very sobering talk. I'm Ramon Alvarado, an assistant professor of philosophy at the University of Oregon. I teach data ethics, and most of my work is in philosophy of computation, epistemology, and the ethics of AI and data science. Besides prompt engineering, of course, which is the new thing people are jumping into: how do you prepare students to be able to ask good questions going forward? Because it's not just about having knowledge, not just about managing propositions that are already there. For us philosophers especially, but for most of us humans who want to know, questions are the most important element of knowledge, right?

Paul:

I was a literature major. As an undergraduate, I became pretty good at asking the right questions of the novels I was reading. I never wrote a novel; I don't write fiction. So do I have to write code in order to do what you described earlier, when your team looked at code and made a decision? I used to think so. But you know what? Our coders don't write code, they curate code. They're on GitHub, they're pulling code, right? So they have frameworks, an architecture of understanding, that allows them both to curate and to farm out work, as they were doing, because they were doing something new and kind of complicated. But they still needed a framework to understand and assess, just as I understood and assessed a novel. They had frameworks for asking questions of the code they were looking at: Was it elegant?

Paul:

Did it do this? Did it integrate in this way? Knowledge making will be less about the production of knowledge and much more about the kinds of questions we ask of the world, our ability to assess the answers, and then what we do with them. I'm sure you've all followed the work around protein folding: what you have is the ability to ask a whole new set of questions that couldn't even be asked before AI. That's where I think it gets very exciting. You've called my comments sobering, and they are in some ways, but holy cow, what's possible now is just amazing.

Speaker 9:

I'm Ryan Lamassos. I'm a first-year PhD student in the Department of Philosophy here at the University of Kansas, and I work on philosophy of technology. Thanks for the talk. I just wanted to ask you a question. We've been focusing on the impact of a competency-based education from the student perspective. In this room we have a majority of PhD students. How is competency-based education going to impact instructors? What are you looking for, as a president of the kind of institution that we're probably going to see more of in the future? How do we prepare ourselves to be marketable, get into these institutions, and teach in these

Paul:

institutions. We could spend some time unpacking the challenge of the poor doctoral student in America today, given the nature of the market, the changes that are happening, and the broken business models of so much of higher education right now. We can unbundle the roles. Go back to the questions I mentioned, the first being what can you do with what you know: the work becomes curating and creating competencies, understanding what the right competencies are, the ways in which they evolve, what they are tied to. I often use philosophy as an example of this, because I think philosophy departments, and the humanities generally, have not told their story very well. In an economy and a society that's obsessed with jobs and job preparation, which is the number one thing asked of higher ed today, we've seen, as you know well, the steady decline of the humanities and liberal arts generally.

Paul:

What I'll say about competency-based education is that it only asks the philosophy department: what claims would you make for what your students can do when they graduate with a philosophy degree from the University of Kansas? What I would argue is that there's a reason why McKinsey recruits philosophy graduates from the University of Kansas and Harvard and Yale and other places: they actually have competencies that those firms value a lot. The work of a faculty member in a competency-based program is very much that of a curator. In some ways you could argue that's always been part of the role of a faculty member, but here the work is probably not in the delivery of the learning but rather in the design of the learning. Because competency-based education is always an assessment of performance, it really becomes about how you design the practice that leads to high levels of performance. So basically, in all of the places where our lives depend on it, we actually don't trust traditional education.

Paul:

Think about pilots, or nurses, or doctors. We say: look, it's great that you had a 4.0 from Embry-Riddle, you did great in your flight program, but we're still going to have you spend a lot of time in the simulator, you're still going to take additional FAA exams, and, more importantly, you're going to log a lot of hours in the right-hand seat before we let you slide over to the left-hand seat. We're basically going to watch you perform again, and again, and again until we're convinced you perform well. We do this.

Paul:

Clinicals for nurses and doctors are essentially the same thing. Great that you went to Danzis and got a 4.0 as an undergrad; we're still going to put you through a lot of practice, and we're not setting you loose until we've assessed your ability to do something with all of that knowledge you gained along the way. Assessment will be automated; it already is being automated in lots of ways. The places you see that, for example, are in simulations. Look at games as a kind of analog to what we can do. So I don't know if that gives you any more faith in the future (I have some), but those are the ways I'd start to think about it, at least.

Speaker 5:

My name is Ajibonulu Akradi. I'm from Nigeria; I'm currently in Nigeria. Basically, I work on media and information literacy. I co-founded a network for students that talks about media and information literacy and prepares them for the future. My question is not theoretical but really related to activism: getting the government to do what is needed concerning the future of education, concerning the digital future. In Nigeria now there is no online program, no degree, master's, or anything, that is approved by the government. So my question, based on your experience running the university and getting it to really work on digital learning: what were the challenges you faced? What challenges do you expect you are still going to face with the new technologies that are coming up, as you try to bring them into the education system? How did you deal with the problems you faced before, and how do you think we should deal with the challenges to come, with the government and all? Thank you.

Paul:

So in terms of the growth of our online program, we really never faced much of a regulatory challenge. If you remember, when online learning really took off in the United States, not-for-profits looked down their noses at it. They saw it as poor quality; faculty didn't want anything to do with it. Nature, of course, abhors a vacuum, so that's when the for-profits rushed in, and that's when you got the growth of Phoenix and some of these other schools that I mentioned earlier. At their height they educated about 12% of all American college students. Then eventually not-for-profits came around, and you had early players in that space, some of them actually pretty traditional and pretty high on the food chain. Drexel was a big early leader; Northeastern University was an early leader; BU had a reasonably substantial program, not compared to us today, but at the time. So you had a little bit of that, but it really wasn't until the last 10 to 12 years that not-for-profits moved much more emphatically into that space. And now, of course, with the pandemic, tons of schools are in the online world; there's probably not a university in the country that isn't running some online program, some of them more aggressively and some of them not so much. So we didn't have the regulatory challenges, because the for-profits had kind of broken through that space and opened it up. However, to your point, when we try to bring our programs into other countries, we do face a lot of very traditionally minded regulators who still believe that face-to-face in the classroom with a traditionally prepared faculty member is the paradigm of quality.
And I don't know how that changes, but what I do think is likely to happen is this: AI is going to drive down dramatically the cost of delivery for high-quality education, and that is a huge boon to developing economies that can't afford the models we've generally built today. If you take a look at what someone can afford in Mexico, for example, where we do a lot of work, you need to be below $1,500 a year to make it work, which is very hard to do with the traditional models we've built today. So one thing that might happen in this next generation of models is that you'll leapfrog online; you'll leapfrog the model that we use today, like those countries that went right to cell phone technology and skipped over telephone poles and miles and miles of cable. But every country is its own regulatory nightmare.

Paul:

And look, we're still not clear in the US on how this is going to play out. I was on a call with the White House yesterday in which they were talking about pulling together a convening of people to at least lay out the critical questions that have to be accounted for as US education moves into a more AI-shaped paradigm. For example, we haven't talked about algorithmic bias, and a number of you have talked about ethics: what are the ethical questions that policymakers need to think about? The reality is that good policy always follows practice. Policy won't anticipate it, but we can give policymakers the right questions to ask, a good sort of framework. Again: what are the right questions that policymakers, that regulators, should be asking as they watch this emergence?

Speaker 10:

Thank you, Paul. I'm David Albert Westbrook. I'm a professor of a lot of things, but my professorships are in law at SUNY and in finance, and I do sort of globalization theory stuff. There's a huge amount to say here, but I'm trying to figure out some way to say it that's useful in the present.

Speaker 10:

This kind of talk has become a genre already, and it echoes talks about technology and what it means that go back at least to the beginning of the camera. So there are tropes and cliches everywhere, and one thing one might usefully do is spend some time thinking about the apparatus with which we think about this sort of thing. I'm going to pick one, but I'm going to suggest another one first. The one I would suggest: not that long ago there was something called the widespread suspicion of metanarratives that Lyotard famously used to define postmodernism, right? The notion that Olympian positions were not available to the human, and that therefore what was hoped for was a kind of plurality. But now we have a device whose stated purpose is to eliminate plurality by weighing probabilities.

Speaker 10:

That seems to me to be a fairly strong epistemic shift in a long generation. I think one could fairly easily tie that (I teach finance) into monopolistic capitalism, which is of course what is giving us AI; divergence that looks like convergence is very much Microsoft's model. It's a little interesting to watch the way in which this has been taken on board, in this talk and otherwise, as the inevitable course of history: truckers will go, and so on. I'm not going to take all of these things apart slowly. But I do have one more pointed question. I really quite liked your speaking about relations; I kind of like the move to ontology. But it seems to me that one of the relations that is most fraught, and has been for a long time (you mentioned Socrates), is the teacher-student relation.

David:

And so I think there was an awful lot of...

Speaker 10:

This was my objection yesterday, John, to the Saudi idea, right. I'm also a fellow in the center, among other hats. But it seems to me that one of the things we are saying, and we have good financial reasons to say this, good administrative reasons to say this, good bureaucratic reasons to say this,

Speaker 10:

is that ChatGPT is going to make everything available and cheaper. I'm not sure that most education isn't about establishing status, and I'm not sure that has much to do with being cheap. And I'm certainly not sure that, even in somewhat more commodious circumstances, you know, truth and beauty and all of that, it doesn't matter who teaches you. Maybe it matters who teaches you. And so it strikes me that, in a weird way, you may be arguing (I don't know if you mean to be arguing) that if we flood the economy with an emphasis on relations, with relational employees, are we sort of talking about the democratization of the tutor? And I'll stop there.

Paul:

So I was tracking you up until you said the democratization of the tutor. When I talked about flooding society with relational jobs, I used teachers as examples, but we also talked about healthcare workers and mental health counselors. Yeah, just to clarify, you're focusing on education? Right, right.

Speaker 10:

And if you ask people, you know, people who see themselves as having come out of poverty, there's somebody who made a difference, right. And so with teachers, when they say, look, in constructing your worldview, somebody made a difference, yeah, right. That may be a relational thing, and I think of that as tutoring or coaching. I often use coaching as a metaphor for teaching. I agree, but that's all that was meant by that.

Paul:

And so, yes, I am arguing that we need to hire a whole lot more teachers, though I don't know that I'd call it democratizing; I'm not sure why we would call it that. If we're going to fix a broken K-12 system, we're going to need to do it in a way that allows teachers to focus on the relationships with their learners, in a way the system makes very difficult to do today. It makes it difficult because we put too many people into a classroom and ask too much of teachers every day. I really would like to spend a day with you, David, because now I'm thinking, oh my God, am I falling into the tropes that you're alluding to?

Paul:

I want to unpack everything you said. No, no, it's super provocative. But I gave this talk in England recently, in London, and I was interviewed by a journalist who said: look, I covered K-12. In my book (the one I alluded to, which came out last fall) I talked about whether we will get to a place where teachers can love their students again. By love I mean putting in the time, knowing who they are, knowing them in their complexity, really being invested in them in the way that all great teachers have shaped us, I would argue. And that journalist said: if I said that to a teacher in London today, they would say, are you kidding me? My week is filled with (they use different terminology, but we would say) lesson plans, and serving on this committee, and lunch duty (they use a different phrase), and then I've got this one troublesome kid and I'm understaffed, so I've got to take care of that, and I'm lucky if I even remember the names of my students, and you're asking me to love them? And what I would say is: no, you can't, not the way the system is built and what it rewards and what it punishes. So much of what you describe has to do with satisfying the system, not actually improving the life of the child in your room.

Paul:

I'm sure you know David Graeber's work; the last book he did before he died was Bullshit Jobs, and it talked about not only the 65% or so of people globally who think that if their job went away tomorrow no one would notice, but also the percentage of your job that is not about the thing that matters most. We use a coaching model, so SNHU's model is not actually built on the centrality of the academic program. It sounds weird for a university president to say, but the academic program is going to be well done, reliable, and have integrity; the place we put our effort, our sort of secret sauce, is our academic advisors. We call them academic advisors, but they're life coaches. If you think about who we serve, they spend 20% of their time on academic questions and 80% on: no, David, I know it's been 10 years since you were in school, and blah, blah, blah.

Paul:

I know you think you can't do the work, but look at you, you just did two semesters and you were great. I know your boss is crazy and your kids are, you know, acting out. It's amazing what they have to do.

Paul:

I got together with a group of advisors recently and I said: hey, I want you to walk me through your day and your week. What percentage of your time are you in direct relationship with students, doing a thing that touches an individual student, versus doing things that satisfy SNHU's bureaucracy? It wasn't a great answer, and this is a place that really invests in it. So the approach we're taking with AI right now, for the existing institution, is: how do we automate the hell out of everything that isn't about human relationships and make more time, so we can lower our caseloads and give our people more time to be in relationship with our learners? Because that's the difference maker for us, and I think that's the difference maker in a K-12 school too. So when I say let's reduce class sizes to one to 10, that's not democratizing, it's staffing up appropriately. So, am I answering your question?

Speaker 10:

It's okay, though; we're getting towards better questions, I think. Maybe it's that the question of education is deeply complicated by the questions of material employment and, by extension, social status, and it's very difficult to know what we're talking about at any given moment, what blend of those we're talking about at any given moment.

Paul:

So can you say a little bit more about the social status piece, because I want to make sure I'm tracking you? I think I am, but I think that's a really important component. It's part of that conversation we don't have enough of, or sometimes have too much of, when we talk about American higher ed.

Speaker 10:

Well, the Varsity Blues problem, right. You've got a middle class that is desperate to preserve its position in the middle class, which this system profoundly sustains, right. You've got a society of whatever it is, 340 million plus, and that's just this side of the Atlantic or the Pacific, and so who one is within that matters, right? Think of the use of names and titles and institutions, which justifies things like payrolls and so forth, and the university is doing an awful lot to credential people for that sort of thing. So there's this whole relational network that is nominally built around the meritocracy. I don't think that a monopoly like AI is going to give that up, but I'm not sure it's going to transform it either.

Paul:

When I hear you say status, I want to unpack that into all of the various ways status plays out when we talk about American education. With Varsity Blues, we're talking about the status of highly selective, elite institutions, which represent a rounding error in terms of the number of students who get educated in the US. I can go all the way down to the other end of the scale. We just had commencement this past week, with thousands of graduates flowing in from everywhere, students who did not necessarily expect to get a college degree. They were not destined for Harvard. From one vantage point, the degree is definitely not status. From their vantage point, from the neighborhoods they grew up in, the places and families they come from, it is.

Speaker 10:

So one of the things that your institution does is make a degree of status available by credentialing, which is fine. It's not the same credentialing one perhaps gets at Yale, not occupying the same position in the current dispensation, but it is a key function and, for that matter, a key relational function, right?

Speaker 10:

So your earlier discussion was about knowledge. I like to tell my students: facts are cheap, right? We've all got phones. So what can you do with the facts? What can you do with the facts on the way to getting your law degree, in my case, which is a status thing, not necessarily a knowledge thing? And then there's the other relational part: what can you do with the facts that will impress me, if you care about that? And if you don't, I certainly understand, but you probably won't stay in the class. I think a lot more is going on than access. In other words, I'm actually trying to extend your earlier point about moving away from epistemics. What does that then create for us socially, particularly if we've made this epistemic (to use a freighted word) meritocracy, this educational industry, a key lever in how we construct social order?

Paul:

I think that a lot of jobs we associate with status, things one can do with knowledge and facts, are going to be displaced. For the percentage who stay behind, who are still working in those fields, I think the bar has been raised. We had a conversation with a British institution today that wanted to talk about its interdisciplinary programs, and its argument (I don't know how to evaluate the case) was: hey, interdisciplinary programs are often the poor stepchildren within an institution, but they may now be the most valuable thing we can do, because it's in interdisciplinarity that we start to think about system design, where AI becomes a powerful tool but also runs into its limitations. So maybe that's important. I don't know yet.

Paul:

And then I do worry about a huge social upheaval that I think is coming with the loss of jobs. I don't know how that intersects with your question, but we're going to be facing even more basic questions about status, like: what do we do when more than half the country doesn't have meaningful work? And I go back to those relational jobs as ones not easily displaced by AI, but also ones that don't have much status right now.

John Symons:

Let me jump in here. I think this is a really deep and important line of questioning, but I'm going to kill it and look to Maz. Maz, introduce yourself; the floor is yours.

Speaker 11:

Hi, I'm Maz. I'm a PhD student here at the University of Kansas (you guys can hear me, right? Yeah), and my focus is on memory and cognitive science. My question is around two ideas: one is accessibility of education, and one is motivation of students. As technology continues to play a pivotal role in making education more accessible to a wider range of students, I think it begs the question: does increased access to educational resources translate into increased student motivation?

Paul:

I think not at all. I think we have been preoccupied with the wrong things quite often. We will spend forever talking about the design of a program, and I don't think that's the thing that makes a difference for the learners I'm most interested in. I'll turn to Matt Biel here, who holds the Marriott Chair or something, but he's the head of Child and Adolescent Psychiatry at Georgetown. When I interviewed Matt for my book, I asked him about this question: how do we think about deeply underserved communities and learners where we're trying to bring access? How do we engage, and what's critical to their success? He didn't talk about the design of the program. What he said was sort of three things. First, ideally some spark that you can hook onto; it can be almost any interest at all. For an inner-city kid it could be physics, but it could be astronauts, it could be basketball, it could be fashion design, it could be music, something in which they have a genuine interest. (I'll talk about a school that I think does this brilliantly.) The second thing he said is that they ideally have one year of normal, by which he means a sense that there's a world better than the one they're in: they've experienced a time they can evoke, or they've observed it closely enough.

Paul:

So one of the stories I told in the book is of a student who came out of the poorest neighborhood in Boston and the poorest of circumstances; you can almost write the script without me filling it in. There's a program in the Boston area called METCO, which buses inner-city students out to basically white, wealthy suburbs and puts them in those high schools, and they go back and forth. She saw a world that looked better than the world she was in, and she didn't normalize the place she was in. I'm not a psychologist, I don't know the right term for this, but she really understood that there was a better world out there and there was another option.

Paul:

Now, I would say those are really important, but the single most important thing is one person who believes in you. Like, it goes back to that question again: who believes in you and says, you are better than this? Now, I really despise the book Hillbilly Elegy by our Senator Vance of Ohio, but there is one part that did resonate. By the way, this is in some ways my background, so part of this is my own lens on the world: immigrant parents, my mother worked in a factory.

Paul:

If you remember the grandmother in that book, if you read it: Jesus, crazy, gun-toting, hard-drinking, swearing; he would say hillbilly. But she's a constant in his story in terms of believing he was better than the world they were in, and I think that captures it really well. So when you talk about motivation of students, I think it comes out of relationship. I think it comes out of not wanting to disappoint that person who's put time into you, believes in you, and thinks you're better than this. I don't think it's in the design. Good program design is important, integrity and rigor are important, but I don't think that's the thing that drives learners, at least the learners I'm most interested in.

Speaker 6:

Hi, sorry, my video was off, I didn't notice. I'm Denisa Karam from Bar-Ilan University in Israel, and I'm just very surprised that you have such a strong belief in these future relational jobs, as you call them, in focusing humanity's attention on some form of care. And I wonder, is this a hope, or is it based on something you see as a trend? Because what I see is that our emotions, our well-being, our mental health, our happiness index, everything is in some sense already being quantified and taken care of. I just shared an article in the group: in Denmark, in the schools, they're using apps that monitor students' moods in order to, I don't know, synchronize them at the optimal mood level.

Speaker 6:

In the US, you have something called Gaggle that is massively used in primary education. It monitors what the children write and raises suicide alerts and, I don't know, other flags if they use profanities.

Speaker 6:

According to the articles I've read, I know companies that are doing this kind of mood detection with university students. So I'm just maybe not so optimistic about this luxury-communism idea that the AIs will do all the work and people will just be nice to each other. I actually find that you're not asking the difficult question: what is the function of the university? And I don't think it's about building competencies that will help Microsoft get an even bigger monopoly. I hope you're also educating people who are able to ask the difficult questions of who owns these companies and these platforms, why they're not open source, why we don't have infrastructure for training models so someone in Nigeria can access it, where they cannot even access GPT and all these things. So for me it does sound a bit, and here I'm actually joining David, like you're just helping them get even more power over our lives, rather than creating some form of competencies that will create a meaningful future for people. Sorry, it's nothing against you, I'm just really curious.

Paul:

No, I will follow up on my friend Thich Nhat Hanh: you don't know me, so I don't take personally anything that you just said, and we haven't talked before. My argument about competencies, by the way, has never been to say which competencies. In fact, I think that's a common mistake: people think that competency-based programs somehow dictate to an institution what competencies they should teach. The competencies a program might elect to teach could be the ones that allow you to deconstruct these systemic sorts of problems that you've identified, with monopolistic capitalism, for example. That can be the competency that you teach. Competencies simply force institutions, universities, educators, to make clear the claims they make for what students can do with what they know.

Paul:

The thing you may want students to be able to do with what they know is to take apart Microsoft, to take apart capitalism, to take apart these structures that are so problematic. So I would untether your critique from the question of competencies or not. On the question of helping Microsoft, I'm not sure I'm tracking you. Part of what I wanted to say is I'd love to ask David some more questions, and I'd ask you the same question: do you hear the attempt to understand and ask better questions of AI in an educational setting as helping Microsoft? Because the questions we might ask, just as we might ask questions of algorithmic bias, for example, are ethical questions around data and privacy.

Paul:

Other questions we might ask, when I say I want better questions, are these: what are the things we have to be careful of if we are going to deploy AI, unless you argue we shouldn't deploy it at all? What are the questions we need to ask ourselves if we're going to deploy AI in education so that we don't actually contribute to inequity, so we don't contribute to monopolistic power that defines us in ways we don't want? So I don't know that we're in disagreement. I just think I probably haven't expressed myself very well, if that's how you read it. I think this is part of what David, or, I don't know, Denisa, said, and part of what I was inferring. But are you arguing,

Paul:

are you hearing in me a kind of inevitable surrender to the use of AI, as we're currently experiencing it?

Speaker 6:

No, I'm just, you know... you claim there will be these relational jobs, that we'll come back and, I don't know, build the superstructures. Based on what do you believe there will be more relational jobs?

Paul:

So, if you listened carefully, I never said there would be. I'm making an argument that that's where we need to shift our thinking, because a lot of the sorts of jobs that we assign status and compensation to today will go away. And what I said is that there's an enormous need for those jobs I called relational, I would say in systems of care. There's enormous need; whether our society will figure out how to fill that need, I don't know. I think a lot of people would love those jobs if those jobs paid better. I don't have the answer to that.

Paul:

Jamie Merisotis, who is the head of the Lumina Foundation, and I have worked on this idea of a human work initiative that argues we need a massive redistribution of wealth in the country to start paying people for the jobs that really matter. And the jobs that really matter, like early childhood educators, are way more important than most college faculty in terms of actually shaping lives, but lowest in status. We don't pay those jobs very well. Do I think we'll do that? I don't know. I think that's the complicated question, but I think we'd better figure it out. I do think those systems are deeply broken in our country.

John Symons:

Okay, good. Thanks, Denisa. We have nine minutes left. I want to finally jump back to Syed, who has been very patient, for your follow-up question.

Paul:

Sorry. All yours. Yes, that's nice.

Speaker 7:

Well, I guess it's a follow-up question, but it's also bringing it down to more practical matters. One of the professions you mentioned was accountants, and whether we need to train more accountants. I used to be an accountant and I did a little bit of auditing, nothing too deep or too wide, but I was involved in a couple of audits.

Speaker 7:

I'm wondering, if we don't train people to know basic accounting, would we not lose auditors? Or do we just trust AI to do the auditing also? And why I'm asking this is related to writing and reading. I don't know the psychology of this, but we hear all the time that if you want to become a better writer, read more. And I'm wondering, similarly, if we forego writing as one of the assessments, would we have worse readers? Would we have people who are less able to tell good writing from bad writing, or to judge the quality of literature? But also in the accounting sense, and I guess it can go into any of the professions: if we don't train people on the basics of accounting, like debits and credits, can you really do the high-level auditing that is required of humans, or do we just give that up to AI? Do we just let AI do the auditing too?

Paul:

Well, what would you do with the comment from the guy at Oracle who said we don't really write code anymore?

Speaker 7:

So my brother is a software developer for Chase, and he does exactly what you said: they just grab stuff.

Paul:

They're architects, they're curators; they link code from GitHub, et cetera. So somewhere along the way in their training they had to know enough to be able to do that task. And my question to him was, but did you have to write lines of code in order to have those frameworks of understanding? And his argument was no; it just raises the bar on what you have to have. I'm going to use my analogy: I never wrote a line of fiction, but I think I got pretty good at judging it somewhere along the line, because I looked at a lot of fiction; I obviously read a lot of fiction. You have to look at a lot of code. And can you train me on the things for which I need to look? I think those are the ways I might come at that.

Paul:

I think writing, for me, is one I still really wrestle with. I don't have a good sense of it. It's like E.O. Wilson's comment about storytelling. I think of writing as storytelling, often, because I write a lot. Storytelling is not the thing we do after we've thought about something and now want to share it with the world; storytelling is actually the way we think. We think in stories, and I don't know what the implications are for writing and reading. I know writing and reading have evolved; certainly writing has evolved dramatically over the eons, right, from oral storytelling to writing, to the printing press, to email and hyperliteracy and other kinds of things. So I think we will go through some similar kind of shift here, and I'm still trying to understand what that might look like.

Speaker 7:

Thank you. I'll let David go.

John Symons:

Yeah, David, David, jump in David.

Paul:

David.

David:

And.

Paul:

John, I have to apologize. I have to preside over a ceremony in five minutes, so I have five minutes.

John Symons:

I'm sorry. Sorry, sorry, Bert. David, you didn't have your hand up, did you?

Speaker 4:

No, I didn't, but I do have a question. OK, but I will defer to Bert; I can go after Bert.

Paul:

We may not have time for two.

Speaker 10:

So I deferred.

Paul:

David.

Speaker 4:

OK, so this is kind of a question on the last line of answers between you and Syed. I guess one fear I have with the testimonial of the coder is that they're already coming from a background of knowing how to code and things like that. So I fear that it may be hard to assess what we'll lose if we haven't quite lost it yet. That's my concern, and I'm not sure if you'll be able to speak to it. Maybe we can't yet.

Paul:

I don't think we can. These are exactly the kinds of questions we're just trying to surface and understand at this point.

John Symons:

Yeah, very good.

John Symons:

So one of the things that, just by way of closing, that we thought a lot about here at KU is how to restructure the computer science curriculum around these kinds of foundational skills.

John Symons:

So you would understand the foundations of computer science in addition to having some exposure to coding, but the coding is really a secondary kind of skill you'd pick up along the way. In many ways, it's more important for you to learn first-order logic, or maybe some basic metalogic, or the lambda calculus, than to learn Python, and I think that's probably the way we're going to go. I could go on and on, but we only have three minutes left, so we'll keep it brief. So maybe what we should do is this: Paul, if you'd be receptive to email questions from the crew, I can share your email with folks and we'll continue the discussion in text form. At this point, I'd really like to thank you, Paul, for your time, and thank all the attendees for great questions and excellent discussion.

Paul:

Thanks so much for having me. You challenged my thinking in lots of great ways, which is what you hope for when you have conversations like this. I really appreciate it.

John Symons:

Very good. Glad to hear it. Take care, all. Bye-bye. See you guys next time. Bye-bye.
