Welcome back. Today in our happiness gym, we want to talk about AI: how to use AI in a good way rather than a bad way, so that you get a benefit out of it that is a true benefit for your character, rather than just cheating by having an AI do your work. I am Andy, I'm a philosophy lecturer. I have worked with AI a lot and I teach courses on AI, and hopefully, by the end of this video, you will know how you can meaningfully use AI in your work, in a way that benefits you as a human being and enriches you.
Nowadays, the world of work is changing very rapidly. AI is everywhere: it's coming after your job, it's coming after my job too, and it's coming after my students, who already use it. Then there are the problems with copyright: all the artists who say that the big AI companies are copying their work and using it to train their models without giving anything back to the community. These are big problems, but today we want to talk about something else, which is the influence of AI on you, the person who is using it to do something.
One example I have a lot of opportunity to think about is my students, because students, generally, naturally want to avoid doing too much work. So if they can have AI write their papers, many of them will take the opportunity and let AI write the papers, if that is allowed. Now, obviously, this is not permitted, but I think it's also a mistake to say we should not permit AI at all in university or school work. And, in fact, I am encouraging my own children to use AI responsibly.
And what does it mean to use AI responsibly? Sometimes they will have a question that is a little more difficult to answer, or that requires a lot of research to answer. They might ask, in biology for example, "How exactly does this pathway work? How do we get from the DNA to the actual form of our body? How is this influenced by the DNA, and how does it all work?" If you wanted to find a description of this by Googling it, it would be very difficult. You would very soon be stuck in very complex descriptions of the genetic code and how it is expressed in proteins and so on, while the AI is often able to do a much better job at the level of a child of a particular age. You can say, "Describe to me how genes are expressed in the appearance of an organism, in a language that a 12-year-old can understand," and the AI will do that. And I don't think there is anything cheating about that. This is a genuine wish to learn something, and what the AI is at this moment is just a customizable textbook. It is a textbook that contains the information that I need at this moment, in the way that I need it at this moment, for a particular age group. I could in principle find the same thing in a textbook written for this age group, but I don't have that book, and I would have to go to a library, search for the book, get the book, find the right paragraph, and read it, while I can ask the AI to give me exactly this information for exactly this age. The same goes for my students.
If we have a course about, let's say, the Presocratic philosophers, ancient Greek philosophy, I could tell the students, "Find out about the Presocratics by reading a 200-page book that I can give you, or that is in the university library." But this is difficult, it's hard work, and actually, if you have ever tried to read such philosophy or other scholarly books, by the time you reach the end you have spent so much time reading all the details that you have already forgotten what was at the beginning. So this is a very bad way to get an overview. But using AI, I can ask, "ChatGPT, give me exactly what I want: give me a summary of what the Presocratic philosophers said, group them into two groups, give me a short summary of each one and the dates of their birth and death, and explain to me what the importance of each one was." Something like this is a prompt you could give, and you would get exactly what you want. And again, this is not cheating. This is consulting a machine as you would consult a textbook, only that the machine provides you with a textbook that is customized to contain precisely the information that you want. This is, I think, a good use of AI.
What is a bad use of AI? Using the machine to do your work, to replace what you should be doing. You could argue that writing a paper, for example, is something that the student should be doing, not something that a machine should do. So when you use the machine to write your paper, you are actually cheating, because, and here comes Aristotle, you are not acquiring the skill that you should. Aristotle gives us a very nice way to distinguish good use of AI from bad use of AI: by thinking in terms of skills. If I use AI to replace the acquisition of a skill that I should acquire, that I have a duty to acquire, and I don't acquire the skill because I'm using AI, this is a bad use of AI, this is cheating. When I study philosophy, let's say the skill I'm after is being able to write a good philosophy paper. If I use AI in the cheating mode, I am not getting this result, I'm not getting this skill. So then it's bad.
If I use AI in the same situation but a stage earlier, just to get an overview of the philosophers about whom I want to write, and I then go on and write my paper using that information, I'm not cheating. Or perhaps I even take one step more in between: I use this initial information to research the individual philosophers further using conventional means, and then I write the paper about them. Again, I'm not cheating. I'm using AI as part of my process, but I'm still acquiring the skill, I'm still practicing the skill, I'm still writing my paper. And this is not only about AI; you can say the same about other things. Think of a cook: when you're cooking a meal, you can say, "I use a cookbook." Using a cookbook to get ideas for a recipe and then cooking the food yourself is not cheating. You're still cooking it, you're still acquiring the skill. You just use the cookbook as help, as a source of ideas, as the experience of somebody else that tells you what works and what does not work, and how much of each ingredient you need to mix in order to get a working dough. This is not cheating. This is using the information that the cookbook provides.
If instead I say, "I will cook by calling a delivery service," let's say I just remote-order a loaf of bread and it comes to my home, the doorbell rings, Amazon brings me a fresh loaf, and I put it on the table and say, "Look, I have baked this bread," then I'm cheating, because I didn't do it, right? Looking at the recipe is not cheating; replacing the final product, replacing the skill, is cheating. And I think this applies to everything, and it also shows us an important difference when we are thinking about art. If I say, "I am an artist, I'm a painter," and now I use AI to create an artwork and market it under my name, then I'm cheating. But now let's say I am what I am: a philosopher who writes a blog post, and for this blog post, in order to make it attractive to the reader, I just need a picture on top, a picture of Aristotle, and I use AI to create this portrait. This is not a skill that I am supposed to have or to acquire. Creating a blog post with philosophy content, writing that content, writing an article about philosophy, is what I am supposed to do as a philosopher. But making an illustration for the top of the page is not something I'm supposed to be doing myself. I would normally hire someone to do it, or I would get a stock image, or, in this case, an AI image. I am not defending this practice, I'm not saying that it is good, because there are other problems, copyright problems and so on, and I acknowledge that these are important problems that need to be solved, and that artists need to be compensated for their work. I am totally in agreement that there are lots of problems here. But just from the perspective of cheating or not cheating, I don't think I'm cheating if I do the work I'm supposed to do and then put a picture on top of it. That is not cheating. Whereas if I replace the writing of the article, that would be cheating, because as a philosopher, my job is to write the article.
But now say I order this picture for the top of my article from an artist: I go find an artist whose work I like, and I tell them, "Make me this picture," and the artist comes back after two days and gives me an AI picture. That would be cheating, right? Because I wanted the skill of this artist, I wanted their expertise, not an AI picture. So let us think about this for a moment, because it also affects our own work. We all use AI in some way, or we will be using it in some way, and I think it's important, in terms of satisfaction, in terms of happiness, that we use AI to improve ourselves, to improve our skills, to make them better and stronger, and to give us more abilities, not to impoverish ourselves, to become less able and have fewer skills, because then we are actually destroying our personalities, we're destroying our character. We are becoming lesser people. So AI is not the enemy; the bad use of AI is the problem, in my opinion.
And now I would really like to know what you think. Give me your feedback on this in the comments. Do you feel the same way? Do you use AI? Do you use it in ways that you think are good or bad? Are there uses you wish you could continue, or uses you would give up because you feel they are a cheating use of AI? Let's collect a few experiences and see what comes out of it. And if you're interested in how work can be done honestly, and why it's a good thing that work is done honestly, then please watch this video here, where we talk about the importance of work for Aristotle and for Bertrand Russell. Thank you, and see you next time.