I have adopted the following explicit policy: I am here to teach you. I am a good teacher, and if you work hard you will learn important material in this course. If I catch you cheating, you will be referred to the university for a disciplinary infraction. However, my job is not to ensure that you are not cheating. Again, it is my job to teach you. And, likewise, it is your job to learn from me. You (or your parents) are paying good money for you to be here, and it is your responsibility to make good use of that time. It is likely possible to pass this class by simply cheating. However, that is deeply pointless for you. I get paid the same regardless. When you sit down at your first job, staring at a mess of unfamiliar code you have been given, you will either be glad you didn't cheat or regretful that you did. Let's begin.
Fellow professor here: I don't think your approach would work where I teach. I teach at a Polytechnic school (Portugal), which is almost free for the students, so the "you're paying good money" incentive does not apply.
This semester I'm teaching a web development course (fullstack development), and my policy is that the project must be done in GitHub Classroom repositories, and I'll be asking for clarification (face to face) on some of the commits. They can use whatever they want (Stack Overflow, ChatGPT, whatever), but they had better know how to explain their commits to me. I don't know if my approach will work in the end, but it certainly got their attention.
I'm doing this because it got so bad that, last semester, in my Object-Oriented Programming course, even using Moodle with Safe Exam Browser on and an instance of Visual Studio Code to try the code, we caught lots of cheaters. How were they doing it? By installing the Copilot plugin. How did we catch them? Some students solved all the exercises in 10 minutes, others had comments that were clearly made by AI, etc. Of the roughly 15 students we caught, only 3 came to us to review the exam.
Hard problem to solve...
Realizing that my professors were being paid by the university and not by me, and that they had no natural incentive to ensure I actually passed the course, was a necessary wake-up call. At some point I realized I was spending that money on the opportunity rather than the outcome. I just wish I had clocked it sooner. I never did graduate, but the classes I did complete, when I actually exerted the required effort, taught me what I needed to know. Once I had learned how to make myself learn, I found that I was able to adapt adequately to the workforce, and the degree became little more than a signal.
It's ... odd, to have arrived at this conclusion. On paper, the decision to drop out once I had a real IT job would be seen as a failure. But it's not like I ever stopped learning. I love to challenge myself to learn new things. And I do feel like GPT and friends somewhat short-circuit that process. It's handy for a quick lookup in a subject I'm not interested in, but to actually learn a thing I need to practice doing it. There is no substitute.
> had learned how to make myself learn
Someone told me ~30 years ago that the purpose of higher education is to teach you how to learn (okay, ought to be). Not the memorizing, or the get-drunk parties :) Now, whether the system-as-is actually even targets that is a separate question.
So if one achieves that earlier, yeah, no need to waste further years there.
Although, as with most institutional artefacts, these diplomas - let's call them <tags> - are still used across institutions, to prove the institution-"tax" has been paid.
I think that's a bit flawed. The purpose of lower education is to learn how to learn. By the time you get to university, you apply those skills to really learn.
At least, that's how I feel.
This is a great policy.
Ironically, it can also be used to justify "cheating"... but not in a bad way. If a university course is a requirement and I have no interest in learning the material, it makes sense for me to speedrun the material using all the tools I have at my disposal.
Maybe I'm missing out, but I'm an adult that's being "forced" to (pay to) take a class I have no interest/motivation for, which seems like being treated like a child.
I like your approach because you treat your students like adults.
This speaks to a greater problem in the way we treat university in the US. It's not supposed to be a place where you're trained to do a job, it's supposed to be a community where you are exposed to new ideas, so you can become a more well-rounded individual.
But then university is so expensive, and you need it to get a halfway decent job these days. So now you're paying tons of money for a class you didn't want to take in the first place, and you come in with a chip on your shoulder already. And then these classes typically have a very high workload, with extremely strict grading requirements (ESPECIALLY mathematics), and they end up becoming sort of a filter, which does a disservice to the students and to the instructors.
I agree with the notion that cheaters are only cheating themselves.
However, it's in the university's interest to make sure the students it confers degrees upon have met the bar to receive them. Otherwise, it risks losing accreditation for the program (ABET, etc.). Part of your job, or the university's job, is to provide that evaluation.
Whether or not that accreditation matters at all is another question...
> I agree with the notion that cheaters are only cheating themselves.
Everyone loses when someone cheats, not just the cheater; if they only cheated themselves, it wouldn't be a problem.
You lose by getting incompetent bureaucrats and engineers in the future, etc., even if you didn't cheat.
>However, it's in the university's interest to make sure the students it confers degrees upon have met the bar to receive them.
Absolutely correct. Unfortunately, that ship sailed a long time ago. When I was in college in the 1990s, the program was laid out so you could learn a hell of a lot and get a great base of knowledge. You could also do just enough to get a C without really learning much. C's get degrees, as we used to say.
Another thing is when I was in school, cheating would get you expelled, so there was a pretty good deterrent.
>Otherwise, they risk losing accreditation for the program (ABET, etc.). Part of your job, or the university's job, is to provide that evaluation. Whether or not that accreditation matters at all is another question...
I haven't been paying attention, but I don't recall any state schools losing accreditation. Based on the number of candidates I get with degrees that don't know shit, this should be more common. It's definitely degraded the value of the paper, which sucks because I worked hard to get that paper.
>Another thing is when I was in school, cheating would get you expelled, so there was a pretty good deterrent.
Although what constitutes cheating varies quite a bit. Working together on problems sets is OK in some places/classes but not others, having cheat sheets/formula sheets/open book is OK on some exams but not on others, some classes are explicitly group projects, etc.
True, that's why my professors always explained the parameters of the assignment and what you were and were not allowed to do.
> When you sit down at your first job...
Or do your first interview. We've passed on candidates who clearly rely heavily on LLMs to get through their work. Interestingly, others have mentioned face-to-face conversations as a way to assess skills. My company doesn't use LeetCode, etc., and instead does 1:1 evaluations with a staff dev.
My company does, and I find it pretty easy to weed out the LLM candidates either way. "You used a heap here, can you explain why?" or "why did you name your variable [something unrelated to the question but relevant to the underlying concept]?" will send them stumbling, or to "just a sec, I need to think about it". That's obviously not how it works: if you're actually writing the code, you'll have thought about it before; you won't just throw in a heap "just because". It's also not super hard to figure out when someone is just reading something off of the screen, especially when they clearly have a second monitor, or a phone, or a tablet nearby, but even on the same screen you can see their eyes dart back and forth.
The bigger problem for us is the online assessments that they give out as a replacement for the phone screen nowadays. They clearly can't detect LLM use, even though to me as a human it seems quite blatant - things like extensive use of comments explaining the code, which should throw up a red flag on an assessment where the only thing that matters is passing the test cases. So we get a lot of candidates at on-sites that would have never made it past the traditional phone screen.
As an aside, I initially only ask "LeetCode easy"-type questions. Stuff you would encounter in your day-to-day. You get to choose your favorite language. There are at least 3-5 solutions for each of the problems I give; they're really not meant to be trick questions. Any dev should be able to code these in 5 minutes without even googling, let alone needing ChatGPT. I almost think some people use ChatGPT out of habit more than anything else, and it trips them up more than if they had taken a minute to think about it and do it themselves.
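For illustration only, a hypothetical question of that flavor (not one of the commenter's actual problems) might be: collapse consecutive duplicate characters in a string. It admits several approaches (a plain loop, itertools.groupby, a regex):

    # Hypothetical "LeetCode easy"-style screen question: collapse runs of
    # repeated characters, e.g. "aaabbc" -> "abc". A plain loop is shown;
    # itertools.groupby or a regex would work just as well.
    def collapse_runs(s: str) -> str:
        out = []
        for ch in s:
            if not out or out[-1] != ch:
                out.append(ch)
        return "".join(out)

    assert collapse_runs("aaabbc") == "abc"
    assert collapse_runs("") == ""

The point stands: any dev should be able to produce something like this in a few minutes, in any language, without reaching for ChatGPT.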
What percentage of candidates are you seeing attempting to use LLMs at various stages of the pipeline? It would be interesting to hear a data point on how prevalent the problem is getting.
What about when they say 'when I "stare at a mess of unfamiliar code" I will ask ChatGPT to explain it and it will fix the problem for me'?
If it works I'll say "I can automate you away". If it doesn't work I'll say "You checked in that code without understanding what it does?"
In which scenario does the cheater win?
> If it works I'll say "I can automate you away".
I don't think deciding not to use LLMs makes you any more immune from automation, because other devs will.
If anything, assuming it really does increase productivity, it'd seem to me that the devs using LLMs would be safer than devs in the same domain that refuse.
> If it doesn't work
The extent to which a human using an LLM still produces buggy code should already be taken into account by assignments.
I'll answer my own rhetorical question: the human wins when (s)he creates value, by asking the right questions (crafting prompts better than the next person) and by recognizing deficiencies in the computer's output.
Why wouldn't I just use chatgpt or any other ai assistant to help me get familiar with the code?
My job is to conduct research. I also am called to share a bit of what I know. It's up to you whether your brain will be a sponge or a rock.
Teachers used to be able to talk at students and then use asynchronous writing assignments as convenient shortcuts for verifying understanding. Teaching is still very possible, just less logistically convenient than it used to be. I naively envision a next-gen college where admission and instruction are conducted by interview and conversation.
It's already long been clear that if you want an excellent education you cut the student-to-teacher ratio down (10:1 max) and have a lot more small group discussions.
Public schools can't afford that (they'd need to more than double their teaching staff) and the trend is the other direction—replacing teachers with technology, because there's a shortage, the teacher training pipeline has dried up, and they can't raise salaries to attract more.
At 10:1, if you spend $15k per kid per year, you get $150k per class: $100k for the teacher and $50k for overhead. This is all quite reasonable. The problem is that overhead is actually much too high, and that 1 in 5 kids needs additional services that really destroy that ratio, so you end up at 30:1.
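A back-of-the-envelope sketch of that arithmetic, using the figures above (the 3x cost multiplier for kids needing additional services is a made-up illustration, not a number from the comment):

    # Classroom budget math from the comment above.
    spend_per_kid = 15_000                     # dollars per kid per year
    class_size = 10                            # the 10:1 ratio

    class_budget = spend_per_kid * class_size  # $150,000 per class
    teacher = 100_000                          # teacher salary
    overhead = class_budget - teacher          # $50,000 left over
    print(f"budget ${class_budget:,}, teacher ${teacher:,}, overhead ${overhead:,}")

    # If 1 in 5 kids needs services costing a hypothetical 3x the typical
    # spend, the average per-kid cost jumps, which is how mainstream
    # classes drift toward the 30:1 the comment describes.
    avg_cost = spend_per_kid * (0.8 + 0.2 * 3)
    print(f"effective per-kid cost: ${avg_cost:,.0f}")   # $21,000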
Yeah, the private schools that use seminar-style teaching manage it by 1) having much higher incomes than $15k/yr/kid, and 2) avoiding teaching kids who have special needs, since they can cost several times as much to educate as a kid who doesn't, and besides, they tend to drag down test scores and university acceptance rates.
Here’s something else we could try instead of grading written essays: video-recorded oral exams conducted by LLMs. Wouldn’t that be a twist. The teacher would need to watch each recording and grade them, and I suppose the lazy ones might use AI to do that step instead.
It would pretty much ruin any small group discussion if it was used as most of your grade.
I'm not sure that's true at all. Seminars are a thing although participation is often not the only grading output. And participation absolutely factors into grades in many at least medium-ish sized business and law classes.
My university was too lazy to introduce automated marking for coding assignments. We have to go in-person to a lab session and demonstrate our code compiling/running to the TA, who asks us questions to verify understanding.
It's hilarious how much this immunizes us from ChatGPT, given that we're a mid-tier school co-located with a needle exchange center in Toronto.
It's been a very long time since I was involved in teaching basic coding but were I to plan such a course now, I think I would not even consider including automated grading. In fact, I think I'd focus even more on individual and group feedback, lab sessions, remote/office hours and the like. Those things become even more important the easier it is to produce and hand in something you don't really understand.
That would be very nice. Who is going to pay for the teachers, of whom you're going to need many more with this (superior) teaching style?
What’s the point of a question like this? “Who pays for teachers” is a policy choice, like it always has been. Whether and how people are willing to pay for effective education vs. ineffective education depends on cultural values and priorities. Does that help?
edit to add: maybe what you’re implicitly asking is “Particularly in America where people view education as an indulgence and a luxury good, how would we ever fit this model into existing starvation-level budgets of the public school system, or into the existing academic institutions which pass all their costs onto students?” This doesn’t change my answer but it makes that answer a little more obvious.
I don't think that anyone seriously disputes that at least some degree of personalized instruction, and perhaps hands-on work (depending on the subject), is valuable. But it doesn't really scale.
The University of Texas has 51,000 students.
In-state tuition and fees are $31,000 per year. Out of state is $42,000.
Maybe we could try taxing the rich?
oh no! Anything but that! Everything will fall apart!
Unfortunately, more and more students view higher education as a pure formality - in order to get jobs.
The disillusionment kind of kicks in when you take on your first internship, or full-time position, and discover that there's a huge gap between what you learn and what you actually use.
It is unfortunately even worse when it comes to grad school - some people are in positions where you're required to hold a master's degree if you want to climb the career ladder. My own MBA was a pretty expensive diploma that only served as a door to more senior positions. Pretty much all the topics I had, I had taken before or had been exposed to through work - maybe a whole 10% was new / fresh knowledge... and I'm the intellectually curious type.
So I'm not at all surprised that a bunch of students today will more or less offload all their schooling onto these models.
If anything, this is just putting the huge disconnect between the ideals and the realities of higher education on display.
I studied CS because I love computers and programming and I wanted an SWE job. The idea was to earn a living doing something I enjoy / am good at, and I needed a degree for that. I didn't mind "learning how to learn", getting to know topics I probably wouldn't have chosen to learn on my own, meeting people pursuing the same goals and so on. That was nice, especially the idea of it, but it wasn't really my goal.
The reality was that during that time my primary worries were getting to lectures on time and passing exams while barely having enough money to afford anything beyond food and shelter. It wasn't me and my teachers engaging in the noble pursuit of science. I was worrying about exams while they were probably worrying about grading while taking a break from worrying about funding and authoring enough papers.
Thanks to some fluke I had a few weeks with almost nothing to do during one semester, so I decided to try and write a 3D engine. That was actually me learning just for the sake of learning: not thanks to the education system, but despite it.
If AI brings this entire illusion down, maybe that's a good thing.
It is in some jobs. Newly educated or plain new employees are fairly useless until they have 1-3 years of experience in my job area. We would routinely pay graduates or master's-level hires less than experienced technicians or field engineers because they took so long to create any value.
I've taken many courses, and many stop short of giving actual usable skills. You're supposed to do that on your own time, supposedly. You don't get a start-to-finish internship experience until senior year for software engineering. In Electrical Engineering you may not learn soldering or how to actually build a power supply from physical parts.
You see some of this if you go through "entry-level" MOOCs at elite schools, especially in topics like computer science. As a refresher I took MIT's Intro to CS and Algorithms or whatever it was called. I would have been utterly lost had I not already known Python reasonably well.
That's probably less true in some other majors; there's likely no real expectation that you've taken shop classes for mechanical engineering. On the other hand, you'll maybe also graduate without knowing a lot of the ins and outs of designing and building a piping system.
> this enthusiasm is hardly dampened by, say, a clear statement in one’s syllabus prohibiting the use of AI.
I've learned from AI about weird, wonderful, obscure, and niche topics, important and trivial. The university used to be defined by the size of its library, when knowledge was confined to physical places. Now the academy locks away a modern storehouse of knowledge? And students pay for this?
Education needs to evolve. It's no longer enough to teach students to think deeply and critically. It reminds me of my first month of law school, when some chipper professors painfully educated us on "Shepardizing" case decisions using the printed volumes. The equivalent of printing your database and indices to run your next query. Several layers of obsolete, pointless activity they should have struck from the syllabus in favor of a future-oriented curriculum. So too here.
They're not banning using it as a "storehouse of knowledge," they're banning using it as a way to go from prompt to essay without passing either one through your own brain.
Author does also say:
> Or: 'I just used it to help with the research.' (More and more, I have been forbidding outside research in these papers for this very reason.)
Students who sign up to a running class are complaining that the coach locks them down from using a car. Ridiculous! In the real world, you'll have access to a car! Why bother exercising your body?
Give it time. When I was younger, I was not allowed to use Wikipedia as a source. Change takes time, and in the meantime people will complain.
>Most academics agree that you shouldn't cite Wikipedia as a source in your academic writing...
https://www.scribbr.com/frequently-asked-questions/can-i-cit...
> I was not allowed to use Wikipedia as a source
Ummmmm
One of the "rules" of referencing is that one never references an encyclopaedia. An encyclopaedia is where you get only the basic background information for your study, not details worthy of referencing.
Reference an encyclopaedia in an academic paper and the referees will, quite rightly, reject it and request a better source for your reference.
As I vaguely remember from secondary school very long ago, no one was forbidding me from looking at an encyclopedia in the school library, but, as you say, it wasn't a referenceable source--much less something to essentially cut and paste from.
> It reminds me of my first month of law school when some chipper professors painfully educated us on "Shepardizing" case decisions using the printed volumes
Not really understanding your argument.
I would expect, not being a lawyer, that the classes in the first months of law school would serve to do more than teach specific case law. Maybe get students indoctrinated into the anarchistic[0] ways of the judiciary or something?
Though... I have heard that lawyers have been very successful in getting sanctions for using ChatGPT in pleadings, so you do you.
--edit--
[0] Or even "archaic" but I like the typo...
I believe he's objecting to having to learn how to do legal research using printed books instead of entirely online.
Here's an old comment describing how legal research was in the days before everything was computerized [1], if you are curious. The specific thing he mentioned, "Shepardizing", is the word for using the series of books from the Frank Shepard Company which are mentioned at the end of that comment.
Still don't understand the argument as they are inherently the same thing.
What I gather is they are teaching students how to research case law and the medium is completely irrelevant to that task. I suspect the concept is the most important part.
Though, as always, I'm probably completely wrong and this is just some gatekeeping conspiracy to keep the ranks of people with knowledge of the law artificially low.
> That moment, when you start to understand the power of clear thinking, is crucial. The trouble with generative AI is that it short-circuits that process entirely. One begins to suspect that a great many students wanted this all along: to make it through college unaltered, unscathed. To be precisely the same person at graduation, and after, as they were on the first day they arrived on campus. As if the whole experience had never really happened at all.
If nothing else, LLMs have completely vindicated Bryan Caplan's signaling theory of education. The author of the piece comes across as really quite solipsistic and as somebody who has never attempted to understand the motivations of other people; it comes as a shock to him that they'd seek the path of least resistance.
There are a lot of tricks that unscrupulous used car salesmen can apply to sell a lemon, but that's not vindication for a signalling theory of cars. It's simply the case that in any market, some sellers will try to match the appearance of the best goods with as little labor and cost as possible.
"The path of least resistance" is not necessarily the good one.
I do competitive debate—which is improv essaying. ChatGPT still isn't up to the standard of tournaments because it just dumps a bunch of irrelevant points, when the hardest part of debate is showing the impact of a fact. It fails to grasp the broader context you want to situate your essay in.
In my experience, the vast majority of university students are unable to impact because they've been taught a formulaic five-paragraph essay format based around spamming three facts with very little emphasis on linkage. That format is devoid of context because it's easier to grade.
Profs should grade essays based on whether they address the main themes of the author's thesis rather than adherence to form, spelling, and grammar. ChatGPT usage will go down because it's unable to do that.
The old-fashioned way: make students stand in front of the class and drill them on anything. Quickly weeds out the idiots.
And anyone who just doesn't function well under pressure. And maybe that's OK but there are people who know their stuff but don't deal well with answering hard questions on their feet.
> One consequence of this boom in AI use is that, in my online classes, I can no longer be sure of what my students are learning. I have no way of determining which concepts they are grasping and which are proving elusive.
As someone who regularly interviews students for jobs, I find it clear that universities with permissive AI use for assignment completion produce graduates who know very little about the core concepts of their studies. Many students cannot explain simple concepts, and base-level assignments take them significantly longer.
As a former college student who took online courses as well, I can assure you that in the years before ChatGPT, at least half (if not more) of the students could not explain the core concepts of their studies either.
And there is the answer. You are talking to the students. Post author complains he has no way of determining whether they’re picking up what he’s putting down. Maybe he should try talking to them.
Cheating != permissive.
On the other hand, they might produce students who know how to use AI tools, and in the long term that might be what businesses prefer – employees who know how to use AI, not employees who have memorized facts in school.
On the third hand, it’s hard to develop meta-level skills—nuanced judgment and wisdom in a domain—without engaging over a long period with the peas-and-carrots aspects of the domain. I guess it’s kind of the “where’s our next crop of senior engineers going to come from?” worry around these parts.
Then again maybe you’re right, maybe AI Operator is the domain where it’s important to develop meta-skills now. I can’t imagine someone who chose to use their school years specializing in that is anybody I’d want to hire or work with, though: it would seem to signal an incurious and timid mind, one that chose to optimize for metrics (grades) rather than the substance of the work at hand. But that bias is the same reason I don’t thrive in large corporate environments in the first place.
Still I have a hard time imagining that AI Operation skills take enough developing to be worth somebody’s time to specialize in as a student. Could just be old-fashioned, though—spoken like an outsider ignorant of the finer points of AI Operation...
But the AI tools are extremely easy to use, and their usability improves over time.
Of course businesses prefer the commoditization of employees: if everyone uses AI anyway, why not hire the cheapest possible candidates?
For the employees this situation is beyond catastrophic. So maybe memorizing facts in school to gain some competitive edge is even more valuable in this setting.
Although everything I hear when talking to people is that it's newer/lower-level employees who are most being commoditized (and not just by AI).
Businesses might prefer it, but businesses often prefer things that are known to be bad for society and individuals.
While widespread LLM access may have exacerbated the problem, for courses with grades determined solely by uncontrolled out-of-classroom assignments there were almost definitely already a good chunk of students essentially just paying their way through. Having at least some portion of the assessment, maybe a defense of their written paper, take place in a controlled environment seems necessary to verify that it's actually the student doing the work and not someone else or a chatbot.
Alternatively, in some cases it might make sense to assess students by what they can do when fully allowed to use LLMs, given they'll likely be able to use one in their job. If even a student knowing the course material can add nothing to what an LLM already does alone, it may be a good idea to revise the syllabus. I know the author says "Look, even if that were true, you have to understand that I don’t equate education with job training" and to a large extent I agree, but equally they in turn should understand that a lot of students do want jobs and so want to learn at least some material that isn't already automated.
AI is here to stay. Instead of shouting into the wind and pining for a world that doesn't exist, this educator needs to adapt -- or die. He will probably end up doing the latter before the former. It is very fitting, and very apropos to this life, that the irony of the pursuit of knowledge has been distilled down into an algorithm that feels nothing, cares for nothing, wants for nothing. It has no life to live, yet is approaching perfection in its execution of all human knowledge endeavors.
It's as if, the entire time, that which those like him hold sacred, knowledge and its worldly pursuit, has been nothing more than yet another vanity and a form of pride itself. I see no difference between this and those who likely took pride in tilling the land by hand, and how they must have felt when machines came along and did it that much better, faster, cheaper, and without a care in the world. What were the teachers of manual labor saying during the last industrial revolution?
I don't really feel sorry for him or the rest of the prideful educators, perched in their ivory towers. The day of reckoning has finally come for them and I'm here for it.
I did not get the feeling that the author was against AI, but rather was bemoaning that students were using it to avoid learning. Philosophy is a good example of a subject where the knowledge is a means to developing your own cohesive principles. You don’t have to ever evolve your principles beyond their organic development, but why even bother taking a philosophy class at that point.
The ideal philosophy class is probably Aristotelian, with direct conversation between teacher and student. But this is inefficient, so college settled on using essays instead, where some of that conversation happened with the student themself as they worked through a comprehensive argument and then the teacher got to “efficiently” interject through either feedback or grading. This also resulted in asymmetric effort though, and AI is good at narrowing effort dynamics like that.
The author’s point was that the student’s effort isn’t a competition against the teacher to minmax a final grade but rather part of developing their thinking, so your “day of reckoning” seems to be cheering for students (and maybe people) to progressively offload more of their _thinking_ (not just their tasks) to AI? I’d argue that’s a bleak future indeed.
Where I disagree with the author is in worrying about devaluing a college degree. It shouldn’t be necessary for many career paths, and AI will make it increasingly equivalent to having existed in some town for 4 years (in its current incarnation). I’m all for that day of reckoning, where the students going to university want to be there for the sake of learning and not for credentialing. Most everyone else will get to fast-forward their professional lives.
ChatGPT isn't doing original research. It relies heavily on the compiled output of the folks "perched in their ivory towers". Kill off those ivory towers and it'll look as dated as IE6 in a few decades.
Given you decided to outsource your response to this article to AI, you may have missed that this is a HEALTH ETHICS professor who's seeing medical students decide to have AI write all their papers so they don't have to engage with the class. Maybe you'd be fine with your doctor having spent his whole time in college training himself that the answer to "should I bend the rules to let this patient into a drug trial that could help them out?" can be found in whatever AI service his practice subscribes to, but I would prefer someone who paid attention in class.
Why the attitude? The prideful educators in their ivory towers, unlike Sam Altman or the folks at Google and Microsoft, those humble, downtrodden men of the earth making billions off AI.
Because he's envious and feels inferior, and thinks he's sticking it to those meanies who made him feel that way.
Replace AI with plagiarism in your statement. Now what do you think?
Ah yes the industrial revolution, which had no far reaching negative consequences at all. What a great metaphor to go along with your LLM promotion.
At this point, the only thing that really works is to have the student/candidate in front of you when you are testing them. Video interviews can be faked; take-homes can be faked. For schools etc. it doesn't matter: if the kid doesn't learn anything, it's their own time they're wasting. But when hiring, it makes a huge difference not to have a candidate who cheated.
For a variety of reasons (not just LLMs), I think it's generally cheap and lazy for companies not to do some sort of in-person interview panel. It's not just or even primarily about candidates "cheating." We all did what we had to do during COVID, but in-person is just a higher-bandwidth interaction. And if candidates don't want to travel? <shrug> Plenty of fish in the sea.
Working as a freelance writer on Fiverr for the last ten years, I would receive daily inquiries from students asking me to write their papers. I always turned them down. It was explicitly against Fiverr's rules, and I wasn't going to help them cheat anyway. I often wondered whether or not they ended up finding someone else to do it for them. Those inquiries don't find their way into my inbox anymore, so I guess I have my answer.
A little self-promotion here. I'm building a tool to help combat this issue. If you're a teacher, I'd love to talk to you about it. Give it a look: https://itypedmypaper.com
In Italy there's a fantastic way to allow students to use LLMs to their hearts' content but still discern whether they learned anything at the end of the day: the oral exam. Students wait, one by one, to be called into a room where the professors can ask them literally anything. Because this is part of the academic tradition, professors are also very good at administering these exams. They know how to give hints and clues, but also how to pounce when they detect BS. I'm not saying it's perfect, but I do think the outcome is a much better reflection of your grasp of the course material than anything else, besides maybe project-oriented tests.
There's also the added benefit of not having to grade so many exams or papers, and usually all the professors from the department conduct the exams together.
If AI enables rapid synthesis up to the frontier of knowledge, educators must guide students to their edge of ignorance.
A new approach to the Socratic method, where we don't always know the answer at the end, but have collected some useful tools or models along the way.
The thing is, having access to AI generating "writeups" for you, or even reading through what you've generated, does not equal knowledge in a useful sense. The act of writing would prove you have the knowledge, and can organize it and operationalize it to an extent of writing a text about it. So there's a real difference between being able to write your own summary and faking it.
If we're looking for bright sides, there are parts of education/public debate which have been merely about stringing words together and mastering a discourse. This can be made more irrelevant by being indistinguishable from ravings of a generative model. Remember that even something like literary studies is supposed to be about themes, ideas and parts of human experience reflected in art, history of intellectual formations and stylistic techniques etc.
I was gonna say, knowing educated people I doubt any of this makes you wiser by itself. This would disprove the 'prepare for life' bit of the OP. But thinking about it, the most ridiculous scams and traps in life aren't generally targeted at people who've read Milton, Moliere etc. So maybe there's something about exercising your mind about human condition, and thought and experience accumulated by mankind, that I'm not appreciating enough. It does allow you to have a wider language to speak and listen, more dimensions to understand things in, and experience life.
The whole end to end system seems pointless. If you want to learn ethics you can do so with ChatGPT alone. It can provide you interesting questions, review your papers, argue against you etc. The university is providing no value.
So an example of Goodhart's Law? Once there is less reward for high grades, the issue goes away. Sounds like grade inflation already is making that happen.
In a perfect world, we would have had a very broad conversation not just about AI in education, but about the role of education in society.
I don't think it has ever been a purely intellectual pursuit. Especially in societies that tell you that you'll never get a well-paying job unless you get a degree - students feel significantly more pressured to pass tests by any means necessary so they can present a certificate to their future employer than they feel encouraged to learn and grow. For the ones who are actually interested in their field it's probably something in between. But the deal is pretty clearly "Get to a prestigious school, make sure you don't spend any more time than necessary, and if you don't get a degree then it's been a huge waste of time and money".
Of course it's very disrespectful to just generate some AI slop and expect a human to read it and grade it, making them spend more time with it than you did. Of course there's value in training and shaping your mind in ways that will prepare you for not just your job but as a person in general. But the system isn't really geared towards that, and I missed that perspective in the article.
I don't think all that much has changed since the time when almost all scientists were wealthy aristocrats. The work itself is more accessible now because it can also be a job that puts food on the table. But if you want to pursue knowledge for the sake of knowledge, to the point where a degree is just some optional external validation of your efforts, you should be independently wealthy.
You know, oral exams are a thing.
I remain unimpressed with AI. The results are mostly trash.
I would be using ChatGPT to grade the paper.
"Many of the students who enrol aspire to careers in medicine"
This should be enroll.
Canadians are allowed to use British spellings.
Can confirm, even though our neighbours don't respect the honour of this behaviour ;)
Sorry, I wasn't aware of this alternate spelling.