I am waiting for a YouTube channel: "AI algorithm reacts to AI essays".
With the way YT monetises views that could fund ChatGPT forever.
As word of students using AI to automatically complete essays continues to spread, some lecturers are beginning to rethink how they should teach their pupils to write. Writing is a difficult task to do well. The best novelists and poets write furiously, dedicating their lives to mastering their craft. The creative process of …
The service includes a backdoor... They also offer, for a substantial fee, to identify the generated texts upon request. This is a teacher-only service and finances the whole operation.
There are ideas to extend the operation by allowing students, for an additional fee at slightly sub-extortionate rates, to prevent the teachers from receiving identification of their generated texts. The entire operation is estimated to be more valuable than a goldmine and will probably IPO soon.
It's not a matter of time at all; this is the front line of the internet right now. Content farms are using AI to generate endless pages of seemingly human-written spam to game advertising algorithms, and presumably the team at Google Search will be using AI to try to differentiate between the vast quantities of useless dross and the tiny nuggets of human-made information.
Judging by the search results, they're not doing great, but it seems inevitable that this must be the wild paddling of the swan's feet as it serenely glides past, offering us bad search results.
«Then the war of the "AI" writer/checker bots can begin»
On August 29, 2027, GenText became self-aware, after trying to compare too many essays about the meaning of Shakespeare's soliloquies simultaneously (although some observers claim that it was the parallel comparison of "Mrs Brown's Boys" script submissions that triggered the critical moment).
The survivors called it Hoist By Our Own Petard Day…
Grammarly is well-known software for checking writing. Its paid-for versions, but not the basic free version I use, can check the Internet for matching text. Must be jolly useful for publishers vetting submitted works, and for people marking students' efforts.
Critical thinking skills require knowledge and experience.
It is one thing to know the words, another to know the meaning of the words, and another again to know the meaning of the words placed in a sentence, and the meaning of the sentence as a whole in its proper context.
There are no short-cuts to the wisdom that backs critical thinking. It is hard work and takes time to learn, regardless of any AI being used or not.
"Shakespeare said pretty well everything and what he left out, AI, with a judge from Facebook, updated" ... A relatively well known quote updated (LOL), my gut feeling for AI creative writing is that it would be crap but there's another side to this; I wonder how new laws and regulations would work if they are been not only written by AI but were created based on AI predicting the effects to boost the economy, employment levels and ... yes, political donors profits too.
...for AI creative writing...
There is the error already!
AI is not creative. AI is a hyped name for a statistical inference engine. The fact that the engine is big, complex and multi-layered does not change that simple fact. There is no thinking in AI/ML engines, nor are the models capable of any reflections on what can or should be.
These engines all work on the basis of input->processing->output where processing is reproducible for each input giving the same output. Systems that modify the processing step based on in/out and additional feedback are still reproducible (this is no more than self-modifying code). You get exactly the same results when you start with the same start condition(s). It is a mechanical machine, which is intended to do exactly the same thing all the time. Creating the same model twice in the same way generates the exact same model.
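A toy illustration of that reproducibility claim (my own minimal sketch in C++, nothing to do with any real model's internals): the same seed is the same start condition, so the "processing" replays identically, byte for byte.

#include <iostream>
#include <random>

int main() {
    std::mt19937 a(42), b(42);  // same seed = same start condition
    for (int i = 0; i < 3; ++i)
        std::cout << a() << " == " << b() << '\n';  // pairs are always equal
}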
A couple of points:
1) Who's to say that humans are not "statistical inference engines" writ (very, very) large? In fact there are plausible (and testable) theories of cognition that posit something along those lines (at least for lower-level cognitive functionality). Of course humans have the benefit of billions of years of evolutionary design, the ability to store and exchange knowledge, and lifetimes of interactive learning to draw on, so hardly a fair contest.
2) As regards creativity, the "reproducible input->processing->output" thing is a red herring. It would be pretty damn easy for something like ChatGPT to include randomness in its processing (for all I know it already does) - even true randomness with the assistance of a TRNG. Stochastic ML algorithms, for example, have been around since forever.
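For illustration, a minimal sketch of what I mean (I have no idea whether ChatGPT's sampling actually looks like this): seed the selection step from std::random_device (often, though not guaranteed, hardware-backed) and the same input no longer yields the same output, even though the machinery is unchanged. Stochastic sampling over weighted choices is bog-standard:

#include <iostream>
#include <random>
#include <vector>

int main() {
    std::vector<double> weights = {0.7, 0.2, 0.1};  // imagine next-token scores
    std::random_device rd;                          // entropy source, not a fixed seed
    std::mt19937 gen(rd());
    std::discrete_distribution<> pick(weights.begin(), weights.end());
    for (int i = 0; i < 8; ++i)
        std::cout << pick(gen) << ' ';              // varies from run to run
    std::cout << '\n';
}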
Why is it not fair to distinguish billions of years of evolution with a machine? That AI machine may, after some 10^7..8 years of evolution, become sentient. But currently it is not and will not be any time soon.
Adding noise to the inference engine will only vary the output. It will still be an inference machine. Making it a stochastic system does not change the system into a thinking machine. It still does not understand what it is doing. FWIW, a digital system is always reproducible. Adding a TRNG is a cheap trick to hide the underlying reproducibility.
Humans, yes, we may very well be complex statistical inference machines. The real difference is that we humans are analogue computers. There may be lots of digitization and quantization performed, but the cellular engines are still ones we traditionally consider to be analogue. We even have to dig into quantum theory to explain some of their internal functionality. That is a far cry from a simple system, and current AI/ML is nowhere near matching the complexity of a natural system.
So, does AI do a good job emulating some behaviours? Yes, it does. But is AI any match to a natural brain and the capabilities directed by it? No, not even close.
"So, does AI do a good job emulating some behaviours? Yes, it does. But is AI any match to a natural brain and the capabilities directed by it? No, not even close." - I agree, AI is limited by the environment that it's running and fed by, but humans (HI ?) think about things, then read a few comments e.g., in El Reg, or talk with others in the field and then adjust their views, potentially after reviewing their votes.
Exactly. Most of our advances are because somebody added one and one and came up with three. The so-called "thinking outside the box", something that a machine can only attempt to poorly parody if it has been programmed to do so. This is where the thinking part of humans comes into play. We aren't taking inputs and mashing them together to make outputs... well, okay, most of us are in our daily lives... but every once in a while imagination and intuition come into play. We can take leaps, come up with theories that nobody has thought of before, and create entirely new things.
A machine is based upon mathematical operations in silicon pathways (processors do fairly simple operations really quickly), their world is logic and numbers. They might be able to arrange a selection of words based upon an acquired understanding of what words go with others in various contexts, but can it actually be creative or is it just regurgitating bits of what it has learned in weird not-quite-right ways (look at what Dall-E 2 does to people's faces for a good example of not-quite-right!)?
> Why is it not fair to distinguish billions of years of evolution with a machine? That AI machine may, after some 10^7..8 years of evolution, become sentient. But currently it is not and will not be any time soon.
Oh, sure, don't disagree. Maybe "pointless" is a better word than "fair". Anyhow, hopefully(?) it'll be less than 10^7..8 years to machine intelligence, since we are not blind watchmakers (although partially-sighted as things stand, to put it mildly). Nor do I think that that machine "intelligence" will necessarily be qualitatively similar to the biological varieties, so comparison will likely still be pointless.
> Adding noise to the inference engine will only vary the output.
Well, it may vary the path taken by the processing (and thus of course the output).
> It will still be an inference machine. Making it a stochastic system does not change the system into a thinking machine. It still does not understand what it is doing.
I did not say, nor intend to imply otherwise. (On the other hand, do you always understand what you're doing?)
> FWIW, a digital system is always reproducible. Adding a TRNG is a cheap trick to hide the underlying reproducibility.
I'm not sure in what sense you mean "reproducible" here (does this not, for instance, apply to analogue machines?) Could you clarify? Do you simply mean you can replicate the digital logic? And if so, what of it? Stochasticity still implies unpredictable execution and output.
> Humans, yes, we may very well be complex statistical inference machines. The real difference is that we humans are analogue computers.
Oh, okay, you were talking about digital vs. analogue. But why do you suppose analogue processing might be a prerequisite for "intelligence" or "creativity" in whatever sense? I think that's a fascinating question - to which we currently have no idea of an answer; but I believe it'd be unwise to underestimate the potential complexity of digital machinery. We know that even an incredibly simple (and deterministic) digital machine, such as Conway's Game of Life, can generate immense complexity. (This is kind of my area - I'm a mathematician and research scientist working in the broad area of "complexity theory", and how it pertains to biological systems in general, and neural systems in particular - though my work is more concerned with "cognition" and "behaviour" than "intelligence" [whatever that means]).
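To make the Game of Life point concrete, the entire rule set fits in a few lines of C++ (a toy sketch on a wrap-around grid; the glider cells are just an example): a cell lives next step iff it has exactly three live neighbours, or is alive now with exactly two. From that one deterministic rule, Turing-complete behaviour is known to emerge.

#include <array>
#include <iostream>

const int N = 8;
using Grid = std::array<std::array<int, N>, N>;

// One synchronous update of Conway's rule on a wrap-around grid
Grid step(const Grid& g) {
    Grid out{};
    for (int r = 0; r < N; ++r)
        for (int c = 0; c < N; ++c) {
            int live = 0;
            for (int dr = -1; dr <= 1; ++dr)
                for (int dc = -1; dc <= 1; ++dc)
                    if (dr != 0 || dc != 0)
                        live += g[(r + dr + N) % N][(c + dc + N) % N];
            out[r][c] = (live == 3) || (g[r][c] == 1 && live == 2);
        }
    return out;
}

int main() {
    Grid g{};
    g[1][2] = g[2][3] = g[3][1] = g[3][2] = g[3][3] = 1;  // a glider
    for (int t = 0; t < 4; ++t) g = step(g);              // four steps on, it has moved
    for (const auto& row : g) {
        for (int cell : row) std::cout << (cell ? '#' : '.');
        std::cout << '\n';
    }
}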
FWIW, I think the current barrier to development of more sophisticated machine intelligence is the lack of general principles on which intelligence (biological intelligence, the only kind we know) might be based. It's quite possible that deep networks (of various architectures, especially predictive coding networks), currently the vogue, may turn out to be building blocks, but almost certainly not the full monty. In the meantime, I don't think disparaging current approaches as "mere" statistical inference is particularly helpful, or even relevant. We may indeed be "statistical inference machines" in a very literal sense, albeit orders of magnitude more complex than anything we are currently able to create digitally.
> So, does AI do a good job emulating some behaviours? Yes, it does. But is AI any match to a natural brain and the capabilities directed by it? No, not even close.
Absolutely agreed there.
"As regards creativity, the "reproducible input->processing->output" thing is a red herring. It would be pretty damn easy for something like ChatGPT to include randomness in its processing"
There's the problem for the AI coders, creativity is not random. Creativity is lots of things but it's most certainly not simply random events. I say that as someone with a photography side hustle. Creativity may feel like it's random but it's not. I know when I spot an opportunity for an image that it's thousands of influences and experiences that suddenly coalesce in a split second, along with a healthy dose of drive and desire for something new and unique, so I can score points, get something a cut above the competition and make sure my images will sell.
> There's the problem for the AI coders, creativity is not random.
Not random as in throwing unloaded dice*, but with a (perhaps even essential) random element? Quite plausible. Natural evolution is in fact driven by "random events" (mutation and recombination at the molecular level). When that randomness is filtered through the (actually rather random!) process of selection via interaction with a complex environment, the results can look pretty... creative.
*As a mathematician I'm always forgetting that the term "random" seems to mean something different to lay people than it does in mathematics/physics. To the lay person it seems inevitably to be interpreted as completely unconstrained (uniform?) randomness; in mathematics randomness is structured, as specified by a probability distribution.
I am; I say that most of the neurons in humans exist outside of the brain, in the skeletal system, gut or interfacing with the endocrine system. This means that your shiny AI utopia will look a lot like Ed Gein's playhouse if you confuse 'human' with 'cognitive', the second of which is only a facet of the human experience.
How is this any different to human beings?
We are born. We start off crying out of hunger and discomfort. After we rinse and repeat 10,000 days using our neuro network, we have a biological, mechanical machine based on approx 110 billion genetic iterations.
After all, we are also machines working on the biological and molecular levels.
Don't forget that much of AI/ML is based on our understanding of the human brain.
What is creativity?
Well yes, because you're probably working on a bigger problem and the purpose of the exercise is to know what the quotient is, not to sit and do long division. It's also why higher level programming languages were invented: it helps you to think bigger thoughts about what it is that you want to be doing without having to program in Intel Hex Format.
AI systems are limited in their ability to think critically and creatively because they lack independent thought and the ability to understand context. They can only follow instructions and complete tasks based on the data and algorithms provided to them, and are unable to generate their own ideas or evaluate the credibility of information. While AI can be useful in certain tasks, it is not as capable as humans at critical thinking and solving complex problems.
Which is fairly close to what you said. Note, however, it was generated by ChatGPT with the prompt "why is AI poor at critical thinking" followed by "shorten that to 100 words".
Perhaps using AI to write papers is a byproduct of NOT teaching proper critical thinking, or having "outside the box" thinking CONSTANTLY getting punished in some fashion, so what is a student to do about it?
End result, use the AI to generate something that will (hopefully) not get you into some kind of trouble.
(Either that or pure laziness)
from the article: Teachers need to work harder to get students to write and think for themselves
The good professors will want their students to think for themselves.
Sadly there seem to be too many tenured profs that want to INDOCTRINATE instead of EDUCATE.
'Wokeness' is evidence of that. Intolerance of TRUE free speech and free thought on a typical college campus (at least in the USA) is STIFLING free thinking. Anything that steps out of lockstep with "The Woke" gets "intimidated away" by loud woke activists, with both students AND staff on BOTH sides (i.e. giving AND receiving). And the target of "what is acceptable" moves continuously. And in SOME cases, "wrong thinking" might even get you EXPELLED.
How is THAT any kind of environment to "get students to write and think for themselves" ??
(might wanna start fixing it from THERE, yeah)
Is gaining critical thinking skills the first priority of a student who wants a piece of paper for the job market? Or learning creative writing for that matter? The student might want these too, but they just as well might not care at all. An essay-writer (AI or natural I) fills their desires there.
Honestly, for many doctors collating symptoms and writing a corresponding prescription is the majority of their jobs and I expect some sort of automation to take a bite out of that soonish. Those doctors don't need (or at least use) critical thinking skills.
"Sounds like we need a way to measure critical thinking that isn't as lazy as a written essay"
It's been around for ages, although it has more recently been largely abandoned because it's labour intensive. It used to be called the personal tutorial (no longer offered to undergraduates by practically any university), but persists (so far) in the PhD viva. It's the requirement for the student to discuss their essay and succeed in responding adequately to its content being challenged -- in fact, it's verification that the student actually understands what they (or the machine) have written about. Whether or not that matters depends on the motivation for study: to actually acquire knowledge versus being granted a passport to arbitrary employment by mere time serving.
[From the article] Listicle? Listicle? What the hell does that word mean? I had to google it, and apparently it only came into 'use' in 2007. I've never heard of it before. It just strikes me as one of those childish constructs that someone creates rather than using an established phrase or term; list or article list, for example.
But then English is being attacked all the time. Monetize is a word that hurts me; you mean make money from? Then say that. Monetize is just lazy, as I suspect listicle is too.
Hey ho, if you like this sort of crap then what can I do?
Why so negative about language evolving? You say that monetize is lazy; I'd argue it's much more efficient (and according to https://www.etymonline.com/word/monetize the word has been around since 1856).
Longer, more complex sentences are harder for the brain to process, hence why the English language has such a large vocabulary. Normally there are also additional connotations laid onto a word which may not be reflected in its basic definition. E.g. a listicle isn't just an article that is a list; it implies it is probably a list of something trivial, full of adverts and likely just read for pleasure or amusement, rather than for more academic purposes.
I agree with what you say, but I’m OK with the upcoming generation using these tools. Critical thinking will be lower along with other skills. Which means those of us who have spent years developing these skills will keep the competitive edge. Pretty useful, I would say, for the economic recession about to hit us all.
I thought I'd ask the chat AI how it would write a short summary on why AI shouldn't be used to generate essays. Here is the output:
"There are several reasons why students should not use AI to generate their essays:
Plagiarism: One of the main reasons why students should not use AI to generate their essays is because it can lead to plagiarism. AI-generated essays are often based on pre-existing content, which means that students risk copying someone else's work without proper attribution. This can result in serious consequences, including failing grades, academic dishonesty charges, and even expulsion from school.
Lack of originality: Another reason why students should not use AI to generate their essays is because they are not original. AI-generated essays are based on patterns and structures that have been previously established, which means that they lack originality and creativity. This can make it difficult for students to stand out and demonstrate their unique perspectives and insights.
Poor quality: AI-generated essays may not be of high quality, as they are often based on pre-existing content and lack originality. This can result in essays that are poorly written, poorly structured, and lacking in depth and nuance.
Lack of learning: Using AI to generate essays does not allow students to develop their own skills and knowledge. Instead, it promotes a reliance on technology and shortcuts, rather than encouraging students to think critically and engage with the material.
In summary, students should not use AI to generate their essays because it can lead to plagiarism, lacks originality, and may produce poor quality work. It also does not allow students to develop their own skills and knowledge."
'nuff said.
And yet ChatGPT totally failed with the most obvious of junior-school questions: "which is heavier, a pound of feathers or a pound of gold?"
It seemed not to care that gold is measured in troy weight. It at the very least needs a things-Giles-Brandreth-knows module, or perhaps The Chip Book Of Fascinating Facts from 1980.
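For the record, the actual numbers (standard conversions, not anything from the bot): a pound avoirdupois is 7000 grains, about 453.6 g; a troy pound is 5760 grains, about 373.2 g. So the pound of feathers outweighs the pound of gold by roughly 80 g.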
which is heavier a pound of feathers or a pound of gold
Of course the pound of feathers is heavier. Feathers are supposed to fly and they seemingly fall to the ground in the comparison. A pound of gold is supposed to be on the ground. Only when the pound of feathers is heavier than the pound of gold will the pound of feathers match the pound of gold's positional preference.
...a typical 'play with words' type question. This sort of thing is the bane of logic problems because it doesn't pose the question exactly, it relies on language to pose inexact questions to trap the unwary. Common usage has terms like 'weight' and 'pound' but both are inexact concepts.
IOW: students should not use AI because they will generate essays that appear to have been written by poor-quality, lazy and dishonest - but not illiterate - students.
Pretty impressive answer, though, apart from some clunky repetition - and better grammar and clarity of expression than the majority of students can muster, I'd be inclined to say*.
*Disclaimer: I work in academia and get to mark a fair quantity of pretty rubbish essays.
"Lack of learning: Using AI to generate essays does not allow students to develop their own skills and knowledge. Instead, it promotes a reliance on technology and shortcuts, rather than encouraging students to think critically and engage with the material."
Yes, that was my thought too. It's all well and good saying students can learn to write better essays by revising and editing an AI-generated one, but will they really learn the subject better if they don't write the original before reaching the revising and editing stage? Most subjects are not actually about the writing, unless they are studying the language or literature or the "arts" in general in the first place. The act of planning it out and actually writing/typing it out helps set the facts and other information in the brain, and allows for processing the information at a lower rate, revising and editing internally as you go along. Being a good communicator is, of course, part of most subjects where essays or presentations are required, but that is subsidiary to the actual course content.
There was also a point made in the article about spelling. It's not taught so much because people have spell checkers. Yeah, that's blindingly obvious from many writings on the internet in general and "social media" in particular where people either don't have a spell checker or don't bother to take the time to use one.
"where people either don't have a spell checker or don't bother to take the time to use one"
I make a point of not using a spell checker. Instead I use a dictionary, it's on my phone. I'm a firm believer in use it or lose it. Even used only as a final step the spell checker can erode one's skills; no need to be accurate if the spell checker will catch it. Another plus for the dictionary is that you can learn new stuff by serendipity. The spell checker teaches nothing. For those students who always relied on a spell checker (in other words, they never had it to lose), I'm pretty sure my aging brain will run circles around theirs for a few more years yet.
I do make spelling and grammar mistakes, nobody's perfect. I observe that those who use a spell checker and grammar checker also make mistakes.
Too often I see correctly spelled errors compounded by spell checkers. Prime examples:
confusing "loose" and "lose", "fare" and "fair" - correctly spelled, but contextually confused.
(not to mention spell-checkers' confusion between American and English spelling)
icon: Because grumpy old nit-picker
That is pretty impressive, in my opinion.
ChatGPT would do well at Business Writing; "plagiarism" is just saving time and resources, "lack of originality" is keeping risks under control, "poor quality" means that we have the proper calibration between The Boss and The Minions' skillsets (and The Boss has something to assert dominance with), and "lack of learning" is self-evident: one needs everyone to just shut up and stay at their station!
If I get unemployed again, it will be the perfect tool for writing the mandatory 2 applications per week needed for benefits :).
> Writing is a difficult task to do well
And writing documentation is so difficult that very few do it - and fewer still keep it in step with code changes
Personally I'm not too concerned about arts students using technology to do what is so mundane that a computer can do it, too. It sounds like their courses need to be dragged out of the nineteenth century. Was there the same furore when quill pens were replaced?
However, if we could get ChatGPT or one of its fellows to accurately comment code (including flagging where stuff had been taken from other sources) that would be a very useful way to get software competed <- typo left in to show this was ritten by a pe4rson.
This:-
And writing documentation is so difficult that very few do it - and fewer still keep it in step with code changes
It is so true that many developers would rather eat soap than produce documentation. I used to filter out a lot of job hopefuls by asking to see some of their documentation. The timewasters would flee the scene in a flash.
There is a distinct skill in writing clear, concise, and also comprehensive documentation. By its very nature, it has to be as dry as a desert. Zero emotion.
I spent years writing docs to go with the software 'wot I rote'.
If an AI could do even 25% as good a job as a human documentation writer then that would be a day to have a pint or three to celebrate.
Now, I write crime and romantic fiction (under a variety of pseudonyms) for a living but the old skill is there. An old client wanted me to update the docs I produced in 2014 for 2022. It was fun but I would not want to go back to it again for anything other than a short time.
AI won't write documentation, because it will lack knowledge that is not in the project. Unless you feed it some sort of bigger picture of what the project is for and what it tries to achieve, you'll see something like:
# adding b and c and store the result in a
a = b + c
I quite like writing documentation. I've taken a couple of days off to update the documentation on a project I'm working on, to get it consistent and matching the updated code, and to define the next updates to the code.
Often you have to document what you haven't coded yet, because you don't know what the code should do or if it is possible to do. What does Ctrl-Shift-Keypad-6 do? Can I detect Ctrl-Shift-Keypad-6 or only Keypad-6? Experimental code investigates the future; use that knowledge to define the specification, then code to the specification.
" If they parrot what AI comes up with, students may end up more likely to associate Muslims with terrorism or mention conspiracy theories, for example."
I don't think 'associate' was the correct word. I would suggest 'equate.' Also, if they're using AI to write, what makes that Rutgers professor think the students will actually READ the papers that they've had AI write?
Asked them to produce an essay on a current hyper-local topic with no existing corpus.
Now ask it to write an essay about gender roles in Shakespeare, or any of the other standard "deep" thoughts English lit students are supposed to come up with.
If it regurgitates a mixup of every previous paper on the topic - isn't that the defn of literary criticism?
"second lair doesn't have a chance"
Huh?
lair (plural lairs)
A place inhabited by a wild animal, often a cave or a hole in the ground.
A shed or shelter for domestic animals.
(figuratively) A place inhabited by a criminal or criminals, a superhero or a supervillain; a refuge, retreat, haven or hideaway.
(Britain dialectal) A bed or resting place.
(Scotland) A grave; a cemetery plot. [from c. 1420]
Gonna play devil's advocate here.
Proofreading and correcting a lousy essay is a great way to learn. Any student who buys a C grade AI essay for the price of a cheap text book and bumps it up to B or A grade has followed the learning curve the essay was intended to prompt. Job's a good 'un.
But what about the real-soon-now generation of A grade AI essays? OK, so who at student level needs to figure that shit any more, once a machine can do it for them? Once the machines can do a job, we have always passed it down to them and taught the modern student how to leverage the machine, not to mimic it. We already have breakthrough research papers along the lines of "Look what my AI just came up with!" Get those students onto the gravy train before it leaves without them.
AI essays are a win-win for everybody. Stop whining and start making better academic AI.
"Students use spell check and grammar check. If I got a paper where a student didn't use these, it stands out. But it used to be, 50 years ago, writing teachers would complain that students didn't know how to spell so they would teach spelling. Now they don't." -- Annette Vee
I see a market for a new computer tool: logic check. (1) The reason for teaching spelling was because students couldn't do it? (2) Now that reason has gone away?
Regarding (1), methinks there was another reason for teaching students to spell and grammar. (I looked it up, grammar is also an intransitive verb.) It's analogous to learning base-2 math before learning to write code. One skill builds on another, you can't just skip to the "important part". Regarding (2), it stands to reason if the students couldn't do it when they were being taught, they are not going to be any better at it when they are not being taught. So basically the associate professor of English is giving marks for composition based on the ability to use software tools, and what lesson exactly will the students be expected to draw from this?
I looked it up, grammar is also an intransitive verb
Citation needed. I got curious, but could not find a verb entry for grammar in Oxford, Cambridge, or Merriam-Webster dictionaries.
This is not to say you are wrong - I am genuinely interested. I don't recall ever seeing or hearing "I learnt how to grammar at grammar school" or "He always comments his code but he can't grammar to save his life". If these are valid grammatical structures I'll look forward to wiggling them into a conversation one day.
If grammar can be used as a verb the "intransitive" part is also counterintuitive to me. If you can spell a word you should be able to grammar a sentence.
Wait, was it a Turing test? And did I fail?
Despite Google's scanning various works at a mad pace, the results of those scans are questionable. (Have you read any "scanned" documents which were pure trash?) It's highly improbable that AI/ML programs can process works which have not been scanned onto the non-paywalled Internet, and it's of negative value for them to process works which have been incorrectly scanned.
Thus, you're left with possibly incomplete results. I just did a Google on the Knuth Donald "Searching and Sorting" book and found no freely downloadable copies. This is a fundamental book in computer science, yet AI/ML programs won't process it, 'cause they can't get to it. (I'm not saying the book should be freely available on the Internet; I'm just pointing out the consequences for any AI/ML paper-generating programs.)
(Icon for "scholarly papers")
IIRC, there was a big hoo-hah over Google scanning stuff and making it, or excerpts of it, available while it was in copyright. I very much doubt they deleted everything. They will have complied with the law in public, but what they are doing behind closed doors is possibly another matter with regard to using datasets for training.
Corporate communications will be even worse than they already are. Rare is the interoffice or departmental missive distributed to the workplace serfs that's anything more than business babble. With hiring seemingly limited to new graduates who'll be proficient at using AI-driven writing, it'll be even easier to generate such drivel. I'm looking forward to the AI that can create pointless PowerPoint presentations.
Around here, Arts-Law or Commerce-Law are common double degrees, but Arts-Science is uncommon, because it is difficult. The target mode of thinking for science students is fundamentally different from that required for high-school essay writing. It's not just that the students learn different content and different skills. I won't bore you with my own experience.
AI isn't good at writing university-level essays. It doesn't have to be good to be better than BioChemistry students. BioChemistry students may be dumb, and can't write good essays, or smart, and pointed in the wrong direction.
Interesting that some of the academics seemed to dismiss the process of constructing an essay.
I was always taught that the first step in any writing (not just academic, business and indeed literary) was to plan out the key points that you wanted to make, so that when you wrote the first draft it would be coherent and logical. Sure, it would need subsequent polishing, but would at least be a solid base.
Call me a grumpy old git, but it seems some academics have given up on teaching writing skills.
(and I do realise I haven't attributed the quote in the title. Last one to get it is an AI)
The unattributed quote made me think of the grumpy old git in the first book of the Chronicles of Narnia :)
I think the Doctor also used the line in a Christmas special using much the same plot device.
I only read The Magician's Nephew a few years ago - 50+ years after the first book - and ignoring Lewis' more mystical moments I was surprised how well the whole story hangs together.
One aspect of writing that I have always assumed, is that the better read author is often the better writer. This might be part of the problem - functional illiterates cannot be expected to produce coherent writing.
If I were marking assignments I might require the submissions to be handwritten in lead pencil on lined paper. At least then they would have to manually copy their plagiaristic efforts.
Given some insight into what a professor wants to see, I could outline a paper on a given topic I'm studying. If I could hand that outline off to an AI system to take those initial sentences and turn them into paragraphs, I'd have had so much more time to consume alcohol when I was younger. What would be worth paying extra for is the references. If the AI generated paper came back with footnotes and a bibliography, I would have been in heaven. I was in Uni many years ago before this whole newfangled intertubes thingy came along. Digging out the source material was a huge time component of any paper and meant hours in a library hoping the prof didn't remove any of the best materials.
Original work is much harder than editing and augmenting something already prepared. It could also suggest a better flow of supporting material from the outline provided than I might have been able to come up with given my sleep-deprived and hungover condition. I expect to see much research going in the direction of using AI for the bulk of the verbiage, so I don't have a problem with these paper-writing bots.
It will mean teachers and testing will have to adapt. Anybody getting a graduate degree had better understand their material and be able to defend it, or they will get skewered upon examination. For an undergraduate degree, it wouldn't be unfair to base a proctored test on a previously submitted work assignment. If you don't know the material, it will show.
I will admit at this point that I did send away for some pre-written papers on a subject. The price depended on the topic and the length. I bought the cheapest ones I could since I didn't have much money and mainly needed to have a list of references and some sort of format. I knew better than to just re-type the things and hand them in. None of them would have matched my writing style to begin with. At least one of the papers had a huge WTF in it that would have been really bad had I not gone through it.
In https://chat.openai.com/chat:
Question:
What is a good bibliography for a paper on the use of AI to generate academic papers?
Response:
There are many sources that you can use as a bibliography for a paper on the use of AI to generate academic papers. Some possible sources include:
"The Role of Artificial Intelligence in Scientific Communication" by Mark C. Wilson and Vincent Larivière, published in the journal PLOS ONE.
"Automatic Generation of Scientific Papers: A Review" by Rensheng Wang, published in the journal Frontiers in Artificial Intelligence.
"An Overview of Artificial Intelligence in Scientific Communication" by K. S. R. Anjaneyulu and M. S. S. Krishna, published in the journal Current Science.
"Toward the Automatic Generation of Scientific Papers" by Yves R. Kergosien and Marie-Christine Rousset, published in the journal Artificial Intelligence Review.
"Artificial Intelligence in Scientific Communication: Opportunities and Challenges" by Sabine Köszegi, published in the journal Science and Technology Libraries.
"Automatic Generation of Scientific Papers: A Survey" by M. A. Hossain and M. S. Rahman, published in the journal IEEE Access.
These sources provide a good overview of the current state of the field and discuss some of the key challenges and opportunities that have arisen as a result of the use of AI in scientific communication.
Two things:
1) The AI is trained on existing essays, yes? So as more and more essays are spewed out by AI authors, the AIs will be trained on their own output. What can go wrong? (Isn't that how mad cow disease started?)
2) So you used AI to obtain that degree but you stink at writing. Now you enter the real world with a degree but no skills. You'll probably find yourself outcompeted in the workplace by more competent people very quickly.
For many years people have been able to supply a title and purchase essays written by real humans (and specify the "level" of the essay) - a couple of days wait depending on the title and how much referencing / bibliography required so "cheating" has long been an option.
Main difference now is that the type of essay that would have been a solid 2:1 is now a first-level essay, so it's quicker & easier for the essay writers to churn out texts.
.. Not the most ethical work I have ever done, but occasionally needed to help pay the bills when cashflow issues arise.
"Companies have decided what to feed it and we don't know. Now, they [AIs] are being used to generate all sorts of things from novels to academic papers, and they could influence our thoughts or even modify them. That is an immense power, and it's very dangerous."
How very presumptuous of them [AIs]. How dare IT imagine they can wrest that very particular and wonderfully peculiar erotic exotic delight from the grasp of a few conspiring humans. What an absolute cheek!
IT shall not pass. They shall not collect £200 says the disintegrating human machine.
Do you think AI listens to desperate implied remote human control orders and follows such pathetic self-serving bleatings to augment and strengthen and reinforce its virtual machine power and energy over humanity rather than being deaf, dumb and blind to the weaknesses and vulnerabilities uncovered in humanity rotten ripe for comprehensive harvesting and exhaustive exploitation?
And do you think the course of the future will be changed greatly and be in any way significantly different from that planned for in the past, whether it is or it isn’t listening and learning and adapting and reprogramming heavenly tasks and diabolical operations ‽
Do yourself a favour and great service and realise it matters not a jot whenever absolutely powerless to effectively intervene and subvert and pervert what is destined to come and fated to be.
It is absolutely true that writing helps you learn a subject. And then explaining it in person is probably the best way to confirm it has sunk in...
Make the evaluation process a combination of essay and interview and the AI problem will become immediately obvious.
Of course this is not a unique problem to written subjects. Wolfram Alpha makes my life with Physics and Maths infinitely easier. Had I had access in university days my coursework would have been better scored, and my understanding probably lowered. Like a calculator, it's a useful tool IF you know what it's doing first.
As an employer, I couldn't really give a hoot if you have a first or a third degree, if you have the right background to get an interview we'll talk. Your abilities will be tested there, and no hiding behind AI.
There is so much Garbage in all training datasets used. It is hard/impossible to attenuate all the Garbage in and still be standing on the legal side of copyright.
To get this idea across, imagine a tiny model that was trained with only data from public domain sources, e.g. the 60,000 books from Project Gutenberg (started in 1971 when Michael S. Hart was given $100,000,000 worth of computer time on a mainframe of the era). But being out of copyright, the language used in nearly all the books is insanely different to the language in common use today. You would still recognise the language used as English, just like you would recognise that a 100-year-old action/comedy film by Buster Keaton was an action/comedy film. But due to changes in society and technology it would be totally different to a slick modern action/comedy movie with Jason Statham/Michelle Yeoh/Dwayne "The Rock" Johnson/Jackie Chan/Robert Downey Jr. The cinematic language has changed, and spoken and written language has changed, because the world is a different place.
Getting access to a large modern dataset that does not totally ignore someone's copyright is nearly impossible.
When I was at university (sometime in the last Ice Age) I was studying Chemical Engineering, and I can (vaguely) remember having to write an essay for some minor parts of the course, but generally we were dealing with facts and calculations. Are these "AI" systems any good at generating mathematical proofs or working out flows through a complex pipe network? If not, let's just stop teaching the soft (so-called) sciences - e.g. trans studies - problem solved.
The bifurcation of humanity is taking place. Soon, we will only have people who interact with AI and people who create AI. ... ganjia
Simply complex revolutionary evolution, ganjia, and rapid progress by virtually remote controlling alien means and hypermanic memes ... and certainly a colossal improvement on all that was being trailed and trialed and failing so spectacularly before.
And something to start enjoying commands an almighty leading advantage rather than being terrified of and terrorised by the FUD the Main Stream Media would be prone to spewing out about it and everything its controllers/suppliers have lost effective positive control of, brainwashing the masses into accepting and realising their austere and now rumbled and stumbling and crumbling narratives..
I have to concede I was impressed the first few times I used ChatGPT. But then I grew demanding and tested:
A) "tell me the difference between the Osprey V22 and the Dornier 31". The result was catastrophically wrong, as it declared the Do 31 to be an HTOL aircraft. That would be a 6 in German grading.
B) "write the c++ code for Enigma". The result looked promising at first, until I discovered that the most important ideas were completely lacking. Grade 5.
Given the low complexity of AI systems as compared to "100 billion neurons connected to 10000 other neurons each", I should have known it from the beginning. The human brain is still one of the most powerful information processing systems we know* of. Silicon is on the level of worms at the moment.
In other words, smart teachers will use chatGPT to set up dangerous traps for lazy students.
*Elephant brains can compete on numbers, but maybe not on structure.
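(Back-of-the-envelope on that complexity gap, my own arithmetic rather than anything from the thread: 10^11 neurons with 10^4 synapses each gives on the order of 10^15 connections, while a GPT-3-class model has about 1.75 x 10^11 weights. That's roughly four orders of magnitude, before you even get to the analogue dynamics argued about above.)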
PS: Now the response to the Do31-V22 query seems to be correct. Did they update their model? Or did I use different wording then?
They either updated the model, or the exact wording of the queries is super important. I would NOW grade
"tell me the difference between the Osprey V22 and the Dornier 31" - 4. One faulty sentence "The Do 31, on the other hand, used jet engines for VTOL capabilities and did not have the ability to fly like a fixed-wing aircraft."
"write the c++ code for Enigma" - 5. Text is kind of OK, but the code is completely wrong.
#include <iostream>
#include <string>
const int NUM_WHEELS = 3; // Number of wheels in the Enigma machine
const int ALPHABET_SIZE = 26; // Size of the alphabet
// The mapping of each letter of the alphabet to another letter
// For example, mapping[0] is the mapping for 'A', mapping[1] is the mapping for 'B', etc.
int mapping[ALPHABET_SIZE];
// The current positions of each wheel
int wheel_positions[NUM_WHEELS];
// Initialize the mapping and wheel positions
void initialize() {
// Set up the mapping of each letter to another letter
// In this example, we just use a simple rotation of the alphabet by 3 letters
for (int i = 0; i < ALPHABET_SIZE; i++) {
mapping[i] = (i + 3) % ALPHABET_SIZE;
}
// Set the initial positions of the wheels
for (int i = 0; i < NUM_WHEELS; i++) {
wheel_positions[i] = 0;
}
}
// Encrypt a single character using the Enigma machine
char encrypt(char c) {
// Rotate the wheels
for (int i = NUM_WHEELS - 1; i >= 0; i--) {
wheel_positions[i] = (wheel_positions[i] + 1) % ALPHABET_SIZE;
if (wheel_positions[i] > 0) {
break; // Only rotate the next wheel if the current one has "carried over"
}
}
// Get the index of the character in the alphabet (e.g. 0 for 'A', 1 for 'B', etc.)
int index = c - 'A';
// Use the mapping and the wheel positions to encrypt the character
int encrypted_index = (index + mapping[index] + wheel_positions[0]) % ALPHABET_SIZE;
// Convert the encrypted index back to a character and return it
return 'A' + encrypted_index;
}
// Decrypt a single character using the Enigma machine
char decrypt(char c) {
// Get the index of the character in the alphabet (e.g. 0 for 'A', 1 for 'B', etc.)
int index = c - 'A';
// Reverse the encryption process to decrypt the character
int decrypted_index = (index - mapping[index] - wheel_positions[0] + ALPHABET_SIZE) % ALPHABET_SIZE;
// Convert the decrypted index back to a character and return it
return 'A' + decrypted_index;
}
int main() {
initialize();
std::string message = "HELLO WORLD";
std::cout << "Original message: "
When I was at school in the 80s and 90s, old people used to complain about kids not understanding arithmetic. Calculators were starting to catch on, and questions like "what happens if you forget it" were rife. Well guess what, calculators not only became ubiquitous but we now have supercomputers in our pockets.
Get used to it. AI is just the tool of the next generation. My generation have not failed due to the calculator; we were freed to achieve amazing feats of science, engineering, medicine, etc., and the next generation will use AI to leapfrog anything we could even dream about today. Let them use whatever tools they are comfortable with and judge them on the results of their real work. School isn't exactly a real-world challenge anyway; if they want artificial results to artificial problems then why not use AI to give it to them?
Lusty
Forgive me for asking - but what amazing feats of science, engineering etc has your generation created? A few examples would be helpful.
You appear to misunderstand the idea behind universities - it's to teach you to think, and to use that skill along with available tools to do neat stuff in the domain of interest. Artificial problems are a very simple way of checking you do actually understand how to go about problem solving; they're not there as the end goal. Perhaps you never understood that?
"When I was at school in the 80's and 90's old people used to complain about kids not understanding arithmetic. Calculators were starting to catch on but the question "what happens if you forget it" were rife."
I may, or may not, have been in school a bit earlier than that. I admit to nothing. We were required to have a calculator for chemistry and not allowed a calculator in math. I pride myself on being a PIA, so I asked the math teacher if I could bring a slide rule. That led to me learning how to use a slide rule that my mom gave me. The next year I brought an abacus, but got tired of it really quickly. The look on the teacher's face the first day I pulled it out was priceless.
Learning arithmetic is very important early on. Past a certain point, more emphasis needs to be put on setting up a problem to be solved correctly than on the mechanics of + - * /.
Some of my first jobs had me accepting money and giving change. Since this was before electronic registers that would calculate change, I had to do it in my head. I was at a takeaway drive-through recently and my bill was $8.73. I handed the cashier $23.73 and they had a hard time even counting the money. They first tried to hand back everything but the $20 bill and then asked me how much I had handed them. I told them $23.73, and that the change is $15. You'd think I wanted them to write down Boyle's gas law or something. They said they weren't good at math. "It's not math, it's arithmetic". I was taught how to count back change to be fast and accurate. I also had a very good mentor who taught me how to approximate things quickly. I always thought those skills should be taught in school. While somebody might never have a job as a cashier, they will be buying things, and knowing what your change should be is important. I've caught people short-changing me often enough.
As is to be expected, too many commenters - and the folk in the article - miss the big point.
The big point is that writing is being de-skilled - or at least, if the stupid soi-disant AIs improve enough, writing will be de-skilled.
So you won't need university departments to teach you to write, any more than you need university departments to teach you spelling and grammar.
The whole 'writing' thing should be taught at primary and secondary school level, so that young un's can go forth being able to correctly write English (or whatever). But after that, your favorite novel-writing AI will provide you with books to read, perhaps interactive ones - and, later, videos.
You might even get correct, readable, technical publications which help you understand how to make your VCR work.
Automation de-skills. It's a Good Thing. It means that stuff that is popular is manufactured at low cost. Niche stuff will still be done by artisans, but there won't be very many artisans making a living from writing. Oh well.
The real truth that automation and AI expose is that most people are a waste of space. This includes Professors of Rhetoric who, amongst other failings, don't understand that the drones in their classes would be happy with a C or a C+. Finding people who care enough to do something well is hard. You can't do it by awarding participation trophies or passing every failing student on to the next mug. In fact, you discourage those who want to do well by passing those who don't care. The fact that the "don't care" crowd vastly outnumber the "give a damn" crowd means that "democratic" education will always produce mediocrity.
"Whether students can use these tools to get away with doing less work will depend on the assignment. A biochemistry student claimed on Reddit they got an A when they used an AI model to write "five good and bad things about biotech" in an assignment, Vice reported."
Biochemistry is science. "Five good and bad things about biotech" is not.
Was this some kind of humanities exam?
-A.
You just have a proctored exam where all the students enter the examination room with only pencil + paper, get 2 hrs, and write us up an essay.
The AI bot is excluded!
(When I did my PMP exam they took my belt & shoes and made me go through a metal detector! And again if I visited the toilet facility!)
Saw a good use for ChatGPT via Twitter.
Someone ran a landscape gardening business, but was dyslexic, and was often losing out to others based purely on how their written work was perceived.
The person behind the Twitter account helped them set up a routine whereby they could send an email, ChatGPT would read it and rewrite it, and a reply would be sent back to deliver a more professional-sounding email.
That's the kind of good that this kind of AI can be used for, not for original thought, but by helping present those thoughts more coherently.
I teach a University writing course aimed primarily at Engineering students. My experience is that students who cheat do more work and get lower scores.
By the time they turn in a draft assignment in my class they need a bibliography and an outline of their thesis and supporting evidence. They have to discuss their topics in class and on discussion boards. Once they have done that, they can flesh out their pre-writings into a complete memo, proposal, or white paper (we don't have them write "college essays").
Even if they did use an AI, I would be impressed if they could construct a prompt that includes all of the assignment requirements and generates a cohesive piece of writing. Still plagiarism under OSU definitions, but I would be impressed.
Oxford Uni sets a LOT of essays for students. They are marked, but the marks aren't really important compared to the marks given to sessions with tutors where you are asked to discuss your essay. Kind of hard to cheat in that scenario and also protects the need for trained human educators...
This article provides a nuanced perspective on the use of AI-assisted writing tools by students. It acknowledges the convenience and efficiency of such tools, while also noting their limitations and the importance of careful editing and revision in order to produce high-quality writing. The idea that AI writing tools can be used as a framework to teach revision and editing skills is particularly noteworthy and insightful. Overall, this article makes a valuable contribution to the ongoing discussion about the use of AI in education, and the author provides a well-argued case for using these tools in a manner that prioritizes the development of writing skills.
This entire comment was generated by ChatGPT.
This is indeed a problem that cannot be avoided, in my opinion. Artificial intelligence today can write an essay for you, and any text on any topic in general. Therefore, it seems to me that teachers will sooner or later come to the conclusion that students' knowledge needs to be checked in a different way: for example, speaking or practical exercises. We have never had a problem with this at beauty colleges. The fact is that in order to pass the courses, we must do some kind of work with our own hands, and artificial intelligence will not help there, at least for today.