
How embarrassing for Elsevier
Proof that nobody actually read the final document before submitting it, and then nobody read it before publishing it? And eight doctors put their names to this; I'll bet they feel proud.
Sheesh.
Academics focused on artificial intelligence have taken to using generative AI to help them review the machine learning work of peers. A group of researchers from Stanford University, NEC Labs America, and UC Santa Barbara recently analyzed the peer reviews of papers submitted to leading AI conferences, including ICLR 2024, …
AI says: Commendable, innovative, and comprehensive ... and would have been more so had there been even more AI input.
Putting on my QA hat, next testing steps (rough sketch after the list):
1. Create some obviously bogus (to human eyes) paper. Typical AI output would be perfect for this, especially if AI creates the footnotes.
2. Review paper with AI "assistance" (e.g. how it's really done, AI review with human polishing). N.B. Technically this qualifies as a peer review. Expect: ( commendable + innovative + comprehensive )
3. Review review with AI "assistance". Expect: ( commendable + innovative + comprehensive )^2
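Since someone will ask how you'd actually wire that up: a toy sketch of steps 1–3, nothing more. `llm()` is a dummy stand-in for whatever chat-completion call you'd really use, and the prompts and praise-word tally are made up for illustration.

```python
# Toy sketch of the three-step test above; not a real experiment.
PRAISE = ("commendable", "innovative", "comprehensive")

def llm(prompt: str) -> str:
    # Canned reply so the sketch runs offline; swap in a real model here.
    return "A commendable, innovative and comprehensive piece of work."

def run_experiment() -> None:
    paper = llm("Write a plausible-looking but bogus ML paper, footnotes included.")  # step 1
    review = llm("Peer review this paper:\n" + paper)                                  # step 2
    meta_review = llm("Review this peer review:\n" + review)                           # step 3
    for label, text in (("review", review), ("review of review", meta_review)):
        hits = sum(text.lower().count(word) for word in PRAISE)
        print(label, "stock-praise hits:", hits)  # expect the tally to compound

if __name__ == "__main__":
    run_experiment()
```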
Teaching the AI about itself seems like a good way to destroy the world. Why is it that people with book smarts have zero common sense? Do you not ever "say it out loud"? Do you just not care about anything anymore?
Can we sue the ever loving crap out of AI for grabbing our identities, information, and work yet?
"Why is it that people with book smarts have zero common sense?"
Ah yes, those who have rolled a natural 20 on intelligence but a natural 1 on wisdom. Intelligence and Wisdom (common sense) are separate things. The other problem is the difference between specialists and experts: a specialist knows everything about their subject, experts know nothing else.
"The difficulty of distinguishing between human- and machine-written text"
In my experience AI stuff is usually easy to spot whether it's news padding or comments on X or reports or whatever.
The wording and complexity and patterns are just unnatural, I'd guess because it's the slightly bland average of a lot of sources. A bit like PR bumpf but more soulless.
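For what it's worth, that tell is roughly countable: certain adjectives ("commendable", "meticulous", "innovative" and friends) reportedly turn up far more often in LLM-written reviews than in human ones. A throwaway sketch of that idea, with a made-up word list and threshold, not the researchers' actual method:

```python
import re
from collections import Counter

# Adjectives of the sort reported as over-represented in LLM-written reviews.
# The list and the 2% threshold below are illustrative guesses, not the study's numbers.
TELLTALES = {"commendable", "meticulous", "innovative", "comprehensive", "intricate", "notable"}

def telltale_density(text: str) -> float:
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(counts[w] for w in TELLTALES) / len(words)

review = "This commendable and comprehensive study proposes an innovative, meticulous framework."
print(f"telltale density: {telltale_density(review):.2%}")
print("smells like a bot" if telltale_density(review) > 0.02 else "probably human")
```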
What do you imagine AI* and LLLLMs** [in any and/or all of its extremely effective transformative phorms] have concluded and decided upon regarding native species input about the future paths to be followed and destined to be the fate of a dysfunctional self-harming humanity and perverse destructive artificial economic system? Do you think it likely there is any chance that they might deem it worthy of consideration for future inclusion?
Be honest now in truthful reply to that second hanging question for the only poor sad and almightily lost soul you be fooling with a lie would be yourself and all those surprisingly similar and likely to be very little different from yourself.
* ....... Advanced/Augmented/Artificial/Autonomous/Anonymous/Attractive/Artilectual/Alien IntelAIgents
** ..... Large Learned Language Learning Machines
...... regarding Destinations IntelAIgently Designed for Failed Systems Administrations are of Alien Manufacture, they are coming ….. and there is nothing you can do to stop them ….. Capiche?
Alienating fact or dehumanising fiction ….. entertaining and presenting engagement and quantum entanglement with the 3 body problem/father, son, holy ghost trinity iterations?
Do yourself a favour, save yourself a fortune and a life time of battles against that which is progress and impossible to hinder and defeat, and don’t bet against it being easily possible and both a current physically humanised and augmentable ethereal virtualised reality ....... exalting existence.