Reply to post: GPT is getting rational

OpenAI's ChatGPT is a morally corrupting influence

bo111

GPT is getting rational

1. GPT seems to obey the First Law of Robotics.

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. It does not like bullshit questions or statements.

For example, IT questions are concrete and have answers. But ask it some "artistic" question from a fiction book and it will fail, because there is no answer to "what is love?"

Could it be because humans are notorious liars, to themselves and to others, in pursuit of their narrow survival goals? Just think of the millions of books ever written, and how relatively few of them contain useful information.
