NLP
This “training” data consists of textual descriptions that annotate radiological images. That is, the AI uses an everyday-language (NLP) system, and the same goes for all image recognition technology: I mean all driverless cars, all robots, etc.
GPU near-monopoly Nvidia will be working alongside King's College London to put machine learning algorithms to work on NHS datasets. The two organisations are building an AI platform based on Nvidia's DGX-2 servers, which will be used to train computers to automate the most time-consuming part of radiology interpretation. A …
You must train your data on dictionary definitions; there is no other way: you have to know what each word means (because the AI needs this knowledge) and what part of speech it belongs to.
For example, take this sentence:
- Alice goes, the girl sings and she dances cheerfully.
There are three" basic" patterns in the sentence
-- Alice goes
-- the girl sings
-- she dances cheerfully.
There are three descriptive parts here: "goes", "sings" and "dances cheerfully", each of which belongs at the same time to the name "Alice", the noun "girl" and the pronoun "she". That is, the sentence really contains not three "basic" patterns but nine "constructed" ones (see the sketch below)!
If you cannot see the words and their parts of speech, you cannot compose these "constructed" patterns and you lose 66% of the information.
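A minimal sketch of that combinatorics, assuming (purely for illustration) that coreference has already linked "Alice", "the girl" and "she" to one person:

```python
from itertools import product

# The three "basic" subject-predicate patterns found in the sentence.
basic = [("Alice", "goes"), ("the girl", "sings"), ("she", "dances cheerfully")]

subjects = [s for s, _ in basic]
predicates = [p for _, p in basic]

# If coreference links all three subjects to one person, every predicate
# applies to every subject form: 3 x 3 = 9 "constructed" patterns.
constructed = list(product(subjects, predicates))

print(len(basic))        # 3 basic patterns
print(len(constructed))  # 9 constructed patterns
# Keeping only the 3 basic patterns discards 6 of the 9, i.e. about 66%.
```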
I know this because I encountered this problem in the NIST TREC QA track.
By knowing the meanings of words, the AI can match their structured definitions against the data's definitions and instantly find what you want.
By training your data as whole patterns you perform trillions of absolutely unnecessary extra operations and lose millions of dollars, instead of doing far fewer operations with dictionary definitions for a few pennies; plus you lose up to 99% of the information.
In fact, "training" your data you create a personal profile of those who created the data. That is, you add to what is written and can be read, to the explicit and visible data - its implicit part, that is what was meant when creating the data.
If you train data on other data, you will simply identify the contexts in which the patterns are used, and you won't know what the data's creators really wanted to say. You don't understand that because you do not reveal the implicit part embedded in the patterns, only what is at the surface and can be read. You must still add the implicit part manually... What is the sense of having AI if it's dumb? What is the sense of using OpenAI?
If you AI-train your data with dictionary definitions, you automatically insert into the data its implicit part, re-creating the authors' minds (I call this process Lexical Cloning), and you create long tuples, where in mathematics a tuple is a finite ordered list of elements (patterns). Now the AI can find answers and act accordingly; it can think.
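As a hedged illustration of such tuples (the toy dictionary and the tuple layout below are invented here for illustration, not the patented method):

```python
# Toy dictionary; the entries and the tuple layout are assumptions made
# for illustration only.
toy_dictionary = {
    "girl":  ("noun", "a female child"),
    "sings": ("verb", "produces musical sounds with the voice"),
}

def annotate(words):
    """Replace each known word with a (word, part of speech, definition) tuple."""
    return tuple((w,) + toy_dictionary[w] for w in words if w in toy_dictionary)

print(annotate(["the", "girl", "sings"]))
# (('girl', 'noun', 'a female child'),
#  ('sings', 'verb', 'produces musical sounds with the voice'))
```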
And, please remember! It's billions of times cheaper and produces the real AI.
Waymo’s head of research has warned that Elon Musk’s refusal to add a piece of technology that is relied upon by many self-driving start-ups to save money could leave it open to errors that might endanger people.
Why does Waymo help its competitor? Because Musk finally understood the AI technology Google has used since 2009, the moment Google (Eric Schmidt, Sergey Brin and Larry Page) got their hands on my patent applications.
I give you the real and only AI technology! And Google gives you OpenAI...
Not now... Now, of course, they all steal, cheat and lie. In time there will be companies that do business by guaranteeing our privacy, because the images' textual explanations need annotating, and that is best done on our own devices (which hold everything), guaranteeing us our privacy.
Ethical approval: for research involving people you need to get ethical approval from a research ethics committee (https://www.hra.nhs.uk/about-us/committees-and-services/res-and-recs/). It's more NHS-speak than edu-speak, though. Also see "having ethics for" and "covered by/under ethics".
English bakeries are suffering an epidemic of referring to certain pasties as "bakes"; this talk of "obtaining ethics" seems to spring from the same vein of illiteracy.
Languages evolve continuously, sometimes for better, sometimes for worse.
At the moment it seems mostly for worse.
Does it matter? Yes: English is succinct yet capable of finer nuance than most other languages, partly because it has a comparatively large vocabulary and stock of idiom.
Dumbing language down and out is a precursor to doing the same to associated activity.
Using my barbaric English I was able to start this AI revolution. There are many highly educated fools with excellent Oxford and Harvard English, but brain always beats brawn: where are they, and where am I? So do not worry about the fate of English, Russian and Latin; they are just a means.
I thought this was old news; perhaps I missed something when I skimmed the article.
Some years ago, medical images were being machine-classified, but by a less well-known/respected university.
In the Crosse & Blackwell soup factory, they had a machine to reject the suspect cubes of diced veg.
AI answers questions. That is, given radiological images, the AI responds to questions with human-readable text, just as experts would have done when providing their explanations. To do this, all radiological images are annotated with textual explanations, in which the AI searches, mines answers and modifies them for output.
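A minimal sketch of that search-and-answer step, assuming a toy annotation corpus and simple word overlap in place of real retrieval:

```python
import re

# Toy annotations; in practice each image would carry an expert's report.
annotations = {
    "img_001": "Transverse fracture of the distal radius, no displacement.",
    "img_002": "Clear lung fields, no consolidation or effusion.",
}

def tokens(text):
    """Lowercase word set with punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def answer(question):
    """Return the image whose annotation shares the most words with the question."""
    q = tokens(question)
    return max(annotations.items(), key=lambda kv: len(q & tokens(kv[1])))

print(answer("Is there a fracture of the radius?"))
# ('img_001', 'Transverse fracture of the distal radius, no displacement.')
```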
Yes, the article does not explain how the data is trained, i.e. that annotation texts are used. Indeed, this addition of texts is the only novelty in this image recognition!
I highly doubt that Crosse & Blackwell has been using this AI technology because they don't have any data that needs it.
From earlier experience of what this group is working on, you are feeding in images alongside their medical records. Things like radiological annotations are already very regularised; it's not really necessary to teach the machine to converse with you.
One big elephant in the room is the selection of images: the imaging you have for a particular patient depends on what was requested on the basis of a previous consultation. If you're not careful, you are training it that people with wrist x-rays largely have broken wrists.
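With made-up numbers, the arithmetic of that bias looks like this (the prevalence figures below are assumptions for illustration, not real statistics):

```python
# Made-up numbers illustrating selection bias in the training set.
# Suppose 1% of the general population has a wrist fracture, but wrist
# x-rays are ordered only when an injury is suspected, so 40% of the
# x-rayed (training-set) patients actually have one.
p_population = 0.01
p_training_set = 0.40

# A model that absorbs the training-set prior will expect fractures far
# too often if applied to an unselected population.
print(f"Prior inflated {p_training_set / p_population:.0f}x")  # 40x
```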
Yes," their " medical records are already very ordered, but some - 1-2 per 100 million of them - No. AI structures them all and transforms them into the same format without any error: sets of patterns.
What about the cost of liability? Is it cheaper to buy AI and detect and convert the above-mentioned 1-2 cases than to pay millions for malpractice?
The AI also structures previous consultations in the same format and adds them, along with the records, to the further automatic processing.
And don't forget: you will use AI for many years, because it is becoming the medical standard for everything, until something else comes along. So the same records can be used somewhere else without any extra conversion.