No need to go to Uni....
...as we've dumbed down the meaning of AI to such a level that a ten-year-old can do it.
Artificial intelligence and its sub-domains look set to be the next major growth area for software developers, programmers, hackers and just about anyone who has anything to do with software. There doesn't appear to be an area of life that it doesn't touch – self-driving cars, tagging porn stars on Pornhub, healthcare, …
Many developers are quite happy to run a linear regression on a data set without understanding the actual mathematics behind it. A similar approach can be applied to many of the machine learning algorithms available in default libraries like Scikit-learn. Unfortunately, there is still a lot of machine learning snobbery out there from those who hold a PhD in machine learning. They are preventing the wider adoption of machine learning by criticising the simpler but still valuable implementations. Yes, you need a PhD in machine learning if you want to build a Google scheduler that can make appointments for you, but you don't need one if you are trying to understand customer churn.
Many developers are quite happy to run a linear regression on a data set without understanding the actual mathematics behind it.
Fine with me, and fine with the most basic models (decision trees are very easy to explain and understand). But if all they are using is linear regression models, why not learn the very simple principles behind them? And please don't try to pass it off as AI or state-of-the-art.
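For what it's worth, the "very simple principles" fit in a few lines. A minimal sketch (toy data invented here for illustration): ordinary least squares is just solving the normal equation for the coefficients, which is all a library's `fit()` does under the hood for this model.

```python
import numpy as np

# Toy data generated from y = 2*x + 1 (no noise, so the fit is exact)
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([3.0, 5.0, 7.0, 9.0])

# Prepend a column of ones so the first coefficient is the intercept,
# then solve the least-squares problem directly -- this IS linear regression
Xb = np.hstack([np.ones((X.shape[0], 1)), X])
beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)

print(beta)  # [intercept, slope] -- here exactly [1.0, 2.0]
```

Comparing `beta` against the output of Scikit-learn's `LinearRegression` on the same data is a five-minute exercise that demystifies the whole thing.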
machine learning snobbery out there by those who hold a PhD in machine learning...are preventing the wider adoption of machine learning by criticising the simpler but still valuable implementations.
No they are not.
Do you really think that the Machine Learning PhD League of Snobbery can prevent the wider adoption of machine learning through (gasp) criticism? Did you ever see Captain Regression berate any user of TensorFlow? Or Nonlinear Model Woman use her Lasso of Calculus to show that a programmer didn't choose the right activation functions for a neural network?
What I often see is people who can't be arsed to learn even the most basic models (start with those, they are FUN), who just want to slap on a library or an API and call the results "intelligent". Then they blame academics for making things soooo hard.
As someone who has a PhD in ML and works with lots of startups, I can tell you that what I see right now is somewhere between funny and scary. People seem to think that there is an ML black box that they can add to their product and suddenly all problems are solved.
This is often far from the truth. Mathematics is used in ML and related fields for a reason: it is a (somewhat) formal language for expressing and discussing ideas, free from semantic misinterpretation.
Of course anyone with half a brain can understand the basic idea of a decision tree or the formula given in the article. But knowing when to apply which algorithm is already a question that is not easy to answer. For example, if the events A and B are independent, then using conditional probabilities as in the formula given in the article is useless. Pretty much every single algorithm I know has certain conditions attached to it which make it work well, or maybe not at all. Those conditions are again mathematical concepts, like independence of events. Not knowing about those conditions means that the quality of your algorithm choice basically comes down to luck, or to whatever is on your calendar next to "solar storm" as your excuse of the day.
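The independence point above is easy to check exactly. A small sketch (the coin-flip sample space is invented here purely for illustration): conditioning on an independent event buys you nothing, because P(A|B) = P(A∩B)/P(B) collapses to P(A).

```python
from fractions import Fraction

# Two fair coin flips; A = "first flip is heads", B = "second flip is heads".
# The flips are independent, so conditioning on B tells you nothing about A.
outcomes = [(a, b) for a in "HT" for b in "HT"]  # four equally likely outcomes

def prob(event):
    """Exact probability of an event over the uniform sample space."""
    hits = [o for o in outcomes if event(o)]
    return Fraction(len(hits), len(outcomes))

p_a = prob(lambda o: o[0] == "H")                    # P(A)   = 1/2
p_b = prob(lambda o: o[1] == "H")                    # P(B)   = 1/2
p_ab = prob(lambda o: o[0] == "H" and o[1] == "H")   # P(A∩B) = 1/4

p_a_given_b = p_ab / p_b  # definition of conditional probability
print(p_a_given_b == p_a)  # True: the condition was useless
```

Swap in two dependent events (say, A = "first flip heads", B = "both flips match") and the equality breaks, which is exactly when conditioning starts earning its keep.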
Anyway, all I really want to say is that even though the basic ideas of a certain algorithm might be simple, that does not mean that using those algorithms effectively is simple, in the same sense that a knife is simple to use, but carving a beautiful statue with it is not. And hence courses that aim to produce good AI/ML developers will need to have a certain amount of complexity.
Thank you for avoiding the use of "AI" in your post - especially since you declare being in the field.
I saw the words "although you may find the module is buried inside an analytics or data science programme" in the article and immediately wanted to go on a rant saying obviously, because that's what ML is and THERE IS NO A.I. TODAY.
I did the Google beginner's course in ML and all there was was statistics and how to apply them. If you don't grep mathematics, you're up the proverbial creek without a paddle, and I'm not good at maths.
I guess most of the professors and researchers ("the noble calling") are not really concerned about what the industry actually needs.
There are plenty of blogs, videos, tutorials and relentlessly advertised conferences that seem to happen all the time (cough cough) that will teach "just what the industry needs", with varying levels of abstraction/detail/dumbing-down/uselessness caused by choosing a single language or platform whose API already changed two months ago. Pick one, or several. And let academia do what academia ought to be doing: expanding the frontiers.
is it what industry actually needs?
No, no, and thrice no.
As somebody with a technical background, now working in a senior role at a multi-million-customer business famed for its IT ineptitude, I can tell you what we need.
I want a product that works on-prem with my data (to avoid other people's GDPR or commercial vulnerabilities). On-site SaaS or fuck off, please.
I want clear, measurable results on my KPIs - like reducing churn, innovating new products, reducing customer service costs while increasing customer satisfaction. Any amount of insight is worthless unless it affects my company's commercials.
I want payment on results, or LDs for my wasted time and the vendor's dishonest promises. Do it, prove it, and I'll pay you - otherwise get on your bike.
And I don't want my senior management bogged down in either SLA discussions, or micro-definitions of the problem or the algorithm. All that stuffs YOUR problem, AI vendors.
Deliver me good results and I'll pay. Deliver me the sort of recommendations that I could guess at, or the sort of "you bought a shoe tree, would you be interested in these other shoe trees?", and I'll conclude that AI is a dead end designed to squeeze the gullible.
"Disclaimer: I don't own a dishwasher because I think it's lazy and they are harmful to the planet."
I agree with the lazy bit, but I have better things to do.
In my experience, collecting data about the physical world is rather more difficult than collecting data on the flows of money, or trending my film-watching preferences. To use an example: how does one collect data about the physical condition of your server racks? Are cables routed tidily? Is the structure suffering from rust? Or, to use a more pertinent example, how do you quantify the condition of a hundred-year-old bridge? What about the inaccessible metal reinforcement inside the concrete structures found virtually everywhere? Unlike Roman concrete made with seashells and volcanic ash, the modern rubbish does not improve with age!
Such things tend to be expressed in terms of probabilities of reaching a certain age. The objective, historically, was to undertake pre-emptive replacement. The cost challenge comes, quite rightly, from "if it's been OK for 50 years, won't it be OK for 10 more?" But how do you prove it? Other than by taking a punt on it and praying you'll be retired before it comes home to roost?
Magic AI boxes and analytics teams are employed to generate outputs in multiple industries. The outputs of those processes have to be calibrated. Lo and behold: calibration exercises are usually run against those original probabilistic models, especially when there is an absence of evidence!
From an engineering standpoint, I find the whole business of AI taking over decision-making functions a depressing distraction from the shortage of people who understand form and function. If your model does not know about a failure mode, it cannot predict or react to it. Even more depressing are those oblivious to the fact that our aged infrastructure, no matter how well made or maintained, has a finite life. Attempts to fiddle with life extensions here and there still ultimately lead to asset replacement, and in the meantime you continue to accumulate additional risk and maintenance cost in trying to eke every last bit out of the original.
Simply put, it would be cheaper for all concerned to simply get on with it and put the replacement up!
... where an algorithm is developed, tested and released as a product.
What decade (century? millennium?) was this posted from? How are Wham! doing these days?
"Normal Programming" as she is practiced today rarely involves algorithms, and pretty much ignores testing other than "user presses button A and, usually (sometimes?), B happens". _Very_ little "user presses somewhere near the margin of button A, or too early, or too late: what happens?", and almost never "user is attempting to Get Shit Done on some machine which differs in some minuscule detail (OS version, or release w.x.y of one of the hundred dynamic libraries rather than w.x.z) from the exact setup used by the developer on the day the 15 minutes of testing were done, if done at all".
"Normal Programming" involves picking a couple libraries with snappy names (possibly only considered snappy by the PHB) and asking "Will it blend?"
" The university that I work for in 1996 dropped Computer Science in favour of Applied Computing, realising that industry doesn't need someone who can build a linked list library from scratch but rather knows how and when to use an existing library. "
But this brings up the problem of how to develop the mental schemata to process what you're doing. If all you ever do is work with libraries, you miss out on several levels of abstraction and don't fully understand what you're doing.
The other side of the coin is that if you only ever deal with fundamentals like manually programming lists, you're missing out on several levels of abstraction and can only do real-world tasks within a very narrow domain.
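To make the "fundamentals" side of that coin concrete, the linked-list exercise the quoted university dropped is small enough to sketch (purely illustrative; in production Python you would reach for the built-in list or collections.deque, which is exactly the point about abstraction levels):

```python
class Node:
    """One cell of a singly linked list: a value plus a pointer to the next cell."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, value):
        # O(1): the new node simply points at the old head
        self.head = Node(value, self.head)

    def to_list(self):
        # Walk the chain of pointers -- the mechanism a library list hides
        out, node = [], self.head
        while node:
            out.append(node.value)
            node = node.next
        return out

ll = LinkedList()
for v in (3, 2, 1):
    ll.push_front(v)
print(ll.to_list())  # [1, 2, 3]
```

Writing this once teaches you why front-insertion is cheap and indexing is not; using the library afterwards teaches you why you rarely need to write it again. The middle ground is knowing both.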
The problem we have is that most courses fall into one of two extreme camps, and few people are discussing the middle-ground. But if you look at things like Stride in BlueJ, we're slowly starting to approach it.
I am not quite sure that following the advice that we don't need to understand the maths behind ML techniques will take us far in this field. Going to university is not the only way to dig deep, but restricting our understanding to the point where we can explain it to the business is a very shallow approach. I believe the person building the models should know them inside and out; otherwise humans, who created ML, will themselves become ML-dependent. Even university courses are not immune to this. The agenda of today's market is to introduce you to the fancy terms and then leave you in the sea to drown. For instance, there was a stats course at a university where a professor taught that the p-value should be less than 0.05 without ever telling students what a p-value is. The need of the hour is not to rush but to give yourself time to understand the small things: vectors, matrices, probability, statistics. In the end, you may be slow, but you will be confident in your decisions.
This is precisely why AI is going to cause a world of hurt. Everyone will be jumping in and basing high-risk decisions on AI output that simulates passable answers to the wrong questions based on wrong assumptions, simply because it is easy to do. And no doubt cascading these passable answers as input to other AI stages. We will move to an era that massively confuses delegation of risk with reduction of risk, and that will, when it fails, make the last financial crisis look like missing pocket change.
Biting the hand that feeds IT © 1998–2020