
I can easily buy 46 more Google Cloud instances. Are there even 46 POWER9 servers in existence?
IBM boasts that machine learning is not just quicker on its POWER servers than on TensorFlow in the Google Cloud, it's 46 times quicker. Back in February Google software engineer Andreas Sterbenz wrote about using Google Cloud Machine Learning and TensorFlow on click prediction for large-scale advertising and recommendation …
It’s not about the existence of POWER9 servers.
This is old school IBM beating the competition on paper.
The problem is that these days, potential customers WILL stand up a demo in the cloud and IBM WILL actually have to beat it both on performance and price.
Gone are the days of big sales and expensive consulting to ‘tune’ the IBM beasts for all but the few who need every last ounce of performance and have an almost unlimited budget...
I just poked around a bit myself, and I see an article on another site saying IBM announced POWER9 for their cloud yesterday (at least the article is dated yesterday); the same article mentions Google is using POWER9 as well (not sure if their usage is internal only or if they will make them part of their cloud). Found another result: apparently when IBM announced POWER9 in December 2017, Google was mentioned there as well.
Machine learning, AI and all of that are of no interest to me personally; it's mostly a perpetual hype bandwagon (before that it seemed to be all about "big data").
Who, if they need a baby in a month's time, goes out and gets nine women pregnant?
Not all problems respond to having more resources thrown at them. Those that do rarely scale in a linear fashion. Google used 89 instances to get the performance they achieved with TensorFlow. Even with perfect scaling you'd need another 4005 instances to match the IBM system. Starting to think about the cost yet?
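A quick back-of-the-envelope, assuming (generously) perfect linear scaling; the instance count and the 46x figure are the ones quoted above, the rest is just arithmetic:

```python
# Rough scaling arithmetic: instances needed to close a 46x gap by adding
# more of the same, assuming perfect linear scaling (which you won't get).
baseline_instances = 89   # instances Google reportedly used for the TensorFlow run
claimed_speedup = 46      # IBM's claimed advantage

total_needed = baseline_instances * claimed_speedup   # 4094
extra_needed = total_needed - baseline_instances      # 4005 more on top of the 89
print(f"total: {total_needed}, extra: {extra_needed}")
```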
POWER9 is a new platform. IBM will build based on orders. It seems that even Google have ordered them for their data centres, so its likely that you will be able to use them via the cloud also.
TensorFlow has 93,399 stars on GitHub.
Snap ML has ... wait, Snap ML is not on GitHub. In fact it appears to be complete vapo(u)rware; a Google search for "SnapML Download" results in a response of "Surely you mean snapmail Download dontcha?"
Snap ML -- maybe coming soon to a mainframe near you.
Not all programs are available as source code on GitHub. Many of those that aren't are leaders in their field.
Searching Google works better if you use full names rather than abbreviations ("Snap Machine Learning" in this case). New stuff will return fewer results than active old stuff.
IBM are saying that they have a new, as yet unreleased system for their Power minicomputers that is significantly faster than TensorFlow. It’s up to buyers to decide if they want to pay for the IBM solution, and accept the supplier lock-in that comes with it. In a commercial environment the speed is often worth it.
I've got a new ML library called WeightedCoinToss that takes the training data, works out the odds of a click, and returns an appropriately weighted random value.
It's incredibly quick to train, and even faster in production.
Without some measure of the accuracy of predictions, saying one ML library is faster than another is a bit useless.
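For the avoidance of doubt, here is roughly what that "library" would look like; a hypothetical sketch (the class and its interface are invented to match the description above), plus the accuracy check that makes the point:

```python
import numpy as np

class WeightedCoinToss:
    """The joke predictor described above: training is a single mean(),
    prediction ignores the features entirely."""

    def fit(self, X, y):
        # "Training": remember the empirical click-through rate.
        self.p_click = float(np.mean(y))
        return self

    def predict(self, X):
        # Flip a coin weighted by the click rate, independent of X.
        return (np.random.rand(len(X)) < self.p_click).astype(int)

# Made-up data with a ~4% click rate, the sort of class imbalance that is
# typical of ad-click datasets.
rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 10))
y = (rng.random(100_000) < 0.04).astype(int)

pred = WeightedCoinToss().fit(X, y).predict(X)
print(f"accuracy: {np.mean(pred == y):.3f}")   # ~0.92, purely from the imbalance
```

Blisteringly fast, roughly 92% "accurate", and completely useless, which is why click-prediction results are normally reported as log loss or AUC rather than raw accuracy, and why a speed number on its own tells you very little.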
Hi
The detailed paper on arXiv has info on accuracy. We did not compromise on accuracy for this work.
https://arxiv.org/abs/1803.06333
This blog has more details too:
https://medium.com/@sumitg_16893/ibm-research-cracks-code-on-accelerating-key-machine-learning-algorithms-647b5031b420
Sumit
IBM
Perhaps another view: the interesting thing here is that IBM has accelerated three functions that are somewhere near the top of what data scientists do every day. According to the last Kaggle survey, around 65 percent of folks use linear regression. Now that's substantially faster, with much lower impact on a data center.
Volta GPUs are super fast, and the way IBM builds these systems gives you the ability to move data on and off of them at very high speed. The servers also give you coherent memory support across the GPUs and system memory; I don't know how much that helped here (and I should probably go find the paper on arxiv.org to see if they show their hand), but it seems like that solves a gorpy computing problem.
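If you want to see what that data-movement point looks like on your own kit, a rough probe like the one below works; it assumes PyTorch and a CUDA GPU (nothing specific to the systems being discussed), and on an NVLink-attached host the number should come out much higher than over plain PCIe:

```python
import time
import torch

# Rough host-to-GPU copy bandwidth probe -- a sketch, not a benchmark.
# Assumes PyTorch and a CUDA device are available.
assert torch.cuda.is_available(), "needs a CUDA device"

x = torch.randn(64 * 1024 * 1024)    # ~256 MB of float32 on the host
x = x.pin_memory()                   # pinned memory so the copy isn't staged

torch.cuda.synchronize()
start = time.perf_counter()
y = x.to("cuda", non_blocking=True)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

gigabytes = x.numel() * x.element_size() / 1e9
print(f"host -> device: {gigabytes / elapsed:.1f} GB/s")
```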
Nice to have options, even if you choose not to buy one or, as in this case, four.