The point is it's free and open
Google is giving away a credible implementation of upcoming core tech ahead of the game, in anticipation of it becoming so widespread that it dominates the market. Heard that one before?
Occasionally a technology comes along that changes the way that people work. Docker has had a profound effect on how applications are deployed in the cloud, Hadoop changed how analysis of big data was done and the R language has disrupted the statistics market. And so to TensorFlow, which emerged from the Machine Learning team …
Google needs to hire lots of machine learning experts.
The more there are, the cheaper they are to find and hire.
If they know the exact ML environment they will be using at Google = bonus
It's the same reason Bell Labs made 'C' freely available all those years ago - it's worth more to be able to hire coders who know 'C' than it is to risk your competitors using it.
The most interesting aspect is the "delay evaluation until doomsday and pass a handle around" approach and how it has been implemented. That is the really revolutionary bit for Python (and the stumbling block for most people using TF). Some other languages (e.g. Java) have similar constructs as part of their core libs. Python so far does not.
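For anyone unfamiliar with the pattern, here's a minimal pure-Python sketch of deferred evaluation with handles. This is purely illustrative - the names and structure are made up for the example, not TensorFlow's actual API - but it shows the core idea: operations build a graph and return handles, and nothing is computed until you explicitly run it.

```python
# A minimal sketch of deferred ("lazy") evaluation: operations build a
# graph of nodes and return handles; nothing is computed until run().
# All names here are illustrative, not TensorFlow's real API.

class Node:
    def __init__(self, op, inputs=(), value=None):
        self.op = op          # 'const', 'add' or 'mul'
        self.inputs = inputs  # handles to upstream nodes
        self.value = value    # only set for constants

    # Operators return new handles instead of computing anything
    def __add__(self, other):
        return Node('add', (self, other))

    def __mul__(self, other):
        return Node('mul', (self, other))

def const(v):
    return Node('const', value=v)

def run(node):
    """Walk the graph and evaluate it - the 'doomsday' moment."""
    if node.op == 'const':
        return node.value
    a, b = (run(n) for n in node.inputs)
    return a + b if node.op == 'add' else a * b

# Building the expression computes nothing yet...
y = const(2) * const(3) + const(4)
# ...evaluation happens only here:
print(run(y))  # 10
```

This is also why TF errors can be so confusing for newcomers: the line that builds the graph and the line that actually executes it (and fails) are usually far apart.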
This just asks to be stolen and reused for other purposes.
A system that watches the film for you and then tells you what you would have thought of it had you seen it would be pretty clever.
That really would be a case of the machine doing your thinking (and feeling) for you so you don't have to.
Yey. Progress is amazing.
Better still, it watches the movie, then writes your (haughty) review for you, and tweets your followers to get them to read your review, then argues with them for you. In the meantime you can get on with the fun stuff like programming it.
As phrased, it appears the author attempted to determine the relationship between users and their feelings towards the arbitrary IDs that were assigned to films they like*. "Oh, this user liked 1248964 and 2569964? Then clearly they'll like 15673964. Whatever it may be."
I guess that if IDs were increasing and assigned at time of release you might figure something out about the user's favourite periods. But if they're GUIDs then, ummm...
* as "The dataset consists of rows of data with a user ID, a film ID and the user's rating of the film. [...] although the dataset includes details of the films ... this information is not used at all by the model."
Yeah I think you misunderstood. It's more like
User A likes film 1 and film 2
User B likes film 3 and film 4
User C likes film 1, what other film would you recommend to him?
Very easy with 3 users and 4 films, but vastly harder with millions of users and films.
It's about generalising patterns across lots of users, not about building up a concept of each individual user.
That's what most so-called AI actually is: creating correlations with no actual understanding. Real AI is concerned with understanding.
"Simply making errors more readable and easy to trace would assist programmers just getting started."
There used to be a saying: error handling is 20% of the result and 80% of the work.
Some people put contingency stubs in place during design/development so that they can enhance the error handling once the proof of concept is complete. Other people ignore error handling until it comes back and bites them.
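One common shape for such a contingency stub, sketched in Python (the function and names are illustrative, not from any particular codebase): a single catch-all that logs and re-raises, flagged for refinement once the proof of concept is done.

```python
# Illustrative contingency stub: minimal error handling during the
# proof-of-concept phase, marked for later enhancement.
import logging

logger = logging.getLogger(__name__)

def load_config(path):
    """Proof-of-concept loader with a stub error handler."""
    try:
        with open(path) as f:
            return f.read()
    except Exception:
        # TODO: replace this stub with specific handling
        # (FileNotFoundError vs PermissionError vs parse errors)
        # once the proof of concept is complete.
        logger.exception("failed to load config from %r", path)
        raise
```

The re-raise matters: the stub records what happened without swallowing the failure, so nothing silently "works" until proper handling is written.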
There are a lot of ML frameworks out there for more data-centric and statistically oriented languages like R which don't come with the baggage of being owned by one of the most powerful companies on the planet. It is solid technology, but it is also an attempt to corner the ML consulting/systems market, in much the same way they released the Android OS for phones...
Regardless, I'm glad that people are waking up to the possibilities of Artificial Intelligence now that we have the computing power.