Are you a religious nut?
Then maybe Temple OS is for you http://www.templeos.org/
Thought I would buck the trend and point out some beautiful code I discovered the other day: LinuxCNC. A modular framework that lets you swap software modules for hardware modules (!). You configure the interconnects in XML. It has some complicated modules in C++ (e.g. decent Kalman filtering), and some deceptively simple modules (add two inputs, and other basic arithmetic).

Now here is the kicker: it's got its own realtime scheduler, so you can build complex modules by joining up simple ones if building (a realtime version of) what you want is too much to bite off. Furthermore, it has its own user space, with Python bindings, so UI-type work can be done without the realtime hassles and in an easy language. My god! It is such an awesome design! The only real negative is that it's a bitch to compile, but I felt uplifted by going through its source.
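To give a flavour of the design, here is a minimal Python sketch of the module/scheduler idea (all names are made up for illustration; this is not actual LinuxCNC code or its API):

class Add:
    """A deceptively simple module: out = a + b."""
    def __init__(self):
        self.a = 0.0
        self.b = 0.0
        self.out = 0.0

    def tick(self):
        self.out = self.a + self.b

class Scheduler:
    """Stand-in for the realtime scheduler: propagates wires, then ticks each module."""
    def __init__(self, modules, wires):
        self.modules = modules
        self.wires = wires  # (src_module, src_pin, dst_module, dst_pin)

    def tick(self):
        for m in self.modules:
            for src, sp, dst, dp in self.wires:
                if dst is m:
                    setattr(dst, dp, getattr(src, sp))  # copy inputs before running
            m.tick()

# Build a "complex" module out of simple ones: (1 + 2) + 3
add1, add2 = Add(), Add()
sched = Scheduler([add1, add2], [(add1, "out", add2, "a")])
add1.a, add1.b, add2.b = 1.0, 2.0, 3.0
sched.tick()
print(add2.out)  # 6.0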
Not knowing that Print Screen takes a screenshot (works on Windows too!) is one of many oversights that suggest the author has only just started learning Linux. *The* major omission is grep. Another good tool is locate.
grep -r "regex" .
does a recursive search through all files in the subdirectories below for the regex. Perfect for finding where the debug message you are currently viewing came from. It also has many uses in filtering the output of some other command, e.g. make 2>&1 | grep -i error
"How is this different from writing a client- server PC application in any old language like we did already in late 1980's / early 1990's?"
-Servers could not push to clients easily
-Firewalls broke those applications
-Deployment is from a web interface, not a hard disk
This tech is probably nothing revolutionary, but a load of existing components put into one coherent framework, with a single language serving both ends. I am somewhat interested, but Google App Engine is so cheap that it keeps pulling me back.
Check out the Robot Operating System at www.ros.org . Lots of people have built lots of different SLAM solutions. One major problem is the correspondence problem: how do you know which spatial features are distinct? With active beacons this can be encoded. So have beacons chirping at different frequencies, and then detect those frequencies by differencing an image at the same rates, i.e. add the current image to the accumulator, then subtract it from the accumulator, every X seconds. The accumulator will pick out pixels that vary at the matching frequency, whereas other pixel values will cancel each other out.
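Something like this (a toy numpy sketch of the alternating accumulator, my own illustration rather than any ROS package; the frame rate and blink rate are made up):

import numpy as np

fps = 30           # assumed camera frame rate
beacon_hz = 5      # assumed beacon blink frequency we want to pick out
half_period = fps // (2 * beacon_hz)   # frames per half cycle

rng = np.random.default_rng(0)
acc = np.zeros((64, 64))

for i in range(300):
    frame = rng.normal(0.0, 0.1, (64, 64))   # background noise
    on_phase = (i // half_period) % 2 == 0
    if on_phase:
        frame[32, 32] += 1.0                  # the blinking beacon pixel
    acc += frame if on_phase else -frame      # add, then subtract, at the beacon rate

# Pixels varying at beacon_hz reinforce; everything else cancels out.
print(np.unravel_index(np.abs(acc).argmax(), acc.shape))  # (32, 32)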
Turn a camera into an omnidirectional camera by pointing it at a ball bearing. A bit of image warping and you can now get the *direction* of all distinct beacons. So now you can do "bearing-only SLAM" (probably already in ROS, I expect). A compass is also a useful addition for this kind of localization. Normal digital cameras = IR cameras after the removal of the IR filter.
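The warping step amounts to something like this (a rough sketch; the linear radial model is just a stand-in for a properly calibrated mirror model):

import math

def pixel_to_bearing(px, py, cx, cy, r_max):
    """Map a pixel in the ball-bearing image to a direction.
    (cx, cy) is the mirror centre, r_max its radius in pixels."""
    dx, dy = px - cx, py - cy
    azimuth = math.atan2(dy, dx)                 # direction around the mirror
    r = math.hypot(dx, dy)
    elevation = (1.0 - r / r_max) * math.pi / 2  # crude: the centre looks straight up
    return azimuth, elevation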
Let us know if you need any help through Edinburgh Hacklab. We have a lot of roboticist members (including me, a.k.a. Larkworthy).
Nomenclature clash. I meant persistent data structures, like path copying, e.g. http://hackerboss.com/copy-on-write-101-part-1-what-is-it/ *not* persistent as in databases. Pointer ownership is difficult to work out in non-GC environments. The difference between a persistent algorithm and a non-persistent one can be an order of complexity, so C++ might be stuck with the O(n) implementation while Java can achieve O(log(n)) (with the occasional freeze, though :/ ). Nor can you get rid of that freeze by the normal trick of caching objects, for the same reason you can't implement the algorithm in C++: lack of clear ownership of objects.
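To make "path copying" concrete, a minimal Python sketch (my own example, not from the link): inserting into a persistent search tree copies only the O(log n) nodes on the path down, and old versions keep sharing the rest. A GC tracks the shared nodes' lifetimes for free; without one, you have to work out who owns every shared subtree.

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(node, key):
    """Return a NEW root; the old root still describes the old version."""
    if node is None:
        return Node(key)
    if key < node.key:
        return Node(node.key, insert(node.left, key), node.right)  # copy this path node
    return Node(node.key, node.left, insert(node.right, key))      # copy this path node

v1 = insert(insert(insert(None, 5), 3), 8)
v2 = insert(v1, 4)           # v1 is untouched
print(v2.right is v1.right)  # True: the versions share structure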
My profiler and I build performance systems at the world-class level.
I use Java for high-speed advanced data structures, as you 1. can't build persistent data structures in a non-garbage-collected language, and 2. it can be blindingly fast if you know what you are doing.
I use C++ sparingly for signal processing (images, sound) because of the hacks you can pull off, but I minimize its usage because of the increased development and debugging time. 90% of the runtime cost is in one part of the system, so I write that bit in C++ *once I have identified it*.
I use Python for test harnesses and overall glue, because you can rearrange an application very easily with it and there is no annoying compile step.
I use Matlab for intelligence and visualization.
I glue these all together using a middleware solution (ROS).
My development time is my employer's main cost. Premature optimization occurs at the language-selection level. There is a silver bullet: it's called mixed-language development, but it requires forcing yourself to learn new paradigms all the time (working on better functional stuff at the moment; looks cool).
All I care about is the length of time I have to stand in a queue. Automated checkouts have definitely improved my Tesco experience in that respect. It's totally worth the occasional bagging hiccup, IMHO.
PS: Dropbox is the more accepted "it just works" technology for rambling anecdotes, with the better backstory that they are making a ton of money while Tesco is losing market share.
1. Neuroscience is biological science, and therefore it's more likely to be staffed by a squadron of females. It isn't comp sci or physics.
2. In these kinds of exploratory works, it's likely the participant *is* the scientist. Much less paperwork, and no need to pay the participant.
I expect the post-climax prefrontal excitation to be the investigator analysing her results.
Simulations are used to do the normal engineering maths that is impossible to do by empirical trial and error. Complaining that simulation has been used to design the parts is like saying we shouldn't use equations to build bridges.
Look at some state of the art style passive walkers http://techland.time.com/2011/10/26/watch-a-robot-that-can-walk-without-motors-or-electricity/
Computers are used to make sure the legs are balanced and will operate passively (or in this case use one motor to drive many joints), which was highlighted as an innovation in this work.
I love the new direction music is taking. This sort of music is experimental, and is breaking the tyranny of the octave that has enslaved your ears for too long. The chord Jimi Hendrix plays on Purple Haze was called the "Devil's Chord" in the 18th century, and was to be avoided. The noises you are hearing emanating from dubstep are the next set of taboo audio sensations that will one day be the Mozarts of their time. Perhaps not the Nokia tune itself, but you've got to sift through the dirt before you find gold.
Yes, many of you are all too old to appreciate a new paradigm.
Um, that's why you lost. Skilled poker players make informed decisions based on the probability of hand improvement ("outs"). Online poker playing is about calculating your risk better than the other players, and taking the long-term average rewards. Even face-to-face tournaments are only weak modulations of the mathematics. Poker is a genuinely boring stats game (fold 9 in 10 hands, for example).
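The outs arithmetic is trivial, by the way (a quick Python illustration of the standard calculation; after the flop there are 47 unseen cards):

from math import comb

def hit_chance(outs, unseen=47, draws=2):
    """P(at least one out appears in the next `draws` cards)."""
    return 1 - comb(unseen - outs, draws) / comb(unseen, draws)

print(round(hit_chance(9), 3))  # flush draw, turn + river to come: ~0.35
print(round(hit_chance(8), 3))  # open-ended straight draw: ~0.315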
No, Dave 62, I do robotics. In my field some people use psychological data generated in manipulation tasks to understand the functional block diagram of how humans recompute trajectories in uncertain situations (in order to make our robots manipulate better).
I am defending an area of research that I am not personally involved in, but that I see has genuine scientific value in other areas.
I find people like you, who leap to rubbishing other people's work, generally ignorant. I don't think you have a broad enough range of knowledge in enough areas to make that call rationally.
Just to wade in here: I am not a PHP programmer, but I understand their issues. PHP targets multiple architectures but wants the same precision on all of them, so the 64-bit floating point arithmetic needs to be retargeted for a 32-bit platform, or something like that.
Stuff like sin and e etc. has to be calculated using standard operations on a computer. Taylor expansions provide us with an iterative method for successively improving an approximation to (more or less) any smooth function using operations we have defined. So presumably, while stuffing 64-bit float arithmetic into a 32-bit architecture, they had to do something along the lines of applying a Taylor expansion. The thing is, it's not very efficient.
I wrote a physics engine once in Java and found 79% of my processing time was spent in sin(), because of these iterative processes. I tried replacing it with a lookup table, but this caused aliasing in the simulations, so I had to write my own sin with a controllable error term. http://www.java-gaming.org/index.php/topic,16296.msg130032.html#msg130032 . Bugs are easy: the Taylor loop might not terminate if anything weird happens while trying to meet the desired error in the approximation. Unless, of course, you have defensive checks in every iteration of your Taylor expansion.
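The gist, sketched in Python rather than the original Java (this is not the linked code; the tolerances are illustrative):

import math

def taylor_sin(x, max_err=1e-6, max_terms=50):
    """Taylor-series sine with a controllable error term and a
    defensive iteration cap, so weird inputs can't spin forever."""
    x = math.fmod(x, 2 * math.pi)   # range-reduce first, or convergence is slow
    term, total, n = x, x, 1
    while abs(term) > max_err:      # alternating series: error <= first dropped term
        if n > max_terms:           # the defensive check discussed below
            break
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
        n += 1
    return total

print(taylor_sin(1.0))  # ~0.8414709848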
My main point (finally) is that when you are working at this level, you are already annoyed by how slow the Taylor expansion is. Presumably every floating point operation is having to go through an annoyingly slow step already, so they probably don't want to bog it down even further by adding defensive checks to the Taylor code. To fix that bug properly, they will probably have to slow down floating point arithmetic 10% FOR EVERY FRICKIN FLOAT operation. You thought PHP was slow already :/ If they are lucky, they might be able to characterize the bug at a higher level and say: if the float is above X, then reduce the error term; then it's just one conditional's overhead at the beginning of the operation. But these issues are usually head-scratchers. Portability is the issue. Java has a strictfp keyword which is all about this kind of thing.
How irritating! IBM has taken the time to employ some well-paid geniuses and provide them with very expensive tools to perform quantum hacking, at atomic precision and at nano-scale timings; only to convert the results, at the final step, into a set of units SO incredibly vague and ambiguous that they can hardly be considered quantitative units at all.
PS: for those here believing IBM must have meant the data required to record a whole year, you have been suckered by the PR. They gave you a vague number and you have chosen the largest possible interpretation.
I think this article is overhyped.
Firstly, from the speed-up perspective. From the video: 60x faster than an Intel i7 and, most importantly, only 2x faster than a GPU. So I can replicate the functionality with two GPUs in SLI, then. Hardly replacing a megaton supercomputer.
Secondly, the actual science is a little dubious. One reason the brain does well is that it has estimates of how big everything should be, but this chip is just a convolution farm without any shape priors. So at a stretch it mimics the very first neural layer, but not the visual *system*.
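By "convolution farm" I mean no more than this kind of thing (a toy numpy/scipy illustration, nothing to do with the actual chip): sliding a small filter bank over the image gives local edge responses, with no notion of objects or their expected sizes.

import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(32, 32)
edge_v = np.array([[-1, 0, 1]] * 3)  # crude vertical-edge filter
edge_h = edge_v.T                    # and its horizontal twin

responses = [convolve2d(image, k, mode="same") for k in (edge_v, edge_h)]
print(responses[0].shape)  # (32, 32): one response map per filter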
Excellent use of an FPGA, though. I like the tech, but not some of the claims. To get better 3D from a moving camera, the more accurate way would be to use the separate frames as views from different perspectives (structure from motion, and generalizations thereof). Doing that in real time would be a challenge.