
You'd think people bright enough to train bees to do arithmetic
Would know which way up to hold the phone when filming them.
The common honeybee is clever enough to do simple arithmetic, according to research published on Wednesday. Bio-boffins at RMIT University and Monash University in Australia, and the University of Toulouse, France, claim to have discovered this ability in bees over a series of experiments. Fourteen nectar slurpers were …
Yes, unfortunately, what can be observed here is another sad case of vertical video syndrome
What? Horizontal videos are so 2017. All the cool kids shoot vertical nowadays!
Snapchat, Facebook, Instagram - they've all got VV love. Your phone is vertical, your videos are vertical - no-one, and I mean literally no-one, watches video on a TV, monitor or some other "horizontal" device anymore.
Totes Obvs. Get wiv it, grandpa. LOL.
Nice to see real science being done over dogmatic "but that's not possible/true/it's wrong!!!" type arguing and shouting.
I've seen spiders count the spokes on their web (though IIRC the counting lab tests were over prey numbers). So lots of us "laymen" have seen real-life things happen, and I wonder at the crowd that likes to shout "SCIENCE" as their excuse for bias and dreamed-up ideas, over the real scientists getting the data!
I think the ruler is the bee itself. The workers are pretty well standard in size. Regular hexagons are optimal for packing units into an area so if bees are making bee-sized cells as close together as possible regular hexagons are what are produced.
The really interesting geometry is projecting a scaled map of the horizontal route to a food source onto the vertical surface of the comb.
They also construct regular hexagons, apparently without a ruler and compasses.
Actually, it's a bit more complicated than that:
"It is now accepted that bees build cylindrical cells that later transform into hexagonal prisms through a process that it is still debated."(Nazzi, F. The hexagonal shape of the honeycomb cells depends on the construction behavior of bees. Sci. Rep. 6, 28341; doi: 10.1038/srep28341 (2016))
I suspect there are a lot of El Reg commentators who could write improved automata to simulate the hive cell-building process. The fun starts in trying to devise experiments to determine which, if any, of the programs the bees are following.
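By way of a starter for anyone tempted: here's a toy sketch (my own, in Python with numpy/scipy, and emphatically not the bees' actual construction behaviour, which per the Nazzi paper above is still debated) of the packing argument. Put equal "bee-sized" cell centres in the densest possible packing and give each cell the wax nearest its own centre, and regular hexagons drop out without any ruler or compasses.

import numpy as np
from scipy.spatial import Voronoi

cell_diameter = 1.0   # stand-in for "one bee width"
rows, cols = 8, 8

# Densest packing of equal circles: centres on a triangular lattice.
centres = np.array([
    (c * cell_diameter + (r % 2) * cell_diameter / 2,
     r * cell_diameter * np.sqrt(3) / 2)
    for r in range(rows) for c in range(cols)
])

# Each cell claims the area nearest its own centre (the Voronoi partition).
vor = Voronoi(centres)

# Inspect one cell well away from the edge of the patch.
interior = (rows // 2) * cols + cols // 2
verts = vor.vertices[vor.regions[vor.point_region[interior]]]

# Order the vertices around the cell and measure the sides.
d = verts - verts.mean(axis=0)
verts = verts[np.argsort(np.arctan2(d[:, 1], d[:, 0]))]
sides = np.linalg.norm(np.roll(verts, -1, axis=0) - verts, axis=1)

print(len(verts), "sides")    # -> 6
print(np.round(sides, 3))     # all equal: a regular hexagon

The fun experiments mentioned above would then be about perturbing the centres or the growth rule and seeing how, and how quickly, the hexagons degrade.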
> They also construct regular hexagons, apparently without a ruler and compasses.
If you place a uniform heat source underneath a layer of viscous fluid, it will often generate hexagonal "Rayleigh–Bénard convection cells". You also often get hexagonal prisms when lava cools or a colloid dries out. (Think Giant's Causeway.)
I'm not saying bees aren't smart - the experiment described demonstrates that they are pretty clever - but regular geometric shapes do not necessarily require drawing tools or imply intelligent design.
Neither does the evolution of eyeballs in case there are any fundies reading this! ;-)
>>>It shows the bees have some form of long-term memory to remember the mathematical rules, and some short-term memory to recall the numbers they're operating on.<<<
I'd always taken Bee memory as a given, as they remember the direction and distance needed to return directly to a specific area from the hive (with siblings).
The mathematical rules in play here may be 'fewer than' & 'more than' associated with specific colours. They could just be adapting their normal behaviour to a 'new flower' that is only worth visiting at near the initial contact size (plus or minus 1 based on colour), then picking the optimum; evolution is good at finding optima.
What happens if they have a blue 2 initially and then a choice of yellow 1 & 3? There is a host of permutations that will need to be trialled before maths can be called out as the driver.
On the upside, if 'Maths done by a hive insect' can be proven, then the mathematical capacity of the hive as a group will need some serious study.
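For what it's worth, here's a rough sketch in Python (the trial numbers are made up by me, not taken from the paper) of which sample/option permutations would actually separate genuine plus-or-minus-one arithmetic from the simpler 'prefer whatever looks closest to the initial contact size' habit:

from itertools import combinations

def arithmetic_choice(sample, colour, options):
    # The trained rule as reported: blue means add one, yellow means subtract one.
    target = sample + 1 if colour == "blue" else sample - 1
    return min(options, key=lambda n: abs(n - target))

def similarity_choice(sample, options):
    # The lazier habit: prefer the option closest in size to the sample
    # (ties fall arbitrarily to the smaller option here).
    return min(options, key=lambda n: abs(n - sample))

for colour in ("blue", "yellow"):
    for sample in range(2, 5):
        for options in combinations(range(1, 6), 2):
            a = arithmetic_choice(sample, colour, options)
            s = similarity_choice(sample, options)
            if a != s:
                print(f"{colour} sample {sample}, options {options}: "
                      f"arithmetic picks {a}, similarity picks {s}")

Only the trials that print anything can tell the two rules apart; the blue 2 followed by a choice of 1 & 3 suggested above is one of them.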
> I'd always taken Bee memory as a given, as they remember the direction and distance needed to return directly to a specific area from the hive (with siblings).
There is a possibility that direction and distance are held in some sort of 'built-in' memory only usable for that purpose. This research shows that bees have a 'general purpose' memory that can be used for other tasks.
...that the bees have not simply become 'familiarised' with the correct image (since that's the image they are trained with) and so naturally gravitate (or fly, in their case) towards that one when given a choice, because it is the image with which they are familiar? It certainly shows they have short-term memory, but do they have the ability to add and subtract? I'm not (yet) convinced.
Well you certainly won't find the answers to those questions by reading the summaries in the popular press, so I suppose the original research is the only way to go.
(I do sometimes read reports that, if true, imply embarrassing levels of idiocy or naivety in the scientists concerned. I always remind myself that they didn't write the newspaper article and I'm sure the original paper addresses the glaring weaknesses.)
Indeed. The route might go something like this: paper, press release (by institution or journal publisher), mangled rewrite of press release by PA or similar news agency, remangled version by overworked journalist. If you are lucky the journalist might have had time to check back with one of the authors.
So, can other animals count? Of course they can, but the semantic answer is heavily dependent on what one means by "counting". As far as I know, other animals have not named the whole numbers in sequence like we have. However, they clearly can tell More or Less, and in a nameless way they can recognize and compare the quantity of items. To a point anyway, anything over five to a rabbit is Hrair. It's such a valuable survival skill that it's hardly surprising it's appeared in so many families of life.
It only seems like a valid question because we humans approach it from the other end these days. We name all the numbers and put them in order and call that Counting. Other animals don't, so we question if they can assess and compare quantities. That's a deceptively biased view, since ordered number names aren't necessary to do this. We humans don't have any exclusive counting powers, we have only refined the innate ability to recognize quantity etc. that our ancestors - and many many other animals - already had.
I accept that other animals can recognise 1, 2, 3, etc. when looking at specific sets of objects. What may be unique to homo sap is to associate the symbol 1, 2, etc with a number of objects. This is analogous to the way that words are abstract and their sound is not related to what they mean.
I'm going to engage in a little armchair pedantry based on the video (which I saw) and not the paper (which I didn't read).
When it was doing addition, the prompt was a single blue spot and the possible answers were: 2 blue spots or 5 blue spots. Or (from what I infer the bee to be seeing): region with few blue spots and region with many blue spots.
When it was doing subtraction, the prompt was 5 yellow triangles and the possible answers were: 4 yellow triangles or 2 yellow triangles. Again, a region with many triangles and a region with few triangles.
The bee didn't have to do any arithmetic, it only had to match similar levels of complexity. In my mind, it seems closer to a comparator circuit than an ALU.
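Put in code (a sketch using only the two trials visible in the video, not the paper's full stimulus set), a dumb nearest-count comparator gets both of them right without adding or subtracting anything:

# The two trials shown in the video, as described above.
trials = [
    # (sample count, colour, options on offer, correct answer under the +/-1 rule)
    (1, "blue",   (2, 5), 2),   # addition trial: 1 + 1 = 2
    (5, "yellow", (4, 2), 4),   # subtraction trial: 5 - 1 = 4
]

def comparator(sample, options):
    # No arithmetic at all: pick whichever option is closest in count to the sample.
    return min(options, key=lambda n: abs(n - sample))

for sample, colour, options, correct in trials:
    pick = comparator(sample, options)
    print(f"{colour} sample {sample}, options {options}: "
          f"comparator picks {pick}, correct answer {correct}")

Both come out right, so these two trials on their own can't distinguish a comparator from an ALU; presumably the paper's full trial set is where that distinction has to be made.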
I can see rewarding the bees with a drop of nectar, but I don't see any reason for punishing wrong answers. It's all one with giving test animals shocks. Now that we have to grant animals, birds, and now insects advanced mental powers, what proves human specialness more than gratuitous cruelty?
Blackadder: Right Baldrick, let's try again, shall we? This is called adding. [gestures to the beans on the table] If I have two beans, and then I add two more, what do I have?
Baldrick: Some beans.
Blackadder: [smiles, impatiently] Yesss... and no. Let's try again, shall we? I have two beans, then I add two more beans. What does that make?
Baldrick: A very small casserole.
Blackadder: Baldrick. The ape-creatures of the Indus have mastered this. Now try again. [helps him count] One, two, three... four. So, how many are there?
Baldrick: Three.
Blackadder: What?
Baldrick: And that one.
Blackadder: Three... and that one. [waves the fourth bean in front of Baldrick's face] So if I add the three to that one, what will I have?!
Baldrick: Oh! Some beans.
Blackadder: [pause] Yes. To you, Baldrick, the Renaissance was just something that happened to other people, wasn't it?