
Less likely to crash into a truck...
Advantage ocean.
As the pace of automation gathers speed – from the Internet of Things to factory floors – there's a lot going on quietly but efficiently in robot boats, in particular, with Portchester-based ASV Global. Founded in 1998 "as an idea", ASV – which stands for Autonomous Surface Vehicles – took off in 2007 after winning a research …
“What they'd normally do is use a large 100-metre ship to do that, which costs anything up to $150,000 a day, doing a very low-value task,” said Cowles. “Essential, but low-value. If you can do two things simultaneously, tracing those transponders and also positioning them at the same time, you can save a lot of money.”
The gain here doesn't seem to have anything to do with having an autonomous boat; it's simply down to having a small boat working at the same time as the big one. You could achieve exactly the same benefits by just having a normal boat with a person in it. The robo-boat may not need a salary, but it costs more in R&D, maintenance, and so on. There are no doubt plenty of areas where autonomous or semi-autonomous boats are great, but this really doesn't seem to be a good example of one.
This again. The bandying about of the term 'AI' like it's either here or due next year is rife on tech sites now. Pattern recognition and pathfinding are not AI, for fuck's sake.
Anyway. These look like they would be very handy for drug smuggling assuming they're small enough to avoid radar detection.
It could be a limited AI, which is not the same as Strong AI, General AI or Artificial Sentience or Consciousness.
Intelligence is generally described as the ability to perceive information, and retain it as knowledge to be applied towards adaptive behaviours within an environment or context. Whilst there are other definitions of intelligence, there is no problem in using 'AI' to describe some existing problem solving machines.
>Pattern recognition and pathfinding is not AI for fuck's sake.
Not individually, but how the boat reacts to these inputs (along with weather forecasts, goals, local sensors etc) could be. The article is a bit scant on details, but it seems ASV's work parallels that of autonomous cars, which are classed as AI.
I'm not sure that anything boat-sized can avoid radar detection when they're actively looking for you. In WWII, even U-boat periscopes and schnorkels could be detected by centimetric airborne radar (the RAF's ASV air-to-surface-vessel sets, later versions derived from the H2S radar). Radar has only gotten better since then.
The only advantage would be that, being an unmanned vessel, the smugglers would not be at risk on the high seas. They would simply be picked up when the vessel reached its destination, having been tracked there by the Coast Guard and the Navy.
Ballistic missile and hunter killer submarines work on the premise that it is difficult to find a boat in the vastness of the oceans, but that is all about to change with this technology because submarine ports aren’t vast.
Once you’ve found a sub, it is cheaper to drop a small roboat on the surface to track it wherever it moves. It doesn’t need any weapons, because you rarely need to actually sink subs, and that bit is easy when you know where they are. When the price drops, you can just drop a ring of roboats around the ports to pick up subs as they leave port.
Expect to see stories about lots of ocean mapping in the South China Sea.
http://www.theregister.co.uk/2015/11/12/us_military_drone_submarine_hunter/
DARPA have gone with one 140 ton boat, instead of a herd of smaller ones. I don't know enough to know why... maybe one big powerful sonar system is better than several smaller ones. Maybe it just has a greater range (drag is proportional to cross section, a square power, whereas fuel capacity is proportional to volume, a cube power). Maybe they thought they'd build and test one prototype before making a few more.
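The square-cube argument about range can be sketched numerically. This is a toy model, not naval architecture: it simply assumes drag scales with cross-section (length squared) and fuel capacity with volume (length cubed), so range scales roughly linearly with hull length.

```python
# Toy square-cube scaling: drag ~ cross-section (L^2),
# fuel capacity ~ hull volume (L^3), so range ~ fuel/drag ~ L.
def relative_range(length_m, ref_length_m=10.0):
    """Range of a hull of given length relative to a reference hull."""
    drag = length_m ** 2               # ~ frontal area
    fuel = length_m ** 3               # ~ hull volume
    ref = ref_length_m ** 3 / ref_length_m ** 2
    return (fuel / drag) / ref

for hull in (10, 20, 40):
    print(f"{hull} m hull: {relative_range(hull):.1f}x the range of a 10 m hull")
```

On this crude model, doubling the hull length doubles the range, which is one plausible reason to prefer a single large vessel over a herd of small ones.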
My assumption would be that a team of several sensor platforms would offer greater performance than the sum of its elements, since having distance between the sensors allows for triangulation of signals. But hey, I'm not a weapon designer, dolphin or acoustic engineer. :)
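The triangulation point can be illustrated with a minimal 2-D sketch: two separated sensors each measure a bearing to a contact, and the intersection of the two bearing lines fixes its position. The coordinates and bearings here are hypothetical, and a flat plane with noiseless bearings is assumed.

```python
import math

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect two bearing lines (bearings in radians, math convention)."""
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 using a 2x2 determinant.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("bearing lines are parallel: no fix")
    t1 = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])

# Two sensors 2 km apart both sight a contact actually at (1000, 1000):
fix = triangulate((0, 0), math.atan2(1000, 1000),
                  (2000, 0), math.atan2(1000, -1000))
print(fix)  # ~ (1000.0, 1000.0)
```

With only one sensor you get a bearing line, not a position, which is the intuition behind a spread-out team of platforms outperforming one big one for localisation.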
"Once you’ve found a sub, it is cheaper to drop a small roboat on the surface to track it wherever it moves. Doesn’t need any weapons because you rarely need to actually sink subs and that bit is easy when you know where they are. When the price drops, you can just drop a ring of roboats around the ports to pick up subs when they leave port."
You may have noticed in the story on the £800m MoD business incubator that BAE and Birmingham University have developed a "Quantum Gravimeter."
Such devices (pioneered by the late Dr Robert L. Forward at Hughes in the late 1960s) can detect the gravity anomaly of a hand held in front of them, based on the difference between the gravity measured in one direction and another.
So I'd guess the mass of a large ICBM submarine even at say a kilometre would still be quite detectable.
>So I'd guess the mass of a large ICBM submarine even at say a kilometre would still be quite detectable.
It would if the submarine hadn't displaced a volume of water of equal mass to itself. Gravimeters are used to detect underground hollows or other areas of low density.
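The buoyancy objection can be made rough-and-ready quantitative. Treating a mass excess Δm as a point source, the gravity anomaly at distance r is about G·Δm/r²; for a neutrally buoyant submarine the displaced seawater has the same mass as the boat, so Δm is close to zero. All figures below are illustrative.

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def anomaly(delta_mass_kg, distance_m):
    """Point-mass gravity anomaly in m/s^2 (crude approximation)."""
    return G * delta_mass_kg / distance_m ** 2

# An ~8,000-tonne hull at 1 km, wrongly ignoring the displaced water:
print(f"{anomaly(8.0e6, 1000):.2e} m/s^2")
# Neutrally buoyant: the displaced seawater has the same mass, so net ~ 0:
print(f"{anomaly(8.0e6 - 8.0e6, 1000):.2e} m/s^2")
```

Even the uncancelled figure is around half a nano-(m/s²), and buoyancy wipes out most of that, which is why gravimeters are better suited to finding low-density voids underground than submarines in water.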
Are you SURE it was unmanned? Perhaps Cap'n Rum was less visible than usual, as a result of having his legs sliced clean off by a falling sail, and swept into the sea before his very eyes...
http://allblackadderscripts.blogspot.co.uk/2012/12/blackadder-ii-episde-3-potato.html
As someone who has pissed about building boats and fucking about with boats, I find the idea of making them autonomous frankly abhorrent. A boat needs human intervention to give it meaning and bring it to life.
You'll be trying to put them in the water next!
...are using an autonomous robot boat to track aircraft where land-based receivers can't.
M.
I'm concerned about Collision Regulations- Power gives way to Sail gives way to Fishing etc., so I assume roboboats would give way to everything.
But what day shape should they display? Some vessels have balls, some have diamonds up on display.
I suggest roboboats should have balls and something that looks like a cock up. Because there will be.
"I'm concerned about Collision Regulations- Power gives way to Sail gives way to Fishing etc., so I assume roboboats would give way to everything."
It's not quite that simple - there are various exceptions under the general heading of "vessel restricted in her ability to manoeuvre", and "In construing and complying with these rules due regard shall be had to all dangers of navigation and collision and to any special circumstances, including the limitations of the vessels involved, which may make a departure from these rules necessary to avoid immediate danger".
In other words, if you are silly enough to sail your dinghy across the bows of a container ship you will probably a) get squashed, and b) be in legal trouble.
A few days ago there was a news article (in Dutch) on a lifesaving device to be used from a beach. They refer to it as a robot, but from the article it appears to be more like a remote-controlled flotation device with some kind of propulsion system, able to reach 35 km/h.
One might hope it has some proximity sensors too, and doesn't rely solely on the operator being able to avoid ramming Emily squarely into the already-distressed swimmer.
Roboticists could learn a thing or two from insects if they're looking to build tiny AI machines capable of moving, planning, and cooperating with one another.
The six-legged creatures form the largest and most diverse group of multicellular organisms on Earth. They have evolved to live in all sorts of environments and exhibit many types of survival behavior; there are insects that fly, crawl, and swim.
Insects are surprisingly intelligent and energy-efficient given their small brains and bodies. These are traits that small, simple robots should have if they are to be useful in the real world, a group of researchers posited in a paper published in Science Robotics on Wednesday.
In brief Numerous people come to believe they're interacting with something sentient when they talk to AI chatbots, according to the CEO of Replika, an app that allows users to design their own virtual companions.
People can customize how their chatbots look, and pay for extra features like certain personality traits, on Replika. Millions have downloaded the app and many chat regularly to their made-up bots. Some even come to think their digital pals are real, sentient entities.
"We're not talking about crazy people or people who are hallucinating or having delusions," the company's founder and CEO, Eugenia Kuyda, told Reuters. "They talk to AI and that's the experience they have."
Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.
Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras and respond by alerting those at risk, notifying police, locking down buildings, and performing other security tasks.
In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.
Qualcomm knows that if it wants developers to build and optimize AI applications across its portfolio of silicon, the Snapdragon giant needs to make the experience simpler and, ideally, better than what its rivals have been cooking up in the software stack department.
That's why on Wednesday the fabless chip designer introduced what it's calling the Qualcomm AI Stack, which aims to, among other things, let developers take AI models they've developed for one device type, let's say smartphones, and easily adapt them for another, like PCs. This stack is only for devices powered by Qualcomm's system-on-chips, be they in laptops, cellphones, car entertainment, or something else.
While Qualcomm is best known for its mobile Arm-based Snapdragon chips that power many Android phones, the chip house is hoping to grow into other markets, such as personal computers, the Internet of Things, and automotive. This expansion means Qualcomm is competing with the likes of Apple, Intel, Nvidia, AMD, and others, on a much larger battlefield.
In brief US hardware startup Cerebras claims to have trained the largest AI model ever on a single device, one powered by its Wafer Scale Engine 2 – the world's largest chip, roughly the size of a dinner plate.
"Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3 and GPT-J) with up to 20 billion parameters on a single CS-2 system," the company claimed this week. "Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes."
The CS-2 packs a whopping 850,000 cores, and has 40GB of on-chip memory capable of reaching 20 PB/sec memory bandwidth. The specs on other types of AI accelerators and GPUs pale in comparison, meaning machine learning engineers have to train huge AI models with billions of parameters across more servers.
Microsoft has pledged to clamp down on access to AI tools designed to predict emotions, gender, and age from images, and will restrict the usage of its facial recognition and generative audio models in Azure.
The Windows giant made the promise on Tuesday while also sharing its so-called Responsible AI Standard, a document [PDF] in which the US corporation vowed to minimize any harm inflicted by its machine-learning software. This pledge included assurances that the biz will assess the impact of its technologies, document models' data and capabilities, and enforce stricter use guidelines.
This is needed because – and let's just check the notes here – there are apparently not enough laws yet regulating machine-learning technology use. Thus, in the absence of this legislation, Microsoft will just have to force itself to do the right thing.
Analysis After re-establishing itself in the datacenter over the past few years, AMD is now hoping to become a big player in the AI compute space with an expanded portfolio of chips that cover everything from the edge to the cloud.
It's quite an ambitious goal, given Nvidia's dominance in the space with its GPUs and the CUDA programming model, plus the increasing competition from Intel and several other companies.
But as executives laid out during AMD's Financial Analyst Day 2022 event last week, the resurgent chip designer believes it has the right silicon and software coming into place to pursue the wider AI space.
In Brief No, AI chatbots are not sentient.
No sooner had the story of a Google engineer – who blew the whistle on what he claimed was a sentient language model – gone viral than multiple publications stepped in to say he was wrong.
The debate over whether the company's LaMDA chatbot is conscious, or has a soul, isn't a very good one, if only because it's too easy to shut down the side that believes it does. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words, and which ones are more likely to appear next to each other.
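The "which words are likely to appear next to each other" idea scales down to a toy example: a bigram model counts adjacent word pairs in a corpus and predicts the most likely next word. The tiny corpus here is purely illustrative and nothing like LaMDA's scale or architecture.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' (follows 'the' twice; 'mat' and 'fish' once each)
```

A model like this plainly has no inner life; large language models do the same kind of next-word statistics at vastly greater scale, which is the sceptics' core point.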
Zscaler is growing the machine-learning capabilities of its zero-trust platform and expanding it into the public cloud and network edge, CEO Jay Chaudhry told devotees at a conference in Las Vegas today.
Along with the AI advancements, Zscaler at its Zenith 2022 show in Sin City also announced greater integration of its technologies with Amazon Web Services, and a security management offering designed to enable infosec teams and developers to better detect risks in cloud-native applications.
In addition, the biz is putting a focus on the Internet of Things (IoT) and operational technology (OT) control systems as it addresses the security side of the network edge. Zscaler, for those not aware, makes products that securely connect devices, networks, and backend systems together, and provides the monitoring, controls, and cloud services an organization might need to manage all that.
The venture capital arm of Samsung has cut a check to help Israeli inference chip designer NeuReality bring its silicon dreams a step closer to reality.
NeuReality announced Monday it has raised an undisclosed amount of funding from Samsung Ventures, adding to the $8 million in seed funding it secured last year to help it get started.
As The Next Platform wrote in 2021, NeuReality is hoping to stand out with an ambitious system-on-chip design that uses what the upstart refers to as a hardware-based "AI hypervisor."
Biting the hand that feeds IT © 1998–2022