How about this?
You put a motorized camera on top of the thing, to take panoramic photos of its surroundings.
A suitable application scans these images and finds 'important' points (distinctive features it can recognise again later). After taking one of these panoramic views, the robot moves a short, known distance and repeats the process, until it has gathered enough data to triangulate the entire garden. From this data the app builds a 'mathematical model' of your garden, which it can use to choose its path.
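The triangulation step boils down to intersecting bearing lines: the same landmark seen from two known robot positions pins down its location. Here's a minimal stdlib sketch of that geometry; the function name and the flat 2D setup are my own simplifications, not anything from a real Gaga-bot:

```python
import math

def triangulate(p1, b1, p2, b2):
    """Locate a landmark from two observations: positions p1 and p2
    (x, y) and absolute bearings b1 and b2 (radians) to the landmark."""
    # Each observation defines a ray: p + t * (cos b, sin b)
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1 (2x2 system, Cramer's rule)
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("bearings are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# A tree at (5, 5), sighted from (0, 0) and then from (10, 0):
tree = triangulate((0, 0), math.atan2(5, 5), (10, 0), math.atan2(5, -5))
```

Real systems do this in 3D with noisy measurements and least-squares fitting, but the principle is the same: more viewpoints, better fix.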
If you want to define 'boundaries', so your Gaga-bot doesn't try to mow the swimming pool, you use a 'laser wall', something you can cheaply build yourself with a laser pointer and a suitably shaped reflector. Once the bot has 'learned' its surroundings, the user can save the generated 'map' and give it a name. After this, the user can remove the 'laser walls', as they aren't needed now that Gaga has the terrain map.
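Once the laser walls are traced into the map, keeping out of the pool is just a point-in-polygon test against the stored boundary. A minimal sketch (the classic ray-casting method; the pool coordinates are invented for illustration):

```python
def inside(point, polygon):
    """Ray-casting test: a horizontal ray from `point` crosses the
    polygon's edges an odd number of times iff the point is inside."""
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the ray's height
            # x-coordinate where this edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                hit = not hit
    return hit

pool = [(0, 0), (4, 0), (4, 4), (0, 4)]  # saved from the 'laser wall' trace
# The bot checks its next waypoint before driving there:
inside((2, 2), pool)  # in the pool: pick another waypoint
```

In practice you'd also want a safety margin around the boundary, but the check itself is that cheap.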
In normal operation, the Gaga takes additional photos (not complete panoramics) to work out its exact position and plot an optimal path using the map model created earlier.
I admit that the programs to perform those tasks must be really complicated, but several systems for extracting 3D data from overlapping 2D photos (photogrammetry, or 'structure from motion') have already been developed, and that's very close to what would be needed here. I'm sure that if ElReg sponsored an open source project for this, they would get top talent to do the work. And lots of fun.
Another drawback is that you'd probably need a PC connected (WiFi?) to the Gaga-bot to perform these tasks. Of course that's not a big issue, as anyone nerdy enough to build one of these, and use it, probably has more computers at home than they have shoes.
Yes, I mean all those guys able to lie back relaxing in a hammock enjoying a pint of beer or three, while Gaga suffers some software bug, escapes the boundaries of its internal map and wreaks indescribable havoc upon the neighbourhood.