Other Nano form factors would be nice
Some of us would like a Home Server. A Nano motherboard in the MiniITX size, 1-2GB RAM, 2 laptop disks, 2*GbE. No fancy graphics. And quiet.
Comparing 128b to 32b isn't correct. IPv4 address bits were so miserly that they could only do the most essential thing -- addressing. Even the allocation for multicast was controversial. IPv6 is large enough to trade some addressing bits for other features -- most notably the 64b used for EUI-64 autoconfiguration (ie, like AppleTalk, IPX, etc have had since their creation).
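For those unfamiliar with it, the EUI-64 autoconfiguration mentioned above derives the host half of the address mechanically from the interface's MAC. A minimal sketch of the Modified EUI-64 format (per RFC 4291, Appendix A; the example MAC is invented):

```python
# Modified EUI-64 interface-ID generation: the 48-bit MAC is split in half,
# ff:fe is inserted in the middle, and the universal/local bit (0x02 in the
# first octet) is flipped.
def mac_to_eui64(mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                        # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    groups = [f"{eui[i] << 8 | eui[i + 1]:04x}" for i in range(0, 8, 2)]
    return ":".join(groups)

print(mac_to_eui64("00:11:22:33:44:55"))     # 0211:22ff:fe33:4455
```

The resulting 64 bits form the low half of the autoconfigured IPv6 address, which is why a /64 is the smallest practical subnet.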
My estimate is that there is enough routable IPv6 address space for one billion enterprise style allocations. One consequence of this is that the current practice of giving individuals large allocations isn't sustainable -- the average DSL customer will need to get a /64 rather than a /48. But an IPv6 subnet's worth of globally-visible addressing is a lot better than the single IPv4 address they get now (and that single IPv4 address won't be globally-visible in 2012 whereas the IPv6 addresses will be).
IPv6 is clearly big enough for the job. But comparing 128 to 32 ignores the use of addressing bits for non-addressing purposes and leads to conclusions which encourage an imprudent wastefulness. Without such wastefulness IPv6 could run for 100 years, with it IPv6 is good for only 10 years.
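The raw block counts behind such estimates are easy to compute (this ignores the routing-table and allocation-policy overheads alluded to above, which is why prudent estimates come out far smaller). Global unicast space is 2000::/3:

```python
# Number of prefix-length-n blocks available within the 2000::/3 global
# unicast space: 2**(n - 3). A back-of-envelope check, not an RIR policy.
def blocks(prefix_len: int) -> int:
    return 2 ** (prefix_len - 3)

print(f"/48s available: {blocks(48):,}")     # tens of trillions
print(f"/64s available: {blocks(64):,}")     # billions of billions
```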
Glen [a senior network engineer at an ISP with an actual IPv6 deployment]
There was only need for one secretary because she didn't spend her day sending letters for non-attendance. She knew who wasn't attending, and when it got sufficiently bad she'd ring the parents and make an appointment for them to visit the headmaster.
Someone, somewhere, decided it would be efficient to replace this person with a machine. Later on, someone else decided to recruit a person to deal with all the computer-work, but they never had the breadth of responsibility of the school secretary. And last of all, someone decided to send letters home (lovely traceable, blame shifting paperwork) rather than dealing directly with the problem (hard, messy dealing with people and their lives).
Google's IPv6 presence in DNS depends what IP address you make the DNS query from. This allowed Google to roll out IPv6 on its services in an ordered way and gave Google a way to withdraw the IPv6 visibility of services if that caused trouble at a particular ISP. As the Google techs noted at linux.conf.au's Sysadmin Miniconf, the whole process has gone smoother than expected and rollbacks haven't been necessary.
As for the rest of the comments, there's an assumption that things can continue as they are. That's not the case -- ISPs will need to roll out their own NAT infrastructure. Those of you who use 3G data modems have already experienced how well that works (not). The implications for households are: you won't be able to run your own server anymore (you have no globally-visible IP address which you can NAT it to), your ISP will decide what protocols you can use (eg, a lot of enterprise VPNs will fail) and latency will increase. Those effects could lead to some anti-competitive outcomes (eg, an ISP with large telco revenues might not offer NATing of SIP and H.323, forcing its customers to its own IP Phone offerings).
In the short term ISPs will buy underutilised IPv4 addresses. Unfortunately a lot of the older IPv4 address space has archaic address allocation methods, and thus won't come onto the market quickly. We're probably looking at a doubling of the size of the IPv4 routing table, as the table tracks smaller and smaller allocations. And doubtless people currently selling famous bridges will start selling /16s.
I expect the increase in latency will lead to gamers demanding IPv6 so that their packets take the shortest route, rather than trip through a NAT box.
Large ISPs will need to roll out IPv6 for their core in any case -- they have more customers than can be addressed by 10.0.0.0/8. The real obstacle in continuing that rollout to customers is the lack of IPv6 broadband routers. The IETF has failed to specify such a device (specifying the device would also require the IETF to specify NAT for IPv4, something the IETF has had minor religious wars about, so nothing ever progressed). Without solid specs there is a standoff between manufacturers and ISPs, neither being willing to commit to building or buying a non-standard (and thus soon-to-be-superseded) device. There are specifications being developed outside of the IETF, but it's all getting a bit too late.
The F-22 cancellation has really upset Australia and Japan, the two countries likely to do combat against the latest Russian-built planes. The thought of losing air superiority to some Chinese proxy nation or to a mid-rank Asian power during some Pacific peace keeping operation isn't pretty.
It would have been much more sensible for the US to offer F-22 sales to Japan and Australia, keeping those production lines running, and offering the US an alternative should the F-35 be a dud.
Interestingly, serious deployment of Linux on desktops means that you may end up running two distros. If you take a best-of-breed strategy then you want Ubuntu at the desktop -- a no-brainer, since Fedora isn't supported. At the back end you could run Ubuntu Server if you wanted, but the depth of the Red Hat support organisation is just so much greater than that of Canonical.
SuSE are an option: although not the leader in either field, at least they have supported product offerings in both. Normally a single vendor would be better -- less buck-passing and so on. But when that vendor is Novell...
I've seen some pretty successful replacements of "green screen" systems by Linux. The trick seems to be concentrating on getting the sysadmin cost per unit right down by automating system administration (using Puppet and package managers) and using single sign-on (Kerberos) backed by a directory (LDAP). Those old green screens had a few VTAM geeks running the show, and it's important not to lose the savings of cheaper hardware and networking by increasing head count.
This is why enterprise switches have DHCP enforcement features.
But even a small network can block DNS traffic to and from all hosts but their DNS forwarders on each subnet interface. That will unmask most rogue DNS forwarders in short order (they tend to try recursive DNS direct to the network rather than use the forwarder given by DHCP; presumably this will change as attacks become more sophisticated).
A small network can also have a router ACL on each subnet which logs packets on the DHCP port from non-approved DHCP servers. Since the first phase of DHCP relies upon a broadcast the router will see some of the rogue's traffic. This won't stop the rogue DHCP server, but it will make the network administrators immediately aware of the rogue server's presence and MAC address.
It looks like Microsoft are once again looking to "monetise" the MS-DOS FAT filesystem. Since it is used in cameras, GPSs, phones and god only knows what else it's a potential source of large revenue. Last time they tried this they got shot down by PubPat finding all sorts of prior art. This time they're trying to exploit a peculiarly-implemented feature of the filesystem -- long file names.
Some European wrote: "...and there has been a clear cooling trend since May 2007."
Conversely, if we were to make climate predictions from this week's weather in Adelaide (min temp yesterday was 33C, max was 45C) then the earth will boil in about a year's time.
It was hot enough yesterday that the garden outside of my office spontaneously combusted ("is that smoke, oh there must be a bushfire, hang on it looks rather close, actually it looks very close, oh shit...").
It's so hot this week that TV retailers are playing "Life in the Freezer" as subliminal marketing. Train and tram rails have buckled. Not that the cars are drivable if you've forgotten to leave a towel covering the steering wheel. As for bicycles, I left home this morning with a frozen water bottle and 15 km later it was hot enough to burn my mouth.
To answer a fair question "what have Red Hat done"? The answer is they have taken a student's project which built a Unix-like operating system, and further developed it into a robust, feature-rich operating system which has eclipsed all other Unix-like OSs in features, ease of use, brand recognition, and popularity on servers. Red Hat make money by selling formal support for a branded Linux distribution based upon this work.
About half of the work flowing into Linux comes from Red Hat, and this has been so from the beginning of Red Hat. Jon Corbet of Linux Weekly News regularly publishes articles about what came from whom in the most recent release of the Linux kernel.
If you are looking for technical triumphs, then I'd nominate SELinux (which Red Hat did not invent, but for which Red Hat paid the most expensive part -- the writing of the security rules) and which Red Hat had the balls to deploy.
I do disagree with the article about the transparency of Red Hat's revenues. These revenues are mainly for support, and there is precious little information or examination of this support. For a start: how many customers, at what discount, and how happy are they? How much does Red Hat's strong brand and innovation tie users of the operating system to Red Hat's support?
Car manufacturers are crying because state laws on emissions differ? Is that the limited ambition of US car manufacturers these days -- to sell only to the US states? Because if they want to sell to the rest of the world, they're going to have to live with differing laws and standards. Some countries even drive on the Other Side of the Road!
The article is rubbish, but there's an important point in the dross. Building a CDN is a first-mover business. ISPs are happy to host one CDN (currently Akamai), and maybe a Google CDN. But a new third player? In short ISP-hosted CDNs are a natural monopoly with high barriers to entry for subsequent players, and as we all know at some stage monopolies start making monopoly profits.
Limelight -- a reasonably new CDN -- has encountered this problem. Fortunately, in many countries there are decent ISP peering sites with colocation facilities and this allows competitors a way in. Although at a higher price, since they have to pay for colocation rackspace whereas the ISP-hosted CDN does not.
And in this maybe Google are playing ISPs for suckers. Google could lease colocation space at peering exchanges and connect to ISPs there, but if ISPs are offering the colocation for free...
> Geeks like fast machines and toys. What is the point of Linux on crap hardware?
Because the purpose of this box is to replace 3270 terminals on bank counters. Windows is hell for that job, since it has a high system administrator to deployed hardware ratio, whereas I've seen Linux deployments in that space work with ratios of 1:10000. The only way Windows even gets in the ballpark is with Citrix-style rollouts, and they have a requirement for huge centralised servers.
The confusion of thumb and penis has a glorious legal history, notably in Mary Whitehouse's private prosecution of Michael Bogdanov, director of the play "The Romans in Britain", for having "procured an act of gross indecency by Peter Sproule with Greg Hicks on the stage of the Olivier Theatre".
To obtain evidence, the prosecution had to witness the "crime". Foolishly, the prosecuting solicitor, Ross-Cornes, saved a few bob by taking a cheap seat at the back. The defence's Hutchinson, in a famous cross examination, asked how Ross-Cornes was sure it was a penis at that distance. Ross-Cornes replied "what else could it be?" Upon which Hutchinson held his thumb to his crotch.
Ross-Cornes was the only witness offered by the prosecution. Kennedy, the prosecution's leading barrister, then withdrew, for reasons he has never revealed. The common supposition is that Whitehouse insisted he continue, but that Kennedy (who, because of the way the 1956 Sexual Offences Act was written, could still have won) did not wish to obtain a win lacking firm evidence by using clauses designed for the prosecution of rapists rather than theatrical directors.
The attorney-general -- Havers -- then nolle prosequi-ed proceedings. Something he should have done much earlier, say once the private prosecution chose to proceed under the Sexual Offences Act rather than convince the government to launch its own prosecution using the Theatres Act, which was the proper legislation controlling censorship of stage productions.
I use Fedora and Ubuntu pretty extensively, and I can't recall the last time I compiled anything other than my own code.
The Linux approach to software installation is superior to the Windows approach. Having all programs available through the system's installer means that users can't be duped into installing random trojan packages. It also means it's pretty easy to bring up a complete system, as opposed to hunting the web for necessary but not provided programs and plugins.
IIM-B PROF HELD VIOLATING COPYRIGHT
5 Jan 2006, 0115 hrs IST, SEETHALAKSHMI S,TNN
BANGALORE: In a major embarrassment to the prestigious Indian Institute of Management, Bangalore (IIM-B), an associate professor of the institute has been accused of copyright violation and plagiarism.
The professor — T R Madan Mohan who teaches production management at the institute — has quit. The issue came to light after another professor at the institute, who has co-authored an article with Mohan, tipped off the director last week, about the alleged copyright violation. ...
Note the years: Paper published in 2004 (perhaps written in 2003). IIM-B sacks an assistant prof in 2006. OU course contains plagiarised paper in 2008. IEEE notes plagiarism in 2008.
Quote: Lawson rejects the "dumb farmer" thesis that crops and methods will not change as conditions change. He points out that in reality, climate change is gradual and therefore easy to adapt to, and argues that non-directed market-driven adaptation as crises occur is by far the most logical and economical approach.
Oh dear, another common misunderstanding of the IPCC model. Recall that the IPCC model is designed to be conservative. This was politically necessary, as the IPCC model had to withstand criticism about its assumptions. Such a conservative model is useful: if it implies a 1C rise in air temperature, then the likelihood of that will be very high.
Mathematically, "conservative" means that systems with linear and non-linear components are modelled using only their linear components. The result is obviously a gradual model, since all non-gradual influences have been removed.
Also obviously, what the IPCC model isn't best at is predicting the most likely change in climate (as opposed to the best case change) and it certainly isn't any good at all at modelling the dynamics of that change (gradual or sudden).
If you chat with scientists in the IPCC contributing groups you find that almost all of them have important non-linear (ie, sudden) components which they use in models of their own speciality. Personally, I'm inclined to think that these non-linear components will mask the linear components (ie, we will have a progression of sudden events).
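To illustrate the point with a deliberately crude toy (this is arithmetic, not a climate model, and the coefficients are invented): keep only the linear component of a response that also has a cubic term and the projection stays gradual, while the full response does not.

```python
# Toy response with linear and non-linear components. Dropping the cubic
# term ("conservative" modelling) underestimates the response more and more
# as the forcing grows.
def linear(forcing, a=0.8):
    return a * forcing

def full(forcing, a=0.8, b=0.05):
    return a * forcing + b * forcing ** 3    # includes the sudden component

for f in (1, 2, 3, 4):
    print(f"forcing {f}: linear-only {linear(f):.2f}, full {full(f):.2f}")
```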
In any case, Lawson's claim of gradual change is simply an artifact of the model he has chosen to use. It's unlikely to be the case "in reality". I would have expected the author of the article, as a critic of the IPCC, to have understood the actual weaknesses of the IPCC model more clearly.
Since he evidently doesn't, I'm pretty much dismissing this article as yet another rich person in a profligate nation trying to wriggle out of their responsibility to live more humbly.
-48VDC is common enough in high-end routers, which pull about 14,000W per rack. At this low voltage that is a substantial amount of current, and thus substantial cabling costs. Yet high voltage DC makes no technical sense either -- the transmission loss will be higher than the losses from AC-DC conversion.
High-voltage AC offers a cheap way to move electrons around.
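The current figure above is easy to check (the 0.01-ohm feed resistance below is purely illustrative):

```python
# Current drawn by a 14 kW rack at -48 VDC, and the I^2 R loss per
# illustrative 0.01 ohm of feed cabling.
def feed_current(power_w, volts):
    return power_w / volts

amps = feed_current(14_000, 48)
print(f"{amps:.0f} A")                       # roughly 292 A: serious copper
print(f"{amps ** 2 * 0.01:.0f} W lost per 0.01 ohm of cabling")
```

Nearly 300 A is why low-voltage DC distribution means busbars rather than wires.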
Furthermore, the input to the facility is AC. Either from generators attached to the grid or from a local backup generator. What makes you think that the losses of AC-DC rectification at the point of generation would be less than the losses of AC-DC rectification at the point of use?
The high loss from DC-AC inverters from battery UPS systems can be ignored. These are only in service for a few seconds whilst riding out brown-outs or whilst the backup generator comes online. So the total amount of lost energy is low.
As a reasonably regular visitor to the National Center for Atmospheric Research in Boulder, Colorado, let me tell you that advocating the existence of climate change and a need for an effective response hasn't done anyone's job any good. Funding has been cut, prominent scientists have had professional PR campaigns aimed at blackening their names, administrators who were in line to run the Center have been shuffled to one side. Even last month the administration cut an NCAR program which was examining the effect of climate change on society and dismissed its leading scientist (note that "society" here doesn't mean "touchy-feely" but simply "non-atmospheric" -- such as how much housing may be lost to climate change).
Your remarks about the Hadley Centre, although amusing, just add to the amount of unfair criticism laid upon scientists in the field.
Xen is a hypervisor. The hypervisor runs on the bare metal, and the hosts run atop the hypervisor.
Xen does have one twist which sets it apart from a traditional hypervisor (such as IBM's VM). Rather than develop and maintain a set of device drivers, which would be a difficult and ongoing task with the wide range of equipment available, Xen sends all I/O via the hosted operating system in Domain 0, and Domain 0 gets near-direct access to the bare metal devices.
KVM is not a hypervisor: Linux runs on the bare metal, and KVM is a Linux feature which makes it more efficient to run virtualised hosts atop Linux.
For the average Linux user who just wants to run up a VM alongside their desktop, KVM is the better choice. Install one module and some client software and you're running. The VM looks like any other Linux process and is managed like any other process (use "top" to see CPU use, "kill" to halt run-away VMs, etc).
The alternative for desktop use is VMWare, but this fine product is in practice a nightmare in a desktop environment because a huge informally-maintained patch set, tracking recent kernel changes, is needed to bridge the gap between the supported releases and the kernel you might actually be running. KVM gains significant usability from being shipped with the Linux kernel, removing the need to deal with source code at all.
The choice is much less straightforward in server environments, where installing a hypervisor is much less of a chore than for a desktop. VMWare remains the tool of choice there, but Xen is close. KVM has a lot of potential, but is too raw for use at the moment.
My own feeling is that hypervisors are a hack -- a replacement OS for when the OS is too deficient to provide enough services and performance for VM hosting. This made sense for IBM when it invented virtualisation -- no one imagines that MVS would have made a good OS for providing services for hosting, so a new OS was needed. It seems fair to differentiate that OS from IBM's general purpose OSs with the name "hypervisor".
But there's no need for modern operating systems to be deficient in providing virtualisation services in the first place. And running under an OS buys a lot of tools for free (as a trivial example, process accounting to allow billing for the use of the VM). So I expect that the approach of KVM will prove to be superior in the long run.
The customs regulation was passed on 1 July with little notice or publicity. My job requires me to import/export networking goods and none of the newsletters from customs agents mentioned this new regulation -- and these are newsletters which mention events which may delay a shipment by a few hours.
So it's not surprising that shipments arrived in Australia without the correct documentation during July and August.
Related to the spate of laser illuminations of aircraft, the NSW state government passed a law to make laser pointers a weapon. This was remarkably poorly handled: not until well after the law came into effect could I get simple answers to questions like "will telecommunications company staff require weapons licenses to use visual fault indicators" (basically a big laser pointer shone down fiber) and "do handheld lasers need to be stored in a weapons-grade locker as opposed to a toolbox".
The answer to both questions is yes. This implies that people who have had a nervous or mental illness in the past will no longer be able to hold a weapons licence, that is, to be a telecommunications company optical systems technician. Similarly, techs should avoid a bad divorce, since the tactical use of an AVO by your partner will result in you losing your licence.
I think we can safely say that the NSW state government hasn't fully considered the results of its legislation.
"How many apple users have played with Vista for more than five minutes?"
Ah, my personal hell. I've had Windows XP, Windows Vista, MacOS X, FreeBSD, Ubuntu and Fedora all installed on my Mac at various times in the past two years as I've worked through a number of deep customer issues (my very favourite being caused by Win XP's IPv6 implementation not being able to use IPv6 for DNS queries -- how brain-dead and unexpected is that?). No one has been more pleased about the advances in virtualisation than me.
I can tell you that they all suck. MacOS looks good and is good if you do only one thing. Otherwise you're always forking out another US$25 to $250 for some essential utility that Apple forgot to put in the box.
After using Vista, Windows XP feels plastic, like it's going to snap at any moment. But Vista's security model is a user interface disaster. Lads, go and have a look at SELinux -- see how it stops bad things first, then raises a little flag which you can click on to see why the blocking happened -- that's the right UI model. Not asking "Infect machine with trojans/spyware/viruses, Y/N?" every five minutes.
Ubuntu. Stamping USER FRIENDLY on the cover doesn't make it so. It's nice, but increasingly dated. But if you need a widget to get stuff done, with no fuss it's installed and running five minutes later. Windows and MacOS -- take note.
The best operating system is the one which lets you get your job done, that doesn't get in your way. That is, the best operating system is the one you're most used to using.
Since I learned computing on SunOS (classic, not that weird BSD/SysV hydra-headed Solaris) that means I'm in the small camp that feels most at home with Fedora+Livna.
There are two further major objections. Both more substantial than your second point.
3. The design and specification of OOXML is not of sufficient quality to become an international standard.
The initial specification from ECMA wasn't even valid XML for some content. The design is basically a dump of the internal state of Office and is a long way from XML best practices.
OOXML is a long way from standards best practice. For example, images are written out in the Microsoft BMP format rather than in a standard format such as PNG.
The current specification is still inadequate for someone to implement OOXML: so much interoperability testing is needed against Microsoft Office that the process becomes reverse engineering. This is a long way from the ideal -- where another party can implement the specification correctly with no use of sources outside of the standard and its references. Yet other standards organisations manage to do this, some even for protocols under development. For example, see "An Independent H-TCP Implementation under FreeBSD 7.0 -- Description and Observed Behaviour", which is a test of the quality of a specification intended to be submitted to the standards track of the IETF.
4. Bad behaviour should not be rewarded.
The misuse of the ECMA Fast Path undermines the whole notion of the Fast Path and Publicly-Available Specification -- where specifications which already had gained consensus, had honed their design and had enough review to purge the specification of inaccuracy, vagueness, ambiguity and impracticality would not have this work re-done by ISO. The OOXML specification was put forward by ECMA into the Fast Path with no consensus, poor design, much vagueness, much ambiguity and poor expression, and some impracticalities.
The ISO JTC1 processes assume good faith by participants, resulting in a lack of safeguards against playing the system. When it was obvious the system was being played, ISO's CEO should have developed some backbone and sent the specification to the usual ISO standards-development process.
If you are interested in the technical aspects, then the lads' presentation of their results to their MIT class is still online. Not giving the URL as I'm not sure if that is an error by the lads (and they're in enough hot water already) or an error in the transit authority's wording of their suppression order. Just noting the irony.
"Microsoft’s release of service pack one for Vista, which came loaded with 77,000 drivers"
Do you write these numbers without thinking? 77,000 device drivers -- how likely is that? I count about 1,500 device drivers in Linux (find /lib/modules/* -name '*.ko' -print | wc -l). Windows would be comparable within an order of magnitude.
Maybe those drivers are expected to work with 77,000 retail packagings of the hardware supported by the O(10^3) actual device drivers?
Microsoft do control the end-to-end experience for two products where it could "provide complete experiences with absolutely no compromises" -- XBox and Zune. Those two products are being comprehensively outsold by Wii and iPod. Both of which are technologically inferior products, so it can only be the woeful quality of Microsoft's conception of the "end-to-end experience" which has turned customers towards its competition.
I'm sure manufacturers will bear that in mind when Microsoft proposes "changing the way we work with hardware vendors".
Arrogance, thy name is Ballmer.
I was surprised by the number of comments disagreeing about whether an average person can tell the difference between an MP3 and a non-lossy format such as CD. As it happens I've ripped my CDs to FLAC, a non-lossy format. This was insurance -- just in case there was a difference between non-lossy and lossy formats I didn't want to have to rip all of my CDs again.
As an experiment I've just selected a random 10 seconds from 25 randomly selected tracks from the CD collection of a 40yo male (ie, me). I encoded these selections with LAME into MP3, a lossy audio format.
I randomly ordered the combined FLAC and MP3 selections. I did not know the order. I then played the selections through lightweight brand-name headphones in an office environment whilst having a radio on in the background. I noted if I thought the track was lossy or not. Each 10s selection was played once only.
I then compared my notes to the reality. In all 50 cases I chose the correct encoding. I conclude that even an amateur can readily distinguish mainly-1980s popular music encoded with 128Kbps MP3 from CD audio if they are listening for a difference. I also conclude that randomly-selected '80s music is 90% dreadful.
There may well be errors in my methodology, but since anyone with a computer and headphones can repeat this test I'd encourage people to design and conduct their own experiment and post the results. For your own sanity use a musical era other than the 1980s.
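For anyone repeating this, the blind-ordering bookkeeping is the only fiddly part; a minimal sketch (playback itself is left to your player of choice, and the track count is just the one used above):

```python
# Blind A/B test bookkeeping: build one trial per (track, encoding) pair,
# shuffle them so the listener doesn't know the order, then score the
# listener's guesses against the true encodings afterwards.
import random

def blind_order(n_tracks, seed=None):
    """Return a shuffled list of (track, encoding) trials."""
    trials = [(t, enc) for t in range(n_tracks) for enc in ("flac", "mp3")]
    random.Random(seed).shuffle(trials)
    return trials

def score(trials, guesses):
    """Count guesses that match the true encoding of each trial."""
    return sum(1 for (_, truth), guess in zip(trials, guesses) if guess == truth)
```

With 25 tracks this yields the 50 trials used in the experiment above; scoring is done only after all guesses are recorded.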
Mine's the lab coat.
Gee, this is really what irritates me about computers. You've got a simple requirement: bring up a new machine securely. A pretty basic requirement for a computer.
The security people in the article recommend the purchase of a computer to protect your computer -- something called a "NAT router with firewall". Apparently there's no chicken-and-egg problem with this idea.
The previous poster recommends yet another product, with its own meaningless terminology ("slipstreaming"?) and hours to be wasted. Oh, and the need for another computer to prepare the CD with. And that computer was installed and nLite downloaded from the Internet without chicken-and-egg issues just how?
Both Ubuntu and Fedora look simple enough to bring up with all updates applied. But here's the rub: they don't do this by default. That's right, the secure alternative isn't the default.
It looks nearly impossible to bring up MacOS with all updates applied. The saving grace here being that MacOS doesn't have a huge number of open ports running insecure protocols when it starts, so there's a good chance for the updates applied after boot to win.
In short, all OSs currently suck for Joe Average doing an installation on an unfiltered Internet connection. And you're not going to be able to hide Joe Average behind that NAT gateway anytime after ISPs roll out IPv6 to customers.
"It is true that Western air forces might conceivably have to fight developing-world air forces equipped with exported Russian machines; but it's hardly the likeliest of missions for them."
Written from the safety of the UK. You'd have an entirely different view of affairs from Australia, sitting at the edge of SE Asia with developing-world nations stocking up on cheap MiG-35s and Su-27s. We're feeling the lack of an exportable F-22 and the lack of any F-35 somewhat keenly. At the northern end of SE Asia, Japan has similar concerns.
Australia almost came to blows with Indonesia over its army's destruction of East Timor after the UN-sponsored vote for East Timor's independence. Australian airpower, and thus control of shipping to the island, was a major factor in Indonesia's determination that it should concede to the UN's wishes. Airpower was the major protection against heavy equipment for the lightly armoured UN (predominantly Australian) force. As you can see, an edge in airpower over emerging nations can determine the course of a conflict.
The MiG-35 and Su-27 threat has led Australia, Japan and Singapore to spend large sums of money on interim aircraft prior to the availability of the F-35, for moderate (but hopefully sufficient) enhancements in air combat capabilities.
If the F-35 is a dud then there will be a serious change in the balance of military power in SE Asia.
I call this claim.
The Eee PC 901 is not yet available in Australia -- that is, no sales have yet been made through retailers. But the Windows version is widely pre-orderable whereas the Linux version is not.
That doesn't say a 50:50 production split to me. That says that Asus is making more on the Windows models and wants to sell more of them.
> the EU could buy a licence
Because it is a one-off cost, the Samba project has a contributor who is willing to pay the royalty. So your program need only be part of the Samba family of GPL software to be licensed for free for the protocols which were part of the EU's penalty for Microsoft's misbehaviour.
> And according to the OSP still excludes any GPL based applications such as OpenOffice
It's more complicated than that. There are four things in flight here.
Firstly, the publications of protocols required by the EU. The OSP is not acceptable to the EU for these protocols. However the OSP is *offered* in addition to the EU-acceptable license. So be very careful to collect your files using the license you find most favourable (ie, the EU one).
Secondly, the publication of protocols and file formats required by US DoJ. The OSP is acceptable to the DoJ for these protocols.
Thirdly, a straightforward licensing programme in the tradition of 1980s IBM. The default protocols for Exchange 2007 and Sharepoint 2007 have changed. Rather than lose interoperability (say, with Blackberry, which would push executives to Apple tomorrow) MS are taking the opportunity to make money.
Fourthly, a move to forestall complaints about missing documentation of past file formats referenced by OOXML. ISO has found the OSP acceptable (the ISO CEO ruled by fiat that it was acceptable and thus out of scope for further discussion).
Microsoft have conflated all of those things in the one media release.
"For every Mw of windpower you have you also require a back up"
Why? Maybe some industries close down on non-windy days. It's not unusual: building sites, schools, etc, shut down under some weather conditions.
Once the cost of carbon is added onto energy pricing you're going to see some basic assumptions change (which is the point).
Using the same argument as the poster...
Because car bombers use cars, let's abandon cars and use public mass transport.
Apologies for feeding the troll. Mine's the one with the suspicious bulges, which will doubtless get me shot in the head by over-reacting armed police when I board that public mass transport.
Jumbo frames are not restricted to LANs. Exactly the same efficiencies are needed if you are transferring data across the globe. Every long-haul academic network (JANET, GEANT, Internet2, CANARIE, AARNet) uses jumbo frames so that large research data transfers can be done efficiently.
As more commercial applications move bulk data around the globe (eg, for movie production) you'll see commercial ISPs offering jumbo frames too.
For the next generation of equipment at 10Gbps, the Mathis et al. formula means that raising the MTU is the simplest engineering response to the formula's limits on TCP performance. Thus super jumbo frames of 64KB at 10Gbps.
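To see why a larger MTU is the easy lever, here's a quick sketch of the Mathis et al. bound on single-stream TCP throughput (rate is proportional to MSS / (RTT × √loss)). The RTT and loss figures are illustrative assumptions, not measurements:

```python
import math

def mathis_limit(mss_bytes, rtt_s, loss_rate, c=math.sqrt(1.5)):
    """Mathis et al. upper bound on single-stream TCP throughput, in bits/s.

    Throughput <= (MSS * C) / (RTT * sqrt(p)); C ~ sqrt(3/2) for the
    simple model. Only MSS is under the network engineer's easy control.
    """
    return (mss_bytes * 8 * c) / (rtt_s * math.sqrt(loss_rate))

# Assumed figures: a trans-Pacific RTT of 150 ms and a loss rate of 1e-7.
rtt, p = 0.150, 1e-7
for mss in (1460, 8960, 65000):  # standard, jumbo, super-jumbo payloads
    print(f"MSS {mss:>6} B -> {mathis_limit(mss, rtt, p) / 1e9:.2f} Gbit/s")
```

With these (assumed) numbers a standard 1500-byte frame tops out well under 1 Gbit/s per flow, jumbo frames under 2 Gbit/s, and only a 64KB super jumbo frame clears 10 Gbit/s -- the alternative levers (RTT and loss) sit in the denominator under a square root, which is what makes them so expensive to improve.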
For your amusement the more complicated engineering alternatives are to re-lay the fiber infrastructure with lower loss cable and transmission equipment (which in turn means less bandwidth, as there's a trade-off between loss and bandwidth), increase the speed of light in fiber (which again means replacement of the globe's fiber plant), or reducing the diameter of the globe.
The BOFH -- who's always on the lookout for a bridge to sell to tourists -- could put together a proposal to VCs to drill a fiber conduit through the centre of the world. This would reduce the inter-continental distance by about a third (the straight-line chord of 2R versus the great-circle arc of piR, a ratio of 2/pi). Networks are constrained by latency these days (as most connections never get out of TCP slow start) so routing traffic down the pi-way would improve network performance considerably. If the BOFH gets this funded I'd like my share in Guinness.
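For the amusement of the VCs, the geometry of the pi-way checks out in a few lines (the saving is 2/pi, roughly a third off, in the best antipodal case):

```python
import math

R = 6371.0  # km, mean Earth radius

def arc_km(theta):
    """Great-circle distance along the surface for central angle theta."""
    return R * theta

def chord_km(theta):
    """Straight-line distance through the Earth's interior (the pi-way)."""
    return 2 * R * math.sin(theta / 2)

theta = math.pi  # antipodal endpoints: the tunnel's best case
print(f"surface: {arc_km(theta):.0f} km, tunnel: {chord_km(theta):.0f} km, "
      f"ratio: {chord_km(theta) / arc_km(theta):.3f}")  # ratio = 2/pi ~ 0.637
```

For less-than-antipodal endpoints the ratio sin(theta/2)/(theta/2) approaches 1, so the drilling only pays off over very long hauls -- which, conveniently for the BOFH, is also where the latency argument sounds most impressive.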
"Also LORAN-C provides reference time signals for telephone and data networks including the Internet."
Um, not really, not anymore. There's kit which uses LORAN-C as a timebase for network synchronisation, but it can readily be upgraded to use GPS -- which is what happens anyway once maintaining the LORAN-C timebases becomes more costly than replacing them with GPS.
The "pen sized jammer" argument doesn't fly for network synchronisation from GPS. Firstly, the GPS is used to discipline an oscillator that will run accurately undisciplined for a few weeks. Secondly, the GPS antenna is typically on a tall roof or on a tower, as GPS accuracy increases a lot when there is more vision of the sky. Getting the jammer into position within the exchange building implies either poor physical security (which is usually self-correcting, as that results in expensive equipment being knocked off) or an inside job (in which case you've got a lot more to worry about). Thirdly, there are usually four GPS receivers in various locations forming the timebase.
As long as LORAN signals are transmitted then telcos will use them for disciplining timebases -- you want as many reliable independent disciplining signals as you can readily get hold of. But that's no justification for retaining the LORAN system, especially as it lacks global coverage. If we feel the need for additional timebase integrity that can always be arranged some other way.
In fact if LORAN is only retained for providing network timebases then it loses a lot of value to us. One of the good points of GPS is that it is both maintained by the USAF and used by them for navigation and weapons systems. Thus the USAF has a strong interest in the accuracy and integrity of the GPS signal at all times. An almost-retired LORAN system would not have the same assurance, would have diminishing coverage over the years, etc.
Yes, OOo is running slower with each release, but is being rescued by faster CPUs. Since disks aren't improving as quickly as silicon OOo's performance is becoming I/O bound. For one set of objective measurements see: http://www.oooninja.com/2008/05/openofficeorg-getting-faster-benchmark.html.
That the one-line fix under Fedora 9 is an 82MB download. A bit like OOo itself, this is better than under Windows but still sucky.
Abiword is coming along nicely and is well worth a look if you run Linux.
Oh yeah, set the taxes on energy to be fixed. As time passes the more energy-efficient industries get taxed relatively more heavily than inefficient industries. Why do energy inefficient industries need a tax break -- do we want to encourage more of them?
Mind you, current taxes are without carbon-pricing and mainly without pollution-pricing. The pain for petrol users has barely begun :-(
Oh, and to the people with the GM EV1 comments: where's the energy coming from? California liked electric cars because they moved pollution out of Los Angeles to electricity generators in Oregon. Just moving pollution about isn't an adequate response to reducing carbon emissions. In any case, that electricity doesn't exist anymore -- California had electricity rationing a few years back and will again. Oregon doesn't want any more of California's pollution and the Californians won't build any more power plants for themselves.
I've been to that store. It sits on a pleasant street with restaurants and fashion stores. The Apple Store looks just like one of the high-end fashion stores, probably because it is.
I can imagine a fashion shop owner being upset about visitors with greasy fingers fondling the dresses, and I imagine the Apple Store owner was just as upset about the similar actions of the kids. Sure the dress shop owner can wash the dresses and the Apple Store owner can reprogram the iPhone, but I can't imagine either being happy about needing to do that. It also strikes me that the manager of the store rushed out -- perhaps not distinguishing greasy fingers from oily fingers, or Raging Thunder from something worse.
Having made that defense, I felt very uncomfortable when I visited the store last year. The equipment was available for fiddling on the many presentation tables. I was happy about that as I wanted to touch-type on the MacBook and see if its odd keyboard was suitable (it wasn't). But whilst I was doing this I attracted a lot of attention from the staff. The overall vibe was that although it appeared possible to test the equipment, the reality was "look, don't touch", and the gear may as well have been behind glass. That difference between appearance and reality has been the recurring theme of my experience with Apple's products, and if only I had taken the hint I was sent in the beginning...
The police seem to be the only people in this story who behaved reasonably (which is probably a good summary of their role in incidents like this). They listened to the manager, let him rant at the kids a bit, then let the kids go.
The manager's photography is a bit over the top, but there's nothing to stop the kids from photographing the manager and putting that and a summary of their story up on some posters around Stanford, which would hurt that Apple Store more than the kids will be hurt by having to order their Apple gear over the net.
"I once had a tour of a data centre in the UK and was shocked to find their fire extinguisher system was "water sprinklers""
A dry-pipe water system, Vesda particulate detector, and continuous staffing are the usual approach.
The Vesda system sets off an alarm and a tech with a fire extinguisher goes hunting for smoke. This allows the usual sort of computer-based fire to be handled with little damage to surrounding servers (usually they just get the power dropped as the tech drops the rack's circuits prior to removing the smoking gear, taking it outside, then opening the box and applying the extinguisher).
The water system is for the last resort, usually from a fire in another part of the building reaching the computer room. It's not unreasonable for the insurance company to sacrifice the computer room if that saves the building -- anyway, they are paying for the damage to both so it's their call.
Gas got unpopular when CPUs got small, numerous and hot and computer rooms got very, very large. If you think through the consequences of a cooling gas hitting a modern hot CPU and the problems of venting released gas from a large space you'll see the problems.
Fixed powder-based systems aren't a good fit to computers. An aerosol-based system would be a better fit.
"...all of them have said the employee has no expectation or right to privacy when using publicly funded computers"
An "expectation" doesn't protect the employer from liability for intercepting and retaining communications subject to absolute privilege. One day we'll see an employer caught with intercepts of an employee's e-mails with their doctor about an injury suffered at work, or with a lawyer about some breach of industrial law, and the wheel will turn.
Interesting to hear that there are people reading corporate e-mails. They'll make useful witnesses even after the corporate e-mail retention policy has deleted the primary evidence. I'm hoping those companies remember to make those e-mail readers available during discovery :-)
There's a lot of concern and pending litigation surrounding ratings agencies' valuations of derivative products. Now it appears that the only public explanation of the errors in valuation is a simple coding error -- nothing the management could be responsible for, such as a culture of greed over accuracy. Hmmm.
Mine's the coat with the now-worthless AAA-rated debt in the pocket.