Re: Essentially it's about detecting crap
Why should I test ? Equipment should work as promised and be returned for credit if faulty.
It was just a few years ago that Fibre Channel ports cost upwards of £3000 per 2Gbit/s port.
FCoE came along and threatened the market, and now FC ports cost the same as Ethernet ports because, physically, they are Ethernet ports. The FC encoding and so on is different, but the switch silicon and interface hardware are identical to Ethernet, and many switches can run Ethernet, FC or FCoE on the same device.
FC customers should be saying thanks to Ethernet for keeping FC cheap enough to use in 2015.
"And when it comes to VCE's Fibre Channel, it could easily point out that Nutanix does not have any."
Nutanix doesn't require Fibre Channel; the design is different. Not that it matters in converged infrastructure: who gives a toss how the storage works, so long as it works. And they both do.
VCE uses whatever EMC says they are allowed to use. And that's Fibre Channel legacy kit in the VNX. Which is fine.
Nutanix uses newer technology in IP/Ethernet and that's fine too.
Cisco claims to have cheap hardware (actually, it's not that cheap, and NOT cheaper once cables and 'certified' SFPs are added to the bill), but it needs the APIC controllers to make the magic happen.
What price will Cisco charge for APIC ? Will Cisco look to recover lost profits and lost revenue by going for a high price ? History says they will certainly try but I'm doubtful that enough customers will pay.
Clearly, most of the people here would be unable to pick the difference between a 20 year old Lada and a brand new Jaguar.
While I absolutely agree that Cisco is overpriced and overblown, there are many critical features in a Cisco switch that solve a lot of problems in well-run and well-designed networks. However, those features are unknown to most people since they don't understand the technology.
My experience of D-Link is poor. That is, they do connect a bunch of wires together and pass Ethernet frames, but they have no security capabilities, limited control over QoS, and no handling of external authentication. Multicast hasn't been used much in the last ten years, but if you are deploying VMware or Hyper-V then IP multicast is now critical to your network design.
These are things that most people just don't know about. And you all showed your ignorance.
You can't fix stupid, indeed.
Maximum number of MAC entries ?
Can it handle 1000 new MAC addresses per second ?
Can it handle 10,000 /32 IP routes ?
Performance under route flap - does it crash when injecting 500 routes per second for 60 minutes ?
Multicast PIM-SM and PIM BiDir are needed for VXLAN support.
How many (*,G) routes can it handle ?
Does it support all OSPF area types ? Can you inject routes between OSPF areas at a suitable rate ?
I'm just warming up here.
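To make the MAC-learning question concrete, here's a rough sketch of that kind of test using Scapy. The interface name, rate and duration are placeholders for your own lab, and a naive per-packet sendp() loop won't truly sustain 1000 pps without tuning, so treat it as an illustration rather than a test rig:

```python
#!/usr/bin/env python3
# Rough sketch of the MAC-learning test: offer frames from never-before-seen
# source MACs and watch whether the switch learns at that rate, overflows its
# table, or falls over. Needs root and Scapy installed. IFACE, RATE and
# DURATION are assumptions to adjust for your own lab.
import time
from scapy.all import Ether, IP, RandMAC, sendp

IFACE = "eth0"       # assumption: the port facing the switch under test
RATE = 1000          # target new MACs per second
DURATION = 10        # seconds to sustain the churn

for _ in range(DURATION):
    start = time.time()
    for _ in range(RATE):
        # Each frame carries a fresh random source MAC the switch must learn.
        frame = Ether(src=RandMAC(), dst="ff:ff:ff:ff:ff:ff") / IP(dst="10.0.0.255")
        sendp(frame, iface=IFACE, verbose=False)
    # Crude pacing: sleep out the remainder of the second, if any.
    time.sleep(max(0.0, 1.0 - (time.time() - start)))
```

Then check the switch's MAC table size and CPU while it runs. A cheap box will either stop learning, start flooding, or fall off the management network.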
Hi Nate
I'm the networking guy who was on the podcast. I can suggest watching this YouTube video from a year ago about the fundamentals of OpenFlow. It may provide some insight into how it works.
It's really hard to explain why SDN/OpenFlow changes everything without pictures.
http://www.youtube.com/watch?v=cZmbajtbNVk
My experience with BT as an outsourcing 'partner' is a long tale of misery and woe. Frankly, they couldn't organise a cost-effective drink in a bar without a team of five project managers, and they'd charge for every single one.
It's cheaper and more effective to do it yourself. At least the council would be in control of the service and could make changes to it. BT would freeze it into a contract and nothing would ever improve.
The point of a Private Cloud is to take average DC utilisation from 5% to something like 40% of capacity. There is a lot of spare capacity in today's data centres and peaks are easily handled.
Public clouds have peak problems because of growth. Private clouds have different problems.
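Back-of-the-envelope, with illustrative numbers (the 1000-server fleet is my assumption, not a figure from the article):

```python
# Rough consolidation arithmetic behind the 5% -> 40% utilisation claim.
servers = 1000          # assumed physical server fleet, for illustration only
avg_util = 0.05         # 5% average utilisation today
target_util = 0.40      # 40% target in a private cloud

total_work = servers * avg_util            # aggregate load, in "server units"
hosts_needed = total_work / target_util    # hosts required at the target
headroom = 1 - target_util                 # spare capacity left for peaks

print(f"{hosts_needed:.0f} hosts carry the load with {headroom:.0%} headroom")
# -> 125 hosts carry the load with 60% headroom
```

Even at 40% average, more than half the capacity is sitting there for peaks.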
It's not so much Cisco blocking them out, as the sheer marketing momentum behind Ethernet. There is no intelligent discussion of alternative networking protocols - that's certainly led by Cisco but supported by everyone else in the market.
Until Intel comes back to market with InfiniBand, Xsigo won't gain much new momentum.
Also, it's a clever product. Needs clever people to buy it and there's a shortage of that right now.
The replacement technology for STP is here already. It's broadly known as Layer 2 Multi-Pathing (L2MP). There are two approaches, but only one looks serious, and that's called Transparent Interconnection of Lots of Links (TRILL); you can find the details at the IETF using your favourite search engine.
Cisco has a proprietary, pre-standards implementation of TRILL that it calls FabricPath, which is shipping today.
OpenFlow directly updates the FIB in the router. Routing protocols only update the RIB, which is subsequently programmed into the FIB. This is a significant difference.
Instead of letting an autonomous system propagate data to its neighbour, a central controller has a complete view of the network, makes some sort of programmatic decision, and then downloads forwarding entries to the FIB on the switch/router.
In the same way that VMware allowed the effective management of hundreds of Windows servers, OpenFlow hopes to provide effective management of hundreds or thousands of network devices as a coherent whole.
Which is much more advanced than anything an MS server can do today.
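If it helps, here's a toy Python model of the RIB/FIB difference. Purely conceptual; this is not OpenFlow's wire protocol or any vendor's real API:

```python
# Toy model contrasting the traditional RIB -> FIB pipeline with an
# OpenFlow controller writing forwarding state directly.

class Router:
    def __init__(self):
        self.rib = {}   # every route learned from protocols (OSPF, BGP, ...)
        self.fib = {}   # best paths only: the table packets are forwarded on

    # Traditional path: the protocol installs into the RIB, then best-path
    # selection pushes the winner down into the FIB.
    def learn_route(self, prefix, next_hop, admin_distance):
        self.rib.setdefault(prefix, []).append((admin_distance, next_hop))
        _, best_hop = min(self.rib[prefix])   # lowest distance wins
        self.fib[prefix] = best_hop

    # OpenFlow path: the central controller bypasses the RIB entirely
    # and programs the forwarding entry itself.
    def flow_mod(self, prefix, out_port):
        self.fib[prefix] = out_port

r = Router()
r.learn_route("10.0.0.0/24", "192.168.1.1", admin_distance=110)  # via OSPF
r.flow_mod("10.0.1.0/24", out_port=2)                            # via controller
print(r.fib)
```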
Assuming that you can create a distributed coherent cache (which EMC and NetApp have been claiming is impossible for the last ten years), where would you put the SSD cache ?
On the motherboard ? How would the local cache software communicate back to the remote array, and how often would the cache update (EMC updates their flash cache once per day) ? This would most likely need a kernel driver in the OS, e.g. VMware, to use the cache.
On the CNA / HBA ? Making it part of the storage infrastructure would require support in the driver. And at what price would this highly custom piece of silicon come, bathed in Unicorn Tears and individually blessed by a virginal Tech Priest as it left the factory ? I'd expect it to be orders of magnitude more expensive than the Fusion-io product. Fusion-io is a goodish flash drive built to use the PCI-E bus in certain computers, but an entirely custom CNA with flash and a handy CPU and software is quite different.
More questions than answers here.
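For what it's worth, here's a toy model of the motherboard option. A sketch of the concept only, assuming the crude once-a-day refresh mentioned above rather than the real distributed coherency the vendors say is so hard:

```python
import time

# Toy model of a host-side flash read cache in front of a remote array.
# Sketch only: coarse daily invalidation, not true cache coherency.

class RemoteArray:
    def __init__(self, blocks):
        self.blocks = blocks

    def read(self, lba):
        return self.blocks[lba]          # the slow trip over the SAN

class HostFlashCache:
    def __init__(self, array, refresh_secs=86400):   # once per day
        self.array = array
        self.cache = {}
        self.last_refresh = time.time()
        self.refresh_secs = refresh_secs

    def read(self, lba):
        # Coarse invalidation: dump the whole cache when the interval expires.
        if time.time() - self.last_refresh > self.refresh_secs:
            self.cache.clear()
            self.last_refresh = time.time()
        if lba not in self.cache:        # miss: fetch from the array
            self.cache[lba] = self.array.read(lba)
        return self.cache[lba]           # hit: served from local flash

array = RemoteArray({0: b"boot", 1: b"data"})
cache = HostFlashCache(array)
print(cache.read(0), cache.read(0))      # second read never touches the array
```

The hard part isn't the cache; it's what happens when another host writes block 0 between refreshes.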
Until telcos can actually manage their core competency of bandwidth, they should be barred from other activities. For example, I would like an accurate billing cycle, on-time delivery of connections and services, plus a more reasonable price.
After that I will believe they could deliver value add.
In reality, Cisco has been conforming to standards because network engineers have loudly and constantly demanded standards compliance. We remember multivendor networks and interoperability from the days before Cisco became dominant. Further, as an industry we recognise the power of open standards.
Standards will not be the cause of Cisco's shrinking this year; it's because the company is an unfocussed behemoth that is ignoring its core customers while it plays with shiny toys such as videoconferencing and retail cameras.
Managers should be recognising that their engineering staff saved them from second-rate technology. A lesson that the storage industry, with its poor standards compliance, could learn.
I can only think that EMC feels they have nothing left to lose. Wall Street thinks that EMC has only two things of value, VMware and RSA, and that the valuation attributed to storage is negative.
So I guess some childish playground pranks don't really matter. Or are they hiding something else ?
Two things, I think. One is that any future product line based around Fibre Channel networking is clearly regarded as having no future at all. Since the EVA is HP's primary go-to-market for legacy storage, LSI felt it had no future with the software.
Second, HP's much-touted guarantees to continue with the EVA must look suspect now. It's hard to believe that the EVA has any future beyond its current feature set; 3Par is the long-term future. Trying to make the point that supporting the existing EVA is a requirement is as useful as spitting into the wind.
The storage industry has managed to convince their punters to overspend on _everything_.
Unnecessary OM3 patch leads for 10-metre cable runs, Fibre Channel HDDs instead of better caching, lossless Fibre Channel switches instead of well-designed protocols.
Now they are going to want unobtainium-cored tungsten steel racks as well.
"More money on storage!" goes up the cry.
Not only is the radio spectrum a scarce resource, but the backhaul is relatively more expensive, since phone towers are not in good places to get high-speed, low-cost backhaul to the core.
Having slow backhaul saves way more money and makes more bottom-line profit. Shaping isn't about radio; it's about backhaul.
Having worked for a service provider in the UK that used the same concept with a bunch of scripts and proxy servers, I can say there is nothing clever about this solution. In fact, the "NetBox so clean it whitens thingie" design looks as if they took the idea that everyone is using and turned it into a product.
My guess is that this company has a very high marketing spend and managed to get in front of the Reg hack who wrote this piece.