Verizon dubs sec researchers 'narcissistic vulnerability pimps'

In an official blog post, an employee in Verizon's Risk Intelligence unit has taken aim at researchers who disclose security flaws, calling them "Narcissistic vulnerability pimps" and comparing them to criminals. "Have you ever heard of a terrorist referred to as a 'demolition engineer?'" the unnamed author of the rant asked, …


This topic is closed for new posts.
  1. Nexox Enigma

    works both ways...

    """"Have you ever heard of a terrorist referred to as a 'demolition engineer?'" the unnamed author of the rant asked, one presumes rhetorically. "How about a thief as a 'locksmith?'"""

    But I've also never heard a demolition engineer referred to as a terrorist, or a locksmith called a thief, for merely demolishing or locksmithing. Not sure why you'd go about calling security researchers anything else, and I'm not sure what personal motives have to do with it either. If I was a demolition engineer (craigslist doesn't seem to show anyone looking for applicants in the field, otherwise I might well be on my way) I'm sure that I would blow the shit out of buildings for personal satisfaction, as well as a paycheck. Don't see why a security researcher can't be both personally and altruistically motivated at the same time.

    I also don't see why any software maker should have, or expect to have, some sort of legally protected right to writing insecure code. Many of them should be punished far worse for some of the crap they've caused through terrible programming.

    And I'd far rather security types release vulnerabilities than keep them secret and use them for genuinely nefarious purposes.

    1. ericlondon

      To your point - public disclosure is better...

      ... than taking the info and selling it. I'd rather have my neighbor tell me my window was left open than tell his criminal cousin...

      1. Anonymous Coward

        Yes, but

        the argument begins where you don't close your window - despite the fact your window gives access to the homes (or your car, perhaps) of large numbers of other people too. And since your neighbour's criminal cousin has a reasonable chance of spotting your open window anyway...

  2. Steen Hive


    Anyone who can use the phrase "Narcissistic Vulnerability Pimp" is definitely someone who would, for example, get caught out by putting unchecked buffers on the stack.

    I call sour grapes on this one.

  3. Not Installed Properly

    So, anyone who doesn't accept what they are given at face value, and isn't grateful for it, is narcissistic, I suppose. Let's, for instance, apply this to politics. I will accept what the currently empowered establishment tells me, and trust that if there are any flaws in the system, they will quietly sort them out without ever having to cause me any worry or concern whatsoever. To do otherwise would be merely attention seeking.

    Methinks somebody just had a lot of work laid wide open by the observation of a fairly critical vulnerability, and isn't very happy.

  4. Destroy All Monsters

    Intelligence Unit, eh?

    So we have an "official" blog post from some outfit called "Verizon's Risk Intelligence unit" (probably the same unit that passed phone records to the NSA under the table) smearing people but the author prefers to stay "unnamed"?

    Have old GOP "operators" found a new home?

  5. maniax

    ... and have you seen the security of the physical locks?

    It's a joke, plain and simple. Most locks are still bumpable (google "lock bumping", it's great fun), and most can be opened relatively easily by almost anyone with a bit of training. So comparing computer security (which manages to stop a lot of bad people most of the time) with physical locks (which mostly stop only the people who aren't trying to break in) is a bit apples and oranges.

    And yes, blaming the messenger isn't the best idea. Insulting them might even make some people happier just to release what they have found, instead of doing any kind of "responsible disclosure". Not that Verizon are known for being able to think, anyway.

  6. Wokstation

    Wah! When we refuse to fix our stuff, people make us! NOT FAIR!!!

    Well, that is about the size of the blog post really, isn't it?

  7. Rogier van Vlissingen


    This is the kind of comment, particularly coming from this source, which can only boomerang on the maker.

    During my time in the security industry I became aware of security issues at Verizon so serious it's not even funny.

    Even without spending money on serious research it is easy to see how the communications companies, the banks and many others put people's information at risk in numerous completely irresponsible ways, without even giving people the option to protect their information properly.

    Thus a comment like this, coming from a source where the SOP is to prefer convenience over security with customers' data, is patently absurd, and it is a gratuitous attack on one group that is at least doing something about it, in a way that enables providers to fix problems. They should be grateful for the free service.

  8. Douglas Crockford

    Partial Credit

    He got the "self-glorification and self-gratification" part right, but the "harms business and society by irresponsibly disclosing information that makes things less secure" part is dangerously wrong. The harm is caused by faulty systems, and by a lack of responsiveness in repairing the faults when they are identified.

  9. Trevor Pott o_O

    'Vulnerability pimps'

    Well yes, security researchers are. The ones that have a legitimate claim to the title "Security Researcher" bring the vulnerability information first to the company that makes the vulnerable software. Here's where things get tricky. As a software company, you are supposed to:


    B) PATCH THE HOLE. (Perhaps by getting the assistance of the researcher.)

    If you simply ignore the information about the vulnerability, then why shouldn't that researcher release the information publicly? You obviously don't care enough to support independent analysis of your software, nor do you care enough to patch it. This means that you are then a liability to your customers, and they deserve to be informed. This is no different than the current movement by regular citizens to demand that the governments of the world put into place laws that reveal when security breaches occur which would have left personally identifiable information vulnerable.

    If you are unwilling to put the time, effort and money into your products to secure them yourselves then you deserve to go out of business: your products are a liability to those who use them.

    Security Researchers shouldn’t just be paid for their work; their work should be funded by a coalition of all software development companies, and managed by an industry organisation. Companies and individuals should have to get a licence to be allowed to release code into the wild; that licence fee should scale to the size of the project, number of customers and the amount of personal information that code can potentially put at risk. (You pay a fee each year to register your car, and periodically to renew your driver's licence, I see this as little different.) That money should go to the aforementioned industry organisation as a means of creating a "bounty pool" for security researchers. This then would be guaranteed funding for them, incentive for them to continue, and would mean that legitimate researchers would be registered with the industry association. These people would have their “white hats” firmly in place. ("Rogue" researchers would thus be firmly into “grey hat” territory and could thus be legitimately hunted. “Black hats” wouldn’t bother disclosing anything publicly to begin with.)

    This would have the added bonus of ensuring that the information discovered by these researchers would have to be disclosed to the industry organisation, and all companies would have a minimum time to respond before the organisation itself published that information. The timeframe available would be shorter based on the seriousness of the bug.

    Don’t try turning white hats into criminals. White hats are the only defence you have against black hats.

    1. frymaster


      Very bad idea - that just sounds like blackmail to me... having said that, some sort of "code licence" sounds very intriguing, tho I suspect there's too much inertia for it to get anywhere.

      Consider this scenario:

      - Researcher finds vulnerability.

      - Researcher notifies company.

      - Company determines it's valid, but that no one is exploiting it, and so schedules the fix for the next scheduled update*.

      - Researcher thinks this is too far away, releases vuln. details, company releases fix faster.

      In this situation, it can be argued the researcher has done harm, because now the details are out on the web. Sure, the fix is out too, but "fix available" doesn't equate to "user machines protected", especially if it's an out-of-band release.

      *We do all agree that regular update cycles are a Good Thing, yes? Because by giving companies a chance to plan updates there's the possibility they might actually install them, rather than going "these might break something, we'd better play it safe" as they trickle in?

  10. Captain Underpants

    That's excellent

    So what he's saying is that the only reason security through obscurity doesn't work is because those dang white hat hackers go telling everyone about the vulns?

    Golly gosh darn it, that explains everything!

    Except for, oh, the way the vulnerability exists (and will be found by those with malicious intent) regardless of whether white hats report them or not. But no, clearly this is an information flow problem and not a security/software patching procedural issue.

  11. Quxy

    Sauce for the gander

    Of course this is the same guy who (in last autumn's ICSA report on security products) berated the developers of security products for failing to "think like attackers".

  12. Jim Hill

    Fully pathetic

    So, if we strip out the hurled epithets and the gratuitous characterizations, the several missing entries in his list (how about the people who get sick of being stalled by some pompous corporate drone with a blog and a criminally-irresponsible product leaking people's secrets all over the net, who publish its weaknesses because they've tried everything else and none of it has met anything but the inimitable symptoms of mediocrities fully aware of their inadequacy desperately trying to bury yet further evidence -- what shall we call those people?), what's left?


    Drones speak their own language. "Problem-makers" indeed. Guess whose problem got made? These guys have quite plainly been embarrassed, and given the fear implicit in their language I'll go for "humiliated", by an actually competent outsider who has no respect for their, uhhh, position. "Putting yourself in the position of managing the time"? Oh, NOES! "Arbitrarily" releasing info? It's "arbitrary", now, when about half the machines on the planet are wide gaping open and the people responsible refuse to fix it? On and on about "owners" and these "owners"' responsibilities to the people who own them, never mind the poor schmucks stuck using their product whose bank accounts are consequently wide open to anyone with the right nickels to rub together. These guys have blown right past drone. They're fully-converted corpodroids.

    "If you too are tired of seeing criminals elevated to a podium of legitimacy and bestowed the same job title you possess" ... does _anyone_ doubt that the real crime involved here is making them look bad to their tribe?

  13. Anonymous Coward

    ...And they might put him out of a job.

    If there were enough white-hat (and I always hated that term) independents, then Verizon would have to shutter their business unit, and said manager would be looking for work.

    As for the comment about dictating the lives of the programmers fixing the bugs....well, that's what they're paid for, isn't it? I've done my share of late-night/early-morning compile/install/test/repeat because at 8am, it had to be working. Besides, it isn't the fault of the researcher, it's the fault of the original programmer/QA tester/etc. Revealing holes isn't the problem, allowing them to exist in the first place, is.

  14. Anonymous Coward

    Typical Verizon

    Hardly a company anyone would ever say is looking after its customers or is slightly intelligent in security matters. Rather than actually dealing with security problems properly, they'd rather deny access to everything that could cause a problem - for instance their use of DPI to block things like IRC on most of their products just because it *can* be used for abusive reasons. Soon they'll be blocking port 80 because people can use Google to find out how to bypass said restrictions. The sooner Verizon go bust the better :)

  15. Anonymous Coward


    I guess only a telecoms company can employ this kind of Pointy-Haired Drone. The issue at hand is researchers informing software companies about flaws in their products and being ignored. Only then do they disclose to the public, because the software company is a lazy bunch of bastards.

    From my own job experience in a financial software company I can tell you more than one nice story. One of them is this: a brokerage/banking system based on BTX sessions was to be converted to a browser-based system. After logging in, each subsequent request from the browser must of course be matched to its session. This is handled by a session id. The problem, of course, is that this session id could be faked by a malicious user. The developers saw the issue and invented an additional Session Password, generated by the rand(5) function.

    rand(5) is a rather bad random number generator, especially because you can use output(N) to calculate output(N+1),output(N+2),output(N+3) and so on.
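    A minimal sketch of that predictability, using a textbook linear congruential generator with the classic ANSI C example constants - an assumption, since the generator actually used in that system is unknown:

```python
def lcg_next(state):
    # Textbook LCG step: x_{n+1} = (a * x_n + c) mod m,
    # with the classic ANSI C example constants.
    return (1103515245 * state + 12345) % (2 ** 31)

# If the raw generator state is used directly as a "session password"
# (the worst case), a single observed value lets an attacker compute
# every future one:
observed = lcg_next(42)              # value leaked to an attacker
attacker_guess = lcg_next(observed)  # attacker's prediction
server_next = lcg_next(observed)     # server's actual next password
assert attacker_guess == server_next
```

    Real rand() implementations typically expose only part of the state, but for this family of generators the hidden bits can still be recovered from a handful of outputs.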

    I reported this issue to management, who did at first not understand. Then they decided to use a better random number generator for FUTURE projects. The current ones were too expensive to change.

    This company was a major supplier to German and British banks at the time and prided themselves on their "Security" competence. Fortunately they are now bankrupt.

  16. Anonymous Coward
    Anonymous Coward

    It always seems to me

    That there's a great deal of "look how big my willy is" about some of these "researchers"...

  17. dr_forrester

    I suppose...

    ...he also doesn't want Gray's Anatomy published, as it tells the reader where the most vulnerable portions of the human body are located.

    One could think of dozens of similar examples. Suffice it to say that the researchers - sorry, narcissistic vulnerability pimps - are not creating the vulnerabilities about which they warn consumers. Their actions are no different than those of a product-testing group which warns consumers of a top-heavy vehicle's propensity for rolling over. I guess if this guy worked for Toyota, he'd be suing the NTSB for warning customers about the dangerous failures in that company's designs.

  18. prdelka

    Is this a joke?

    Dear Verizon, I hope you're joking. In future you can expect no forthcoming announcements or early-bird warnings, and we hope your entire customer base realizes the true criminals are the ones who demonize the people who bring to light the failings and shortcomings of those PAID to protect them. I will personally spend the rest of my life hunting down every single mistake in your software that is distributed to the public, only I will sell them instead of tipping you off - for free, and at great expense to myself. F*ck you.

  19. This post has been deleted by its author

  20. gimbal

    Well I guess that would make the late Galileo Galilei

    a narcissistic heliocentrism pimp?

    and Newton, a mack-daddy in the "what goes up" department?

    the Curies, two players on the radioisotope field....

    Darwin, just a pimp with a stroll in the Galapagos....

    and the Jabberwocky, the next emperor of the known world.

    I hope Verizon R&D will wise up and mind their supposedly professional manners, already.

  21. Peter Clarke 1

    Apple Shareholders

    "Apple has a responsibility to their shareholders and to their customers to deal with the vulnerabilities, and their shareholders and their customers can hold Apple's feet to the fire. They have their own ways of exerting pressure on Apple to behave in a way they think Apple should behave."

    Bad example - if Apple shareholders had any clout they would be popping champagne corks with high dividends instead of sitting on an enormous cash-pile earmarked for future products

  22. Richard IV

    Not a new term...

    Marcus Ranum has been calling them that for a good few years.

    The thing is that there are a whole lot of hucksters in the disclosure space, possibly because the big-IT-to-government space is now too crowded for them. This isn't to say that there aren't good, decent, hard-working individuals who act responsibly out there. It's like email - spot the word Viagra and you start thinking of Hormel meat products. I get an awful lot of email with said willy stiffener in; fortunately my filters are pretty good in this regard. Unfortunately, there aren't widely available filters for "security researchers".

    Personally, I'd have switched the order of pimps and criminals...

  23. James Woods

    sometimes this is the only way

    When I worked for sears many moons ago their network security was a joke. Reporting it to management led to no changes.

    Had I come out on some blogs or websites about them, surely Sears would have gained notice and corrected the issues.

    WAH I'm Verizon, I poison customers' DNS, I'm in bed with BING, wahhhh, life's not fair.

    WAHH the government is forcing me to try to be "neutral" when all the while the rest of the net couldn't give a crap about using uunet.

  24. asdf

    Security costs companies $$$

    Damn it, don't you realize how hard it is to meet your bare requirements (like code that even compiles) when you have to please your finance department and outsource everything to a 3rd-rate shop in India? And now the ungrateful unwashed masses expect the code to not only work but be secure besides? What the commoners don't understand is how fixing security holes, and preventing them in the first place, takes boatloads of cash. Cash I could be reporting as profits and increasing my sick stock options. Damn it, us executives are hurting too. Last year I couldn't even afford a new yacht or home in Monaco. So STFU you rabble-rouser security people; what the sheeple don't know may not hurt them (and if it does our lawyers will blame someone else), but it will hurt our profits if you don't keep your traps shut.


    Verizon and most other Fortune 500 CIOs

  25. Anonymous Coward

    @Richard IV

    The guy you quote is an equally idiotic man who works for "CSOONLINE" - the C*Os are exactly the problem. Also it's from that crap-journo company IDG.

    The author states that 99% of users were more concerned about features anyway, so all this "security issues" talk is surely just a nuisance. Exactly the crap that these clueless "top" managers have told him. He also makes it sound as if the whitehats were the source of zero-day exploits - as if the blackhats never discover a security issue.

    I worked in more than one software company as a development engineer, including a financial software company who prided themselves in their "security" competence. There was absolutely ZERO determination to deliver good code and to fix problems that were discovered. We had lazy developers doing "copy-paste" programming instead of creating a procedure. The "CTO" even suggested we should not provide bug fixes at all and instead make customers upgrade to a (totally incompatible) later version. Which was not feasible as customers had tons of code written to interface with our old software.

    In a rush to "use XML" we converted a totally functional and well-readable config file structure into a hairball of XML. For which we needed to develop a GUI to edit. All at great cost (both GUI and core product changes) and new bug, of course.

    Fortunately this company went tits-up in 2001...

    Most managers are less than indifferent to security issues - all they want is to ship as many features as early as possible. And change everything early and often.

  26. Anonymous Coward

    Et tu, verizon?

    The comment is a bit crude, but does signal a sad and long standing trend in ``security research''. There's more than one side to the argument, of course.

    Plenty of ``security research'' is very poorly done, even from supposedly reputable companies. Holes are often obvious enough that you can forego the (less and less, see rise of intelligent design) usual scientific rigour in favour of shoddily done sensationalistic ``papers'', retaining a whiff of ``science'' for great justice.

    For companies depending on software for their livelihood it's always embarrassing when some upstart tells you there's a hole in your precious software. It takes a well-run company to take proper care of that. That most companies aren't well-run is evident: there's the habit of especially big corps to lash out against and even sue people who are trying to do them a favour by giving them private advance notice.

    So it's not surprising that the ``researcher'' kids then say ``well sod them, sod everyone'', and go to full disclosure immediately. They're in it for the fame and fortune, after all. That occasionally catches out even the people who are trying to be responsible about their software, like a few open source projects I could mention.

    So it isn't just the attention whoring fraction of ``security research'', though I'll grant it is there, and is the most vocal part, obviously.

    But I daresay that calling each other names, however creatively, isn't going to improve the industry.

  27. Richard IV


    Cite yes, because gosh - the same phrase was used - and I was pointing out the phrase wasn't even original, no matter how picaresque it is.

    Agree with, mostly not actually - personally I'm in favour of default disclosure and some independent timing mechanism that doesn't depend on corporate vested interests OR the ego of a researcher.

    I should point out that even managers have the occasional good idea (Disclaimer: I'm not one, nor do I want to be one). Balls of "XML", err XML tags containing only CDATA, are definitely not one of them though. I've had to wrestle one into submission without the aid of a safety net for, guess what, security competence...

    Let me know when you're done on the straw URL, it might have some life in it yet.

  28. Phil Koenig

    BOTH camps have a point

    I agree with both sides.

    Verizon's point is true insofar as there are a lot of "security researchers" who do this primarily for self-aggrandizement, and they do it in ways (i.e. immediately disclosing and publicly releasing exploit code without even the most trivial attempt to work with the vendor) that absolutely make security WORSE around the world.

    On the other hand, just because we want the researchers/Narcissistic Vulnerability Pimps to disclose responsibly, doesn't take vendors off the hook either - their argument that their customers and shareholders will apply the necessary, appropriate and timely pressure to fix security holes is equally bunk.

    The way it should work is:

    A) "Narcissistic Vulnerability Pimp" finds problem in product.

    B) NVP goes to vendor, informs them of issue and impact.

    C1) NVP waits reasonable length of time for useful response/action.

    C2) How vendor responds weighs on this step too (haughty, dismissive, cooperative, etc)

    D) After reasonable length of time passes with no useful action, disclose.

    But of course, we all know it doesn't usually work this way - the NVPs don't get anywhere near the same glory when the vendor gets to issue the press release rather than them (or just silently fixes the problem without fanfare).

  29. Anonymous Coward

    "Hacking" Oracle

    I am not sure it still works, but ca 1997 I could bring down Oracle simply by doing

    telnet oraserver 1381

    and typing some random characters.
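    In modern terms that telnet session is just opening a TCP connection and sending junk bytes; a rough Python sketch (the hostname and port come from the anecdote above - do not point this at servers you don't own):

```python
import socket

def poke(host, port, payload):
    # The programmatic equivalent of "telnet host port" followed by
    # typing some random characters: connect and send raw bytes.
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(payload)

# poke("oraserver", 1381, b"random characters\r\n")
```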

    Now talk of Nasty Whitehat Hacking The Good Oracle Database Server. Punish ! Send to Guantamo for Ziber Terroizm !!

    Recently, MS discovered more than 1000 (!) bugs in Office just by randomly modifying the input files ("fuzzing"). How many more can be exploited by analyzing the program code ??
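    That kind of "dumb" file fuzzing is easy to sketch: corrupt a few bytes at random positions and feed the result to the parser, watching for crashes. (The helper below is a hypothetical illustration, not Microsoft's actual tooling.)

```python
import random

def fuzz(data, flips=8, seed=0):
    # Corrupt `flips` distinct positions by XORing each byte with 0xFF,
    # so the mutant is guaranteed to differ from the original file.
    rng = random.Random(seed)
    buf = bytearray(data)
    for i in rng.sample(range(len(buf)), flips):
        buf[i] ^= 0xFF
    return bytes(buf)

sample = b"PK\x03\x04 pretend this is an Office file body"
mutated = fuzz(sample)
assert mutated != sample and len(mutated) == len(sample)
# Feeding thousands of such mutants to the target parser and watching
# for crashes is the technique described above.
```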
