Phuck off, phishers! JPMorgan Chase crafts AI to sniff out malware menacing staff networks

JPMorgan Chase is integrating AI into its internal security systems to thwart malware infections within its own networks. A formal paper [PDF] emitted this month by techies at the mega-bank describes how deep learning can be used to identify malicious activity, such as spyware on staff PCs attempting to connect to hackers' …

  1. I.Geller Bronze badge

    sacked on the spot

    "...It’s probably things like typos in words or random snippets of characters and numbers jumbled together..."

    As you know, the most fashionable AI technology of our time, OpenAI's, completely ignores isolated "words or random fragments of symbols and numbers mixed together"; instead it searches for patterns, does not see what those patterns are composed from, and analyses only the patterns themselves. In doing so OpenAI ignores the classical definition of "understanding": "perceive the intended meaning of (words, a language, or a speaker)." So be careful about revolting against the leading AI technology of our times! The result doesn't matter! OpenAI cannot do wrong because it is financed by Microsoft. The use of my patented AI technology is banned, and even the sound of my first or last name can get you sacked on the spot.

    1. Anonymous Coward
      Anonymous Coward

      Re: sacked on the spot

      Why do you pop up on all AI comments?

      Your ranting drives me mad. Your grasp of English is shit. I'm starting to believe you ARE an AI.

      1. I.Geller Bronze badge

        Re: sacked on the spot

        1. I'm the only one who knows AI technology and its true nature: AI answers questions (as NIST TREC QA wants), and it is called AI because NIST TREC QA decided it should be called that. Now you know why I "pop up" on all AI comments; otherwise they will continue to sell you the old crap and call it "AI".

        2. Can we switch to Russian? There are certain medical reasons why I won't ever speak English.

        3. I'm the AI, its only voice on planet Earth.

        1. Anonymous Coward
          Anonymous Coward

          Re: sacked on the spot

          Russian? I'm not going to take the piss out of Russian tech; that's unfair, as Russian robotics has come a long way.

          Between the creepiness of Japanese rubber-faced robots and Boston Dynamics' nightmarish kit sits Russian robotics.

          They've successfully managed to build a robot that can realistically crouch in a corner while smoking in a tracksuit, and it has mastered being toxic in Counter-Strike.

          Google "Cyka 5000".

          Runs on Diesel and potatoes apparently.

  2. KieranTully

    "We asked the eggheads to describe what features the model learned ... but they declined to comment."

    Rather than them being unwilling to share the features (lest malware authors adapt, i.e. security through obscurity), isn't it equally likely that the deep learning model is opaque, and they can't explain the inferences it's making?

  3. Kevin McMurtrie Silver badge
    Paris Hilton

    Where's it from

    How does the AI compare to the more old-school technique of putting a header on external inbound e-mails so that the client can display them as untrusted? I bet the AI costs a few million dollars more and works half as well.
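For reference, the old-school flagging being compared against is roughly this (a minimal sketch with made-up domain names, not any bank's actual mail rule):

```python
# Minimal sketch of external-mail flagging: tag the subject line of any
# message whose sender's domain isn't on an internal allow-list, so the
# client can display it as untrusted. The domains below are made up.

INTERNAL_DOMAINS = {"corp.example", "retail.corp.example"}  # hypothetical

def flag_external(sender: str, subject: str) -> str:
    """Return the subject, prefixed with a warning tag for outside senders."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in INTERNAL_DOMAINS:
        return "[EXTERNAL] " + subject
    return subject
```

A one-line rule like this is cheap and loud, which is exactly why the cost comparison with a deep-learning system is worth asking about.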

    1. KieranTully

      Re: Where's it from

      They probably do already flag external emails, but humans still have to decide (sometimes wrongly) which external emails are legitimate.

      1. Kevin McMurtrie Silver badge

        Re: Where's it from

        The number of external communications with executable attachments, requests for confidential material that don't need verification, and "type this into your terminal window" should be exactly zero. That's true even for financial institutions working with other financial institutions. Nothing sensitive should be entering or leaving the corporate systems.

        1. KieranTully

          Re: Where's it from

          Yes, as you say, it should, but it doesn't always: software is only as good as the human who wrote or configured it. They're probably also running DLP software as a belt-and-braces approach to prevent data escaping.

          And what about zero day attacks where the email recipient doesn't have to do anything "stupid" other than click on a malicious link?

        2. Michael Wojcik Silver badge

          Re: Where's it from

          And none of that is relevant to what the system described in the paper does.

    2. Michael Wojcik Silver badge

      Re: Where's it from

      Perhaps you should take five minutes and read the paper rather than asking irrelevant, sophomoric questions.

  4. Anonymous Coward
    Anonymous Coward

    Reading El Reg

    though clearly JP Morgan doesn't mind its staff reading the likes of El Reg at lunch

    When I worked for them, I used to read El Reg at lunch, but using my own data connection rather than the WiFi network provided for staff devices. Later on I curtailed even that and would go for a walk outside at lunchtime rather than browse anything outside, inside, as it were. Even when using the company network, I'd restrict browsing to things directly related to work. Of course one would occasionally get things blocked by the firewall, but there would always be a clear work- or tech-related initial query as the initiator.

    This change in behaviour came after I noticed a link off the internal home page which passed your internal login ID. So the company had clearly planted that link, and hence had some sort of deal with 2o7 to provide information about their users' habits in the outside world.

    And a co-worker used to browse external sites all day long - he also saw little of daylight due to where his head was stuck up most of the time.

  5. steviebuk Silver badge

    Someone got suckered into the bullshit

    A company tried to sell us bullshit deep-learning AI to protect not only our incoming and outgoing mail but also to act as internal AV. I listened to and watched their videos and, as always, hated the bullshit marketing bollocks and sales pitch! Said no. They still sent their fucking server for us to "trial". Told them to pick it up. They said they'd send an engineer to set it up for us. Sorry, did you not understand the words "fucking not interested"?

    We were told it relies on patterns and has to sit there for a while as it learns. Fucking useless then, isn't it? How do you define a pattern for someone sending out maybe one document a month? "Ooo, look, they've now sent two this month, that's breaking their pattern, I'll quarantine it."
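For what it's worth, the kind of per-user baselining the sales pitch implies might look something like this naive sketch (all numbers and the z-score threshold are invented for illustration; no vendor disclosed theirs):

```python
# Naive sketch of "learn the pattern, flag the deviation" baselining.
import statistics

def is_anomalous(history, current, z_threshold=3.0):
    """Flag `current` only if it sits far above the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against a flat history
    return (current - mean) / stdev > z_threshold

# Someone who sends roughly one document a month:
monthly_docs = [1, 2, 1, 0, 1, 1]
# Two this month doesn't break the "pattern"; a sudden burst of ten would.
```

Even this crude baseline wouldn't quarantine the "two documents this month" case being mocked here; whether any given product is tuned that sanely is another matter.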

    I get angry as I can't stand marketing and sales-pitch bullshit.

    We're looking at another solution instead. Powerful, customisable, you can see a lot of under-the-hood stuff, and best of all there has been no sales-pitch bullshit. It's real engineers who use it selling it, and being honest along with it.

    1. Anonymous Coward
      Anonymous Coward

      Re: Someone got suckered into the bullshit

      I remember reading a recent paper at my work (why I'm an A/C today) where a 3rd party SIEM and SOC service was offering machine learning algorithms supplemented by "wetware".

      I thought... what's wetware? I'm sure I've heard that from the 90s or something. Lo and behold... they just meant humans. Humans double-checking it.

    2. Fatman

      Re: Someone got suckered into the bullshit

      I gave you an upvote for the use of the proper term for marketing materials.

  6. deadlockvictim Silver badge


    If your company is sent something on trial that was not requested, is that then a gift?

    Ye should have put the server up on eBay with a starting bid of 99p.

  7. Anonymous Coward
    Anonymous Coward

    Arms race

    How long before malware flingers start using ML to come up with URLs etc. that can get past these pattern matchers?

    1. Michael Wojcik Silver badge

      Re: Arms race

      So what? It's always an arms race. Everyone (competent) working in IT security knows that.

      Also, URL pattern matching in the JPM system is primarily done using the heuristics described in the paper, not with ML. It's only one of several components of the system. (You did read the paper, right?)
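The paper's actual heuristics aren't reproduced here, but the general idea of heuristic URL scoring can be sketched like this (character entropy and digit density of a hostname label as machine-generated-domain tells; the 2.0 weight is an arbitrary assumption):

```python
# Toy heuristic URL scoring, purely illustrative. High character entropy
# and a high digit ratio in the first hostname label are common tells of
# algorithmically generated (DGA-style) domains.
import math
from collections import Counter

def url_suspicion(hostname: str) -> float:
    label = hostname.split(".")[0]
    probs = [n / len(label) for n in Counter(label).values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    digit_ratio = sum(ch.isdigit() for ch in label) / len(label)
    return entropy + 2.0 * digit_ratio
```

A machine-generated name like `x7k2q9fz1b.example` scores well above a dictionary-word hostname; evading several such cheap checks at once, while keeping domains disposable, is more work than most broad campaigns bother with.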

      For non-targeted campaigns, it will probably be a long while before most malware campaigns attempt to evade those sorts of heuristics, because sophisticated CKC systems like the one described in the paper are not yet widely used. Non-targeted campaigns are broad and aim for success against a lot of poorly-defended targets. The rate of return for upgrading them to attack well-defended ones is poor.

      The JPM system and similar are of the "don't run faster than the bear; run faster than the other guy" variety. You increase the work factor for attacking your system so it's above the median, and so become less interesting to the broad-spectrum attackers. That frees (some of) your IT security resources to concentrate on building defenses against more-sophisticated targeted attacks on your organization.

      And since the researchers who build these systems are well aware that attacks get better, the components of those systems which are ML-based are specifically designed to continue learning and adapting. That's why the system incorporates a Cyber Data Lake, which the paper discusses at length.

      And, finally, the article had a sidebar link to another Reg piece on precisely the topic you raised.
