Hoo boy, most of these things don't worry me, but this one does.
I'm semi-responsible for some Juniper gear, thankfully all Junos (BSD) based, but I no longer trust any of it if this is malicious injection vs. a bad review. However, what the hell can I do? I can't audit the code. I trusted Juniper, and now I'm stuck with that trust being burned. Running to any other proprietary network vendor is just as uncertain.
If Junos gets a bulletin, I have a lot of work on my hands very soon, as do a good chunk of service providers. I remember there being rumors of a certain three-letter agency saying they had some type of exploit for the Cisco ASA as well; I wonder if it was something this deep, vs. just a run-of-the-mill RCE vuln.
This is one more reason to use open-source products for genuinely security-sensitive systems, maintain a good amount of defense in depth, and do a little bit of auditing of the code you're using yourself. More often than not these days, it sure pays to be paranoid.
EDIT: At the same time, this also really makes me respect Juniper more than I have previously. A company that finds this internally, on their own audit, could have patched it silently and said nothing about it to anybody. It probably would have been better for them PR-wise. The honesty is worth me not jumping ship to another (probably compromised) proprietary vendor, but you betcha if I can get away with it, I'll run something open-source and community audited when I can.
I've always regarded networking equipment as outside my security boundary. All it does is forward packets to the right places; an attacker can deny service by shutting that down or sending them to the wrong place, but nothing else. All my connections are encrypted and authenticated at a higher level.
Do you terminate SSL or something on yours? Or have open unauthenticated services running on your internal network? If not, what's the actual threat here?
- Plenty of services can't adequately be secured any other way; the risk is often mitigated by restricting access to VPN users coming in over a network device. I appreciate that "just secure the service" is supposed to be the best practice, but when you're talking about things like IPMI interfaces or SCADA devices, the alternatives approach zero
- Controlling the networking equipment can open you up to things like sslstrip
"things like IPMI interfaces or SCADA devices the alternatives approache zero"
My strategy with IPMI has been to assign IPMI non-routable, private IP addresses, then block that address space at the interior of the network (which is sort of redundant) and then require folks to SSH onto an interior host and connect to IPMI that way.
I would be very interested in, and receptive to, criticisms of this model ...
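For what it's worth, here's roughly how I sanity-check that nothing drifts out of the blocked space (a rough sketch only; the inventory entries and the 10.10.0.0/16 block are made up for the example):

    # audit_ipmi.py - sketch: confirm every IPMI address in our inventory
    # falls inside the private block we filter at the network interior.
    # The inventory dict and the 10.10.0.0/16 block are hypothetical.
    import ipaddress

    IPMI_BLOCK = ipaddress.ip_network("10.10.0.0/16")  # hypothetical blocked space

    inventory = {
        "db-01": "10.10.3.11",
        "db-02": "10.10.3.12",
        "web-01": "192.0.2.45",   # oops: routable, would leak past the filter
    }

    for host, addr in inventory.items():
        if ipaddress.ip_address(addr) not in IPMI_BLOCK:
            print(f"WARNING: {host} IPMI at {addr} is outside {IPMI_BLOCK}")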
The argument there would be that it's very hard to secure all access to a network - anyone who compromises any network device, or who has physical access to e.g. the cable runs in the building, then has access to IPMI.
In a high security situation I'd keep the IPMI network physically segregated, with a small number of machines acting as access to it. Or maybe connect IPMI only within each (locked) rack, and require using something like ansible if you want to perform an operation across more than one rack. Whether the cost/benefit fits for your circumstances is another question of course.
If you consider "ssh jump host" and "vpn" somewhat similar implementations of the same general strategy (forcing users to jump through something secure), we have a similar recommendation.
In my experience, IPMI is generally considered a part of the control plane, and not accessed via the same network as apps/data, but through a separate, more restricted/audited private network.
Very few places follow that threat model. The cost of encrypting between the web tier and the db tier (even from a management perspective) is more than most organizations are willing to pay.
Your threat model is also missing the fact that a compromised network device can MITM your connections and silently duplicate sensitive traffic.
On an individual scale, you're right. But the fact is almost every corporation, non-profit, and government agency is weak behind the perimeter security stack; once you're in, you're in. A backdoor in networking equipment is a pretty serious problem.
If JunOS gets a bulletin, the whole Internet has a lot of work on its hands.
If JunOS gets a bulletin, shit, that's really, really bad.
I feel it's very likely that the NSA had something to do with the recent Cisco ROMMON "discoveries", so it would not surprise me one iota if they were involved here as well (although it's obviously pretty early to speculate on something like that -- and impossible to {dis}prove).
I am eagerly awaiting the incident report on this, although I find it unlikely we'll ever hear anything more than this from JTAC and friends. If this is the work of an intelligence agency, it will likely be gagged under the guise of "national security" unless there's a press-worthy indictment coming out of it.
Given ASAs run a 2.6 kernel, that's not hard. From my Kiwicon 8 notes on Alec Stuart-Muirk's talk:
* Literally every protocol handler has CVEs against it.
* Every time Cisco adds a new one, it gets at least a DoS CVE. (There are some proofs of concept for pivoting these into real exploits on other Cisco products.)
* The ASA’s high availability protocols are unauthenticated and unencrypted. This is bad. Like, “will accept any packet claiming to be a management packet as valid” bad.
* Some authentication is optionally available, but if you enable it, the ASA will still accept the unauthenticated protocols. (See the sketch below for what per-packet authentication would even look like.)
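To make that concrete, here's a toy sketch in Python - emphatically not the ASA's actual protocol or wire format, just an illustration of the concept: the sender appends an HMAC over the payload, and the receiver rejects anything that doesn't verify instead of trusting any packet that claims to be a management packet.

    # Toy illustration of per-packet authentication on a management channel.
    # Key, payloads, and framing are all hypothetical.
    import hashlib
    import hmac
    import os

    KEY = os.urandom(32)  # shared secret between HA peers (hypothetical)

    def seal(payload: bytes) -> bytes:
        # Append a 32-byte SHA-256 HMAC tag to the payload.
        return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

    def open_packet(packet: bytes):
        # Recompute the tag and compare in constant time; reject on mismatch.
        payload, tag = packet[:-32], packet[-32:]
        expected = hmac.new(KEY, payload, hashlib.sha256).digest()
        if hmac.compare_digest(tag, expected):
            return payload
        return None  # unauthenticated packet: drop it instead of trusting it

    assert open_packet(seal(b"failover: promote standby")) is not None
    assert open_packet(b"forged management packet" + b"\x00" * 32) is None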
Because they can (allegedly) survive software upgrades (on the ASAs and IOS routers), I've always believed that these "infections" are done at a lower level than the OS, such as in the ROMMON on the IOS routers.
After hearing about "SYNful Knock" recently, I'm inclined to believe this even more.
Software based networking on BSD. Can't trust American vendors for anything. Is this what you wanted Mr. NSA? Good job at sabotaging your own business interests. It's not that you're spying, it's that you're so promiscuous about it.
Don't sell JunOS short. It is far more complex than "software networking on BSD" and has a lot of proprietary bits.
Junos (FreeBSD) is the Routing Engine; Juniper hardware also contains an ASIC-based Packet Forwarding Engine, which loads microcode from the Routing Engine upon boot. Not everything's in Junos all the time, but since the PFE loads its embedded OS from the Routing Engine kernel, you could just pwn the Routing Engine and then also have some sense of persistence in the PFE on reboot, probably. I don't know much about how the PFEs work internally.
I'm certainly no FreeBSD/JunOS expert. I am an unabashed fanboy of JunOS's *nix-y structure, though, vs. the monolithic binary that is IOS. (There was a great Blackhat 2011 talk on IOS reverse engineering, if you are interested in that sort of thing. [1])
I think your parent didn't mean that "JunOS is just software networking on BSD" but that "software networking on BSD is all that can be trusted because NSA screws commercial products".
And of course it doesn't have to be the NSA. Maybe some foreign spies, or pretty much anybody interested in spying on some of Juniper's customers.
Or even a bored employee doing it for bragging rights. FWIW, I once worked for a (reasonably big) corp making software which has to run as root and I'm pretty sure I'd have been able to slip some small privilege escalation backdoor in there if I felt like doing so. But I have to admit that their products weren't as security critical (and, actually, already had some vulns), so one could hope that Juniper and Cisco are better than that.
They don't even need to go after the firmware: OpenSwitch has a binary-only userspace process that talks to the hardware. That is where a bad actor would hide something nefarious.
But you can take the OS and write your own driver for your own hardware. Also, you can take the OS as-is, run it on a VM, and examine the packets coming out for signs of any telemetry or any such nefariousness. I think it is a big step ahead of any other switch/router OS.
None of the current forwarding ASIC vendors publish enough information to write a driver for their chips. So no, you can't write your own driver. The reason OpenSwitch's OpenNSL binary is binary-only is because it uses Broadcom's proprietary SDK, and programs registers that are only available under Broadcom's NDA.
If you're just going to run it in a VM, you'd be better off with more established things like OpenBSD, OpenWRT, pfSense, Cumulus VX, or even just a Debian VM with Quagga.
<disclosure>I'm CTO of Cumulus, whose network OS is exactly as open-source as OpenSwitch: everything but the single user-space program that links to Broadcom's SDK.</disclosure>
>None of the current forwarding ASIC vendors publish enough information to write a driver for their chips. So no, you can't write your own driver.
Of course they don't publish it as open source. But if you buy their chips, they will, I assume. So then you can write your own driver.
>If you're just going to run it in a VM, you'd be better off with more established things like OpenBSD, OpenWRT, pfSense, Cumulus VX, or even just a Debian VM with Quagga.
I agree. My VM scenario was meant only to illustrate that OpenSwitch is really easy to run in a container/VM, and that fact can be used to examine it for any malevolent behavior.
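Something like this is what I have in mind (a sketch using scapy; the interface name and the "expected destination" list are placeholders you'd adapt to your own lab):

    # Sketch: watch traffic leaving the VM and flag anything headed somewhere
    # we didn't expect (phone-home/telemetry candidates). The interface name
    # and the allow-list below are placeholders, not real settings.
    import ipaddress
    from scapy.all import IP, sniff  # third-party: pip install scapy

    EXPECTED = [ipaddress.ip_network(n) for n in ("10.0.0.0/8", "192.168.0.0/16")]

    def check(pkt):
        if IP in pkt:
            dst = ipaddress.ip_address(pkt[IP].dst)
            if not any(dst in net for net in EXPECTED):
                print(f"unexpected destination: {pkt[IP].src} -> {dst}")

    sniff(iface="vboxnet0", prn=check, store=False)  # iface is hypothetical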
That's not really any different than putting a tap on both sides of one of these NetScreens, for example, and looking at the traffic. It's still impossible to "verify" a device isn't compromised that way.
Only when the public can view (ALL) the code, rebuild it, and install it on their own hardware can we be reasonably confident it has not been tampered with.
It's not really much more open. Similar to some other network hardware (e.g. Extreme Networks), it runs Linux for the management interface. The switching/routing ASICs that do the actual work are still proprietary. It allows for a bit more flexibility in that it supports apt, but IMHO an image-based update system is preferable on network hardware, as you are not vulnerable to apt breakage or anything like that. You know that whatever happens, it will boot, always.
I work for a company which makes network devices. We've detected many hostile intrusions in our network. If you make hardware or software that runs in enterprise datacenters, someone is surely going to be trying to steal your source code to find exploits and possibly put backdoors in.
We use multi-factor authentication just to get in the corporate network and a separate, airlocked engineering network to store our IP. From what I've talked to from my colleagues at other major device manufacturers, this is becoming the industry standard (seven years ago I scoffed at Ericsson's paranoia for having a sequestered engineering network. Turns out they just saw the attacks earlier than we did).
In our case, doesn't seem to be the NSA. Looks more like China. Could easily be either one, or yet another party. This is the world we live in.
When I set up the Stock Options system at Netscape (as the Desktop Support guy) back in 1997, it consisted of two computers, connected to each other via a switch, in a locked room, with a wall all the way to the ceiling to reduce false-ceiling access, with that room also located inside the secure Legal office space. Systems were backed up daily by the users, using encrypted backups to Zip drives.
It's interesting how when you don't know what the hell you are doing, you sometimes do something reasonably secure by pure happenstance. (Also, I had probably read too much Bruce Schneier when I was a teenager.)
I'm not 100% familiar with what precisely they were tracking. The software was called "Equity Edge", and it involved employee stock options. I do recall contacting their support organization when I realized the data files they were storing on the hard drives didn't seem to be encrypted (the systems were Windows 95). Netscape had two employees whose sole job seemed to be the care and feeding (and data integrity) of this system.
Data was sent to the Accounting Department (and other Lawyers) on Printouts.
I was doing this for a fintech company in 2002, and was scoffed at by just about everyone. These things have been going on since the world became connected (somewhere in 1992 or so), and have been getting more prevalent and intricate - but they are not new.
This might not mean anything, but NetScreen-5GT 6.2.0r15 (the first affected version) was the first release with a SHA-1 sum. April 2015 is the first archive of this page I could find.[0]
I wonder if the reasoning behind the SHA-1 is (possibly) that they were starting to notice some strange activity.
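If they did start publishing sums for that reason, actually checking one before an install is trivial (sketch below; the filename and expected digest are placeholders, and of course this only catches a tampered download, not a tampered build):

    # Sketch: verify a downloaded ScreenOS image against the SHA-1 sum from
    # the vendor's download page. Filename and digest are made up here.
    import hashlib

    EXPECTED = "da39a3ee5e6b4b0d3255bfef95601890afd80709"  # placeholder digest

    h = hashlib.sha1()
    with open("screenos-6.2.0r15.bin", "rb") as f:  # hypothetical filename
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)

    print("OK" if h.hexdigest() == EXPECTED else "MISMATCH - do not install")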
I applaud them for disclosing all of this. That could not have been an easy thing to have to do.
Your government is illegally modifying commercial software so they can spy on you without warrants. Your government is doing this through illegal breaking and entering, or by paying people to defraud their employers, or by using extortion to force people to do these illegal acts.
Your government put CISA (Warrantless Wiretaps) into the budget bill.
Wake up.
Vote against any elected official that supports these things. Tell your elected officials you want your privacy and you will work to put them out of office if they don't defend it.
"It's not clear how the code got there or how long it has been there. An advisory published by the company said that NetScreen firewalls using ScreenOS 6.2.0r15 through 6.2.0r18 and 6.3.0r12 through 6.3.0r20 are affected and require immediate patching"
If they use a Subversion/Git code repo to maintain their codebase, they should be able to track down who wrote the code and when.
Not if the version control system itself was compromised; any audit trail could itself have been tampered with to hide traces of who really made the change.
Or if by "unauthorised" they mean "via unauthorised use of an authorised account" - i.e. one of their dev team had their account hacked.
Even *when* could be difficult to be confident about, never mind *who*, especially if the event happened quite some time ago; the amount of other information available for forensic analysis may be minimal by now (network logs have probably been archived off, maybe to /dev/null).
You could have them do it, but it's just going deeper down the rabbit hole. The eventual question is "who/what do you trust?" - maybe it was the git server that got pwned?
A PGP-signed commit with a key generated on a smartcard (and never exposed) is a little better, but... someone pwned RSA before, and I'd be surprised if Gemalto and Yubico (just two examples) don't have some Three-Letter-Agency backdoor (and I'm sure those TLAs have equipment that can read modern smartcards).
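Still, signature auditing is cheap enough to be worth doing; here's a rough sketch of sweeping a repo for commits whose signatures don't verify (assumes git and gpg are installed and the maintainers' public keys are already in your keyring, and that you run it from inside the repo you're auditing):

    # Sketch: walk the repo's history and flag commits whose GPG signature
    # does not verify. `git verify-commit` exits nonzero for unsigned or
    # badly-signed commits.
    import subprocess

    revs = subprocess.run(
        ["git", "rev-list", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.split()

    for rev in revs:
        result = subprocess.run(
            ["git", "verify-commit", rev],
            capture_output=True,  # gpg details land on stderr; we want status
        )
        if result.returncode != 0:
            print(f"unsigned or unverifiable commit: {rev}")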
Should we take that as a dysphemism for "code that wasn't security-reviewed by someone who should have been Cc'd on the review" instead of the much more obvious "malicious commit, either from an employee or an attacker"?
If it was an open-source library that was imported, there would most likely be a link to the CVE affecting that library, and that CVE would've been updated to announce that it affects additional systems (JunOS/ScreenOS). This would usually not trigger a completely new CVE being issued (e.g. Heartbleed and Shellshock, whose CVEs were updated for weeks and even months as new affected systems were discovered).
The "unauthorized code" also introduced two separate and unrelated vulnerabilities: one which allows you to bypass authentication by some means (logs you in as a SYSTEM user), and another which allows you to decrypt VPN traffic.
The overall phrasing ("knowledgeable attacker"), the fact that a fresh CVE was issued, and the fact that two unrelated but very specific vulnerabilities were introduced into the system make me think that this was more intentional than just an issue with importing code from a 3rd party.
Then all Juniper code should be thought of as tainted. It's really as simple as that. Juniper has announced that everything they have released cannot be trusted.
EDIT:
> it was an open source library that was imported there would be a link to the CVE affecting that library
That would only be the case if it was an error in the library that caused this, and not the way it was used.
I just do not see Juniper coming out and so casually saying, "Our source code was clearly compromised, and this is the one instance of them changing our released code that we found."
If it was poor implementation that's not unauthorized code.
Also, I don't remember the last time "unauthorized code" was used to describe the cause of a vulnerability. Code being committed without undergoing the full code review and compliance process is quite a common occurrence, and a common cause of security vulnerabilities, especially ones that are easily caught by static code analysis.
The phrasing, the very specific nature of the vulnerabilities, the "knowledgeable attacker" requirement (meaning you can't just fuzz your way into it like any other zero-day), and the fact that some of the published Snowden documents mention an NSA-specific backdoor for Juniper firewalls all make me think that this wasn't an internal process failure.
If the process had failed, we would've gotten an advisory at most, without any specifics. The fact that they intentionally mentioned that unauthorized code managed to get in there is almost like a canary: they said they've been breached without saying it outright.
That is certainly a compromise attempt, but I wouldn't call it 'actively compromised' - it looks like a secondary CVS mirror repo was pushed to, and it was noticed "quickly". No damage done there.
Technically, this announcement also stops short of saying that any source code was modified. It just says that they "discovered unauthorized code in ScreenOS that could allow a knowledgeable attacker to gain administrative access to NetScreen® devices and to decrypt VPN connections".
We all interpret that as a hacker having placed a backdoor there, just as we interpret the Operation Aurora announcement as the Chinese government placing a backdoor in GMail and the NYTimes article as the NSA placing a backdoor in Huawei routers. We are probably not wrong in this assumption. But the same CYA deniability is in all three.
Not sure why you're being downvoted. It's not unprecedented, but I also don't think a lot of the companies that get hit by something like this talk about it so publicly.
"Unauthorized" seems strangely vague - does that suggest something was released without code review, or that an attacker actually managed to get something into their codebase?
An exploit in ScreenOS this large, combined with some of the things the NSA has been up to over the past few years... the coincidences are a little scary. Looking at the leaked catalog of NSA exploits, you find that the NSA had a backdoor for Juniper equipment called FEEDTROUGH:
> In the case of Juniper, the name of this particular digital lock pick is "FEEDTROUGH." This malware burrows into Juniper firewalls and makes it possible to smuggle other NSA programs into mainframe computers. (http://www.spiegel.de/international/world/catalog-reveals-ns...)
Lots of network devices don't do any kind of real validation of the updates they get from a remote server, beyond using SSL, which only checks that the origin is correct.
A while ago I accidentally stumbled upon the file servers that served updates for a major router manufacturer. The host vulnerability itself was reported and fixed, but I doubt they told their clients (the router company) anything.
In that case, anyone who found the vulnerability could have modified the binary blobs, I wonder if this is a similar case.
It sounds much more like someone made their own "contribution" to the source code.
For JunOS, updates are digitally signed (and verified when installing) so any modifications would have had to occur before the images were built, but I really can't remember if ScreenOS updates are signed or not (it's been a long time).
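For the curious, verifying a detached signature over an image looks roughly like this (a sketch using the `cryptography` package; the RSA/SHA-256 scheme, key file, and image names are my assumptions for illustration, not Juniper's actual setup):

    # Sketch: verify a detached signature over a software image before
    # installing it. RSA + PKCS#1 v1.5 + SHA-256 and all file names are
    # assumptions, not Juniper's real signing scheme.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    with open("vendor-pubkey.pem", "rb") as f:  # hypothetical key file
        pubkey = serialization.load_pem_public_key(f.read())

    image = open("junos-image.tgz", "rb").read()          # hypothetical image
    signature = open("junos-image.tgz.sig", "rb").read()  # hypothetical sig

    try:
        pubkey.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
        print("signature OK - trustworthy as far as the key itself is trusted")
    except InvalidSignature:
        print("BAD SIGNATURE - refuse to install")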
Heh, I found such a thing once for modems. It wasn't writable, but for a device that's not supposed to have public firmware files, it was interesting. It included old versions, and probably debug versions and internal docs too, but that stuff scares me too much to look at more than the initial file listing that happened to be indexed by Google, heh.
I don't know their organization/development structure, but couldn't it also mean (for example) that an overzealous intern committed where not authorized, and it passed into a production release? It's early to tell, and hard to tease out from Juniper's message. On the one hand, one might say they'd like to see more info from Juniper, but on the other, it's courageous of Juniper to be as forthcoming as they are.
I guess the question is: "why qualify the bug at all?" and work backwards. If it's a bad actor that did this, did they qualify it because the effects are so heinous they don't want to take responsibility for the code (but admit to being hacked)? If it were an intern, they are still disavowing the code, but admitting something slipped through audit cracks. What's worse, or what's the motivation (and cost) for qualifying the flaw? I don't know the answer, just putting out a question.
Can someone explain how the code is able to decrypt VPN traffic? I'm no expert on VPNs but I thought they provide end-to-end security and the protocols could detect tampering?
There is speculation it compromised the cryptography used for VPN traffic, enabling someone with access to that traffic to decrypt it through brute force:
Some VPNs terminate SSL at this point. So the connection from the client to the server is encrypted, but the internal network traffic in the data center is sent unencrypted on the private network.
There's no proof at this stage that a government agency is behind this. It could easily have been an employee inserting this code in an attempt to blackmail the firm, or perhaps to gain financial advantage by learning corporate secrets that would allow them to beat the stock market.
Hopefully there are source-control logs that show when this alteration was made and by whom, but given how hardware companies treat software I doubt it.
tl;dr: Operation Aurora was a series of attacks in 2009/10 where Chinese attackers targeted the SCM of major companies. Juniper may have had its SCM polluted without going through normal review processes.
And so signing code patches looks like a good idea.
I meant that the implication seems to be that the code got there not by an authorised committer adding bad code, but by an external party adjusting the SCM - the conversation seemed to be heading off into the wilds of "code review practices".
I think a lot of the focus here is on technical penetration of organisations. Much easier in many cases to just do a human intelligence penetration of an organisation to put the code in place.
It was a best-seller before Juniper started using JunOS on their new firewalls (the SRX), and it continued to sell for a while due to being rock solid and able to do pretty much anything in a fairly easy way.
I remember working with my first SRX in 2009, and nowadays I still talk to customers rocking ScreenOS devices. I've lost any trust in them due to this news, of course, but boy, their OS and feature set were awesome... I liked them so much that I bought a tiny NetScreen 5GT a few months ago for £10, only for it to gather dust on my desk for weeks until I gave it away to a colleague who had a better use for it :)
Anyway, I digress. They were everywhere and they're probably still around in many SMBs.