pcNowhere
The BBC on the latest corporate failure:
Symantec initially said there was no risk to users as the stolen code was six years old, advising simply to make sure the most recent version of the products had been downloaded.
But the updated advice said the stolen material had included blueprints for Norton Antivirus Corporate Edition, Norton Internet Security, Norton SystemWorks (Norton Utilities and Norton GoBack) and pcAnywhere.
Of those products, only pcAnywhere is said to be at “increased risk”, and users of the other software packages should not be concerned.
Ah, yes, don’t worry, be happy … until they update their advice again.
This is a company that sells security products, and their system was hacked. They stored source code on a machine the hackers were able to reach. That is not good practice.
4 comments
One issue with large companies like Symantec, or really any company with more than a dozen engineers, is that access to source code *has* to be on a server reachable via the corporate network; you can’t physically put the source code server in a developer’s cubicle anymore and run wires directly from every workstation to it. At my current employer, for example, our source code server sits in the development lab alongside a bunch of other servers for building various versions of our software, maintaining internal infrastructure (wikis and such), and so forth. There’s one fiber-optic run going from that lab to the corporate network, and one wire in each developer’s cube, from the corporate network.
And once things are hooked up to the corporate network… well. Salesmen will plug *anything* into their laptops. Walk around outside any corporate office building, or inside any trade show, tossing USB keyfobs out at random, and salesmen will scurry to grab ’em off the ground and stick them into the nearest handy USB port to see what’s on them. At which point, if autorun is enabled, their laptop is *owned*.
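For anyone who hasn’t seen the trick: a dropped keyfob usually needs nothing more sophisticated than an autorun.inf file in its root directory. A minimal sketch of that classic config (the payload name is made up, obviously):

    [autorun]
    ; On a Windows box with autorun enabled, inserting the device
    ; runs, or offers to run, the named program.
    open=payload.exe
    ; Disguise the prompt as something innocuous.
    action=Open folder to view files

Microsoft has lately been disabling autorun from USB storage by default, but an unpatched sales laptop is exactly the machine that hasn’t gotten that memo.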
So the question is not, “how did this happen?” We know *how* it happened. Somebody did something stupid, a zero-day exploit zapped ’em, and then that exploit was used to compromise other systems on the local network until the attackers finally reached the source code server. The real question is, *WHEN* did this happen? I don’t buy that this happened six years ago. My best guess is that *all* of their *current* source code is floating around on the Russian hacker markets right now.
And BTW, having a private key embedded in your source code is bad show, bad show… it’s ridiculous that they did this. They’re just lucky that man-in-the-middle attacks are devilishly hard unless you can compromise a target’s DNS or routing tables to point at the man in the middle, or physically insert your man into the middle. In any event, having the source code floating around out there won’t, in and of itself, make your software insecure, or Linux would be the most insecure software, like, evah. You have to do something stupid to make your software insecure… like, say, embed a private key into it that can be used for a man-in-the-middle attack. Just sayin’ ;).
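To make the anti-pattern concrete, here’s a minimal Python sketch. It is not Symantec’s actual mechanism: it stands in a shared HMAC secret for the RSA private key so it runs with the standard library alone, and every name in it is hypothetical.

    import hashlib
    import hmac

    # The anti-pattern: a secret baked into the source tree, so it
    # ships identically with every copy and leaks with the source.
    EMBEDDED_SECRET = b"same-secret-in-every-shipped-copy"

    def tag_command(command: bytes) -> bytes:
        """Authenticate a remote-control command to the peer."""
        return hmac.new(EMBEDDED_SECRET, command, hashlib.sha256).digest()

    def verify_command(command: bytes, tag: bytes) -> bool:
        """Accept the command only if the tag checks out."""
        return hmac.compare_digest(tag_command(command), tag)

    # Anyone holding the stolen source can compute valid tags and
    # impersonate either end of the connection:
    forged_tag = tag_command(b"open remote shell")
    assert verify_command(b"open remote shell", forged_tag)

The same logic applies to an embedded private key: the secret’s whole job is to distinguish you from an attacker, and it can’t do that once the attacker has a copy of it.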
– Badtux the Security Geek Penguin
I’m just used to working with people who compartmentalize everything and require permissions. The best university systems do that too, so it is very difficult to access administrative information unless you are using the correct password on the correct machine. Of course, the system fails when people don’t shut down when they leave the office, or keep their passwords on a Post-it note stuck to their machine.
The software may be six years old, but the intrusion was recent. I can’t believe they settled for old source code. I have never trusted Symantec, but my history with them goes back a long way and involves people who have been screwed over by them.
I guess I expect higher standards from people who sell ‘security’ software.
Of course, it is always possible that they weren’t hacked, and the software simply left with a disgruntled former employee. No way of knowing for sure without the appropriate logs to scan.
Err, well, ask City College of San Francisco about just how secure college networks are. (You’ll probably be hearing about that one shortly; I saw some Twitter references earlier today, did some investigating, and it makes this situation look like a game of cribbage.) At one point in time, college networks were a major source of spam, both from compromised Windows machines on those networks and from open relays galore on uncompromised but misconfigured systems. It is only recently that they’ve started getting serious about at least firewalling outgoing port 25 so it can’t spew to the Internet.
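If you want to check whether a given network actually does that, a quick probe from a host on it will tell you. A minimal Python sketch, where the target host and timeout are arbitrary choices for illustration:

    import socket

    def outbound_smtp_allowed(host: str = "smtp.example.com",
                              timeout: float = 3.0) -> bool:
        """Try to open a raw TCP connection to port 25.

        On a network that firewalls outgoing SMTP, this should be
        refused or time out rather than connect.
        """
        try:
            with socket.create_connection((host, 25), timeout=timeout):
                return True
        except OSError:
            return False

    print("outbound port 25 open:", outbound_smtp_allowed())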
Best practice is to have separate administrative and student networks, with the administrative network not having access to the Internet. We could never get any of our educational clients to go for that, though. “You mean we have to put workstations dedicated *solely* to administrative tasks in various places? And have *two* sets of network wiring?!” They simply refused to spend the money needed to segment their networks. GAH! The stupid! It burns, it burns!
The only salvation for universities is that typically they have their administrative data on IBM mainframes. IBM mainframes that are connected to the corporate network only via 3270 emulators. So it is very rare that administrative data gets compromised at colleges, just as it is very rare that banking data gets compromised at banks (for the same reason — it’s all on a mainframe that’s not Internet-connected). But mainframes aren’t a very good development environment for developers of PC software, heh.
And no, I don’t believe Symantec when they claim that it was only the six-year-old code that was stolen. Like I said, I’ll bet you a new photo of The Mighty Fang gleaming under the lights that Symantec’s *current* code is currently being hawked on various Eastern European hacker sites as we speak…
For a while I was sending spam info to university network admins for takedowns. It wasn’t just the e-mail; they were hosting the referenced sites without knowing it.
We had a Token Ring coax admin network, and then the Cat 5 Ethernet side. We turned grades in to the department secretary, as she had the only terminal on the IBM ring in the department. With suspended ceilings, pulling cable was no big deal, and the support racks were already in place. The maintenance guys had put a pulley system in place for the long runs, so ‘fishing’ was kept to a minimum.
There are always people who value convenience and money over security, and it only takes one incident to show them their error.