We had a token ring coax admin network, and then the Cat 5 Ethernet side. We turned grades in to the department secretary, as she had the only terminal on the IBM ring in the department. With suspended ceilings, pulling cable was no big deal, and the support racks were already in place. The maintenance guys had put a pulley system in place for the long runs, so ‘fishing’ was kept to a minimum.
There are always people who value convenience and money over security, and it only takes one incident to show them their error.
Best practice is to have separate administrative and student networks, with the administrative network not having access to the Internet. We could never get any of our educational clients to go for that, though. “You mean we have to put workstations dedicated *solely* to administrative tasks in various places? And have *two* sets of network wiring?!” They simply refused to spend the money needed to segment their networks. GAH! The stupid! It burns, it burns!
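Even where a physically separate second wire was off the table, a single gateway rule denying the administrative subnet any Internet forwarding would have been a start. A rough sketch, assuming (hypothetically) the admin network lives on 10.10.0.0/24, the student network on 10.20.0.0/24, and eth0 faces the Internet — the addresses and interface name are made up for illustration:

```shell
# Hypothetical subnets and interface -- adjust to the real topology.
# Drop anything from the administrative subnet headed out the Internet uplink.
iptables -A FORWARD -s 10.10.0.0/24 -o eth0 -j DROP
# The student subnet gets out as usual.
iptables -A FORWARD -s 10.20.0.0/24 -o eth0 -j ACCEPT
```

It's no substitute for real segmentation, but it costs nothing but a few minutes at the router.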
The only salvation for universities is that typically they have their administrative data on IBM mainframes. IBM mainframes that are connected to the corporate network only via 3270 emulators. So it is very rare that administrative data gets compromised at colleges, just as it is very rare that banking data gets compromised at banks (for the same reason — it’s all on a mainframe that’s not Internet-connected). But mainframes aren’t a very good development environment for developers of PC software, heh.
And no, I don’t believe Symantec when they claim that it was only the six-year-old code that was stolen. Like I said, I’ll bet you a new photo of The Mighty Fang gleaming under the lights that Symantec’s *current* code is currently being hawked on various Eastern European hacker sites as we speak…
I’m just used to working with people who compartmentalize everything, and require permissions. In the best university systems, that also takes place, so it is very difficult to access administrative information unless you are using the correct password on the correct machine. Of course, the system fails when people don’t shut down when they leave the office, or have the passwords on a post-it note on their system.
The software may be six years old, but the intrusion was recent. I can’t believe they settled for old source code. I have never trusted Symantec, but my history with them goes back a long way and involves people who have been screwed over by them.
I guess I expect higher standards from people who sell ‘security’ software.
Of course, it is always possible that they weren’t hacked, and the software left with a disgruntled former employee. No way of knowing for sure without the appropriate logs to scan.
And once things are hooked up to the corporate network… well. Salesmen will plug *anything* into their laptop. You walk around outside any corporate office building or inside any trade show and throw USB keyfobs out at random, salesmen will scurry to grab’em off the ground and will stick them into the nearest USB port handy to them to see what’s on them, at which point if autorun is enabled, their laptop is *owned*.
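The autorun trick relied on Windows honoring an `autorun.inf` file at the root of removable media. A sketch of what such a file looks like — the payload filename is hypothetical, and this is the classic pre-mitigation form of the attack:

```ini
[autorun]
; Executed automatically on insert when AutoRun is enabled
open=payload.exe
; Cosmetics that make the entry look legitimate in the AutoPlay dialog
icon=payload.exe,0
action=Open folder to view files
```

Microsoft eventually disabled AutoRun for USB media in later Windows updates precisely because of this, but at the time one curious salesman was all it took.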
So the question is not one of, “how did this happen?” We know *how* it happened. Somebody did something stupid, a zero-day exploit zapped’em, and then that zero-day was used to compromise other systems on the local network until access to the source code server was finally achieved. The question is more one of, *WHEN* did this happen? I don’t buy it that this happened six years ago. My best guess is that *all* of their *current* source code is currently floating around on the Russian hacker markets.
And BTW, having a private key embedded into your source code is bad show, bad show… it’s ridiculous that they did this. It’s just lucky that man-in-the-middle attacks are devilishly hard unless you can compromise a target’s DNS or routing tables to point at the man in the middle, or can physically insert your man into the middle. But in any event, having the source code floating around out there in and of itself won’t make your software insecure, or Linux would be the most insecure software, like, evah. You have to do something stupid to make your software insecure… like, say, embed a private key into it that can be used for a man-in-the-middle attack. Just sayin’ ;).
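That particular blunder is also one of the easiest to catch before shipping: just scan the tree for PEM private-key headers. A minimal sketch in Python — the marker regex and file handling are simplified for illustration, and real secret scanners do much more:

```python
import re
from pathlib import Path

# PEM headers that betray an embedded private key.
# Illustrative only -- a real scanner checks many more formats.
KEY_MARKERS = re.compile(
    r"-----BEGIN (?:RSA |EC |DSA |OPENSSH )?PRIVATE KEY-----"
)

def find_embedded_keys(root: str) -> list[str]:
    """Return paths of files under root containing a PEM private-key header."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip it
        if KEY_MARKERS.search(text):
            hits.append(str(path))
    return sorted(hits)
```

Run that in a pre-commit hook and a hardcoded MITM key never makes it into the repo in the first place.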
– Badtux the Security Geek Penguin