
Beware Balloon Juice

A heads up on visiting Balloon Juice – my ESET antivirus software just had a hissy fit over something in a .jpg file on the site and told me to leave immediately. I went there from Atrios on a link about Brooks.

It is the first time that ESET has reacted to a site like that, so I didn’t take the time to get the threat’s full name – I just left. I would note that Firefox told me that there was something on the site that required an additional plug-in, which may or may not be related. ESET recognized the specific type of threat, so it is something in the virus database, not just a “bad feeling” caution that it sometimes issues when encountering something new.

Also, Java popped up in my tray, so something was attempting to run more than a script.

44 comments

1 Kryten42 { 06.14.11 at 9:43 pm }

Oh… fun with embedded code? 😉

There have been several attempts to embed a script within a JPEG since around 2004, but most have been unsuccessful. However, there are a couple that can exploit a vulnerability in some (older) versions of Java. I read an interview with a (retired) virus maker that mentioned JPEG-embedded virii.

Interview with a Virus Maker

It’s one of the reasons I run NoScript and configure it myself when using FF, and also Adblock to block known malicious sites or domains. 🙂

Make sure you delete your browser cache, just to be certain. 😉

2 Bryan { 06.14.11 at 9:47 pm }

ESET does a great job at blocking, and I dump everything when I shut down. It takes a bit longer to boot up, but I don’t have to worry about cookie tracking or anything else from session to session.

3 Badtux { 06.15.11 at 1:47 am }

Kryten, jpeg exploits are indeed possible, but they only work with unpatched Windows operating systems and (theoretically) unpatched Linux operating systems on processors that don’t have NX bit protection. They are variants of ye olde stack smash buffer overflow, caused by the fact that programmers insist on a) storing data on the stack, b) using languages that don’t bounds-check it, and c) never bounds-checking it themselves, because surely the data hunk for a JPEG could never be longer than what its header said it should be, right? Firefox was indeed a vector for one of these exploits. If you want secure, use Chrome on Windows or Mac OS, or Safari on Mac OS Snow Leopard, because those all sandbox scripts and programs more tightly than Firefox or IE. (Note that Safari on pre-Snow Leopard Mac OS or on Windows is as vulnerable as Firefox or IE; Apple changed how they sandboxed scripts in the 64-bit Safari for Snow Leopard.)
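To make that bug class concrete, here’s a minimal sketch of the unchecked copy that makes these exploits possible – purely illustrative, not code from any real JPEG library:

/* Illustrative only: the shape of the classic stack smash described
 * above, not taken from any actual image parser. */
#include <string.h>

#define MAX_SEG 64          /* "no segment is ever bigger than this" */

void parse_segment(const unsigned char *file_data, size_t bytes_in_file)
{
    unsigned char buf[MAX_SEG];     /* fixed-size buffer on the stack */

    /* BUG: trusts the file. If bytes_in_file > MAX_SEG, this overruns
     * buf and overwrites the saved return address on the stack. */
    memcpy(buf, file_data, bytes_in_file);

    /* The one-line fix (the bounds check nobody wrote):
     * if (bytes_in_file > sizeof buf) return; */
}

An NX bit stops the classic payload from executing off the stack, which is why the unpatched-and-no-NX qualifier above matters.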

My guess is that it’s a jpeg file provided by an advertiser to the advertising network that BJ uses. Advertisers on that network are rarely technically adept and spread viruses like a 2nd grader. Which advertiser? I dunno, I don’t go to BJ enough to care.

4 Rook { 06.15.11 at 11:44 am }

I read somewhere that a majority of image files that pop up in Google searches are infected with malicious code.

5 Bryan { 06.15.11 at 12:05 pm }

The advertiser-supplied theory sounds right, Badtux, because I had already read the post while things continued to load, and then the bells and whistles went off. ESET isn’t as intrusive or as thorough as Kaspersky, but to me the extra protection isn’t worth the overhead. I definitely recommend Kaspersky if you have teenagers using your system, or if you visit game sites.

I don’t know about a majority, Rook, but it would certainly be a useful vector if I were trying to spread something. Combined with some cheap ad buys, you could make life miserable for a lot of people who wander around the ‘Tubes’ randomly clicking on things from unprotected computers.

6 Kryten42 { 06.15.11 at 1:29 pm }

Yes, I know, Badtux. 😉 And you are correct about Chrome, and that an unsecured Firefox is a risk.

Personally, I prefer ‘Pale Moon’ to Firefox. It is much faster: Firefox is not optimized for recent hardware (anything since the Pentium IV days) and carries a lot of legacy and redundant code, supposedly for compatibility with older systems.

The Pale Moon Project

I also prefer SRWare Iron to Google’s Chrome. Iron is again optimized, and has things Chrome doesn’t, like a configurable ad-blocker, and they have removed the nosy bits of Chrome. 😉 🙂

SRWare Iron

They have a feature comparison against Chrome that explains the differences.

I have licenses for several AV and other anti-malware apps from when I ran my security company not long ago. I used to do reviews and a lot of testing. My favorite four integrated security apps are:

BitDefender Total Security 2011 (the most consistent of all the apps I’ve tested); they also have a very good Linux AV system, for which you can get a free 1-yr license.

ESET Smart Security 4

Dr.Web Security Space Pro 6

Trend Micro Internet Security 2011

Another decent one is Comodo Internet Security Pro 2011

I use BitDefender Total Security 2011, backed up by Dr.Web AntiVirus 6 (on demand only). You can’t run two complete security suites: you can only have one S/W firewall in windoze! You can run two AV scanners, so long as one is on-demand only (and only some of them; some are not compatible and will refuse to work with any other). I haven’t heard a peep from my security system (except for the firewall, of course) for some time, because I have my browsers set up carefully and use Java/Flash/ad blockers, properly configured – I don’t want to block everything, or block sites I actually use.

Curiously… I have found the free (to reg’d Win users) M$ Security Essentials to be quite good also (not so amazing really… It’s based on BitDefender.) 😆

For added anti-trojan/worm etc defense, I use Trojan Remover & Hitman Pro. For eMail security, I use Firetrust MailWasher Pro 2011.

I used to use a very good IDS (Intrusion Detection System) when I ran servers on my win system, called BlackIce Server Protection (and BlackIce PC Protection for workstations and clients). But IBM bought out the company in 2007 and killed it off by 2009 (they only have a corporate level IDS product at a massive price, of course!) Bastards! Grrrrrr (Yes! They are on my S/W company hate list for that, along with Symantec for killing off what was the best Win Firewall, Sygate!)

Oh! I also have a very useful piece of software that covers the holes left by the other tools, with the added advantage of minimizing damage caused by badly coded software (and there is a lot of it), called WinPatrol PLUS. 🙂 It’s a nice, small, well-coded app that’s been around for more than a decade. 🙂 It’s free to use, but has some enhanced features when registered.

And that’s all folks! 😉 😛

7 Bryan { 06.15.11 at 7:34 pm }

Yeah, Kryten, but you go out looking for trouble.

Re IBM, they buy software companies for one of two reasons, generally. Most often they buy them [and any number of patents as well] to kill them, to make them go away, because the products will interfere with IBM’s PLAN [all hail the PLAN].

If the product fits in with the PLAN, then they want people to get it as part of a hardware lease, not as individual end users. They still have the main frame business model, and have thrown away profitable divisions because they were perceived as conflicting with the PLAN.

I try to avoid questionable sites whenever possible, and shut down when things start to look weird, so ESET has done the job on this machine.

Now, when I was in school and needed to use their equipment, I had to check everything that went into my machine from there. Computer labs at universities are absolute cesspits.

8 Kryten42 { 06.15.11 at 10:45 pm }

LOL You’re right about that, Bryan! 😉 I like to *live on the edge* 😉 Old habits, yadda… Actually, I want to start up my business again, and I’m working towards that. So I have to keep abreast of old and new tech and trends, and it gives me something useful to do. It’s either that, or leap off a cliff out of sheer and total boredom and feeling completely useless! 😉

My systems here aren’t really at risk… When I play with nasty malicious code (and I have a HUGE library of it, very carefully locked away in a securely encrypted s/w vault on an external HDD, which even has a red case! 😆), I use a sandbox on an isolated PC for anything that might be considered… foolish or dangerous. I haven’t had a mishap in years (though I have come close a couple of times because I was in a hurry and ignored my own rules!).

I use Jetico BestCrypt Volume Encryption s/w on the drive (and on my secure storage partition on my RAID). It encrypts at the HDD sector level, not just the file level, and uses multiple algorithms. If I ever lose my long password, I’m never getting any of that data back! 😉

The other tool I use (on Win) is a brilliant small piece of s/w called Sandboxie. 🙂 Very useful for when I’m feeling dangerous and don’t want to load up VMware and a big virtual Win container! Sandboxie is also very useful when I want to try a piece of s/w and know exactly what it’s installing and doing to my system (windoze is easily corrupted by bad code). I have full images of my windoze systems, updated weekly, but they take time to install, and I lose up to a week of any changes I made (all data is on another drive or partition of course, even “My Docs” etc.; I even use a small partition for ‘temp’).

Most people really don’t realize how easy it is to get into trouble on the ‘net! One of the simplest ways is misspelling g-o-o-g-l-e.c-o-m as g-o-g-g-l-e.c-o-m (double ‘g’ instead of double ‘o’)! That site looks like the real one, but it exists for identity theft and stealing financial info! (Be careful if you decide to have a look!)

9 Bryan { 06.15.11 at 11:26 pm }

People are so bloody gullible – it drives me nuts. I look at the tagged phishing e-mails occasionally, and they really are so pathetic. It’s not just the blatant non-native English, it’s the stupid way that they are constructed. My filter isn’t exactly a work of art, and only checks for a few things, but I can’t remember the last time one got through. The web site spoofing is so lame.

I really don’t put a lot of effort into avoiding this stuff, and it doesn’t require a lot of effort. The truly annoying stuff is the garbage forwarded by family. I already know there is something nasty waiting at any link they send, despite extended harangues about using more caution and getting better AV software. It is really tiresome to have to make phone calls to tell people that their machines have been hacked and their e-mail addresses are being used to spam everyone in their address books.

OTOH, they no longer call me when it happens. They know that I would help them, but remind them at every step that if they had done what I told them to do, this would never have happened. My compassion runs out with the third incident.

10 Kryten42 { 06.16.11 at 12:36 am }

Yeah… same with me. 😉 I rarely get *free* support calls these days! I never let them forget it. 😈

One of the simplest ways to protect yourself online is to simply use the HOSTS file. I currently have over 16,000 sites redirected to ‘localhost’ that are either known to be malicious, or just a PITA! I get regular updates from a security mailing list I’m on (one of several, actually). There are a couple of good HOSTS managers around; a good free one for Win is HostsMan. (I also use the B.I.S.S. BlockList Manager and Hosts Manager.) I mainly use BISS for blocking bad peers and worthless IPs for BitTorrent. An added advantage of using the hosts file is that it speeds up browsing a lot and cuts down on worthless bandwidth usage (not that it’s a problem for me, I have a 500GB/Mth allocation and can upgrade that to 1TB for an extra $20/Mth). It’s a very old and simple trick, and one that works well. 🙂

It’s also one other reason I use WinPatrol PLUS. If a site or some malicious code tries to change my HOSTS file (and many do!), Scotty, my ever-faithful Win watchdog, won’t allow it! 😆 (I should explain Scotty, I guess. 😀 Scotty is the little black Scottish Terrier mascot that’s the logo/icon for WinPatrol. When it’s active and I mouse over the tray icon, I get a little popup (easily disabled if one wants) with the little Scotty avatar moving along looking intently around, and the text “Scotty is currently on Patrol!” If something tries to do something naughty, Scotty barks and pops up to let me know. It’s better than that monotonic ‘beep’ most others use! I know… it’s kinda cutesy… but I like it! It’s a little bit of harmless fun! I think more developers are seriously in need of a sense of humor, if you ask me!) 😉 😆

OT & PS: Sorry I haven’t had a chance to dig out or create those font catalogs yet; things have been hectic and I just got over a bad cold (it’s winter… what’s new?). I will do it, if for no other reason than I need to do it for myself anyway! I have to get back into *design* mode and make some money! 😆

If you want my hosts list (even if just out of curiosity, it’s well commented of course!) I can u/l that in a minute. That’s an easy one! 😆

11 Kryten42 { 06.16.11 at 1:02 am }

BTW, if you want to block the possibility of accidentally going to that g-o-g-g-l-e.c-o-m spoof site, just add it (and w-w-w.g-o-g-g-l-e.c-o-m) to the hosts file (pointing to 127.0.0.1 of course). FYI, the IP of that malicious domain is: 174.120.244.218
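The entries themselves are trivial – the loopback IP, some whitespace, and the hostname, one bad domain per line (the last entry below is only a placeholder to show the pattern, not a real entry from my list):

# Windows: %SystemRoot%\System32\drivers\etc\hosts – Linux: /etc/hosts
127.0.0.1    goggle.com
127.0.0.1    www.goggle.com
127.0.0.1    bad-domain.example       # placeholder: one line per known-bad site

Anything pointed at 127.0.0.1 resolves to your own machine, so the browser never even contacts the bad site.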

And, another BTW… many of the supposedly *infected* .jpg files are actually .html, .js or other browser-executable script files with the extension changed to .jpg. Many browsers will *ignore* the extension and check the header, and if the browser recognizes it, it will try to execute it. The most recent versions of the browsers (or NoScript in Firefox/Pale Moon) will stop that from happening. This is a good reason why it’s important to get the Win/browser security updates. Here’s an example (I’ve mangled it with ‘#’ so it won’t work!):

Fotki Marioli!

// Variables identifying the browser:
var nazwa = navigator.appName;
var ekran = (typeof(screen)=="object") ? screen.width : null;
var wersja = parseFloat(navigator.appVersion);
var msie = nazwa == "Microsoft Internet Explorer";
var nn = nazwa == "Netscape";
var inna = !(msie || nn);

if (msie) {
// If this is MSIE, the version was detected incorrectly.
var ws = navigator.appVersion;
wersja = parseFloat(ws.substring((ws.indexOf("MSIE") + 5 ),ws.length));
}

f#unction go(gdzie) {
window.location = gdzie
}

// Place the redirect instructions here:

if (msie && wersja>=6) go('6.php')
else go('5.php')

//-->



12 Kryten42 { 06.16.11 at 1:07 am }

Hmmm!! It seems your { code } tags won’t render the mangled bits!! That’s not how that tag is supposed to work! It’s supposed to render in a monofont exactly what is between the tags, not try to execute the code. 🙂 I guess it’s different in WP. *shrug* What else is new? *sigh* You may as well delete that code example bit… no way I am going to put in the un-mangled code! 😈

I also guess I should use ‘Preview’ more often! *sigh*

13 Bryan { 06.16.11 at 4:38 pm }

Tsk, tsk, who would have expected the Poles to be hackers.

There are multiple checks on everything in WP, and especially on comments [the minimum character limit is just one of the things that it does] because they have been used for various exploits.

It is worse for you than for me, because they don’t check my work with the same code that yours is subjected to, which I discovered after commenting while logged out.

It did show ‘f#unction’ so it must have decided you were up to something and ditched much of the original effort. They really are attempting to protect people from themselves. As a sign at one of the offices used to say “Nothing is foolproof – fools are too ingenious”

Don’t get in a rush, Kryten. I have plenty of things breaking around here to keep me occupied.

I see it’s New South Wales’ turn to get flooded while you await the snow.

14 Badtux { 06.16.11 at 10:56 pm }

Nice program, that Sandboxie, Kryten. I helped develop a similar system for Linux: if you wanted to run a program, we’d map you into your own namespace with a write-through union filesystem (even your own shm and network namespaces), and by the end of your session you’d not only not have changed a thing on the real system, but we could tell you exactly which files you *would* have changed :). Alas, like most things security-wise on Linux, the general reception was “why do we need that? Linux doesn’t have viruses and security exploits.” Err? Excuse me? Next thing they’ll tell me is that the Pope isn’t Catholic!
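You can cobble the core trick together from stock kernel pieces these days. A much-simplified sketch – nothing like our actual system, no real error handling, and assuming a kernel with overlayfs (we used UnionFS; same idea). Run as root, with /sandbox/{upper,work,root} pre-created:

/* sandbox.c – minimal sketch: private mount namespace plus a
 * write-through union over one directory tree. */
#define _GNU_SOURCE
#include <sched.h>      /* unshare, CLONE_NEWNS */
#include <sys/mount.h>  /* mount */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Private mount namespace: mounts below are invisible outside. */
    if (unshare(CLONE_NEWNS) != 0) { perror("unshare"); return 1; }

    /* Stop our mounts propagating back to the parent namespace. */
    if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
        perror("make-private"); return 1;
    }

    /* Write-through union: /var/www is the read-only lower layer and
     * every write is diverted into /sandbox/upper. */
    if (mount("overlay", "/sandbox/root", "overlay", 0,
              "lowerdir=/var/www,upperdir=/sandbox/upper,workdir=/sandbox/work") != 0) {
        perror("overlay"); return 1;
    }

    /* Splice the union in where the real tree was (namespace-local). */
    if (mount("/sandbox/root", "/var/www", NULL, MS_BIND, NULL) != 0) {
        perror("bind"); return 1;
    }

    /* Anything this shell (and its children) writes under /var/www
     * lands in /sandbox/upper; the real files never change. */
    execl("/bin/sh", "sh", (char *)NULL);
    perror("execl");
    return 1;
}

Afterward, a recursive listing of /sandbox/upper is exactly the list of files that *would* have been changed.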

Sad to say, security simply isn’t a priority for most people. The current Google Images exploit is to actually send you to a fake antivirus site when you click on an image. The browser functionality used for these fake antivirus sites — basically Javascript hijacking of your close buttons to push you into their site and install malevolent software on your system — has been an issue since Javascript first appeared on the scene in the late 90’s, I remember my boss at the time having an XXX porn site do much the same thing as these fake antivirus sites. Yet browser vendors refuse to remove this functionality from their browsers DESPITE THE FACT THAT IT HAS BEEN A PROBLEM FOR OVER TEN YEARS!!! And despite the fact that it’s not necessary for proper operation of *any* software that I’ve been involved with over the past ten years!

And browsing with Javascript turned off seems to be too hard for most folks. Going to a web site that doesn’t work right, turning on Javascript for that one web site, repeat, wash, rinse – it gets old after a time, and people just slam the full-bore “enable Javascript everywhere” button after a while because they don’t understand technology and don’t understand why they’re using that ‘noscript’ thingy. They know someone told them to use it, so they tried it, but it’s inconvenient, so they don’t use it anymore. Why can’t we make it easy for idiots to protect themselves, instead of requiring them to do arcane things that make no sense to anybody who’s not a geek?

But security just isn’t a priority with geeks, it’s all about flash and cool… sigh. Between geeks who release technology that is one giant security hole, and regular folk who don’t understand geek stuff and don’t / can’t understand the arcane ins and outs of why various things will get them hijacked and virused (various things we should be automatically protecting them from but hey, look over there, it’s flash and wonder!), it’s a wonder that we don’t have botnets with hundreds of millions of computers in them, instead of merely single-digit millions…

– Badtux the Security Ranting Penguin

15 Kryten42 { 06.17.11 at 12:49 am }

LOL Yeah Bryan… whoda thunk it? 😉 😆 My *first true love* was a Polish girl (long ago!), and eventually I was forced to meet the family. I learned over time that her father and brothers were into some *shady* stuff, having to do with distilling potato vodka that could have been used for avgas (just the fumes would strip paint! 😆). They also had some, err… *undeclared* import/export biz going with the *homeland*.

Sadly (to this day), she was killed on my birthday in a hit-and-run. It was one of the motivating forces for my eventually being recruited to an elite military group, and why I was useful for their purposes. Anyway, I learned just what family means to the Poles (like my Italian side). The police couldn’t find the killer. Her Polish family and my Italian half-family (mainly from Calabria; my grandmother was a famous singer there in her youth) – the two *Families* – found the guy interstate in a couple of months. There were no legal formalities, but the families ensured he would never harm anyone ever again. And as far as I am concerned, to this day… Justice WAS done!

Some months before her death, we went to Poland with her family. I enjoyed it very much, and because I understood *how things worked* from my Italian family side, I could see that this Polish family were very well *respected*, and any problems simply never existed, or disappeared as quickly as a look or a handshake. (It’s very useful to have allies like that in a country like Poland.) 😉 Oh! We stayed in a salt mine for just over a week! 😆 Not a place that would normally be on my itinerary as a holiday destination! But it was simply amazing, and very beautiful! Even the chandeliers are made from salt! I’d love to go there again one day. (I had to look this up, I long forgot the pronunciation/spelling.) It was the Wieliczka Salt Mine near Krakow (where her family lived). It’s a UNESCO World Heritage Site now. 🙂

Anyway… sorry, I digressed…

Yeah, NSW is flooding again, and we’ve had snow since May (on and off)! The outer suburb where we lived before moving here to central Vic had snow for the first time in a decade (up in the hills). We even had a morning of -4C in early May, a temp we rarely see until July.

I’ve finally installed my LAMP & WAMP (or LAPP/WAPP – M for MySQL (which I can’t stand!), P for PostgreSQL) stacks. I use BitNami stacks (with Ruby, Tomcat & JBoss), and have installed WP 3.1, Drupal 7 and Joomla 1.6 (which I personally can’t stand for several reasons, one of which is that it is way too tightly bound to MySQL, but many people want their sites to use it and refuse to listen to reason!). And I’m just installing Coppermine, MediaWiki, Liferay, SugarCRM & Magento (and Subversion & Trac, of course, for my own management). 🙂 These stacks save sooooo much time! 😀 I plan to decide whether I want to base my site blog on WP or Drupal; I am leaning towards Drupal for now (Drupal is more of a CMS than WP is, and I don’t like most of the CMS systems out there, like Joomla). I’d rather make my own. 😉 🙂

@Badtux: I agree 200% with all that! I worked on UNIX/Linux for a long time, and during the late 90’s I worked with a well-known Linux guru (who I saw/heard argue regularly with Linus Torvalds, and that was… enlightening!). He worked on Red Hat, Debian, Gentoo, Ubuntu & OS X (creating kernel driver & security patches mostly), and he was one of the global BSD auditors (and a member of the GNOME crew). We heard “Why do we need that?” (or “We don’t need that!”). Anyway, we tried unsuccessfully to change the prevailing attitude in the Linux/UNIX community, and eventually just gave up in disgust. We had a few wins and converts, to be sure… but the major opponents to *peace and harmony* within the Linux community just had too much money and resources for us to significantly overcome the FUD they spread all over. We finally had enough when it was obvious that Novell had decided to go completely over to *the dark side* and become a favored pet of M$, all the while pretending to be anything but (which is the way M$ works). He was one of the people I worked with to develop our own OS & complete hardware & s/w system (I mentioned it in a post a while back; we called them SOS & SOE). 🙂 Now and then I dig out my doc’s etc. and do a little more work on the concept… *shrug* Who knows… Maybe… One day… 😉

I’d be interested in hearing about your Linux equivalent of Sandboxie, Badtux. 🙂 Did you continue with it, or just give up in disgust (which I have done often!)? 😀

16 Badtux { 06.17.11 at 10:32 pm }

Kryten, I had no choice about continuing with it; it’s owned by the investors, not by me, even though my name is one of the names on the patent applications :(. Yes, I had some arguments with Linus back in the mid 90’s also. One of them he won… but eventually lost. By that one, I mean loadable device drivers: I ranted that there would never be widespread use of Linux as long as people had to manually compile their own drivers into a monolithic kernel; Linus insisted that nobody would ever be annoyed by having to do so. But Linus eventually lost that one when Red Hat put Alan Cox on the payroll and basically told him, “we need loadable device drivers so we can auto-probe hardware at system install, do it!” Money talks :).

Anyhow, what we came up with was basically a novel use of the kernel functionality added by OpenVZ, SELinux (used to tag sandboxes; we then dropped privileges so only tagged items in the sandboxes could be written, in case anybody somehow figured out how to break out of the namespace jail), UnionFS, and the recent namespace stuff that’s been added to the kernel. None of this has user-land support of any kind; we had to write a whole infrastructure to manage the kernel functionality, but the kernel functionality is all there, just put in for other reasons. Alas, the general reception we got from potential customers was, “why do I want to do that? Why don’t I just run my software in virtual machines instead?” Err, because then you’re just adding more unpatched operating systems for hackers to exploit, and then you have four virtual machines spewing penis spams instead of just one?

When sufficient time has passed, I may re-implement some of the core concepts and release a simplified version as Open Source. I have to first wait for various legal things to happen though, I’m sure you understand how that works :).

Regarding Drupal, I’ve run a Drupal site before. My issue with Drupal is that it’s very management-heavy. It has infinite flexibility, but with infinite flexibility comes infinite maintenance pain. But it all depends on what you’re wanting to do. If you want a site that combines forums, a blog, a wiki, and pretty much any other Internet interaction you’d ever want, Drupal will do it — but at a cost. The deal with running WP, phpBB, your favorite wiki software, etc. as independent programs is that a) upgrades are much simpler (upgrading Drupal can be a nightmare of getting the correct versions of modules that will work with your upgraded Drupal), b) each of these products works better than the equivalent Drupal module, and c) if you manage to break one of them, the others are still up and running and can be used to communicate the breakage to the community, whereas if Drupal is broken, everything’s down. But pick your poison… managing an Internet site is a PITA whichever way you do it, which is why I let Google do it for me, despite the fact that they break Blogger at least once a year.

17 Bryan { 06.18.11 at 12:56 am }

One of our oldest friends from the military is Polish, while her husband was Hungarian. They didn’t have any children of their own, but they supported their extended families in the ‘old countries’. Talk about ‘unlicensed import/export’ – two ‘Communist countries’ were involved, for decades. The CIA should have been taking master classes from them. They had to know every corrupt border official in Europe to move the stuff they did, including at least one car.

It wasn’t illegal stuff, mostly what we would class as ‘domestic essentials’ but the quantities were amazing. Some of it had to have been for the black market to pay for local essentials, like coal for the winter.

The family ties are very strong in Eastern Europe.

We can all agree that if you have ever worked on the system administration side of any network, you know first-hand why security is essential. It doesn’t take many lost holidays and weekends rebuilding systems from scratch to convince you how important it is. The insanity of not shipping with maximum security and making the user decide to open things up, rather than shipping an open system and making the user work through menus to lock things down, is beyond me.

If people wrote their software under strict security rules, it would be better code, and would actually comply with standards. The current ‘anything that you can get away with’ approach to software is just amazing to those of us who came down from the main frames.

I love Open Source and collaboration because you need other eyes looking at the code to catch the little errors that become big problems. Some of my best code was written in conjunction with a totally non-technical writer who was tasked with the user manual. She needed to know what was happening to write the manual, and I would change the code based on her questions. She was vital to the design of the user interface, the prompts, and especially the error messages. By the time I got to the user interface I was thinking like the program, and was missing obvious points of confusion. I knew what everything meant and couldn’t look at it like a user.

Yeah, Badtux, I know what you mean about “owning it, but”. I have a lot of stuff I own that is tied to things that I don’t own, so I can’t do much with it unless other people release their segments. I don’t need those segments, but my part is readily identifiable as belonging to the whole product. It isn’t worth the hassle.

Corrente is a Drupal site, and it has a lot of wonderful features, but it seems like Lambert is constantly tweaking something, or something begins acting weird.

WP was a reaction to Blogger 2.0 and some major problems when it was introduced. It isn’t perfect, with about half of the problems caused by WP and the other half by the template I use. It works for me, but might drive other people nuts. I will say that they take security seriously. Most of the upgrades are responses to real or perceived security problems.

18 Kryten42 { 06.18.11 at 6:59 am }

Badtux: Oh… yeah, I understand all too well about *ownership*! 🙁 The company I founded with 5 others in ’98 was created to develop a portal/search engine/payment gateway/whatever else we could think of. I’d created a virtual EFT/POS system as R&D manager for a company that did some really stupid things and went bust, and the directors had to leave rapidly or face tough questions. Since they owed me significant money, but had none, they signed over the rights to the payment gateway to me (mostly because I’d developed the necessary h/w to make it work: a PCI board that could emulate up to 255 EFT/POS hand data terminals, so with a standard basic 1U rackmount PC you could have 1020-1620 virtual EFT/POS terminals). 🙂 My friend (mentioned above) saw unlimited possibilities!

Then we met a young fellow who had an idea for a portal/search engine system (and remember, this is well before Google etc.). So we created a company to develop the system. The young fellow didn’t want to be part of a company, but agreed to be a client, and we would make his system work, for rights and a lot of money of course. In 2000 we had a demo system ready, and a high-tech VC (venture capital) company saw it, checked out our backgrounds (fairly thoroughly, it turned out) and made us an offer: $25 mill to start, with a 40% stake in the company (very generous for a VC!), and 2 board members, one a top contracts attorney and one a top marketing person – two things we needed and didn’t have!

Sadly, the young (21) guy who *owned* the idea was a greedy idiot. He refused to give more than 5%, even though a) he had NO money, b) we were going to be doing all the work to make it work, and c) the VC company said the total system (his idea + our virtual payment gateway) was worth $billions! Lawyers got involved, and his own lawyer (a top guy who was a friend of the young idiot’s father) came to us after a particularly heated *discussion* between the three parties and said “I’m sorry! My client is an idiot!” (I swear, that’s what he said, verbatim.) You might know my old friend (and ex biz partner)… Paul Drain (also used to be known as The Funk, or UberFunk). He was a *bit* like you, just younger. 😉 😆 Sadly, sometimes, if a person is stupid enough, even money doesn’t talk! I never understood how someone could be so greedy, and yet fail to see that 95% of nothing is nothing, but 15% (the generous share the idiot was offered) of potentially $billions is a lot! Just proves some people are simply born really stupid!

Anyway… just imagine where we’d be if we’d got our system going back in 2000! Google probably wouldn’t exist today (well, not as it’s known today anyway)! 😉

I think I understand about your system. 🙂 I worked with OpenVZ for a few years. It’s not really a *virtualization* system, more containerization, like the BSD jails concept. I also played with chroot jails a bit. 🙂

Thanks for the info re Drupal. 😀 I plan to play with a few systems over a few months and teach myself the pros & cons before I make any decisions. I did notice that WP seems to have better community/dev support for jQuery, which is useful for me. 🙂

@ Bryan: LOL You’re right about the CIA, and yes, the family I knew weren’t importing anything *illegal* (as such! No drugs or other contraband). They just didn’t want to pay *the greedy gov* all the duties, taxes etc., and wanted to bypass all the ridiculous red tape! They could also sell the goods at much lower prices! 😉

The insanity of not shipping with maximum security and making the user decide to open things up, rather than shipping an open system and making the user work through menus to lock things down, is beyond me.

Yep!! Couldn’t agree more! However… think about all those sys-admins and security consultants (and thousands of wannabes) who would be out of work! 😆 (Not to mention all the revenue bastard companies like M$ get for support from ignorant customers who don’t have a clue!)

When I learned to code many years ago, I was taught to think up all the test cases and develop the tests first, and then write the code and test as I went. 🙂 And I had a similar experience with a tech writer who was working on the user manuals. 🙂 He was a PITA! Always asked questions I’d not thought of! 😉 😆 Actually, we worked well together. 🙂 I always commented my code, and that helped him with the manuals.

Sometimes my comments would make him chuckle, then eventually laugh, and then laugh until tears came! Especially after I’d worked well into the wee hours, fueled by a few gallons of strong coffee, my comments could be seen to become much more… aggressive and aggravated at the people who wrote the OS (guess which one, before W95). Sometimes, towards the end, the comments were longer than the code, as I went into significant description of the OS developers and their shortcomings and relations to various and sundry fauna and flora! 😆 I kept some of them for a laugh. 😉 😆

One day, he came into my office looking all thoughtful, and we had a conversation (something like): “So… this module is basically a fix for this bug in their OS?” “Something like that, I guess.” “Are they paying you to fix their errors?” “Hmmmm! Nooo…” Then, in a very heartfelt way, he said: “Geez… They bloody-well saw you coming, m8!!” I vividly remember this, because my mouth dropped open: this was a fellow who always used precise and cultured language (he had a PhD and was a Professor for a time), and I’d never heard him say anything like that before! (He was an ex-pat Yank, from New England actually, so I guess over the 10 years or so he’d been here, he had absorbed some of our Aussie culture after all!) 😆

And thanks, Bryan, for your comments on WP. 🙂 If you like, I’ll post my findings as I go. 🙂 I finally picked up my 2 new HDDs today, so I can finally set up my RAID and get to work! (I also scored a new KB & mouse at cost price because they stuffed me around on the drives, and I got an extra discount on the HDDs. All discounts are gratefully accepted these days!) 😆

Thanks guys! And good luck Badtux, I really hope you do (when the time is right) make your ideas and work see the light of day. 🙂

19 Kryten42 { 06.18.11 at 7:40 am }

I don’t know if you would find this interesting, or useful… but I wrote a blog post a few years ago about my trials and tribulations in implementing a VPS. I used OpenVZ, Xen & others. (The blog I posted this on has recently disappeared, so I’ll have to paste it here. Sorry.)

I’ve recently found it impossible to avoid being drawn into some quite complex hosting issues for handling sites for clients. Experts in internet related software will be familiar with the problems I have found; many software developers will be less familiar with them. This article is intended to help clarify the process for myself, and those who, like me, approach the use of recent VPS hosting with less than comprehensive knowledge of the area.

Shared hosting has been familiar for some time, and often resource limits on processing or memory have not been particularly apparent to the user of the service. Lately, the use of a Virtual Private Server (VPS) has rapidly gained in popularity. Prices have fallen dramatically, virtualization is trendy and there are some perceived advantages. However, the available resources are rather more tightly controlled than in most shared hosting. Although there are other considerations, my focus here is on questions of resource management in a VPS.

Before I go into specific details of the issues though, let’s consider the gains to be made through the use of a VPS. One gain is said to be having root access to the server, or at least to the virtual server that is being rented. This is obviously a mixed blessing. It does mean that many aspects of the server can be tuned to suit the purpose for which it is being used, and the user of the VPS is free to make changes without considering anyone else. The downside is that you need to know what you are doing in order to benefit from this, and a good number of people buying a VPS have little or no idea where to start. Significant time and effort needs to be spent to see any real gains.

Another two gains, and the two that most attracted me, are: 1) A VPS should be relatively protected from resource overloads caused by other people on the same server. On the basis of my personal experience, this advantage is only partly realised. The more extreme incidents, where a server almost grinds to a halt, dragging down every site, seem to be avoided. On the other hand, there are still significant variations in performance that appear to be caused by factors outside my own VPS. 2) The second perceived gain (in my ignorance) was that of complete control of the VPS security. Originally, the VPS I was using was based on the Xen VPS Linux kernel. This indeed allows complete control: as the administrator, I was even able to build my own Linux kernel with the specific security mods I required. For reasons outside of my control, I was forced to move to OpenVZ. OpenVZ does not allow the administrator to modify the Linux kernel, and the kernel installed is missing some vital iptables firewall modules. This necessitated much extra work to find suitable workarounds.

Then I was faced with some quite complex resource issues in VPS management. My reseller account for shared hosting was working quite well; the provider was a UK-based host, which suited me. But the issue of performance troughs caused by other sites on the same server was irking me. The host had apparently done quite a lot of work to improve the SQL setup that was the cause of the problems, but there were still concerns. So I started looking around.

After a few experiments with shared schemes, a VPS started to look attractive. For various reasons, it was decided to sign up with a UK host that could provide a good service and value for the money. Following a reasonable selection process, we signed up with a UK host for a managed VPS at the lowest level that was claimed by the host to be suitable for use with the DirectAdmin control panel. I wanted DA for the benefit of clients whose site hosting was my responsibility initially, but would eventually be their own responsibility.

It was not long before I started to experience problems. Services were repeatedly failing. Contacting technical support resulted in some improvement, and the (rather obvious) claim that I was using too much memory. It was suggested that a higher level of hosting plan was needed, involving higher cost. This seemed to me unreasonable, since there were no active sites loaded at the time, and the VPS had yet to do any useful work. This was the first indication that hosts were selling VPS packages that are inherently unstable.

For a short while, all seemed to be going well; then more VPS problems began, especially after the suicide of the developer of HyperVM, the VPS control panel used by the hosting company, which required them to begin the work of creating their own VPS CP. The whole VPS was down for significant periods because of this, since I had no control over the base VPS system for a couple of months. I was told that too much memory was being used, and that a higher level of hosting plan was needed. Sound familiar? I was still resistant to the suggestion, not only on grounds of cost, but also on principle. If someone sells me something, I like it to do what it is claimed to do for the price offered. I became so annoyed at this point that I began spending far more time dealing with the VPS than with the clients’ sites! It was time to start getting technical.

Let’s review the position. Although I talked earlier about a VPS constraining resources, it turns out that there is usually only one critical resource that matters: memory. Unless your sites are doing exceptionally heavy processing, or the server is grossly overloaded or under-specified, the server will have plenty of processing power to handle a reasonable VPS load. Although hosting plans specify disk space limits, these are rapidly becoming academic. Disk space is now so readily available that the host I am using has allocated 20GB of space, more than enough for now, and if we run out, we simply have to ask for more (provided it is actually being used for hosting, not for something like an archive). And most people can easily buy a plan that has ample bandwidth for their needs. But I came to realise that for a VPS, memory is the critical issue!

Now, the obvious first step is to ask what tools are available for monitoring memory usage. The short answer is that, for a VPS, practically none are normally supplied. My host offered a script that would provide a spot figure for the memory currently in use. In terms of analysing where the memory is being used, that is useless, and in the overall context of VPS memory management, it is of very limited use. The only readily available way to break down memory use is to run the “top” utility that lists running processes and watch to see which processes have large memory use.

This is a hit-and-miss process, and could be greatly improved by a monitor that stored the breakdown between processes to produce averages and trends; I have not had time to write such a thing. If anyone does create an open source program of that kind, I hope they let the world know about it! There is an obvious need for easy-to-use tools in this area, given the large number of VPS plans being sold and the wide availability of good memory monitoring tools for ordinary servers.

To observe the general trends and to monitor any incidents, I have found the “loadavg” software from http://www.labradordata.ca/home/37 extremely valuable. It provides graphs of incoming and outgoing traffic, server load, and also the key memory parameters. Sadly, the standard version generated quite a few warnings and notices on a VPS.

Although I cannot offer tools to analyse the situation, I can offer some general conclusions from my own experiences.

Mail should place very little load on a server, but it can run away with a lot of memory, particularly for handling anti-virus and anti-spam processes. It seems that the best solution for this is to move the mail handling to a different hosting provider, especially a specialist mail system host (for example, to take advantage of a managed Postini anti-spam service). For some time, the alternative seemed to be more RAM or no Spamassassin/antivirus, which isn’t really an option. It is much cheaper and easier to rent a specialist mail hosting service than to buy Postini accounts. Still, simply moving the mail elsewhere would not significantly reduce the load on my VPS. I had shut down all mail related services, and the other services on the VPS were still consuming large amounts of memory. After some significant pressure on the host, and largely by making changes myself, the VPS configuration was modified to remove some of the load.

After that, MySQL is likely to be a major user of memory. There are many configuration variables available to control how MySQL operates, and in a VPS you have the freedom to tweak them to your heart’s content. On the plus side, this can significantly improve performance, on the minus side you have to be careful about how much memory is consumed in the process, and as I discovered, it’s easy to go too far and make MySQL all but useless.

There do seem to be considerable difficulties in carrying out effective MySQL tuning. Someone significantly knowledgeable may correct me if I am wrong, but my impression so far is that many utilities that purport to interpret the run-time statistics from MySQL and make recommendations for improvement operate in much too simple a fashion. It is easy enough to look at isolated aspects of database operation and suggest that some buffer should be larger. However, the various mechanisms in MySQL interact with one another, and problems are not always as simple as they appear. Nor do such tools really deal with the issue that extra memory is quite costly, so the goal may not be simply maximising MySQL performance, but may instead be getting the best performance that can be achieved within the system memory constraints.
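To give a concrete idea of the kind of tweaking involved, here is the sort of my.cnf fragment one might start from on a small (512 MB) VPS. These are illustrative starting points only, not recommendations; every workload differs, and as noted above it is easy to go too far:

# my.cnf fragment for a memory-constrained VPS (MySQL 5.x era values)
[mysqld]
skip-innodb                 # if all tables are MyISAM, skip the InnoDB engine
key_buffer_size = 16M       # MyISAM index cache, the main global MyISAM knob
max_connections = 30        # every connection costs per-thread buffers
sort_buffer_size = 512K     # allocated per connection that sorts
read_buffer_size = 256K     # per-connection sequential scan buffer
tmp_table_size = 16M        # in-memory temp tables before spilling to disk
query_cache_size = 8M       # keep small, or 0 to disable entirely

The per-connection buffers matter most: worst-case memory is roughly the global buffers plus max_connections times the per-thread buffers, which is why lowering max_connections is often the single biggest win.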

One service that I have not yet been able to control effectively is the DNS, which is normally the BIND program, running as the process “named”. Even when it has little data to manage, it seems that BIND allocates a substantial amount of memory.

Anyway, I haven’t yet explained what was meant earlier by saying that an off-the-shelf VPS is quite likely to be inherently unstable. It took me a while to figure out how memory is controlled for a typical VPS running under Xen/HyperVM or OpenVZ, so I will try to summarise it here, to hopefully save others some trouble.

The terminology is pretty confusing, and in my experience, many technical support people at hosting companies do not properly understand the workings of memory controls. Much VPS hosting is offered with two figures quoted for memory: a guaranteed level, with a common basic figure of, say, 256 MB, and a burstable level, quite often up to 1024 MB. Few people seem clear on what these numbers mean.

In a Linux system, there is a distinction between memory that has been allocated and memory that has actually been used. The system tracks these separately, and the VPS applies constraints to them in separate ways. There are configuration variables, and they are confusing because some of them simply have static constraints associated with them, and some of them also have a current value that measures the VPS’s use of memory. And there are actually two distinct guarantees relating to memory, although they are often (and unreasonably) set to the same value. Just to confuse matters further, many of the variables work in units of 4 KB blocks, so you have to do a bit of arithmetic to get a more meaningful measure, such as megabytes.

Virtual memory (allocated, whether used or not) is measured by “privvmpages”. This has a barrier, and it is the barrier on virtual memory that is usually described as the burstable limit. Normally, there is one barrier that will result in warning alerts being generated, and a slightly higher limit at which requests to allocate memory will always be refused.

Note that on some VPS hosts that offer a high burstable limit, you are unlikely to ever use memory up to the burstable limit, since the level of allocated memory is normally substantially higher than the level of used memory. This especially applies to VPS services that use Virtuozzo. The used memory is monitored by the Virtuozzo variable “oomguarpages”. This is another confusing factor, since the primary function of oomguarpages is to carry a guarantee, but we will return to that in a moment.
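On an OpenVZ/Virtuozzo VPS you can inspect these counters directly in /proc/user_beancounters. A made-up excerpt (the real file has many more rows, and these numbers are invented for illustration) shows how the 4 KB arithmetic works:

# cat /proc/user_beancounters   (excerpt, invented numbers)
#  resource        held    maxheld    barrier      limit  failcnt
   vmguarpages        0          0      65536 2147483647        0
   oomguarpages   51200      60416      65536 2147483647        0
   privvmpages    98304     131072     131072     137264       12

Here 65536 pages x 4 KB = 256 MB of “guaranteed” memory, and the privvmpages barrier of 131072 pages x 4 KB = 512 MB is the burstable limit. The non-zero failcnt on privvmpages is a count of allocation requests that have been refused, which is exactly the instability described below.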

If a server is provided with a lot of memory in relation to the number of installed VPS, then you could think about your own VPS simply in terms of allocated memory, which would be allowed to run up to the specified barrier, the warning level for which equates to the burstable memory quoted in sales material. But to get good hardware utilization, hosts will not provide so much memory, and then the configuration of guarantees comes into play.

One of the guarantees is straightforward, the other is not. There is a variable (vmguarpages) which holds the figure up to which a request to allocate memory is guaranteed to be met. Remember, this is allocated memory, not used memory. If you have a guarantee of 256 MB, then you do not have a guarantee of being able to use 256 MB, only a guarantee of being able to allocate 256 MB. Because of the way many software processes work, the memory actually used is likely to be significantly lower than the allocated memory. This is why I decided to go for a VPS plan with 512 MB RAM, which I assumed would be adequate, and indeed more than I would need for 1 or 2 sites, especially since the chosen host at the time had a ‘double RAM for the price’ deal.

When memory is requested beyond the guaranteed level, it will be allocated if it is available, BUT it may be refused. So, at any point beyond the guarantee, an allocation request may be refused. A process that has a memory allocation refused will usually fail. The second guarantee is more convoluted: the guarantee (on Virtuozzo’s oomguarpages variable) relates to memory actually used, and what it says is that provided your actual memory usage is within the oomguarpages level, none of your processes will be terminated if the server is running out of memory. Conversely, if actual memory usage is above the oomguarpages guarantee, memory may actually be claimed back, with the near certainty of the relevant process failing!

It is now possible to see why it is common for a VPS to be inherently unstable. As delivered, and before any web sites or mail boxes have been added, many VPS plans are running with actual memory usage within the oomguarpages guarantee, but with allocated memory well outside the vmguarpages guarantee (both guarantees often being the same figure, that figure being quoted in sales material as the guaranteed memory for the plan). The consequence is that every request to allocate memory is at risk, and therefore processes may fail at any time. No process will be terminated to grab back memory, but any new request has a possibility of failure. How often failures occur will depend on the provisioning of the whole server. It seems a fair assumption that the VPS I was forced to move from (Xen/HyperVM) was more generously provisioned than the one I am currently using (OpenVZ), since OpenVZ has NO provision at all for burstable RAM, whereas Xen offered 512 MB of burstable RAM (which, I realise, would have been of limited value in any case).

Another point is important in relation to VPS offerings. Absolutely any failure that can be linked to memory is likely to provoke a response from technical support that tells you to buy a higher plan. But if the failures are resulting from running into the limit on allocated memory (the privvmpages barrier) then the critical factor is the “burstable” limit. Often, plans with different so-called guaranteed levels have the same burstable limit, so upgrading the plan will not solve this particular problem!

It would make sense to configure a VPS with a higher figure for the guarantee on allocated memory (vmguarpages) while leaving the oomguarpages guarantee referring to used memory unchanged. However, I have seen little sign of this being done in practice, and it would require hosts to be quite careful in their provisioning.

Partly because of its complexity, there is a trend towards hosts replacing this memory management scheme with something simpler. It is likely to be some time before this becomes universal. The memory scheme known as “SLM” simply controls allocated memory. This removes the uncertainty that exists in the grey area between the guaranteed and burstable limits. In a comparison, for similar expenditure, one expects a higher SLM level than “guaranteed” level, although possibly not as high as the burstable limit.

Well, I never intended to get involved in all this detail, but found that I could not effectively manage a VPS to control both its reliability and its cost without doing so. So I hope that describing my experiences and the technical issues will help others to travel the same path more quickly (and perhaps less painfully).

I have finally managed, after much research, trial-and-error, sweat and cursing, to configure the VPS services to the point where I can now load the site and have it actually run. What will happen if the site comes under a heavy load is, unfortunately, anyone’s guess! Welcome to the brave new World of VPS!

20 Badtux { 06.18.11 at 11:28 am }

OpenVZ annoys me because of a number of issues that you talk about, which is why my own VPS is on a Xen server, while the virtualized security camera server farm that I currently develop is running on the ESXi kernel and shortly will also be KVM (Linux Kernel-based Virtual Machine) enabled (since I am not insane, all calls to the virtualization host for management of the farm go through a single abstraction layer, and since I *AM* insane, that abstraction layer is written in Perl 😉 ). That said, *all* of these technologies have become much more reliable and gained much more capability over the past few years. If you Google my real name you shall find some interesting notes on some very cool use of the Xen system to do some things that we would have considered theoretically impossible three years ago :).

The problem is that they are a hammer that people are applying to problems that are better solved with glue. I attempted to run a system based upon FreeBSD jails to keep attackers who figured out, e.g., a PHP exploit from getting into the email system or the SQL system, and every time a security patch came out for FreeBSD, getting said security patch into each of the jails was a PITA. Since the same code is in all the jails and only configuration files differ, why do I need copies of all the binaries and libraries in all the jails? And SELinux annoys me for many of the same reasons. Why should I have to spend so much time developing a new SELinux policy for an application that is not currently SELinux-aware, just so I can run it with write privileges on only the files that it needs to write? And, uhm, tmp smashes… why do I have to go to such trouble to give individual processes their own tmp directories? (And don’t get me started on the bone-headed decision of the X11 designers to drop the X11 communications socket into /tmp!) I mean, we knew back in the Multics days that each user needed his own tmp directory for security purposes (it was called >pdd BTW, Multics of course using > rather than / to separate directory names since it had no concept of redirection), with such tmp directory going away upon logout or reboot so that it would be automatically garbage collected. Why is it so hard to do this on modern operating systems?

But the thing is, the OpenVZ technology in conjunction with the new namespace technology can also be used to create SELECTIVE jails: jails that jail only the writable parts of the system while putting the rest of the system off limits for real modification. There’s no reason to set up a heavyweight jail with copies of programs in it, or set up lighter-weight but still tedious-to-manage chroot jails, when all of that is overkill for sandboxing programs that you simply don’t wish to give the ability to modify the rest of the system. I want to be able to type “sandbox --create sandbox1 ; sandbox --run --name=sandbox1 …” and have all the Apache processes start up in sandbox1, with any file modifications going through to a directory tree in that sandbox such that no modifications can affect the actual system. By default I want various tmp things like /tmp and /var/run put into the sandbox as explicit directories (rather than union-mapped), and I also want the ability to specify other mappings or sandboxed directories if necessary — if, for example, I want it to have a unioned version of the WordPress /var/www directory tree that I’d extracted into a subdirectory of my home directory, rather than a unioned version of the global /var/www directory, I want to be able to specify that mapping at sandbox creation time. And then there are various other things, such as a templating system, that would be nice-to-haves but not necessary for the core sandboxing system. And the functionality to do this exists in the Linux kernel (well, not all of it is in the RHEL5 kernel, but it’s all in the RHEL6 / recent Debian-Ubuntu / etc. kernels), but it’s simply not *used* this way; it’s used to implement hammers instead.

One thing that continually annoys me is that the Linux community, and the computer industry in general at this point in time, has the institutional memory of a gnat. So many of the problems we keep running into were solved decades ago, but youngsters like Linus simply refuse to learn from history, and instead keep repeating mistakes that history should have taught them to avoid. And Linux developers simply don't think like users; they think like geeks. It's all about cool with them, not security, and *especially* not usability, to which they give no thought at all. SIIIiiiiiigh!

– Badtux the Geeky Penguin

21 Kryten42 { 06.19.11 at 1:24 am }

There is nothing at all wrong with PERL! 😉 😀 It's more robust and secure than PHP, and less of a resource pig! And the PERL developer community is somewhat saner (and less of the *fanboy* type that jumps down your throat if you dare to point out a problem! Though PERL does have its share of fanboys also!) 😉 I had a hell of a time when PHP 5.3 rolled out and I upgraded. It was so different from 5.2 (and they were pretty coy about it) that it broke many things, including Joomla. It was a PITA because some s/w came out quickly supporting 5.3, whilst other packages (like Joomla) took ages to catch up! As it turned out (though it was never officially admitted), the PHP group had decided on some radical changes destined for PHP 6 but couldn't get enough people to test the new functionality, so users were essentially *tricked* into doing the testing for them.

The biggest problem for PERL is the lack of any decent development tools or IDEs, whereas there are many for PHP. I used ActiveState ActivePerl & the Perl Dev Kit Pro, with Komodo as the IDE (and also the TCL Dev Kit); I have had site licenses for all the ActiveState tools for years, thankfully! For PHP I used Zend Studio & NuSphere tools. Sadly, if a developer needs to develop a system rapidly, it's difficult not to choose PHP, with its extensive PEAR/PECL libraries and its wealth of tools and support. But the *BIG* negative is that it allows developers to be VERY lazy, and even ignorant of what they are actually doing! Things like CakePHP look great and make it very easy to create PHP systems, but they hide the real complexity and very real potential problems. As a very much security- and robustness-oriented developer, I *NEED* to know what every part is doing, and why, and what the potential problems really are! But I also have to be very aware of *real-world* time constraints and the needs of the clients. Sometimes the two are compatible, but sadly, they often are not. *shrug* In the end, everything is a tradeoff, and comes down to "What is acceptable?"

I was lucky (in one sense) that I had a client who understood the very real security concerns and, as an engineer himself, wanted quality and a system that would *run forever* (as far as possible, anyway). 😉 One of the things we agreed on was that I would develop a Virtual Mail System. It took some time, and I learned a hell of a lot! I especially learned that far too many mail-system admins are either too lazy or too ignorant to follow the RFCs, which really are not that difficult!

Here's a summary, from the doc I created for the client, of what I did. 🙂

Howto setup a Virtual Mail System with Exim, MySQL, SpamAssassin, AV and Dovecot
================================================================================

This HOWTO presents a way of configuring a Virtual Mail System using Exim, MySQL, SpamAssassin, ClamAV and Dovecot.
Note that the installation and configuration guide is a separate document.
The Exim MTA (Mail Transfer Agent) will be the main component in this HOWTO.
Why Exim? Because I like it, and I have to start somewhere! Have a look at this MTA comparison to see why:
http://www.shearer.org/MTA_Comparison/

Goals / Features
================

Things Done
———–
01. Virtual users (they exist only in the database, so there's no need to give each user a 'real' system-level account (including SSH access), which helps to increase system security).
02. All user data is stored in a centralised, encrypted, secured database (making it easier to backup, manage, migrate and restore).
03. All mail (smaller than 32 MB; this is configurable) will be virus-scanned.
Virus-suspected emails are rejected during the initial SMTP session (before the mail server accepts them).
This way the sender gets to know (as the mail will be bounced back with a virus warning) and we aren't responsible for bounces.
04. It's possible to integrate multiple virus-scanning engines, which will scan the mail sequentially
(though this of course carries a higher system overhead and delivery-time penalty).
05. All mail (smaller than 1 MB; configurable) is SPAM-scored. Mail is tagged in headers only.
Users may define a personal SPAM-score threshold and how to rewrite the subject (if at all).
Very high-scoring mails are rejected during the SMTP session (before acceptance), so the sender gets to know and we aren't responsible for bounces.
06. Sending mail to foreign domains is only permitted after successful SMTP authentication. The connection has to be encrypted using SSL/TLS.
07. Receiving mail is only possible using SSL/TLS via IMAP/POP3.
08. Aliases (including plussed addresses), CatchAll, AllUsers and Conditional email addresses according to the relevant RFCs (see the sections below for a more complete overview).
09. Mails are stored in a Maildir /var/mail/DOMAIN/USER for user USER@DOMAIN.
10. Selective greylisting support for mails that are likely to be SPAM, but not likely enough to be rejected.
11. Passwords are stored as base64-encoded SHA1 hashes (see the sketch after this list).
12. Every daemon runs under its own user, to separate privileges and minimise security risks to the system.
13. Users may change their preferences (SPAM-score threshold) and passwords with SMAD.
14. Web-based admin interface.
15. Reject archive attachments containing files with executable double extensions
(to reject unknown worms that embed themselves as *.pdf.exe in a zip file).
16. Integrate the Dovecot Antispam plugin with sa-learn to improve SPAM/HAM learning.
17. Sieve support for server-side filtering. This is dependent on mail clients that support setting it up;
only useful if implemented in SMAD (sieve-php), SquirrelMail (avelsieve), or Thunderbird (add-on).
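For item 11, a minimal Perl sketch of the hashing step (an illustration, not the production code; Digest::SHA is core Perl, and the '{SHA}' scheme label is an assumption of the Dovecot-style convention):

    use Digest::SHA qw(sha1_base64);

    # Base64-encoded SHA-1 hash, as in item 11. Digest::SHA omits the
    # trailing base64 padding, so pad to a multiple of 4 before storing.
    sub hash_password {
        my ($password) = @_;
        my $b64 = sha1_base64($password);
        $b64 .= '=' while length($b64) % 4;
        return '{SHA}' . $b64;    # e.g. the value stored in the users table
    }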

Things ToDo
———–
01. Keep track of the Message-IDs of mails sent by users, to reduce the SPAM score of replies whose 'In-Reply-To:' header matches (there's a SpamAssassin plugin from GenieGate, but it's rather old).
02. Auto-responder.
03. Disable an account with a hint pointing to the new address (and silently forward mails).
04. Quotas.
05. Group the Exim config by features to be enabled through config (.ifdef).
06. Admin (SMAD) may write mails to all users on the system (Exim would need access to the SMAD tables for this).
07. Use more 'no_more' in mail routers.
08. Limit users' sending rate or message size (this probably won't be implemented, as I can't see any need to domineer over users, but it could be done if it ever becomes necessary).

Aliases
=======
Aliases are email addresses that are redirected to another address (local or remote, even another alias).
E.g. 'alias@ourcompany.com' may redirect all mails to 'user@ourcompany.com'.

An alias may redirect mail to multiple recipients.

E.g. 'group@ourcompany.com' may redirect an incoming mail to everyone who is a member of 'group'.

By default every user gets some aliases defined automatically, according to the "plussed addressing" scheme.
Mail to 'user+string@ourcompany.com' gets redirected to the user's mailbox, where 'string' can be anything valid in an email local-part.
'string' is introduced by '+' (the plus-sign character), and multiple '+'s are valid.

This can be used to publish different mail-addresses to different people in order to track who gave or sold an address to whom.

E.g. if you give the address 'user+company.A@ourcompany.com' to 'Company A' and one day you receive a mail to that alias address from 'Company B', you'll know where they got the address.
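As a sketch of the routing side (a hypothetical helper; the real work happens in an Exim router backed by the database), resolving a plussed address to its base mailbox is one strip:

    # Hypothetical helper: resolve a plussed local-part to its base mailbox.
    # Everything from the first '+' onward is tag, not user.
    sub base_user {
        my ($local_part) = @_;
        (my $base = $local_part) =~ s/\+.*//;    # 'user+company.A' -> 'user'
        return $base;
    }

    print base_user('user+company.A'), "\n";     # prints "user"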

As a bonus, I've noticed that most of the email-address harvesters used to gather addresses from web pages choke on these plussed addresses. I set up a page of fake addresses that is only visible to crawlers/harvesters and was *disappointed* (not really) that this spamtrap didn't receive any mail for 6 months (from 2009-03-22).

Upon examining the mail-server logs, I discovered a lot of mails rejected due to 'non-existent local email address'. Taking a closer look at the failed addresses, I found that most of them had been cropped after the '+'. For example, from such a page a harvester might find 'user+onlyforspam@ourcompany.com', but it only sees 'onlyforspam@ourcompany.com' (which doesn't exist, of course), and that is what gets added to its database! This is a bonus that I think of as spamming the spammers!

Obviously these broken harvesters can be tricked by using a plussed address whose wrongly cropped form is invalid and rejected by the mail server (which proves that most of these harvesters don't really understand how email systems work, or follow the appropriate RFCs). It would be fairly trivial for all email system administrators to strictly follow the appropriate RFCs and so minimise global SPAM (at least for a while).

Unfortunately, some websites are coded with overly restrictive input filters that won't allow some characters in an email address, including '+'. However, using them is legitimate and valid, and the fault is theirs for breaking the RFCs. See this comprehensive list of allowed characters:

http://www.remote.org/jochen/mail/info/chars.html

Conditionals
============
Taking the last-mentioned method to another level, you can publish alias addresses that are valid only if a condition embedded in the address matches.

Conditionals are introduced by the '#' sign and take a parameter that's separated from the condition by another '#'.
The general syntax for a conditional email address is 'user#condition#parameter@ourcompany.com' for user 'user'.

Remember: these addresses do not need to be set up by the user; they can simply be used!

For now, there are two conditions implemented:

before
——
An email address like 'user#before#YYYYMMDD@ourcompany.com' is valid, and the mail server accepts it, if the current date is not later than YYYYMMDD, which is an encoded date (e.g. 20100308), and the user exists in the database.

This way you can publish a temporarily valid address that will expire after a certain date.

One possibility is to embed such an address in an article or a web page that is dynamically generated to be valid for one week. For example: 'user#before#20100404@ourcompany.com', generated with a line of PHP that emits the date one week ahead in YYYYMMDD format.

People will be able to contact you, but crawlers harvesting email addresses won't be able to SPAM you after the defined date, even if they can decode the address.

fromdomain
———-
An email address like 'user#fromdomain#domain@ourcompany.com' is valid, and the mail server accepts it, if the domain of the sender's address matches 'domain' and the user exists in the database.

E.g. you could publish 'user#fromdomain#company-A.com@ourcompany.com' for the newsletter of 'Company A', which is expected to send its news from the domain 'Company-A.com', to be automatically processed according to some rules.

If ‘Company-A’ trades this address to some other company, mail from any other company (domain) will be rejected even if they use a legitimate registered user of ‘Company-A’.
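To illustrate both conditions, a minimal Perl sketch (hypothetical helper names; in the real system these checks live in the Exim configuration and MySQL):

    use POSIX qw(strftime);

    # Hypothetical helper: validate a 'user#condition#parameter' local-part.
    # Returns the base user on success, undef otherwise.
    sub conditional_user {
        my ($local_part, $sender_domain) = @_;
        my ($user, $cond, $param) = split /#/, $local_part, 3;
        return undef unless defined $param;
        if ($cond eq 'before' && $param =~ /^\d{8}$/) {
            # Accept only while today (as YYYYMMDD) is not past the deadline.
            return strftime('%Y%m%d', localtime) le $param ? $user : undef;
        }
        if ($cond eq 'fromdomain') {
            # Accept only if the sender's domain matches the parameter.
            return lc($sender_domain) eq lc($param) ? $user : undef;
        }
        return undef;
    }

    # conditional_user('user#fromdomain#company-A.com', 'company-a.com')
    # returns 'user'; once the date has passed, 'user#before#20100404' fails.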

CatchAll
========
If a mail cannot be accepted because there's no such user as 'doesnotexist@ourcompany.com', and you've defined a CatchAll for the domain 'ourcompany.com', the incoming mail will simply be redirected (like an alias) to the specified CatchAll address.

This is disabled by default: in my opinion it increases the amount of SPAM the system has to handle. However, it could be useful, and it's there if needed.

AllUsers
========
For every hosted domain there's an email address 'all@domain' that redirects incoming mail to all users of the given 'domain'.

It’s intended only for internal use. This means that to send mail to this address the sender has to be authenticated using SMTP-AUTH and must exist in the database.

Before I went through this development process I, like many others, took email for granted for the most part. I learned NOT to do that if I wanted good mail security and robustness. 🙂 Before this was done, my client had no end of email problems (especially SPAM, mail going missing or ending up in the wrong place, etc.) with the default system his host had.

BTW, I came across a 'Perl regexp-based RFC 822 address validation script' that works wonderfully, but whoever wrote it must either be crazy or have a LOT of time on his hands! The *one-line* script is HUGE (about 7,500 characters). 😉

I added this comment for my client:

Below is the regexp code for the Perl module to validate email addresses according to the RFC 822 grammar. It provides the same functionality as RFC::RFC822::Address, but uses Perl regular expressions rather than the Parse::RecDescent parser. This means the module is much faster to load, as it does not need to compile the grammar on startup.

The grammar described in RFC 822 is surprisingly complex. Implementing validation with regular expressions somewhat pushes the limits of what it is sensible to do with regular expressions, although Perl copes well. 🙂

It worked very well I must say! Kudos to the author. 😀
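If it's the module I'm thinking of (Mail::RFC822::Address on CPAN; that's an assumption), usage is a single call:

    use Mail::RFC822::Address qw(valid);

    # The ~7,500-character regex lives inside the module; callers just ask:
    for my $addr ('user+company.A@ourcompany.com', 'not an address') {
        print "$addr: ", valid($addr) ? "valid" : "invalid", "\n";
    }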

22 Badtux { 06.19.11 at 2:55 pm }

The output of some of the ESXi tools is a text structure intended to be evaluated by Python interpreters that have imported class libraries that have not been released to the general public. After a frustrating time trying to parse this by hand, I sat back, thought a moment, and realized it could be turned into Perl Data::Dumper format, assignable to a Perl variable as hashes of arrays of (etc.) via 'eval', with just one very nasty regular expression :twisted:. And so I did.

Lesson: Never underestimate the power of Perl regular expressions :).
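A toy version of the trick, with made-up sample data (the real input came from the ESXi tools and needed one much nastier regex):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Massage a Python-style literal into Perl syntax, then let eval build
    # the data structure. Only ever do this to output you trust, since
    # eval runs whatever it is given.
    my $py = "{'vmid': 42, 'nics': ['eth0', 'eth1'], 'up': True}";

    (my $pl = $py) =~ s/\bTrue\b/1/g;
    $pl =~ s/\bFalse\b/0/g;
    $pl =~ s/\bNone\b/undef/g;
    $pl =~ s/'([^']*)'\s*:/'$1' =>/g;    # 'key': value  ->  'key' => value

    # The leading '+' makes Perl parse the braces as a hash constructor
    # rather than a code block.
    my $data = eval "+$pl" or die "eval failed: $@";
    print $data->{nics}[1], "\n";        # prints "eth1"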

23 Kryten42 { 06.19.11 at 10:44 pm }

I really like Python. 🙂 I began playing with it, just out of curiosity, at 2.1, and liked the structure & syntax. 🙂 At that time, though, hooks into various components were lacking, and I was unable to use it much for the jobs I was doing. That looks to have improved quite a bit, so I'm planning to revisit Python. I'm interested in using it for a small project: an online, web-based image manipulation/management system. I recently grabbed wxPython, PIL (Python Imaging Library) & Tk (via Python/Tk or Tkinter, because I have an IDE that supports Tk) and others. I'm also having a good look at PEAK (Python Enterprise Application Kit). Anyway, I wanted to use SQLite for speed and far lower resource usage (and have I mentioned I REALLY hate MySQL?!). Python also seems to have a decent API for SQLite (of course, PHP has several, but I'm still really annoyed at them! I'd rather use PERL/Python, just because!) 😛 😆 There is also a nice IDE for Python on Win & Linux from Wingware called WingIDE. I'm going to have a play and see if it's any good. 😉 One of my early interests in Python came via a friend's introduction to Spyce, which allowed (among other things) generating dynamic web pages from the Apache server using Python. Sadly, it looks like development on Spyce has halted (the last update was 11/2006), which is somewhat sad. I'll have to see if there is an alternative. *shrug* 🙂

24 Badtux { 06.20.11 at 9:57 am }

The previous management infrastructure that I worked on was written in Python (and it appears that VMware's is written in Python too, merely turned into binaries via the magic of cx_Freeze). If I'm writing a whole infrastructure, I far prefer Python. Python's object model and class system work much better than Perl's, and it's far more readable. There are a variety of web frameworks for Python; the one we used for that project was web2py, which was lighter weight than most of the frameworks while still handling most of the drudgery of actually presenting your data to users without your needing to write code to do it (I like writing code, but not code that I don't have to write!). If your application fits into web2py's model, it appears to be by far the fastest and most secure of the web application frameworks for Python.

If writing a "shim" between two infrastructures, however, Perl has the Power of Regexp(tm), and let us not underestimate the power of 'eval' either: you can massage a lot of output into a format understandable by 'eval', turning it into Perl data structures without having to parse it syntactically yourself, which is major coolness :).

25 Kryten42 { 06.21.11 at 8:37 am }

When I was first introduced to Python, I was initially unimpressed. For a start… it was an interpreted (non-compiled) language! The horror! It must be slooooooow if I created a large, complex system with it, and don't even mention the potential for hacking! 😉 But my friend convinced me to be fair and give it a much harder look, which I did. 🙂 I came away impressed. I even studied up on the architect, Guido van Rossum. You can tell a lot about a software system if you understand its creator(s). 😉 I liked his careful, methodical and thoughtful approach, and I discovered that Py2.1 even incorporated proper garbage collection, without high overhead and, from all reports, without memory leaks (unlike C++ & others). I like that he doesn't rush out major new versions every year or two (v3 took 8 years to be released after v2), and I like that he even back-ported some of the great new features of v3 to v2 (2.6 & 2.7). This is good because many libraries and mods have not yet been ported to work fully with v3 (such as PIL, which I am looking at for my little project). Unlike PHP, for example, where they dumped 5.3 on a mostly unsuspecting dev base and told us all to sink or swim, and don't whine, we don't care! It was a con: 5.3 was such a major departure from 5.2 that we all know it should have been released as 6.0, but then it wouldn't have been picked up in update streams as quickly, since most point revisions are and new major versions are not. A lot of people were screwed over by that.

I like PERL because it's been around a long time and is stable and robust (IF you know what you are doing); it has a massive user/support base and a lot of libraries. And, as you say, a lot of inherent power when needed. 🙂

I admit to being fairly *old skool* when it comes to software design. I studied (and in fact have accreditation in) most of the mainstream CASE methodologies of the 70's – 90's, and a few that were not mainstream. Heck, I even met and had lengthy discussions with some of the authors and *big names*, because I was responsible for a series of CASE/Software Engineering forums here in the early 90's, sponsored by HP, DEC & Sun. I've always believed there are two ways to do things: the correct way, and the *you are a moron if you don't do it properly* way! 😉 It annoys me today when I see, on so many supposedly knowledgeable forums, blogs etc., some really basic errors and ignorance (a common one is confusing a method with a methodology). My fondest memory is of meeting James Martin (author of The Wired Society, books on OOAD, and a hundred other tech books. Did you know he lives on his own small island in Bermuda? Talk about envy!) 😀

(Hey Bryan! As an aside, it must amuse you somewhat that most of the software engineering methodologies came out of the USAF! Especially IDEF, which I used extensively (though I see it's up to IDEF14 now! I used IDEF0, 1, 1X & 2 mostly). The Navy gave us COBOL, and the AF gave us IDEF!) 😉 😆

I was chatting yesterday with a friend whom I knew when he was a dev manager for a large corp (he's been trying to get me to go work there, but I can't handle that kind of rigid, inflexible structure any more. Literally, in fact: since my breakdown I have to be able to work at my own pace and in my own time frames. I can't handle time-based milestones and fixed deadlines, though I am slowly improving. They couldn't pay me enough!! 😉 😆 ). Anyway, he's going to *loan* me some s/w tools he has, to help me out. 🙂 As I said, I already have all the ActiveState tools (and as a registered member, I have access to a large Perl/Python library and a pool of developers to help out). He's going to loan me the Visual Paradigm Suite (which I have used), as it will help me with the design & architecture. It has full UML (with CRUD, activity & class diagram charts), Use Case, Agile, RTM, ERD, DDL, etc. And it works with Komodo & IntelliJ IDEA (with PyCharm for Python), which I have and will use. JGsoft have a couple of great regex tools (RegexBuddy & RegexMagic) which help a lot with complex (or even simple) regexes (particularly when one is as rusty as I am), but I think they are only available on Windoze. *shrug*

I have a bunch of books to read, or re-read, but I have some time. I haven’t done any serious full-on s/w development for about 15 years, but I used to be pretty good at it. 😉

26 Bryan { 06.21.11 at 10:02 pm }

Kryten, the AF does a lot of embedded stuff for weapons systems and systems that have to fit in aircraft, so they get a lot of practice. There has to be something for the 96% of the Air Force that doesn’t fly to do.

27 Kryten42 { 06.21.11 at 11:22 pm }

Yeah, I know Bryan. I spent a year there with GD working on F-111 stuff like PAVE TACK (AN/AVQ-26) and others. I learned a lot. 🙂 That's where I discovered the USAF's rich history of developing methodologies, not just for s/w engineering but for project management and manufacturing (in fact, IDEF was originally conceived for manufacturing, but evolved. It was originally 'ICAM Definition', though there seemed to be some disagreement over what ICAM stood for; I heard both 'Integrated Computer-Aided Manufacturing' and 'Integrated Control And Manufacturing'.) *shrug* :Lol:

A lot of good things have come out of the USAF that are useful for S/W development. 🙂

28 Kryten42 { 06.21.11 at 11:33 pm }

BTW, it was there that I was taught a couple of methodologies, and I met Stephen Mellor & Paul Ward at a lecture on the Ward-Mellor state transition diagrams and data flow diagrams. I used them quite a bit when I went back into engineering in the late 80's/90's. I discovered a company in SF selling Software through Pictures, which had all the methodologies I wanted to use on projects as really nice GUI-based tools on UNIX. And because of that, I traveled all over doing s/w methodologies and took courses for a year. 🙂 I also used the McCabe complexity metrics: I knew the project would be complex, and I wanted a way to manage the complexity and break it into small, manageable (and testable!) chunks. The McCabe tools allowed me to do that. 🙂 I learned a heck of a lot during the 80's… it was a very strange decade for me, and very busy! Most importantly… I discovered who I was. 😉 🙂

29 Badtux { 06.22.11 at 1:47 am }

Kryten, I probably was involved (hey, "architected it" is a bit more than "involved", but anyhow) in the first major project written in Python, a program which is still being sold with essentially the same architecture that I devised for it way back in the 1990's. The only performance issue with Python is figuring out which parts of the program are performance-critical and thus must be written as "C" modules. In the case of the program in question, surprisingly little of it had to be written as "C" modules: basically compression, encryption, and the high-speed moving of data streams from point A to point B. The majority of the code in any system is control code and user-interface code.

I have come to the conclusion that anybody who writes control code and user-interface code in C, C++, or Java today is simply creating work for themselves to justify their salary. Four of us wrote that program in the late 90's within a six-month time frame. That included a GUI, a web interface, a CLI, and a full multi-box client-server-agent architecture capable of managing the operations in question across a whole network, complete with encrypted security (now *that* was a problem back then; we almost couldn't release until we got our export license, when the Clintonistas finally gave up on trying to stuff the strong-encryption genie back into the bottle). Our competitor, on the other hand, had a dozen engineers working for two years to write a similar program. And they didn't have a web interface, and they didn't have encryption. But they were using C++.

That experience is what sold me on Python. Originally we used Python because it was the only rapid-application-development language that the original three team members had in common, and it was already clear we couldn't do it in C++ because we had neither the money nor the time. But as I firmed up the architecture and design, it became clear that Python was ideal for the combination of component-oriented and object-oriented programming we were engaged in: virtually every class ended up being a subclass of a master data class backed by the SQL database, while modeling the data flow through the components (and doing the marshalling and unmarshalling to move them across network wires) simply "fell out" of how Python works. Since then I've used Python for two other major projects, and a similar language (Ruby) for a third. All of those products were successful. The one project I do not consider a success was architected by one of the Silicon Valley legends in "C", and not only performed poorly but was a nightmare to modify. Adding an additional object type to his distributed database, for example, required touching twelve different files. No, I'm not kidding. TWELVE FILES. And it required using inscrutable macros that make Linus Torvalds' convoluted Linux kernel macros look simple. It was a nightmare project and I was looking to get out by the time six months had passed; in the end it took nearly two years to extricate myself from that mess, which, after I left, went down in a colossal flameout that even made the pages of the local newspaper when it imploded after somehow burning through $80M in 6 months…

So that's how NOT to do it :). BTW, I have come to the conclusion that there is no One True Design Methodology, but you can't go wrong if you start by defining the data that is to be entered and maintained, move on to the components and the data flow between them, and then let the objects that encapsulate that data fall out of it almost automatically. I prefer to do this by hand the old-fashioned way, by scribbling drawings in pencil until I get it right. The master plan for the first program that I mentioned consisted of six sheets of graph paper depicting the major components and the data flows between them. They hung on the wall across from our cubicles, and every time we were stumped as to what to do next, it was a case of simply walking across to the design wall, and there it was. Add to that the database schema with its data definitions and a "master" class that encapsulated the core database record type and the common operations thereon, and the actual classes for most of the remainder were just a matter of typing as fast as we could type! For another project, my boss created wireframe diagrams of every single screen in the program before we wrote a single line of code; then we defined the database schema needed to encapsulate the data on those screens; then we worked from the ends to the middle to make it all come together, handling the operations implied by the screens upon the data defined by the schema. That was another six-month project where three of us created, in a short amount of time, a system that would have taken our competitors a team of dozens much longer to create…

And it is late and the cat is demanding petting, so I shall retire to the bed chambers :). G’nite.

30 Bryan { 06.22.11 at 12:17 pm }

As my old systems professor would say, about every 15 minutes, a properly designed program writes itself. It is a hell of a lot easier to work things out before you start writing code than to write code and attempt to patch the bits and pieces together. I've been on projects where I wrote modules that fulfilled the terms of the contract and the specifications I was given, but were totally worthless because the input wasn't what the specifications said it would be, and the output wasn't what was needed by the next stage. There was some elegant code in that project, some overly complex code, and some that was definitely classed as kludge, but the sucker could never work as a whole. I took my money, and the names of those who wrote the elegant code, and ran.

I also made a point of not mentioning my involvement with that mess.

31 Kryten42 { 06.23.11 at 2:02 am }

I’ll use whatever the best tool is for the job. 🙂 I will also reinvent the wheel, if the particular wheel in question was badly designed or is flawed, or isn’t completely suitable. 🙂

I spend most of my time on concept & design. I'm not a coder. I can code, but it's one of the least enjoyable parts for me. 🙂 So I try to figure out ways to minimise that part of the project. My forte is planning, design and "What if?" (which generally makes me good at designing test cases etc.). When I was the project manager for a major engineering project at the end of the 80's (one I have mentioned before, creating specialized industrial machinery & control systems), my first task was to visit clients and potential clients all over the world and ask them how they used their current systems, what they would like to do, and what the problems were. I traveled because I wanted to actually see for myself. 🙂 I must say, most were very surprised. No manufacturer had ever asked them any of this before. Once they realised I was very serious, the floodgates of information opened! I spent the next few months with my team going through all the info, and called several of the companies I had visited when new questions arose or clarification was sought.

I had more trouble with my own company's execs than with the clients! "Ohhh.. the time being wasted!", "Ohhh… all the money being spent for nothing!", "Ohhh… this, that and the other thing!" This despite the fact that I'd gotten a government grant of $1.5 mill for the project in the first place, and we had 2 years to show results! (I really, REALLY hate bloody bean counters on R&D projects!!) I ended up going to a full board meeting and there I didst spake thusly: "Are you all stupid?? You now have several guaranteed customers dying to get the first machines, which we haven't even designed yet! The project will far more than pay for itself 6 months after completion! I can easily take my grant, and my team, all of whom I hired, to another company that will appreciate all this, and do it there! Stop creating childish problems, and let me get on with making you all rich! Oh… and just by the way, I will expect a commission on sales to these clients! Good day!" and left. 🙂 And yes… I did rather put a few noses seriously out of joint, but I was counting on the saner ones to keep them in line. We still had problems, but I stayed for 3 projects, all of which received national and international awards for excellence.

I spent a lot of money on getting the right tools for the job, and on training everyone to use them: the complete Mentor Graphics CAE 2000 software suite (circuit design, simulation, PCB design & manufacturing, and system testing), which came with a huge library of components and arrived in 12 large cartons, one of which held 14 80MB cartridge tapes, the rest manuals; Software through Pictures; a Tektronix DAS9200 (a multi-processor digital analysis system with analog/digital pattern generation & code injection; that thing was amazing!! I wish I had one today!) plus a high-end, high-speed digital/analog CRO; a UniSite 40 PLD programmer; a Transputer dev system (we used 2 T800 Transputers, their 32-way digital crossbar switch, and a Motorola 68030); Plantrack II for project management; SDRC I-DEAS (it was more than a CAD system); and other tools. We successfully completed the first machine prototype in 11 months, and it is still sought after today. 🙂 It was the most expensive machine of its type, but the ROI was guaranteed to be less than 1 year. Most of the competing machines were 2-3 years at least. Our first order came from a US company for 6 of them. But we sold most to Australia, England, Scotland, Belgium & NZ. 🙂

But… that was then… *shrug*. 🙂 Now it’s just me, and I’ll do what I need to.

Speaking of what your professor told you, Bryan: it reminded me of a product I came across in the 80's:

The Last One (Wiki)
The Last One (APC article)

Either the name was tongue-in-cheek, or they had a heck of an ego! 😆

32 Bryan { 06.23.11 at 4:00 pm }

It must have been tongue-in-cheek, because everyone knows that programmers don’t have egos, but do have strange senses of humor [well, except for those that have both, or neither, or one and not the other which would require a two-bit truth table to display… 😉 ]

If you can actually do what the specifications claim, for any product, and people actually need the product, it will sell. Most of the problems come from sales & marketing not really understanding the products their company produces and making promises that cannot be fulfilled. If it does what is needed, cost is not a prime factor. If it doesn’t quite do everything, then cost becomes a major factor, as the buyer knows that they will have to get something else to fill in the gaps.

33 Badtux { 06.24.11 at 10:16 am }

Having met Richard Stallman in person, I can very much report that he has an ego. (As well as body odor, and a beard that looks suspiciously as if a family of raccoons is living in it 😉 ).

Kryten, the reason why the first Python program I mentioned was so successful (still being sold on the market today, even!) is that the two of us who were the principal designers were systems administrators first, and the target market was Unix systems administrators. We knew our market. We knew what had frustrated us about this product type in the past: the bizarre need to configure things that could be queried from modern computer buses, the need to manually manage complexity that added no conceivable value, and so forth. And before we wrote a single line of code, we pretty much knew what it was going to do and how it was going to do it, though of course the plan didn't 100% survive contact with reality :). But anyhow, that's my general experience too: the projects that work best are those that involve as much up-front time planning them as implementing them. For one thing, it takes 1/4 as much time to implement if you have that level of detail (even down to actual screen-display mockups!), because you don't have to go back and refactor just because you added one more field to a screen.

As far as dealing with actual customers goes, I have one caveat: avoid DEC disease. DEC was good at going out to their customers and asking them what they wanted. What their customers always wanted, when asked, was faster VAX minicomputers and fixes to various minor annoyances in VMS. That was not a recipe for corporate growth or, indeed, corporate survival. The deal is that customers have issues with current products, but they don't have vision. They don't create new products or new classes of products. Sometimes you just have to get in front and lead, and hope you got it right, and be prepared to abandon a product if you went somewhere customers don't want to go (and listen to the customers and make it right if it *can* be made right). But of course, if you don't have that core value proposition, it doesn't matter how good your product is: if it doesn't provide value to customers (or if the customers don't value what it provides; same equation, just move the terms), it won't sell. That was the problem with my last employer: what we had was a good security product, but customers don't see the value of security, so our marketing department tried to sell it as other things, and succeeded at that in much the same way as selling ice to Eskimos 🙁 .

34 Kryten42 { 06.24.11 at 11:42 am }

😆 You are correct about Stallman! 😀 I wrote a few articles for APC in its early years, and met people at conferences etc. I also worked for DEC for a short while. I was initially contracted to create a comms system to link a VAX to ICL 2900-series mainframes (based upon a similar system I had designed for ICL, the difference being that that one let an ICL 2900 communicate with an ICL DRS 300 mini, or what ICL called a Departmental System; and yes… many ICL systems had trouble communicating with other ICL systems). I discovered that one of the problems at DEC was the lack of any properly defined *scope* for projects. It seemed to me that DEC didn't believe in *boundaries* very much. I ended up walking away with the project unfinished, because they kept changing their minds about one part or another. Originally I was working on a DEC Rainbow to be used as the comms controller (because my original system at ICL was based on the Z80 & 8088 CPUs, and I knew the Z80 & CP/M in my sleep! Hell… I still remember many of the opcodes and mnemonics! 😉 😀 ). Then I turned up one Monday and my two Rainbow test boxes were gone, and I was told that I would have to use something else, because the model I'd been using was being discontinued and another model was replacing it. The problem was… the new one had no Z80 CPU! I'd just completed a successful test of a particular component that used both the Z80 & 8088 CPUs. I said that as that had been the only Z80 + 8088 based system they had, I would have to start from scratch! I was told that the order came from Corp HQ, and that was that. So I left. DEC here couldn't change the signs in the car park without approval from HQ in the USA! Very monolithic, and slow to react because of it. They got what they deserved in the end. 🙂

Like you, I like to know exactly what is expected, and what the target market actually is, wants and needs. I need to know there is a properly defined scope, and if I am the one doing the scoping, I'll make damned sure it's done right. There are many other things I'll need to know about a project before I'll take it on. I've learned many lessons over the decades, but #1 is that the longer a project takes, the more likely it is to fail. The further a project ran past 6 months (a rough estimate based upon my experience), the harder it got.

I once did a *Needs Analysis* course, because one of the problems I discovered with many clients is that they often had trouble distinguishing between what they needed and what they wanted (or thought they needed or wanted). It should be amazing (but wasn't, to me) that many companies can't seem to *see* what their actual needs are! Often, what they perceive to be *the problem* is really a symptom of the problem. And some do it on purpose, usually because otherwise they would have to admit *they screwed up*! 😉 And rule #1 for many people in some organizations is *it's always someone else's fault*!

The other problem, in my experience, is that people confuse a project scope with "this is how I think it should be done", before even the simplest feasibility study or anything else has been done!

I like security. 🙂 The problem now is that with everyone so *connected*, and with so many organizations having their own ideas about security, achieving anything like a decent level of security is unlikely. 🙂 And anything that involves a human in any way already has a security problem. 😉 The best one can hope for is to decide what the actual risks are and what an *acceptable* risk is, then figure out whether that's achievable and whether the cost is acceptable. I could tell some real security horror stories, but I'm sure you've heard similar. *shrug*

As one of my instructors told me long ago, "The best security is the one you never have to use!" It amazes me how many times I hear some ignorant penny-pincher say something like "This security system is costing a fortune and it's never been needed! We should cut it back and use the money on *this or that*". Unfortunately, idiots are everywhere, and many become execs.

When I had my security biz, we did several audits for some huge clients. Almost all failed, and some failed in mind-bogglingly basic ways!

Nothing surprises me any more, if it ever did. 😆

Yes… trying to sell security is a tough job. All people can see is "The expense! The expense!" I used to get very strange looks whenever I asked an exec to put a value on the company's reputation, and on what would happen if they lost it! And invariably the question I'd get back would be something like "What's that got to do with security?" 😆 After all… that's a marketing problem, right?

Idiots are everywhere, and many are in charge. Sad, isn’t it?

35 Kryten42 { 06.24.11 at 12:50 pm }

Heh… Ya got me reminiscing about DEC! 😆 I’m remembering things about those days…

In some ways, the Rainbow 100 (a 100+, actually, in my case) was very advanced for its day. I remember that the two CPUs actually worked together and had some shared memory space through which they could communicate (about 2KB, I think). It was a multiboot system and as standard could boot CP/M-80 or DOS, or come up as a VT100 terminal. It could also boot CP/M-86, MP/M and VENIX (a SYS-5 UNIX), and one of the really bright engineers DEC had even ported Windows 1 to it (a real task, as the Rainbow had a weird proprietary graphics display system). Generally, when one of the processors was operating as the *CPU*, the other acted as an I/O processor. 🙂 It was one of the fastest PCs around back then. 🙂

The major problem with DEC was really Ken Olsen. He was a control nut: everything had to be done his way! He decided that they wouldn't use the IBM-defined CGA graphics system; they would create their own proprietary system and keep it to themselves! So, whilst the hardware was wonderful, there was a severe shortage of software. It was years later that a DEC engineer produced a software dev kit for the Rainbow, but by then it was too late. Then Olsen came up with the absolutely terrible VAXmate (essentially an IBM AT clone)! He wanted it done fast, so the engineers were forced to cut corners, and DEC got hit hard by IBM for stealing IBM BIOS code (which had been used verbatim in the VAXmate). The VAXmate also had only monochrome EGA support, and no fans at all! It overheated easily. I heard many at DEC call Olsen *short-sighted* (when they were being polite, anyway!) 😉 In the end, I think they only sold about a half dozen of the things! In contrast, people were begging for the Rainbow to be continued and for the s/w dev kit to be released to software developers. But Olsen refused. *shrug*

Ahhhh… “Them good ol’ days!” 😆

36 Badtux { 06.24.11 at 4:43 pm }

Reminds me of a story from when Honeywell was in the computer business. One of the main issues with bringing up the Multics system was what happened when the sysop forgot to input the current date and time. The system would come up to the point where it would mount the filesystems, notice that the date of last mount was in the future, then crash, *hard*, and have to be brought back up by hand again (remember, this was in the day when mainframes could take thirty minutes or more to boot).

So this was the early 80's; the first battery-backed CMOS clock chips had become available in PCs, and one bright engineer got the idea of putting this $5 chip into the Multics console to keep time, so that the operator didn't have to enter the time manually every time the system booted. He spent the next six months trying to get approval. In the end he gave up, because the numbers showed it would cost over $50,000 per system to make this one little change, due to all the change orders, design documentation through multiple layers of management, changes to release documentation, and so on and so forth that would be required. So in the end Multics went down to its ignominious abandonment and doom without ever knowing what time it was…

By contrast, at my current job two months ago we learned we had a significant opportunity if we provided X to a major corporation. We looked at all the bits and pieces in our toolbox, rearranged them in a novel new manner, and six weeks later we had X for this major corporation. Which is the only way that a smaller company can compete: by being nimble. By contrast, at a previous startup it took them 2 1/2 years to release their first product. That startup crashed and burned, because if you're that much smaller and newer than your competitors you simply have to execute faster, and if you don't, you're toast. That was the startup led by the Silicon Valley legend who loved "C" spaghetti code and forced it upon everybody for everything because it was "faster"… but a crappy architecture is slow no matter *what* language you write it in. At my current employer we have some Perl, some PHP, some Java, some C++, and yes, some "C"… but it's all about what works best for any particular component. If you architect your system right, it shouldn't matter what language *any* component is written in.

37 Badtux { 06.24.11 at 4:50 pm }

Ah yes, the Multics clock chip story, from the source:

http://www.multicians.org/multo-antes.html

38 Kryten42 { 06.25.11 at 2:56 pm }

Ahhh… Thanks for that, Badtux. 🙂 Multics wasn't a system I ever used, though I've heard of it; it was used here. 🙂

Honeywell make pretty good industrial hardware, and I think I may have used one of their computers in the 70's… I used Honeywell controls in a couple of projects. 🙂

Yeah… I've been involved in projects where what should have been a relatively small and simple change became very complex and expensive (one particular military project springs to mind, but that is common on mil projects, I was told by a very experienced project manager). In the early 90's, just after we'd started assembling the first Collins-class submarine, I had a call from this project manager. The sub was being assembled in South Australia, and he asked if I'd like to take a break for a few days and have a look at the work (I still had a high security clearance, as I'd been working on a couple of mil projects elsewhere; the clearances have to be redone every 6 months for contractors). As I'd half expected, he had an ulterior motive. 😉 They had assembled just over half of the hull. It was assembled by dropping a pre-assembled cylindrical section into place and welding it to the previous section (in a nutshell), then installing whatever bulkheads etc. were to be added. He asked me to take a walk through the completed sections and see if I could spot anything… unusual. Hmmmm… So I did, and after the 3rd trip I stopped at the 2nd-last section, because there was something wrong and it was nagging me. I went back, looked around carefully… moved forward again and looked… and then it hit me! They'd welded one of the sections the wrong way around, and the bulkhead was all wrong! There was an armored conduit pipe running the length of the hull, and in this one section the conduit pipe was on the wrong side! 😆 And because of the special welds etc., they couldn't just cut it out and turn it around! They had to replace two whole segments (very expensive! But cheaper than scrapping the whole boat!). Apparently they had originally had that conduit running down the other side, but a decision had been made (for a variety of reasons that, in all honesty, were pretty minor) to move it to the other side. But the blueprint for this segment had been mixed up with the old one, and nobody had spotted it, in spite of its having a different code, date, and other differences. And you know… I saved those buggers a fortune, and I got nada. Typical. *shrug*

I asked my friend how something like that could have happened on one of his projects! I was *really* surprised… he taught me most of what I know, and he was one of the best PMs in the biz! He said he'd had nothing to do with it: the original PM had decided to quit, and he'd come on board after all the decisions had been made, and changed. He wouldn't have allowed that; making a major change after a project has started is asking for disaster (and the Collins class has had a lot of problems!). Oh well. 😀

39 Bryan { 06.25.11 at 5:28 pm }

One of the reasons I refused to work on military projects [beyond the fact that I got screwed when a project got canceled after the contracts had been signed] is that the major contractors always under-bid the job, knowing they will make the money back on change orders. This is why major projects always have cost overruns: they end up not being the projects that were bid. And they aren't the projects that were bid because not enough time or money was spent defining what is needed and wanted, so what is bid normally won't do the job the military thought it would do.

The last time I was involved with a government project, it was as a subcontractor, and I only did it because some friends were involved. I still remember sitting in a meeting where I told them what it would cost to implement what was specified and, because friends were involved, explained why it wouldn't do what they really needed done. No one on the government side was able to change anything, so I did what was specified. When the change order came down, the company I was subbing for stuck it to them, because they had been warned it wouldn't work.

40 Badtux { 06.26.11 at 1:29 pm }

Kryten, I had the pleasure of using Multics back in the day, and even of writing a couple of attacks. For example, Multics had the concept of project accounts, where you logged in with both a user ID and a project ID. Supposedly your activities while logged into a specific project were limited to that project, and you had only the permissions available to that project. But upon examining the interprocess communication mechanisms available within Multics, I discovered that if you had multiple Multics projects, your effective permissions were the sum of all their permissions. For example, on one project I had permission to use the 9-track tape drive and to print out files. On another project I did not have permission to use the 9-track tape drive or to print out files, because this was a restricted project where they did not want any of the source code making its way to certain entities (you'll understand what I mean by that 😉 ). Needless to say, it took me roughly two days before I had the ability to print, or save to tape, any file in the restricted project.

Still, Multics was far more secure than any currently popular operating system. It was no less susceptible to phishing or social attacks than current operating systems (yes, I did both too; I can say this with confidence since the statute of limitations has long since expired), but things like stack smashes simply wouldn't work (stacks were not executable code, the data stack was in a separate segment from the program return stack, and the locations of things within address spaces were effectively randomized; you literally did not know where any segment was located ahead of time without doing a system call to resolve it). But Honeywell did not value security any more than most customers do. They'd put that security in there because the Department of Defense wanted it and paid for it, and they largely abandoned the system when they discovered it would take considerable amounts of money to keep it up to date, instead putting in only the minimal resources needed to fix obvious bugs while spending the next five years holding meetings and pushing proposals through the Byzantine bureaucracy of the company, trying to resolve the political question of whether to move forward with Multics or instead improve the older/cheaper GCOS system to modern standards. By the time this was all resolved, both Multics and GCOS were so obsolete that Honeywell ignominiously dumped the reeking carcasses upon their former French subsidiary Bull, and withdrew from the computer business.

And that, alas, is the fate of computer security in the modern era. Which is why attackers continue coming up with new and novel ways to attack computer systems, while our ways of dealing with attackers remain stuck in 1995.

41 Kryten42 { 06.27.11 at 11:47 pm }

Ahhh, well… Hacking! 😉 Amazing what bored (or sufficiently annoyed) students will get up to. I could tell stories, but I won't here. 😛

I had a trip to the big smoke (Melb) today, to a big clearance/used-book warehouse. 🙂 I got some Python and other books. $1-$10 per book is way better than $55!! (I checked the price for "Rapid GUI Programming with Python and Qt: The Definitive Guide to PyQt Programming".) Paper is really expensive these days!

I nearly died when I checked the price of this one on Amazon: "Python and Tkinter Programming" by John E. Grayson. I picked it up for $5. On Amazon they are going (new) from $100 to $191! And used from $44 to $150! That cannot be right! Hell, the price direct from the publisher (Manning) is $50, or $30 for the PDF! Surely people are not stupid enough to buy from Amazon (well, Amazon sellers)? (Well, yeah, they are, I know… but… geez!)

Amazon: Python and Tkinter Programming (Paperback)

Well, anyway… I also picked up "Rapid GUI Programming with Python and Qt: The Definitive Guide to PyQt Programming", "Python Essential Reference (4th Edition)" and "XML Processing with Python", plus a couple of Perl books (a reference & a cookbook). All up, 9 books for $39! 😆 Anyone who pays the insane retail prices for books *IS* insane (or so wealthy they don't care). 😛 😉

BTW, the first mainframe I actually worked on was a Sperry Univac 1100. We had these huge AWA (ugly green) terminals with a paper-tape punch attached so we could save our code! 😆 I used Fortran on that monster. 🙂 They stuck us in a room next to the mainframe, and it was full of those big mag-tape cabinets and washing-machine-sized disk drives and 2 big high-speed band printers! It was all noisy as hell. I dunno how we got any work done; I (and a few others) quit after 4 months! We were working on a centralized airline booking/reservation system. It was all done mostly manually here back then (late 70's), and was a disaster. Our job was to automate it. 🙂

42 Badtux { 06.28.11 at 10:59 am }

I suppose I can blame Multics for my subsequent computer career. My original major was electrical engineering; I had a scholarship and all that in EE. But hacking what was supposed to be the most secure operating system on the planet was *fun*! Then, when Multics was clearly on its way out, they moved us all to some Unix minicomputers. Two of them had 4 megabytes of memory and a single 500-megabyte hard drive and typically ran 25 users without a problem. One of them had a whole *8* megabytes of memory and two 500-megabyte hard drives, and typically ran 30 users without a problem. Those three minicomputers cost us around $750K total but would handle almost as many users as a $5M+ Multics system! Those were fun in a different way, but it was like moving back in time. Multics was the culmination of decades of MIT computer culture; Unix culture was new back then. Perl hadn't even been invented yet! My first task, once I was granted a maintenance account and Unix source-code access by the university's computing center director (his solution to the problem of hackers was to put them to work fixing actual problems with the system 😉 ), was to get the Ada environment, a truly awful thing called Arcturus, up to snuff. It had more bugs than Windows 2.0, and it was always a joy trying to figure out which Ada constructs were supported and which were not. But that was when DARPA was spec'ing Ada and we had a couple of DARPA contracts, so (shrug). There were a couple of important pieces of functionality that were supposedly there but were broken; that is how I learned "C", by fixing Arcturus. Needless to say, as with COBOL, I have forgotten every stitch of Ada that I ever knew (funny how that works; my memory is very selective, I can remember how to write 6502 source code but cannot remember a stitch of Ada or COBOL!), yet for some reason I have remembered how to write "C", a language I never studied in college, over 25 years later.

Ah yes, the first computer I ever used: a TRS-80 Model 1 with Advanced BASIC, 16K of memory, and a cassette deck for saving/loading programs. What I remember most about that system was the manual that came with it for learning BASIC. I have never since read a "how-to-program" guide even 1/10th as good: it was entertaining, well illustrated, and took you step by step through everything you needed to know to start writing BASIC programs on the TRS-80. When I was later tasked with teaching BASIC programming to a classroom of bored high-school students, I decidedly wished I still had access to that old TRS-80 BASIC tutorial; it was far more entertaining than boring old me.

43 Kryten42 { 06.28.11 at 2:09 pm }

Sounds like you had some experiences, and fun along the road. 😀

I’ve always wanted to know *how things work*. I blame my Grandfather. 🙂 I watched him take a radio apart to fix it when I was very young. And then I (successfully) took our radio apart… and discovered that reassembling things (so they will actually work) is harder! 😆 So, I kinda became his *apprentice* when anything needed fixing or building (I suspect my Mom – his daughter – had words with him after the radio incident). 😉

I actually began an Automotive Mechanics apprenticeship when I was 15 (the earliest age it could be done back then) with Shell. The ONLY good part about that was the 2 days a week I spent at TAFE (Technical And Further Education – our equivalent of a Technical College), when I actually got to work on a car! All I did at Shell was change tires and replace batteries! But that was a springboard for me. The teacher at TAFE said I was wasting my talents and should consider some form of Engineering. I had no idea what I was really interested in at that stage, so the teacher had me meet the Principal, and we discussed options.

They had a new course that intrigued me, called the Certificate of Technology (CoT). It was a 4-year advanced diploma course which was a path to a University Engineering or Science degree (with 1 further year). It was a combination of several studies, and unlike University it was split 60% practical, 40% theory (Uni is the other way around). It was made up of physics, mathematics, metallurgy, electrical trades, electronics, mechanical, and machine shop (using all kinds of tools: drill press, band saws, lathes, presses, benders, grinders, oxy & arc welders, *normal* tools, and a small 3-axis milling machine). The first year, we had to make a center punch (and it wasn’t as easy as it looks!). It had to be perfect: the hatching for the hand grip had to be exactly 45°, in parallel lines 2mm apart, the tip had to be 60°, and it was made from hardened tool steel! 🙂

Each year was made up of 32 *modules* of 2 to 4 weeks each, 2 or 3 times a week (with 3 different modules per day). They forgot to mention, until I started, that the pass mark for each module was a minimum of 75%, and if we didn’t average much better than that, we would be forced to consider alternatives at the end of the year. It was a very tough course! We started with 4 classes of 16 students, and lost 23 by the end of the first year! And that was the easy year! 😆 By the end of the 3rd year, there were 17 left. 🙂

As well as all the great tools, we had computers! A PDP-11, and a VAX-11/750 (and a bunch of micros: Northstar Horizon, TRS-80s, Apple IIs, and others). At the end of the third year, those who passed were presented with a set of Heathkit H8 computer kit plans and some hardware (the Z80 CPU board, two 4KB RAM boards, and an I/O board). There was a gotcha (or two, actually)! First, it had to be fully assembled and working by the start of year 4, AND we had to make our own PCBs, and we had to get all the parts ourselves from the school inventory (with all appropriate paperwork). We had a lab for making PCBs, but we had to use a bare copper board and make up the photographic solution, set up the equipment, set up the etch bath (which was toxic and had to be at exactly the right temp or the copper would dissolve unevenly, etc.), drill the holes, and add vias. 🙂 Luckily, I’d spent much time in there, and it was easy for me. I even helped several other students. 🙂 Anyway, I built it without too much drama. I even added 2 more RAM boards so I could add the floppy drive system, which required 16KB of RAM. 🙂
I passed with a distinction average (and missed the coveted high distinction by a lousy 2%, all because I was sick as a dog for a couple of months!). One upside to doing this particular course at this school (unbeknownst to me until later) was that, as it was then the toughest course in Aus, if you made it past the 3rd year you were guaranteed a job! At the start of year 4, head hunters would turn up, and we’d listen to their spiels and choose 2 or 3 for interviews. We then had the choice of going to Uni for a year to get a degree, or starting work after year 4 ended (assuming we passed, of course). 🙂

I decided to take a job with Tandem to work on their new NonStop II systems after going to their new office (which was small at that time, half of one floor of a tall building in what was known as *Computer Land*, because almost all IT companies had their offices in that area of the city; the exceptions were IBM, DEC, HP & CDC). I saw these awesome black smoked-glass fronted cabinets and was shown what was inside, and fell in love! 😆 They showed the CPU cabinet, which had 4 rows of CPU cards, each with a row of LEDs at the front edge to show that CPU’s workload. The senior engineer said “watch this”, went to the CRT terminal, and started a db dump; text went rushing up the display and the LEDs on several cards went to max. Then he walked over to the rack, unlocked a fully lit card, and pulled it out! Another card that had been idling went to max, and the display of data kept going without a pause! And that was my intro to *Fault Tolerance*! Yeah… I wanted to work there real bad! 😆 Even today, I almost never see the level of redundancy that that system had in the early 80’s! Then a couple of things went bad for me, and I left. A year later, I was recruited and began my Mil/Int *career*. *shrug*
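To give a feel for what that demo looked like from the software side: Tandem’s real fault tolerance was built on checkpointed process pairs under their Guardian OS, which I won’t pretend to sketch here. But a toy Go program (all names made up, purely illustrative) can show the surface effect we saw that day, a busy “CPU” dying mid-run while the idling spare absorbs the load and the output never pauses:

package main

import "fmt"

// worker drains the shared request stream; dieAfter simulates the card
// being pulled from the rack partway through the run.
func worker(id, dieAfter int, tasks <-chan int, out chan<- string) {
	handled := 0
	for t := range tasks {
		out <- fmt.Sprintf("CPU %d handled request %d", id, t)
		handled++
		if handled == dieAfter {
			return // "unlocked a fully lit card, and pulled it out"
		}
	}
}

func main() {
	tasks := make(chan int)
	out := make(chan string)

	go worker(1, 3, tasks, out)       // the busy card: "fails" after 3 requests
	go worker(2, 1000000, tasks, out) // the idling spare

	go func() { // the "db dump": a stream of 10 requests
		for i := 1; i <= 10; i++ {
			tasks <- i
		}
		close(tasks)
	}()

	for i := 0; i < 10; i++ {
		fmt.Println(<-out) // all 10 requests are served despite the failure
	}
}

Run it and the “CPU 1” lines simply stop partway through while the “CPU 2” lines take over, with every request served. The real machine did vastly more than this, of course, but that was the magic trick.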

I owned a string of computers from the late 70’s (except when I was in the Mil). I started with the EDUC-8 (pronounced “educate”), which I built myself from plans in an electronics mag in ’75, and which is primarily why the H8 kit at TAFE later was no problem for me. 😉 (By the time I started the course, I had an Apple II.) I got a TRS-80, and wasn’t really impressed. After my stint in the military, I decided to get back into my hobby and got a BBC Master Turbo. It was really an amazing and flexible system, and it had so much s/w (and great games for the time). 😆 I added the Z80 & 32016 co-processors (it came with a 4 MHz 65C102), 8 extra ROM sockets (for 16 in total), the TI speech synthesis system, and the whole kit and caboodle! 😀 I began teaching myself to program again, and discovered that almost every language known at that time was available on the Beeb (I still have them all and all the books). I got: BCPL, Forth (I really liked that), Prolog, Lisp, Fortran-77, ISO-Pascal, and Extended BASIC. Later on, I found PL/1 (micro-PL/1 actually) and played with that for a while. 🙂 The engineer in me liked Forth and Lisp (I played with neural nets), Prolog was very strict and finicky, and BCPL was quite powerful and flexible.

I thought about getting an IBM PC, but didn’t like it. Then I discovered a PC *clone* (only better) called the ACT Sirius 1 (the Victor 9000 in the USA; ACT later became Apricot Computers). Its specs were so much better than an IBM XT’s: it was faster, had more storage (2 × 1.2MB DS FDDs), and was cheaper! I still have the manuals and some s/w for it. 🙂 Did a lot of coding and hacking on that thing. My next serious PC was an Apricot (UK) XEN HD (80286 CPU, 8087 math & 8089 I/O co-processors, 3.5″ FDD & 20MB HD with an optional 2nd, and Windows 1). I also got an Acorn (BBC) Archimedes with the 32-bit ARM RISC CPU, 4MB RAM, an HDD, and a swag of dev tools.

I eventually went back to Uni for my degree in Electronics Engineering / Industrial Design / Robotic Automation. I got a job as R&D Manager and created semi-automated machines (they only needed a single operator). I looked at Ada for the initial project, but found it too big and restrictive. I settled on the Inmos Transputers and the Motorola 68030 for several reasons, including that both had VMEbus-based development boards. 🙂 We used the Occam language, and a C-to-Occam translator (which actually worked pretty well, amazingly). 🙂 Occam supported concurrency and channel-based inter-process or inter-processor communication as a fundamental part of the language, and security was also a consideration.
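For anyone who never met Occam: you composed processes with PAR, and they could talk only over typed channels (c ! x to send, c ? x to receive), with no shared memory. Occam compilers are scarce these days, but Go’s goroutines and channels descend from the same CSP lineage, so a rough sketch of the same shape (illustrative only, not code from that project) looks like this:

package main

import "fmt"

func main() {
	c := make(chan int) // Occam: CHAN OF INT c:

	// One branch of the PAR: the producer (Occam: c ! i*i)
	go func() {
		for i := 1; i <= 5; i++ {
			c <- i * i
		}
		close(c)
	}()

	// The other branch of the PAR: the consumer (Occam: c ? v)
	for v := range c {
		fmt.Println("received", v)
	}
}

The design point is the same one the Transputer hardware enforced: the two branches share nothing, and synchronize only at the channel.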

(A lot of other stuff happened in between my getting the CoT and the degree in EE; it was a very busy decade.) 😉 🙂 I was a partner in a modem/control-systems biz in Canberra. One of my partners was also MD of GD (Aus); he was also my boss when we *worked for the Gov*. We had very high sec clearance, and I got to travel a lot! 😀

Maybe I will write that book… one day! 😉 😆

44 Kryten42 { 06.28.11 at 2:36 pm }

Oh! I just remembered as I was getting ready for bed… 😉

We also had a Cromemco Z2D at TAFE. When I went to GD in the USA, I found that they also used them, especially for supporting the F-16. I was told that the USAF used the Cromemcos to support various aircraft, but mostly the F-15 & F-16. We also had an ICL PERQ (which had been donated, as had the VAX-11/750 “Comet”). DEC got a lot of its workforce from that TAFE school; the DEC Aus HQ was about a 15 min walk from the campus. It caused a bit of commotion, as that school became the first educational institution in Aus to get a VAX. Melb Uni was trying to get funding for an 11/780, and the lowly TAFE got its smaller brother for *free*. 😆 Melb Uni eventually got one of the first VAX-11/782 “Atlas” systems in the world (for the cost of the single-CPU 780), and later one of the rare quad 11/784 “VAXimus” systems. 🙂

OK… g’night all! 😉 😀