Replacement Post Two
So, the second post has also closed comments on this two-month-old discussion. The software uses UTC instead of local time, so I only discovered the problem when I went to comment.
The practice stops spam from being dropped into old posts, but it complicates following the discussion.
41 comments
Badtux’s last comment:
I don’t understand why the guy would risk alienating anyone if he hopes to survive as a consultant. Word of mouth is the best advertising there is, and if the word is bad it is hard to recover.
I'm glad to hear some good news for a change. Things have been down for so long that it is very welcome.
You have some strange wishes, ‘jung’ penguin 😉
Maybe he’s just upset that I’m mucking around with “his” infrastructure? But the company that he built this infrastructure for no longer exists.
I find it amusing to be making money off of war criminals. They’ll never truly pay for their crimes in this world, but at least they’re paying *something*, and why not to me? 😈
Clients do things to systems all the time, and frankly, I stopped being annoyed about it the first time something broke: I told my contact at the company that they shouldn’t have changed what they did, but that I would do my best to get it working as well as could be expected. You have to accept that users will not always do what you expect, and live with it to stay in business. Having a hissy fit doesn’t do your business much good.
Face it, if M$ coded software the way I wanted it done, I wouldn’t have had a business. If users actually followed the proper procedures, I wouldn’t have had nearly as much business as I had.
So you are taking war reparations by proxy, right? 😈
Hiya! 😀 Had a damned head cold. So I missed all the fun! 😉
Hard to study with a head cold! *shrug*
I’ve come across the same thing with IT guys refusing to cooperate after losing their job (one way or another). It’s damned petty… But that’s Human nature for ya! The other common reason is “You’re supposed to be so hot, you figure it out!” So-called “professional jealousy” is rife in any industry. 😉 *shrug* And it is amazing how many *pros* are completely insecure! And I learned the hard way many years ago, working with insecure people makes everything a hell of a lot harder! So these days (if it happens) I generally make sure to get rid of them ASAP by any means possible!
MediaFire just celebrated ‘Pi’ day! And one of the ladies made a ‘Black-bottom pecan cheesecake’! I am gonna be drooling for days! Life sux! *sigh* And some crazy guy recited Pi to 100 places! LOL
http://blog.mediafire.com/2013/03/happy-pi-day/
Oh well, now that my head’s cleared up somewhat… back to it!
“So you are taking war reparations by proxy, right?”
GROAN!! 😉 😆
Y’all are heading towards winter and the flu season while we are finally warming up a bit. Yeah, despite the advertising, almost everything they sell for a cold makes you drowsy at a minimum, so learning is definitely not a real option.
You can’t convince people that acting like a jerk only gives people an excuse to get rid of you. A lot of people who assume they are irreplaceable seem really shocked when they get fired. The really good people stay to the end, or leave when the load of BS gets too high. The jerks have to be fired, because they won’t take a hint.
Weather’s turning nice here, mid 50’s at night and mid 70’s during the day.
“war reparations by proxy”… yeah :).
Meanwhile my SSD that died and got sent to Micron came back yesterday, so I’m backing up my laptop right now and will do the clone of the spinning rust to the SSD tomorrow. It will be nice to have my laptop back at full speed again, the SSD has spoiled me!
You get to commune with the mountains on a regular basis while breaking your Jeep.
There is nothing more annoying than having to wait on a formerly fast system. When things broke during one of our little storms, getting used to using dial-up again was a major pain.
Now you face the agony of Bill – updating all of the things you finished updating not that long ago.
Nope, no agony of Bill. The migration only took a few hours this morning. The procedure:
1. Resize the source drive’s C: partition to as small as possible using EaseUS Partition Master.
2. Put SSD into a USB3 enclosure.
3. Put Linux Live CD into CD drive.
4. Boot into Linux Live CD.
5. dd if=/dev/sda of=/dev/sdc bs=16384 count=1000
6. sfdisk -R /dev/sdc
7. dd if=/dev/sda1 of=/dev/sdc1 bs=131072
8. dd if=/dev/sda2 of=/dev/sdc2 bs=131072
9. Power off the system, remove the spinning rust from the computer, and put the SSD into the computer
10. Boot the computer into Windows.
11. Expand your C: partition to use the remainder of the SSD using EaseUS Partition Master.
No re-install of the OS required. No driver updates required (just re-running the Windows Experience Index tests, which tells Windows about the SSD and makes Windows turn off things that should not be done to an SSD, such as defragmenting). Cloning a drive makes things *much* easier. I could of course have done the same thing using a disk cloning program running from its own live CD, but I know how to use dd and sfdisk and already had the live Linux CD pre-burned, so… shrug.
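For anyone repeating this, one cheap sanity check before pulling the spinning rust out for good is to compare the copies. A minimal sketch, assuming the same device names as in the steps above (source disk /dev/sda, SSD in the enclosure as /dev/sdc):

# compare each cloned partition against its source; silence means they match
cmp /dev/sda1 /dev/sdc1
cmp /dev/sda2 /dev/sdc2
# or take a fingerprint of all four at once
sha256sum /dev/sda1 /dev/sdc1 /dev/sda2 /dev/sdc2

Either check is slow (it reads every byte), but it is a lot faster than finding out about a bad copy after wiping the old disk.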
And now things go “BAM!” again. I like 😈.
It is certainly easier to do things that way, than the rebuilding route, and I truly understand about not using new software to do something that you already know how to do with existing utilities. Linux has been the easiest way of dealing with hard disks for a very long time. Seagate used to include cloning software with its big disks, but it made choices I didn’t like, so it was back to the Red Hat boot CD.
And now I find out that Crucial has a brand new 1 terabyte SSD for $599. I guess you know what my next nerd purchase is going to be 😈. Well, when I can afford it. SIIIIIigh!
– Badtux the Nerd Penguin
That will be the next box. The speed of the AMD APUs has gone up while the price has come down, and the motherboard etc. will support them, but disk speed is more important than processor speed for most of what I do … decisions
It is so easy to spend money on your computer(s) that you might as well own a boat.
Or a swimming pool. If a boat is a hole in the water that you pour money into, a swimming pool is a hole in the ground that you pour money into :).
MySQL: Menace or boon? Discuss.
I’m leaning towards the menace side myself, because people see that it has SQL in the name, and assume that it’s a relational database rather than syntactic sugar in front of a dumb-stupid ISAM database (a.k.a. dBase II on steroids). The end result is that they end up writing the kind of queries that a real database would satisfy by having its query optimizer pick the right indexes, then wonder why their queries are so friggin’ slow. That, and they wonder why, when they’re doing a batch insert of a thousand rows of data, suddenly they can’t query out any data. (Hint: it’s because, like dBase II, writes lock up the entire friggin’ database. They’re not supposed to, but since the possibly affected row entries in the indexes are write-locked, and the select has to wait on them before it can get a read lock, it might as well be true.)
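If you’re not sure whether a given schema is actually sitting on that dBase-II-on-steroids engine (MyISAM, with its table-level write locks) or on InnoDB, one quick way to check is to ask information_schema. A minimal sketch; “mydb” and “comments” are placeholder names:

# list each table's storage engine for one schema
mysql -e "SELECT table_name, engine FROM information_schema.tables WHERE table_schema = 'mydb';"
# convert a table if row-level locking is what you actually wanted
mysql -e "ALTER TABLE mydb.comments ENGINE = InnoDB;"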
MySQL is a nice little database that responds to a subset of the SQL standard, but it has limitations on its scalability that will bite you if you ignore them. PostgreSQL is more robust, and usually a better choice if you want to get big without breaking things.
MySQL is shipped by default with a lot of other packages, and is the target for many blogging/CMS packages, so people get used to it and don’t want to change. PostgreSQL is a much safer choice for transaction processing, but people will stay with what they know to the bitter end, and ends can be very bitter in MySQL.
My entry to RDBMS was DB2 on an IBM 370, and I have created pseudo-RDBMS systems using dBASE II and dBASE III for small projects with static queries. I have gone to battle with Oracle fixing problems for clients. Given that Oracle now controls MySQL, people need to consider moving to something else. Oracle could drop it at any time, and that would leave a mess behind for users.
I can ignore it in WordPress, but I wouldn’t use MySQL for a serious project.
You already know what I think of MySQL. It’s a PITA!
Percona & MariaDB are better (and are supersets of MySQL). PostgreSQL is a good, but non-compatible, alternative. But none are perfect. I don’t like Oracle either (and I spent years battling it, mainly because its middleware tools were complete rubbish). Out of all the DBMSs I’ve used, I prefer Sybase (either Adaptive Server or Advantage Server). Of course, it’s not free or cheap (unless you count MS SQL, which was derived from Sybase). Sybase has good tools that work (e.g. PowerDesigner & PowerBuilder). If I were running a big DB project again, I’d use Sybase.
I’m using Percona (Percona Server with XtraDB) for my project for a few reasons. It’s much faster for a start! It doesn’t make stupid assumptions, such as that you will be running a massive DB with high resource requirements. It has much better InnoDB support, especially for performance counters, statistics & status counters; it has twice as many INFORMATION_SCHEMA tables (including the InnoDB data dictionary, which MySQL lacks for some no-doubt stupid reason); it has much better diagnostics (including mutex diagnostics); and unlike MySQL it handles corrupted tables gracefully, keeps a transactional replication state, and has fine-grained mutex locking! Yes!! 😉 😀 Oh… and it is much more configurable if that’s needed (unlike MySQL, which refuses to allow you to manually change or configure certain things). Etc, etc, etc…
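An easy way to see how much extra instrumentation a given build exposes is simply to list its InnoDB-related INFORMATION_SCHEMA tables and compare the output between a stock MySQL server and a Percona Server. A minimal sketch:

# list the InnoDB-related INFORMATION_SCHEMA tables this server build exposes
mysql -e "SHOW TABLES FROM information_schema LIKE 'INNODB%';"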
Honestly, dunno why people don’t use Percona or MariaDB. They are drop-in replacements for MySQL, and the advantages far outweigh the disadvantages. *shrug*
And that’s my 2c! 😛 😆
Oh! One of the other reasons I chose Percona is that, because I am using an offloaded DB server on a high-speed system, MySQL presents problems with NoSQL access. And Percona is optimized *out of the box* for Cloud & SSD use.
NoSQL Access with Percona Server
Here’s a comparison list:
Percona Server Feature Comparison
Dammit! Sorry… I am rushing to reply to this, and do my work and study… Been a VERY hectic week! Can’t believe it’s May already! Half the year is over, and I’m still trying to get stuff online and running! I’ll never get finished at this rate!
/rant! 😉
Yet ANOTHER reason is that, as discussed above, I’ll be using Nagios for my monitoring & fault-detection system. Percona has proven open-source plugins for Nagios (and Cacti also).
Including plugins to check: lvm snapshots, mysql deadlocks, mysql deleted files, mysql file privs, mysql innodb, mysql pidfile, mysql processlist, mysql replication delay, mysql replication running, mysql status, pt table checksum, unix memory.
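For anyone who hasn’t wired Nagios up to a database before, those plugins are just commands that follow the Nagios exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL) and print a one-line status. A stripped-down illustrative sketch of the idea (not one of the actual Percona plugins):

#!/bin/sh
# toy replication check: OK if the slave SQL thread is running, CRITICAL otherwise
RUNNING=$(mysql -e "SHOW SLAVE STATUS\G" | grep -c "Slave_SQL_Running: Yes")
if [ "$RUNNING" -eq 1 ]; then
    echo "OK - replication SQL thread running"
    exit 0
else
    echo "CRITICAL - replication SQL thread not running"
    exit 2
fi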
Also, Percona has a robust open-source hot-backup tool called Percona XtraBackup. It has better features than MySQL Enterprise Backup, which costs $5k / server!
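Basic usage is about as simple as hot backups get; a hedged sketch with placeholder credentials (the timestamped directory in the second step is whatever the first step created):

# take a hot copy of the running datadir
innobackupex --user=backupuser --password=SECRET /backups/
# replay the InnoDB log so the copy is consistent before you ever need to restore it
innobackupex --apply-log /backups/2013-05-18_04-00-00/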
What else… Ummm… Oh! Yeah, Percona has a good Data Recovery Toolkit for InnoDB (it will recover data that InnoDB’s standard recovery tool can’t, such as when there is a dropped table, or the DB is corrupted to the point where InnoDB can no longer recognize the data; both are problems I’ve had in the past with MySQL!).
One other thing I like about Percona is that it supports high scalability and growth, without paying through the nose as with MySQL! There is a compatible Cluster version (Percona XtraDB Cluster) that supports such things as: synchronous replication, multi-master replication support, parallel replication, and automatic node provisioning.
AND… last, but by no means least… Percona will provide independent reports and audits to prove their claims! 😉 😀
OK. I think that’s about it. 😉
…and it’s still my 2c! (Or maybe $1.50 now!) 😆
I remember reading an interview with Michael Widenius (one of the creators of MySQL, and the creator of MariaDB) a couple of months ago in ComputerWorld. I like him! He doesn’t trust Oracle a mm either, and gives reasons! 😉 😀
Dead database walking: MySQL’s creator on why the future belongs to MariaDB
BTW, just to clarify: if you *MUST* use MyISAM, then use MariaDB; it has better MyISAM performance and support than MySQL. Percona supports MyISAM for compatibility, but their focus is on InnoDB.
Right. Back to the confusing world of IPv6 (which I think I have the hang of now!) 😉 😀
I have been trying to figure out what my IPv6 subnet mask is. And I finally got it. 😀
I was given a /112 block. This essentially breaks down to a FFFF (65,536) range of /64 blocks. So, I have 281,474,976,710,656 subnets of 65,536 addresses available! I don’t think I’ll be running out for a while. 😉 😆
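For anyone checking the arithmetic: a /N prefix contains 2^(M-N) subnets of prefix length /M; a single /112 holds 2^(128-112) = 65,536 addresses, and 281,474,976,710,656 is 2^48, the number of /112s that fit inside a /64. A quick sketch of the sums, using plain shell arithmetic (nothing IPv6-specific about it):

# addresses in a single /112
echo $(( 1 << (128 - 112) ))    # 65536
# /112 subnets that fit inside a /64
echo $(( 1 << (112 - 64) ))     # 281474976710656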
I wanted to put my 3 servers on different IPv6 subnets and use IPv6 for all the communication between them. That way I don’t have to buy more IPv4 addresses, which are getting expensive. I have 2 (one of which is dedicated to SSL), which will be used for the main server with DirectAdmin, and it will route via IPv6 to the other servers.
My ISP doesn’t yet support IPv6 either (as per the discussion above), which I initially thought would be a problem! But it turns out not to be. 🙂 There are (so-called) IPv6 tunnel brokers that will take care of it for me (or you). There are 2 here in Aus, and 2 in the USA (Hurricane Electric & SixXS). Info here on Wiki:
List of IPv6 tunnel brokers
Right! back to work (@ 5AM!) Who needs sleep? It’s overrated… At least, until one collapses from exhaustion! *shrug* 😀
Kryten, I think the reason people don’t use Percona is because the “regular” MySQL comes pre-installed on pretty much every Linux system, and is “good enough” for little things like WordPress.
Bryan, MySQL is GPL so if Oracle tries to discontinue it or torpedo it, it’ll turn into LibreSQL like OpenOffice turned into LibreOffice. That’s the joy of Open Source software — nobody can discontinue it, as long as it’s useful enough for people to continue to maintain it, it’ll always be there.
Hmm, googled around and found that MariaDB (Monty’s fork of MySQL) is replacing MySQL in both the next Fedora and the next OpenSUSE, and has already replaced MySQL at Wikipedia. Seems like the LibreSQL-ization of MySQL is well underway :).
At least we won’t have to worry about Oracle reversing the optimization sequence between versions, as they did once to their own system. What a nightmare that upgrade was.
I know, Mosaic begat Netscape which begat FireFox …
LibreBase already talks to MySQL so it won’t be a huge change.
You guys are interested in things working, whereas I made my money when they broke, so less-than-‘perfect’ software was my meal ticket. I rarely got the opportunity to create something from scratch, but had to use the tools that the client had, which were normally the most common software products of the time, usually sold to the clients by people who quite often couldn’t use a computer of any kind if their life depended on it.
People should spend a lot more time planning and designing before they buy anything to be sure that they will have the capabilities that they need. The lack of planning and design was a big reason I wouldn’t work for the government. At all levels the government lacked the people to write a decent specification. You did what they specified knowing it wasn’t what they needed, and there would be bad feelings. It wasn’t worth the aggravation, and you waited months to get paid. Lawyers were even worse.
One of the great things about Open Source is that projects can share and use code from other projects. MariaDB has announced a merger with SkySQL who provide Enterprise level support for MySQL clients, and as part of that will use XtraDB and increase support of NoSQL engines.
You can add: Arch Linux, Chakra Linux, Slackware and Mozilla as full users of MariaDB. Others that now officially support MariaDB include Drupal, WordPress, Zend, Plone, phpMyAdmin, MediaWiki, Kajona, and lastly Ubuntu LTS. Major companies currently testing MariaDB include Amazon, HP & Rackspace among others. Also there is an open source Windows client for MariaDB called HeidiSQL. And the Navicat suite of DB tools (from PremiumSoft CyberTech Ltd.) now support MariaDB (and Percona and the Windows Azure SQL Database, formerly SQL Azure). I’m using the Navicat Suite.
As far as Percona, many VPS & dedicated host providers have been using Percona for awhile, especially for DB offload servers where it currently is superior to MySQL & MariaDB.
Also, there is another MySQL alternative called Drizzle. 🙂 It is being designed to be slimmer and faster than MySQL, and over time many MySQL components that won’t be required for their target market will be stripped out or improved (currently it offers much better storage support than MySQL, which still has lousy write-latency issues on HDDs, for example). It’s not being developed as a *me too* project; it’s primarily targeting the web-infrastructure and cloud-computing markets. Interestingly, its developer community includes staff members from Percona, Canonical, Google, Six Apart, Sun, Rackspace, Data Differential, Blue Gecko, Intel, HP, Red Hat, and others! So, I’m keeping an eye on that one! 😉 😀
Hey Bryan. 🙂 Posted at the same time again. 🙂
What you say is true. 🙂
Rackspace have a nice concise blog entry re the various MySQL products/projects.
Navigating The Various Versions Of MySQL
Yah, about to switch the test database to MariaDB (after doing a backup via mysqldump first, duh) and see what kind of difference it makes. The database is on shared storage, so it’s just a matter of shutting down the MySQL VM and firing up the MariaDB VM, then switching the IP address to the new instance. If it makes a huge difference in our write lock contention problem, we’ll switch production to it too. Our big issue is writes: we have potentially thousands of writes per second, each of which requires a read to make sure that we’re not putting in a duplicate value, but the read is ridiculously fast. MariaDB has done some benchmarks of writes on MariaDB vs writes on MySQL that show MySQL total performance dropping off rapidly after a hundred writes per second, while MariaDB keeps chugging along with minimal lock contention. Be interesting to see if that’s true 😈.
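One way to push that duplicate check into the engine itself, rather than doing a read before every write, is a unique key plus INSERT IGNORE; a hedged sketch, with the table and column names invented purely for illustration:

# one-time: a unique key over whatever defines a duplicate
mysql -e "ALTER TABLE samples ADD UNIQUE KEY uniq_sample (host_id, metric_id, sample_time);"
# then each insert either lands or is silently skipped; no separate SELECT needed
mysql -e "INSERT IGNORE INTO samples (host_id, metric_id, sample_time, value) VALUES (17, 42, NOW(), 3.14);"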
Kryten, there are numerous other databases we could go to. For example, Postgres is generally slower than MySQL, but has consistent performance under heavy write load and a much better query optimizer than MySQL 5.1 or 5.5 (dunno about MariaDB yet, which claims it has a much better query optimizer than MySQL), and it handles very large databases much better. Our biggest issue with the various clustering options is that they are designed for read-heavy workloads. Most Wikipedia accesses, for example, are reads; very few actions on the Wikipedia databases are writes. But we’re streaming heavy writes and doing relatively few reads; basically the only time people will log in and use the UI is if they get an alert on a system, and probably there will only be a few dozen logins at any given time. This workload pretty much destroys the majority of the available clustering solutions out there, which aren’t very good at “eventual consistency”.
Now throw in Hibernate, which is an atrocity that is interfering with my ability to shard the writes. Gah, the stupid, it burns, it burns! SpringSource as a whole should have been burned to the ground rather than purchased for millions by VMware. But that’s a subject for another tome.
As I’ve said before, my general DB preference is for PostgreSQL. I simply have less trouble with it than MySQL. However, for this project I have little choice. *shrug* 🙂
Here’s a couple of good recent stories about Percona’s performance you might be interested in. One from HP:
DNSaaS application MySQL HA solution with Percona XtraDB Cluster (Galera)
Sazze Smashes Black Friday Records, Enjoys Higher Uptime and Real-Time Visibility With Solutions From ScaleArc, F5 Networks and Percona
ScaleArc iDB Enables Instant MySQL Scalability, Faster Query Performance and Real-Time Visibility Without Requiring Any Changes to Existing Applications, Driving an Unparalleled 3 Month ROI
Oops, at some point my host, NearlyFreeSpeech.net, replaced MySQL with MariaDB 5.3, and it hasn’t affected anything on this site. They tend to make all of their choices based on stability and speed. Now I’ll have to change the sidebar logo.
The thing about software and DBMS in particular is that there is no one best choice. The best choice is the one that does what you are trying to do the best with the least amount of aggravation.
As you point out, Badtux, most DBMSs are tilted towards reads, because that’s the most common use, but your system is heavily weighted towards writes. That’s like some of the science experiments that were running at Scripps Institute. They were writing data all day long, but only ‘read it’ on a monthly basis to generate a report that was then fed to the next layer of the experiment.
Because of the nature of your data, the only way you can be sure is by running the tests yourself.
In your case, Kryten, your decision on the DBMS was made when you selected the services you would use. Until you can be your own host, that is the reality on the ‘Net. Everything involves compromises, so there is no ‘best’, but you can hope for ‘good enough’, and at least reliable.
Yah, well, we found the Amazon API to read/set my.cnf parameters on their MySQL database server, and found that the defaults there were set laughably low. I just emailed the co-worker in charge of that the URL of Percona’s my.cnf parameter generator. You’d think that with Amazon marketing this as a turnkey database service, they’d do the optimization on their end. Nooooo. They make *us* do it. If we’re going to do that, we might as well quit paying for their database service and just run MariaDB directly on Amazon instances.
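Whatever numbers end up in my.cnf (or get pushed through Amazon’s parameter API), it’s worth confirming what the running server actually picked up; a minimal check:

# show the InnoDB settings the running instance is actually using
mysql -e "SHOW VARIABLES LIKE 'innodb%';"
# the usual suspects to sanity-check: innodb_buffer_pool_size, innodb_log_file_size, innodb_flush_log_at_trx_commit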
People keep telling me, “use replication and do all your reads off the replicas!” Well, the problem with that situation is that reads are not our problem, writes are! And replication doesn’t speed up writes. To the contrary, replication makes writes slow to the speed of the network between the two VM’s. We’re going to do replication anyhow to have two copies of the data, but this is going to require some serious sharding, and soon. Unfortunately the idiot who designed the original program that we’re hacking on to fit our own particular needs didn’t think about sharding so that’s going to be a PITA…. sigh!
Oh yeah, on testing I switched us to MariaDB today. It was pretty simple — I paused the testdb vm and made testdb-clone from it and started it back up, popped into testdb-clone in single user mode and patched the networking to let me in, uninstalled mysql, installed MariaDB, pointed mysql at the correct location, edited the network address to be that of testdb (but didn’t activate it), shut down testdb, switched the IP address, start mysql. There was a total of perhaps 30 seconds of outage time in all of this, max. Nobody even noticed, all the clients just re-connected and kept on keeping on.
So far haven’t seen much difference, but testing only has 3M rows in it, so I’ll need to create some fake clients generating lots of data to get the rows up to production’s levels to see whether it really makes a difference…
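For the record, the “backup via mysqldump first” step mentioned above is about as simple as it sounds; a minimal sketch with placeholder credentials:

# consistent logical dump of everything; --single-transaction avoids locking InnoDB tables
mysqldump --user=root --password --single-transaction --all-databases > pre-mariadb-switch.sql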
Sharding is always a PITA unless there are obvious divisions in the data. Face it, very few people really plan for success with geometric growth. No one mentioned it, so the coders didn’t bring it up. Faster processors and bigger drives have been the solution for scaling up for years.
Good luck with your testing.
Ooooh! I’ve said it before, and I’ll say it again!! Western Digital HDDs (their commodity desktop ones anyway) are CRAP!!
I have (or had) one of their Elements Desktop 2TB external drives. The HDD died because it cooked itself. WTF was WD thinking, putting a high-performance HDD (it’s a Caviar Black 2TB) in a completely sealed plastic box with zero ventilation… Well, the only reason I can think of is that they know the drives will die and people will have to buy new ones (because some people ARE that stupid!). But I’m not a normal know-nothing consumer, and I will raise hell!
They give a standard 12 mth warranty on the Elements Desktop unit, but the Caviar Black drive itself has a 5yr warranty according to the WD site! Since this drive is only 1.5 yr old, I went to the store and took all my proof and demanded a replacement HDD or full refund within a week! I told them they could keep the crappy plastic box it comes in, I won’t be needing it. There are a LOT of people online screaming about this also. Thankfully, here in Aus, I am protected by law, not WD! The unit is CLEARLY not suitable for the purpose intended (and the proof is on WD’s own website!) 😆 I have copies and I’ve eMailed it all to WD Aus Cust Svc and stated that I will forward it all to Consumer Affairs etc., if a new HDD doesn’t appear within a week! I also sent scanned copies of my qualifications and my employment record as Svc Mgr for Apple and DEC, as well as snr Tech consultant for HP & Coles-Myer! 😈 I hope they are really stupid and try to take me on! I’ll sue them stupid! 😆
I also have a Buffalo 2TB external I got about 4 months ago, and I was curious because the case is longer than the others (about 1.25″ longer than the WD). Turns out it has a Caviar Black also, but they put in a fan (one of those cylindrical blower types), and the case is well ventilated. You can feel the hot air being blown out the back. According to the WD forum, the Black drives NEED to be actively cooled, as they can easily run at anywhere from 50-55C at 25C ambient!
According to WD, the operating temp for this HDD is 0-60C. But according to the S.M.A.R.T. diagnostics and the SMART website, anything over 53C is dangerous and the HDD is likely to fail within a year. They recommend that any HDD’s operating temp should be kept below 45C.
And lastly, the Black drives have both APM (Advanced Power Management) & AAM (Automatic Acoustic Management). APM controls how quickly the drive returns to Standby or Sleep (low power modes) after an access. AAM controls the spindle speed and head latency (movement speed) of the drive. As well as lowering the noise levels at lower speeds, it also lowers power consumption (from 10.7W to 8.2W) with at best a 10% hit to performance. Both of these are actually disabled in the Elements Desktop unit and cannot be enabled, as the controller they use doesn’t support these features (and I bet they do it so they can claim that this unit is faster than the equivalent Seagates or others!). Both are enabled in the Buffalo, and they provide software to manage them as well as monitor the unit. I’ve disabled both, and have been copying files from it to my NAS for over an hour, and the temp is 41C! So… my question is, if Buffalo can do it properly (and their unit is about $25 cheaper than the crappy WD unit), why can’t WD??!
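On a Linux box the same settings are easy to poke at directly; a sketch, assuming the drive shows up as /dev/sdb (USB enclosures sometimes need smartctl’s -d sat option to pass the SMART commands through):

# dump the SMART attributes; temperature is normally attribute 194, Temperature_Celsius
smartctl -A /dev/sdb
# query APM and AAM (add a value to change them, e.g. hdparm -M 128 for quiet mode)
hdparm -B /dev/sdb
hdparm -M /dev/sdb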
(And I included all that in the eMails!)
SOB’s!
All the commercial grade HDD’s I’ve had the past few years that have failed have been WD (and 1 Samsung and 1 Seagate Barracuda 1TB that was 4 years old).
So… Caveat emptor!
Right! I’m off to my favorite Cafe/Bakery to be pampered by my wonderful girls there (been going there 5 years now, and it’s like family)! 😀 One of my girls is gonna have a baby! Awwww.. And she’s a true sweetie and will spoil the kid absolutely rotten! 😆 I am so happy for her. She’s been truly wonderful to me over the last few years, and we see each other often outside of the Cafe (and her fiancé is a DJ, and I’ve been helping him out). They plan to get married in a month; the pregnancy was a bit of a surprise. Contraception doesn’t always work! 😉
Ciao! 😉
The problem with hard drives is that we’re basically down to a duopoly now that Hitachi’s disk drive division has been absorbed into WD — Seagate and Western Digital. Toshiba also makes drives but they’re very much an also-ran. Nobody else makes hard drives anymore. We have a cart in our back room full of dead drives that came back from customers. Half of those drives are 1TB Seagate “Enterprise” drives. The other half of those drives are 2TB Western Digital “Enterprise” drives. Every one of these drives was their top of the line SATA RAID-rated drive. We had an 8% failure rate per year, regardless of whether Seagate or WD. It’s pretty much a choice of horse manure from one vendor, and cow manure from the other vendor. It’s all manure in the end :(.
The only real beneficiary of the consolidation of the hard disk industry is the SSD industry. We still have a ways to go before SSDs match the cost per terabyte of spinning rust, but now that they’ve matched the density with Intel/Micron’s latest NAND chips, there’s nowhere for costs to go but down… and in the meantime, there is a *lot* of high-end storage being sold with SSDs, not only for performance reasons, but because they simply can’t get reliable hard drives from the incumbent duopolists. But perhaps they shouldn’t get too smarmy about going SSD, because it looks like SSD is going to a duopoly too: Intel/Micron on one side of the big pond, and Samsung on Kryten’s side of the big pond (everybody else is just packaging Intel/Micron or Samsung SSD chips with their own controller). And duopolists rarely care much about reliability because hey, the competition is just as unreliable, so it’s not as if you’re going to go to the competition…
That is the biggest problem in technology – the competition keeps getting eliminated by mergers and acquisitions. The last two standing essentially refuse to compete because that would cut the profit margins.
Or you could be Microsoft, and essentially compete with yourself :).
True story: My boss installed the new product. It didn’t work. I told him to turn off the firewall. It started working. He turned on the firewall. It quit working. Baffled, we looked at the firewall rules. Yes, an exception had been put into the firewall rules — but only for public networks. Not for private networks. So if we’d installed this program in a coffee shop using their WiFi, it would have communicated fine.
So I investigated the API being used to add the firewall rule, checking on MSDN. Hmm. Says “Works only on XP/2003, for Vista/2008 use Advanced Firewall.” Which, it turns out, is an entirely *different* API, which allows you to query the available network profiles (Public, Private, Domain) and add a new rule to all of them. But *everything* was different. Microsoft didn’t just add new fields to the existing API, they added an entirely new API without removing the old one. If you used the old one, it just chose a random network interface and used its profile, and one of the network interfaces didn’t have a gateway and thus was treated as if it were a public network, so that’s the one that got the rule.
So now I have *twice* the code in my installer for installing (and removing) firewall rules — one for XP/2003, one for everything newer. Is it any wonder that the typical Windows program suffers code bloat, having to be written to multiple competing API’s — from the same vendor?! 😈
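For comparison, the command-line equivalent of the newer interface makes the profile split explicit (this is netsh rather than the API the installer calls, and the rule name and program path are placeholders):

netsh advfirewall firewall add rule name="MyApp inbound" dir=in action=allow program="C:\Program Files\MyApp\myapp.exe" profile=private,domain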
All of which BTW points to *another* stupidity in Windows networking. The camera network for the security cameras doesn’t have a gateway. It’s a camera network, doh! But if you don’t have a gateway, you can’t tell Windows that it’s a private network — it treats it as if it’s a coffee shop open network, even though we’re talking about a physically security-hardened network with nothing on it but security cameras! Gah, the stupid, it burns, it burns!
Now add in a *third* network — the back end storage network to the private storage cloud — with the same attribute. Windows is treating the two most secure networks — the ones that are either entirely within a hardened security bunker or at least physically hardened — as if they’re coffee shop networks, and treating the only network that can actually be touched by the Internet as if it were a hardened security bunker network! Gah. The stupidity doesn’t just burn, it rages like a California forest fire!
They don’t have enough people writing code who have actually worked in the real world. They don’t understand how things are used, or how real networks are set up. They seem to have people who can create code, but not many who can modify it, and they don’t have enough permanent employees to develop the institutional memory to make the process easier. They hire for a project and then fire everyone when it ships.
The only reason they bother to ship new versions is to get more ‘rent’ from their customer base.
Well, XP certainly won the competition with Vista 😉
And now Windows 7 is winning the contest with Windows 8 :).
I’ve read about all the improvements under the covers that Microsoft has done. But the UI and Microsoft’s inflexible attitude regarding it has PO’ed people to the point they want nothing to do with Windows 8. And unless Microsoft creates a SBS version of 2012, they’re unlikely to get a lot of uptake on the server side either… I have four Windows 2003 virtual machines running our old 2003 site license, replacing them with four Windows 2012 virtual machines would cost a ridiculous sum of money, assuming that I even had an ESXi server that would run it (which I don’t, both of my ESXi servers are currently running ESXi 4.1 and won’t run 2012).
You have to be very careful when you muck about with the UI. Your new one may be much better for numerous reasons, but users don’t like change. The Dvorak keyboard layout is vastly superior to QWERTY, but people are not buying it. The Win8 UI may be the greatest thing since sliced bread, but it doesn’t work the way I do. Frankly I hated it just looking at pictures – it’s too ‘busy’. I prefer a clean, solid color desktop with the apps lined up in the taskbar. The only icon on my desktop is the recycle bin.
The upgrades are just to extract more rent after they have captured your computer, and customers are sick of the process. They are losing market share to the Xs as IT departments come under cost containment pressure. Re-training everyone on the network for a new UI is not something anyone wants to pay for at the moment.
Yeah. I hate all the *cutesy* crap M$ forces on us. First thing I always do after a Win install is turn everything to ‘Classic’ mode (and thankfully there are plenty of free tools for Win7/8 to do that!). Also, most people don’t seem to realize that they can in fact install and use a completely different UI/desktop and even kernel (just like Linux in a way)! I have my own hacked kernel for XP I created some years ago with some people on a Win forum. It uses 40% less RAM & 28% fewer resources than the std kernel, and is faster. We did it to fix the problem (before SP2 & 3) of Win running out of User GDI resources with monotonous regularity! Now with SP3 you can tune the Registry to make it better, but it’s not really a fix. *shrug*
Vista was the prototype for Win 7. M$ dumped it on people to find the bugs because the Corp’s wouldn’t touch Win 7 otherwise. 🙂 And I figure Win 8 is the same. Win 9 is due out next year, and I bet it will be like Vista/7. The idiot consumers will beta test Win 8, and will have to pay to get a working version (win 9). And the Corp’s will take up Win 9 (or whatever it will be called). *shrug*
It’s always been that way. Win 3.10 was garbage, 3.11 WfW was better. W95 was crap, 98 was somewhat better, 98Se was good, ME should have been drowned before birth! But it did fix a few problems (mainly some irritating mem leaks in 98SE) and since the code base was the same, it was possible to take the best of both and make a fairly stable w98SE+! NT 3 was woeful, 3.5 was better. NT4 was OKish… w2k was better, then XP was partly better and partly worse until SP2. SP3 fixed some things, and created new problems. Way it is! *shrug*
So, I got an eMail from WD (Aus) and I will have my replacement HDD tomorrow, with a full 5yr warranty (as per their website). They even apologized! So, given their reliability… I should have free WD drives forever! 😆
Thankfully, the S.M.A.R.T monitoring tools are pretty good and will give ample warning when the drive is likely to fail so you can get the data off before it does. *sigh* What a way to operate! Oh well…
Once you could stay up waiting for the .1 release of M$ software; now it’s the odd-numbered versions after SP1 is released. If capitalism actually worked, they would have been bankrupt after MS-DOS 6.0 was dumped on the unsuspecting masses.
It is almost like they are trying to appeal to the people who use Apples. They are never going to lure away Apple users, because they can’t duplicate the vision behind the Apple interface. They should stay away from that space because they fail every time they attempt to enter it. If they want a Smartphone and Pad OS, fine, but don’t impose that on people who have to deal with numbers and text. If you put a screen flat on a desk, someone will use it as a coaster for their coffee cup.
To be successful innovations should make things easier, not require relearning basics from scratch. If I wanted to use the same interface as an iPad, I would buy an iPad. All they are going to accomplish is to create new Apple users.
Sadly, nobody seems to have the attention to detail that Apple has. I’ve spent the past three hours trying to get my Samsung Galaxy S3 Android phone to talk to my HP laptop. I know I had it working before the SSD crashed and burned, but Samsung is worse than Microsoft — about 75% of Kies releases (Kies being their so-called “iTunes equivalent” used for syncing the ‘Droid) crash and burn either in the installer or in the program itself. Kies bricked my phone once and I had to have AT&T re-flash it. That’s simply not acceptable — and this has been true of Kies for the past two years. Right now I can’t even get it to install. It says ‘Feature transfer error: Error -2: The system cannot find the file specified.” What file? WTF? Look. I just finished writing a Windows installer. I know how to write Windows installers. And this isn’t how you write Windows installers. Talk about a pile of garbage!
And let’s talk about power consumption. Or not. Because if you run the wrong program, your battery will be dead within a couple of hours.
The iPhone has fallen woefully behind Android in a number of ways. But it Just Works(tm). I’m just completely sick of having to fiddle with Android all the time just to do stupid things like sync music to it (why I’m trying to get Kies working). With Apple, you plug it in and hit the sync button. Done. Am I asking too much to say that you ought to be able to do things like sync music to your device without having an advanced degree in installation of MTP drivers and blah de blah? Heck, I couldn’t even get my photos off of the thing except by turning on USB Debugging in the device preferences of the phone. Yes, the only way to transfer files without Kies (which refuses to install), apparently, is to put it into Debug mode!
Of course, now that Daddy Jobs is gone, Apple is likely to go the way of Palm. Palm is who Apple ripped off for their user interface, and Palm got stagnant and quit innovating and got run out of business by Blackberry and Apple. I haven’t seen much innovation from Apple recently. Just slightly newer and faster versions of what they already have, with no fundamental technological improvements. We’ll see, I guess. The thing about Steve, he knew when it was time to fold’em and move on to new technology. The Mac moved from “Classic” to Unix, and from Power to Intel, under his leadership. He recognized that the change had to be made, and had the authority and power to make it happen. I’m not so sure that Apple’s current leadership could be so bold…
I’ve been getting some pressure to get a ‘smartphone’ rather than my apparently ancient LG800, which does what I want. Now I have a new, and better, reason not to do it.
Syncing is one of the basic reasons for having the smartphone in the first place. I use a USB cable and copy to do it with the LG, but it is a manual process that involves moving things around on the phone. LG may have a utility for doing it, but they want as much as I paid for the phone for their software.
One of the big pluses about Apple was that Steve was a user. He used the stuff he had created and it had to satisfy him. I don’t think his replacement(s) are that involved. It makes a big difference in the ‘it just works’ department.