DO NOT DISABLE YOUR AD-BLOCKER!


If you’re using an ad-blocker like AdBlock Plus (you should be!), and a page tells you that you need to disable your ad-blocker to see the content, it’s time to leave that page, and not return until they change their policy. Here’s why:
 
1) The most popular ad-blockers have “whitelists” that let content providers submit their ads for screening. If their ads are respectful–don’t install malware on your computer, don’t pop up and cover the screen, don’t play loud videos, etc.–then AdBlock Plus and similar will let you see them! There’s no excuse for not being on the whitelist.  Are you a webmaster?  Click this link.  Now, you really have no excuse.
 
2) If an ad is not on the aforementioned whitelist, it’s because it’s a truly obnoxious ad, and/or the site’s owner isn’t a responsible citizen of the Internet. It’s literally unsafe to display such ads. In addition to being REALLY ANNOYING, they can install viruses on your computer/phone/device, steal your credit card information/identity, give your personal information to dangerous people, cost you hundreds or thousands in electronics repair bills, etc. There’s no good reason for displaying such an ad.  There’s no good reason for trying to make people see such an ad.
 
3) If you boycott pages that refuse to make their ads respectful and safe, you will force web site owners to make their content respectful and safe…which they should have done to begin with. Don’t give in. Yes, that includes Forbes.com, or your favorite “reputable” web site. It’s only as reputable as its content.  Be patient, and keep your ad-blocker on.
(You should also consider installing Web of Trust.)

Computing Lesson: How to backup a GPT hard drive using dd on Linux


This assumes you’ve already used “blkid”, “fdisk -l”, etc. to determine which drive is which.  All of these commands require root.

N = (number of partitions * 128) + 1024, in bytes
The drive in question has 4 partitions, so N=1536 (see the sketch after this list)
In this example, I’m using an eMMC drive: /dev/mmcblk0
Each partition appends “p<number>” after mmcblk0, such as “/dev/mmcblk0p1”
If you’re using a normal hard drive or USB drive as your source, it will show up as /dev/sd<letter><partition number>
My storage drive is mounted at /mnt/
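
If you’d like to double-check the drive layout and the value of N from the shell, here’s a minimal sketch (the device name and the assumption of 4 partitions come from this example; adjust them for your own drive):

lsblk -o NAME,SIZE,TYPE /dev/mmcblk0    # list the drive and its partitions
PARTS=$(ls /dev/mmcblk0p* | wc -l)      # count the partitions (4, in this example)
echo $(( PARTS * 128 + 1024 ))          # prints 1536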

Because one of the drives involved (storage drive) is connected via USB, it’s important to sync between transfers.  I find that using the “sync” mount option slows things down more than syncing manually.

First, backup the partition table:

dd if=/dev/mmcblk0 of=/mnt/mmcblk0.img bs=1 count=1536 && sync

(Note: the output filename is arbitrary.)

Next, back up each partition, syncing between transfers:

dd if=/dev/mmcblk0p1 of=/mnt/mmcblk0p1.img bs=4096 && sync && dd if=/dev/mmcblk0p2 of=/mnt/mmcblk0p2.img bs=4096 && sync && dd if=/dev/mmcblk0p3 of=/mnt/mmcblk0p3.img bs=4096 && sync && dd if=/dev/mmcblk0p4 of=/mnt/mmcblk0p4.img bs=4096 && sync

(Note 1: “bs=4096” sets “block size” to 4KiB, which speeds up the transfer.  Default is “bs=512”, which works, but more slowly.)

(Note 2: each “&&” tells the shell to only execute the command that comes next if the previous command worked.  This can also be accomplished with scripting, loops, etc.)
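
If you’d rather not type one long chain of commands, a simple loop does the same job.  This is just a sketch, assuming the same device, mount point, and 4 partitions as above:

for i in 1 2 3 4 ; do
    dd if=/dev/mmcblk0p$i of=/mnt/mmcblk0p$i.img bs=4096 || break    # stop at the first failure
    sync
done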

When you’re done, you should have a set of files representing the partition table and all the partitions.  They will be as big as the source data, so you may want to compress them with gzip or similar.  (This can be done with a pipe, but it might introduce a point of failure, so I don’t recommend it.)
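
For example, compressing the finished images afterward can be as simple as the following (a minimal sketch; it replaces each .img with a .img.gz, so run gunzip on them before restoring):

gzip -9 /mnt/mmcblk0*.img    # -9 = best compression (slower, but smaller files)

To restore the data, first restore the partition table: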

dd of=/dev/mmcblk0 if=/mnt/mmcblk0.img bs=1 count=1536 && sync && partprobe

(Note: I switched “if” with “of”, and added “&& partprobe” at the end.)
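
If you want to sanity-check that the partition table came back intact, you can re-run the listing command from the start of this lesson and compare it with what the drive looked like before:

fdisk -l /dev/mmcblk0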

Then, restore each partition with the same command you used to create the backups, but swapping “if” and “of” after each instance of the string “dd”:

dd of=/dev/mmcblk0p1 if=/mnt/mmcblk0p1.img bs=4096 && sync && dd of=/dev/mmcblk0p2 if=/mnt/mmcblk0p2.img bs=4096 && sync && dd of=/dev/mmcblk0p3 if=/mnt/mmcblk0p3.img bs=4096 && sync && dd of=/dev/mmcblk0p4 if=/mnt/mmcblk0p4.img bs=4096 && sync && partprobe

(Note: the “&& partprobe” at the end may not be totally necessary.  You may, however, have to reboot/replug the drives, regardless, at some point, to get the drive geometry to be properly refreshed in the OS.)

If, at any point, you run into errors, try removing “bs=4096” from each command that has it.  It will make the transfers slower, but more reliable.

Want to check the progress of a dd operation?  You can do this by sending dd the “USR1” signal.  If you have only one instance of dd going, you can simply do the following:

killall -USR1 dd

This will cause every running dd command to print its current status in its own terminal.  If you have multiple dd commands running, and only want to get a progress report from one of them, you can do this:

ps -A | grep dd
(Note the number to the left of the "dd" entry.)
kill -s USR1 <number>
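
If your system has pgrep (most Linux distributions do), the same thing can be done in one line; and if your dd comes from a newer version of GNU coreutils, it may also accept “status=progress”, which prints a running progress line without any signals at all.  A sketch of the one-liner:

kill -USR1 $(pgrep -x dd)    # -x matches the process name exactly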

Finally, if you want to periodically check on a dd command without having to keep typing or hitting ENTER, you can do the following.  In this example, “<command>” refers to either “kill -s USR1 <number>” or “killall -USR1 dd”.  Note that reporting progress uses more clock cycles than one would think, so it’s best to do so only once every few minutes.  This example checks once every 3 minutes.

while true ; do <command> ; sleep 180 ; done

(Note that “true” needs no quotes or backticks around it; it’s a shell built-in that always succeeds, which is what keeps the loop running until you stop it.)

You can press CTRL-C in the appropriate terminal to stop any of the above commands.

Happy computing!

 

 

Why Internet Ads Are A Dying Business Model


I started using the Internet in the early ’90s, back when almost nobody had a computer, and fewer still had an Internet connection.  Since then, I’ve never once made a purchase based on unsolicited Internet advertisements, and here’s why.

I started building computers at a very young age, and have since spent a large portion of my life fixing other people’s machines when they break.  The number one problem is malware, which includes (but is not limited to) viruses, spyware, and pranks; with the first two being the most common.  Malware usually exists for one of three purposes, with the first being true of almost all malware: (1) someone is trying to scam/steal money from you; (2) someone wants to annoy you for fun/revenge; and/or (3) someone wants to make a political statement (e.g., by carrying out a Denial of Service attack on someone whose political/economic activities they don’t like).  The number one source of malware is web sites trying to scam money out of you.  The number one way they do so is by advertising to you (often in ways that make you think they’re NOT advertisements), such that you visit their site; and WHAM! whether you know it or not, your computer is now infected.  (Sometimes just visiting a site gives you malware, and sometimes you have to download and run something from that site that they claim is good/harmless, but isn’t.  Beware any file ending in .exe, .bat, .com, .msi, .dmi, as well as anything that can be “installed” or “run”.  Yes, this includes “FREE GAMES!!!!!!!”)

If every person whose computer I fixed because they clicked on an ad paid $100 for it…wait, what am I saying?  They DID pay $100 (or so) in repairs for every ad they clicked on!  That, in a nutshell, is how computer repair shops stay in business: people who don’t know that ALL Internet ads are extremely likely to infect their machines with viruses, spyware, and so on–such that they will soon have their sensitive information stolen, and their computers rendered useless–do something they don’t know they should NEVER do on the Internet, and then bring their computer into the shop to have it repaired.  (Note: sometimes hardware fails, Windows/Linux/Mac OS screws up of its own accord, or someone falls victim to the dreaded PEBKAC error.  Usually, though, it’s because of Internet usage failure.)

This is why, in theory, movements in favor of allowing “respectful” ads and blocking all other ads (example: AdBlock Plus–a great browser add-on that everyone should have, despite its failures) are ultimately not realistic.  Even if an ad doesn’t play obnoxious sounds/videos at you, flash distractingly, open pop-up windows, take up half the page, etc., there’s never going to be any absolute guarantee that the content behind the ad isn’t fraudulent, in some way.  Nobody is capable of policing every advertisement on the web, so those who try to police ads settle for software that detects “annoying” behavior and blocks only that, rather than truly investigating every web page that advertises anywhere on the Internet.  That’s just not realistic to expect from anyone.

So, what’s that mean for ad-based revenue?  You can probably guess: as more computer users realize that clicking on ads (and things that don’t look like ads, but really are ads) is what’s causing them to shell out money for computer repairs, and take effective measures to avoid doing so, it will become decreasingly profitable for web pages to host web ads, at all.  Sadly, almost every page on the Internet can only exist because of advertisements, so we’re left with quite a quandary: how do we support worthwhile web pages (like this one, I hope…) without becoming easy targets for dishonest people looking to harm us for personal profit?

One solution that’s been proposed is to have every web site screen its web ads.  Unfortunately, ads just don’t work that way, and here’s why: web ads are served up by companies who are “aggregators” of advertisements, such as Google (AdSense), Facebook, AOL (not dead, yet!), NYTimes, CBS Interactive, Ad.ly, and many, many others.  Almost nobody has the resources to get enough companies to buy enough ad space from them to cover all of their expenses, so they instead let these aggregators post web ads to their pages in exchange for a small cut of the profits (and I do mean small).  “Well, make the aggregators censor out fraudulent ads!”  Sounds great!  …But again, the problem is volume.  How does a company of “only” 30,000 full-time employees (most of whom don’t sell ads, but do other things, like programming Gmail and GPS maps, designing driver-less cars, and so on) thoroughly investigate 100,000 ads a day to determine which ones lead to pages that will never ship purchased products; will attempt to infect some types of computers with viruses (dependent on OS and software versions); ask for sensitive information that they will sell to their “partners”, three years from now; and so on?  The short answer is that it’s just not possible to turn a profit by selling advertisements if you try to do this.  So, what we’re unavoidably left with is a stinky, seedy, smarmy Internet full of paid advertisements that nobody should ever click on.

So, again, where does that leave us?  I don’t know, and neither do web-centric economists (professional or hobbyist).  Most will acknowledge, if pressed, that ads are a blight on safe computing, and almost anything would be better than the digital cesspool we have, now.  But, like democratic forms of government, it’s the only option we have that seems not to utterly break at the drop of a hat.  So, instead (like any nominally-working form of government), it’s breaking slowly, and nobody is very certain about what we can do to fix it.  In fact, most people who have tried at all to deal with it are utterly befuddled by the problem.

So, what do you think the solution is?  Maybe the right kind of genius is reading this ad-supported web page, at this very moment…  😉

Millennial


I’m of the generation that started off in one world and then crossed into the next during my formative years.

While those before me barely understood how to use a typewriter, I spent much of my childhood building computers and typing at a rate that would put most secretaries to shame.

My generation was the first to start off talking on a phone with a 6-foot long spiral cord, and then carry around high-powered computers in our pockets as we entered adulthood.


As soon as we entered kindergarten or first grade–since, back then, kindergarten wasn’t required–our teachers did a little bit of math on their abacuses and realized that when we graduated high school, it would be the year 2000.  I know you think I’m kidding about the abacuses, but when I started school, that’s actually what we did math on.

Graduating high school in that seminal year somehow carried a lot of weight.

It wasn’t just a number; it meant that humanity was getting a sort of “new start”, in the minds of a lot of people.  Therefore, it was generally instilled in us from an early age that it was up to us and those born at a similar time to change the world drastically and, essentially, fix all the epic screw-ups of our parents, grandparents, and every previous generation.

The funny thing is, while we were starting to learn the world and contemplate how we might change it when we finally got all grown up, it actually did change into something that nobody before our generation could have fully expected or adapted to.

Just about every piece of academic information suddenly became free.  Yes, I know that if you want to really drill into a topic, you still have to take a free online course from an actual university; but essentially, it became the new big thing that, if you didn’t know something, you could type it into Yahoo, Excite, Altavista, and later, Google, and then…you knew it.

This was really cool, and our parents, teachers, and, once we got all grown up, our bosses thought that this was the best thing ever…until they actually got a taste of what it was like to be around someone who knew more than they did.

Not long into my adult-ness I got hired on as a Computer Assisted Drafter at a door company.  This wasn’t because I’d ever done drafting of any kind before, and certainly not because I knew a thing about wood-working, beyond a few projects in elementary school; but the boss had realized that the digital age–whatever that meant–had arrived, and all the famous ink-and-paper magazines said that it was going to make her rich if she embraced it.  Therefore, she eagerly hired the first freshly minted grown-up who knew a particularly great amount about computers to do all the computer-thingies that she and her other employees didn’t really understand.

My first task was to start learning the drafting program, and my second task was to remove the plethora of viruses and other malware from all the computers on the network so that the program would actually run.  That was cool, and dollar signs began to flash before my boss’s eyes.

My next task was to actually start drafting.  This was easy enough: plug in the numbers, draw the lines, and print it out on a really big piece of paper so the guys in the shop could build it.  Except that the head of the woodworking department, who was over me, didn’t trust anything that wasn’t written in graphite.  Therefore, my final task before I could be happily away in my new career was to learn how to teach a person born in the ignorant world of pencils and paper that computers could do things better.  We were running Windows Millennium Edition, so this wasn’t an easy task.  Ultimately, though, despite all the difficulties this entailed, the company failed for the most venerable and inane of reasons: the boss liked to play fast and loose with the books, and apparently “going digital” didn’t make that any more legal.

From this, it quickly became apparent that simply knowing how to do one’s job wasn’t enough to be successful at making money.  One first had to figure out how to deal with the obtuseness of human nature.

Funny thing: in all of our classes on learning “the theory of how to do everything”, not one class was taught on how to actually get along in society.  Stuff like “how to talk to your boss without making him mad” and “what a checkbook is for, and how to make the numbers be nice to you” just weren’t considered important.  Thusly, Millennials, for all our unique insights into what technology does and doesn’t change, and despite being the foremost experts in turning an ignorant world into a knowledgeable one, it’s become a famous fact that, as a group, we simply can’t hold down jobs to save our lives.  People are just too stupid to know when they’re being stupid, and being as how (according to everyone more than 10 years older than us) we were supposed to teach the world how to drastically change for the better, we’ve largely done what any brilliantly unwise person would do and tried to actually teach people how to stop being stupid.

Wikipedia has the following to say about the Millennial generation:

Millennials [were predicted to] become more like the “civic-minded” G.I. Generation with a strong sense of community both local and global…[Some attribute] Millennials with the traits of confidence and tolerance, but also a sense of entitlement and narcissism…Millennials in adulthood are detached from institutions and networked with friends…Millennials are somewhat more upbeat than older adults about America’s future, with 49% of Millennials saying the country’s best years are ahead though they’re the first in the modern era to have higher levels of student loan debt and unemployment…Some employers are concerned that Millennials have too great expectations from the workplace.  Some studies predict that Millennials will switch jobs frequently, holding many more jobs than Gen Xers due to their great expectations…[Some describe] Millennials’ approach to social change as “pragmatic idealism,” a deep desire to make the world a better place combined with an understanding that doing so requires building new institutions while working inside and outside existing institutions.

That last part is a real pain in the butt.  As children and young adults, we were stuck playing the game of, “Yes, teacher/parent/employer, you are older and therefore much wiser than I am.  Sure, I’ll teach you how to open your word processor…again.”  Being the lowest person on the social totem pole because of your age, and having the best insights about how to actually get stuff done in this strange new world is a really fast path toward unemployment, unless you learn to (A) forget that you know what you’re doing, and become satisfied with doing everything the stupid way–at least until your so-called superiors retire, die, or stop telling you how to do things–or (B) try to be your own boss…just like every other unemployed person.  So, “changing the world”, apparently, must first start from a position of not doing anything to change the world, or being jobless.

About that.  Changing the world, I mean.  Sitting on the fence between the world of mostly-unwilling ignorance and the world of willful ignorance means that pretty much every modern “social change” movement not created and run by Millennials looks a lot like a pipe dream created by those who grew up with a search engine good enough to avoid ever having to look at anything they don’t want to.  While the older generation could, in most cases, rightfully claim to be doing the best they knew how, based on the information they were given, the generation after us sounds a little tinny when they say that “something is a basic human right” because they read it on SaveTheWorldWithCuteCatPictures.com.  How do these people who started life with the best access to information that the world has ever seen still not realize that the kinds of supposedly radical changes they’re totally bent on bringing about have either failed or caused total economic, social, political, and governmental meltdowns every time they succeeded?

Sure, it must be a good idea to let Russia keep pushing west, through Ukraine, in spite of the treaty they signed at the end of the Cold War.  Maybe if we shake our fingers at them hard enough, they’ll march back to their own territory like Germany did in 1939.

The truly galling thing about this, though, isn’t the naivety of post-Millennial 20-somethings, but how the previous generation seems to have decided that if something shows up on the Internet when they type “social justice in Crimea” into Google, it must be absolute truth.  Did they totally forget about voting for education reforms that involved teaching HTML code to high school kids who showed any particular aptitude in computing?  It would take me under an hour to create a not-too-shabby-looking web page saying that cheeseburgers cause cancer because cows are naturally-occurring GMOs.  But I won’t bother to do that, because it’s already been done, and a lot of people already believe that cheeseburgers cause cancer because…”GMOs!!!!”…to a sufficient degree that they’re willing to start a protest in front of Burger King.  They might even bring their very-skinny-but-still-cute-enough-to-post-pictures-on-Pinterest vegan cats with them.

To put all this another way, Millennials who really absorbed and believed what they were taught in school tend not to start “blooming” until they’re in their thirties, if ever.

Wikipedia also notes that some sociologists refer to us as the “Peter Pan Generation”, and as horrible as it might seem to be called that, I can’t help but agree with this assessment.  How does a person learn how life works before the dawn of the Information Age, then learn how to be the fore-runners of that age, then learn how to avoid pissing people off by being too good at it, and then finally learn how to have a career (read: wait for the older generations to die or retire) without taking a long time doing it?  If we’re lucky, we’ll have started our careers by age 35, and not hate ourselves for the dead end careers we picked back before all the careers that were profitable and fun switched with all the careers that didn’t used to be.  Some of us are bloody lucky to land a “career” at a fast food restaurant by virtue of having a bachelor’s degree.  And our parents’ generation is all up in arms because we complain about having $50,000 of student debt and want the minimum wage to be raised.

Well, except for those Millennials who, against everyone’s wishes, didn’t attend or finish college.

Sure, there are a lot of people my age who managed to buy degrees that will eventually pay themselves off.  However, most of the people I know who were born around 1982 did what all the adults told them to and ended up with little more than very expensive pieces of paper and a few years wasted in college housing.

On the upside, additional time spent learning things means that, to an even greater degree, those who spent at least a little time studying the “cutting edge” in such institutions know more about this “brave, new world” than people who didn’t attend college, at all.  On the down side, we’re once again stuck trying to convince people older than us that we do, in fact, know some better ways in which to do things, that are different from how they’ve always been done, without getting into trouble for saying so.

It’s worth noting, however, that there is a very sizeable contingent of Millennials who have figured out how to “live the American Dream.”  Overwhelmingly, these are the people who were uninterested in, or just too stupid to understand all that new-fangled computer stuff, back in high school.  Sorry, but those Millennials who were good at these things know exactly who and what I’m talking about.  They did as their parents and grandparents did, before them, and got jobs doing stuff that wasn’t, in any way, going to change the world.  Some examples include accounting, vehicle repair, construction work, bartending, marketing, and anything involving keeping your head down in a bureaucracy.  Perhaps the rest of us realized too late that anything that has generated tax revenue consistently for a few thousand years will, by extension of a famous proverb, result in job security–even if it’s the sort of thing that only a trained monkey could totally avoid feeling suicidal about.  Surprisingly, most people who actually got into computers when Forbes was predicting that people who got into computers would get rich, currently do computer repair or technical support for close to minimum wage.  After all, how much are people really willing to spend to keep a computer running when they can get a cheap-and-crappy new one for around $300?

I’ve never met a business owner who wasn’t willing to save a penny, now at the cost of a dollar, later.  Computers are like that, and contrary to what one might expect, business owners are willing to pay more than most to keep theirs running.  That should put things nicely into perspective.

This has been a rather long rant, and what I really mean to say by all of it is that people of other generations gripe way too much about people of my generation not “hitting the ground running”, “grabbing life with both hands”, “pulling ourselves up by our bootstraps”, and all that jazz.  The fact is, we did all that, and it turned out that both the ground and life were covered in grease.  A lot of us fell flat on our faces with suddenly-ending careers, nervous breakdowns and other mental health catastrophes, stock market crashes, unrealistic expectations instilled in us from an early age, and so on.  That we’re at all willing to try–yet again–to get back on our feet in spite of how painful and discouraging our early adulthood was, is a sign of just how great this generation really is.

And we are going to change the world, damn it.

The Dreaded Blue Screen of Death


This is a poem I wrote in 2002 for my college creative writing class.  The professor hated it.  Most other people love it, or at least don’t hate it.  I’m pretty sure that’s a metaphor for college education, in general.

The Dreaded Blue Screen of Death

The archaic din of white text superimposed upon a black screen is no more.
The blinking cursor and the cryptic jibe,
“Syntax error,”
have receded from the much-coveted position of
“operating system”
into the subcutaneous untrodden cave of
“MS-DOS Mode.”

Upon the release of Microsoft’s 1995 crowning innovation,
the new “Windows” operating system,
fully equipped with tranquil desktop themes
and a myriad of cheery, sound-coordinated pop-up menus,
people around the world rejoiced.

No more will the unconscionable
Config.sys errors
of yesteryear interfere with the high-profile,
high-fidelity
file management systems of modern times.

The gratingly irritating beeps
and infinite lists of “Bad commands” or “Filenames;”
the stubbornly unbootable hard-drive has given way;
techies around the world groaned
for they knew that the days of horribly stubborn operating systems had ended,
and their jobs as the unapproachable gurus of the Great OS
would soon cease to exist.

But there was hope.

For the dreaded Blue Screen of Death has been replaced
by the Gray Window of Frustration.

Woe be
to the unsuspecting user who
dares check
the internal workings of his system—
who dares click on

Control Panel > System > Performance

the windows popping up—
“cascading”—
presenting him with that
forty-two billion dollar grin of approval,

and the user,
piles of driver disks and small papers on the desk in front of him,
stares,
with half-closed eyes
at the messages:

Compatibility-mode paging reduces overall system performance.
Drive C is using MS-DOS compatibility mode file system.

Noooo!
say his unflinching, half-closed eyes.
Briefly,
he reminisces about the
real operating systems of old.
He thinks,
why did they have to make this thing so dang user friendly?

The “Default”
green desktop stares back at him,
unaware of its error.
He stares for a moment longer
at those two
insolent messages,
and at the five-cent euphemisms–
the kind that make this operating system
the most widely used operating system in the world–
and explain why his computer is running so
DANG SLOW!

The stuttering CPU fan blows hot air
out of its overworked medium tower.
Glaring light from the ceiling fan reflects
in the dark window behind the computer.
He stares, dazed, tired
at the clock on the Taskbar.

“I should have hired a techie,”
he murmurs as he futilely replaces the yellow driver disk
with yet another version of the software.
He restarts the hardware installation program.

Windows will now search for any new Plug and Play devices on your system.
Your screen may go blank during this process. This is normal.

One pulsating vein highlights his greasy forehead
as he clicks the Next button.

Please wait while Windows searches for new Plug and Play devices.

For a few fleeting moments the hard drive activity indicator flickers its compliance.
“Mother board resources, mother board resources,” he chants,
in vain hopes of coercing the stupid machine into subjecting itself to his will.

For the next five minutes there is no activity.
His limp fingers grope around on the keyboard for those three familiar buttons:
CTRL+ALT+DELETE
.

The End Task window doesn’t appear.

He sighs as he presses them again.
Then chuckles, reminiscing,

as he longingly smiles at the familiar blue screen in front of him.

What PC Gamers Think About PC Game Critics


I just noticed this while looking for a specific game review.  These are screenshots of the “top rated games” for PC on Metacritic(.com), as of 11:30pm on October 8th, 2012.  You’ll notice that many of the top-rated games according to the critics (such as Diablo III, Max Payne 3) are in roughly the opposite position, according to user ratings.  I’ve noticed a trend of virtually any “AAA” (large-budget) game that’s released getting “universal acclaim” from critics no matter how much users hate it.  Of course, there are a few games that the critics and users largely agree upon–but is it by virtue of good critique or just random chance?  How many such games are “AAA” titles?  Have you ever seen a Metacritic page where a large-budget game gets universal shame from the critics?  I’m pretty sure I haven’t.  By contrast, user reviews seem to “blast” any game that’s genuinely awful–with very little regard for how much money was put into making it, or how good its ad campaign was.

This has bugged me for a long time: why are the game critics so often very, very wrong? I’ve spent a fair bit of money in the past on games that were supposed to be quite good, according to game critics, only to find that they were utter garbage (recent Call of Duty titles, anyone?).  Can you add up the money you’ve wasted on such games?  Don’t even get me started on pre-purchases…

So, what do you think?  Are the critics somehow being paid/bribed by game makers, or are their tastes simply very biased against what most PC gamers like?

On the State of Linux GUI Development


Posted on the Ubuntu-Devel-Discuss mailing list on 4-7-2012 (just a few minutes ago).  I don’t yet know how this will go over, or even if those on the list will have the patience to read it all; but I can hope.  In any case, I’m increasing the potential audience of this essay/letter, since I find the subject important and oft-neglected.

I’m not keen on getting involved in a debate, but since this issue affects my ability to be productive on Ubuntu, as well, I find it appropriate to inform the developers of it.

I’m aware that there have been numerous complaint threads on this mailing list and others about some people finding Unity (and Gnome 3) basically unusable for their purposes.  I’m in the same boat, and while I realize that “+1” on this issue is basically pointless, the continued postings on the subject raise an important issue that’s only being obliquely touched upon, for the most part:

Ubuntu has decreased in usability for many people due to the all-out “war on the old GUI.”  This is the point where someone says, “use Gnome classic!”  This, however, proved rather problematic for me, and continues to be so for many others; also, it’s both condescending and counterproductive to insist that users with genuine problems with the direction of development simply “deal with the new GUI” or switch back to a somewhat broken Gnome 2 that lacks significant pieces that made Gnome usable before these changes started.  I’ll mention a couple of examples, just to cursorily illustrate that I’m not simply “blowing smoke,” but ultimately it’s something you have to use and have problems with to fully understand.

1) No system menu; everything is shoved into Applications > Other.  Having 30+ items here is utterly impractical, and I found that not everything even made it into a menu after System was removed.  I often had to search the web for the program’s actual name so that I could then open a terminal and type the command to let me do some basic administrative or customization task.  This is greatly compounded when much of the menu is full of things that were designed for Unity or Gnome 3, and therefore do nothing useful for Gnome classic–or just as often break things.

2) Even if you can find everything in Gnome classic that you used to use in Gnome 2, half the stuff on a recent Ubuntu installation is stuff that breaks Gnome 2, or is only usable with Unity (and therefore Compiz), or Gnome 3.  Furthermore, if you reset as much as you can to “default” for Gnome classic (delete config files in ~, etc.), you’ll end up with programs that require working Unity/Gnome 3 components, but since you’re no longer configured for those desktop environments, they’ll be unpredictable and crash frequently.  This is especially bad because Compiz breaks Unity (and its components) when not properly configured.  I experienced window manager crashes, freezes, and even segfaults about every hour while using a stock install of Unity with just a few minor Compiz customizations.  These crashes also carried over into Gnome classic, once I stopped using Unity.  (Yes, I disabled Unity support and enabled Gnome support in Compiz.)

Ultimately, I’ve been forced to switch to KDE on Linux Mint–neither of which I’m particularly fond of.  The thing is, though, they work *for me* 10 times better than Ubuntu has since it dropped Gnome 2, so it’s the best of several undesirable options.  I’d love to go back to stock Ubuntu, but as long as the GUI is busy being re-invented (not just in Ubuntu, notably), I’m finding myself stuck dealing with Windows a lot more, and Linux–which I generally like much better–a lot less.  I used to boot into Windows only to play games, but now I find that staying in Linux means spending lots of time arguing with unnecessary GUI problems.  (I’m personally quite fed-up with it all, but I’m trying to be civil and rational so as to be productive, rather than a problem, in and of myself.)

…But all the above is only marginally relevant; the real problem, as I see it, is the development trend being espoused.  I understand that it’s great to invent new, exciting software, and I don’t begrudge anybody of it.  In fact, the mere fact that you bother to write for a free OS is admirable, and I commend you for it (for whatever that’s worth).  Unfortunately, it’s been a consistent-but-growing trend in Linux development, generally, and Ubuntu, specifically, to make a piece of software *pretty* good, then whimsically decide that instead of making it *really* good, it’s more fun/better/whatever to invent a completely new thing, based on better principles, technology, and so forth.  Unfortunately, these good ideas rarely get fully realized before yet another set of good ideas emerges and causes working systems to be abandoned in favor of alpha-stage projects.  This is a problem endemic to Linux as a whole, but it’s been especially disappointing to see it infest the otherwise amazing Ubuntu.  For an example, I note that Red Hat 7.2 had a rather good built-in, cross-environment menu editor.  Then, the underlying software changed, and it was about 5 years until Gnome had a menu editor again (which Ubuntu’s developers helped to create, as I understand it).  Similarly, KDE3 had a good menu editor, but now that KDE4 is out, it’s all but impossible to simply organize items by alphabetical order.  So, while the underlying technology got better, the useful, basic features that we all expect to “just work” (as they do in Windows and Mac OS X, which are the main competition to Ubuntu and Linux) have *repeatedly* gone by the wayside because it’s somehow more appealing to re-write things than to polish them.  I encourage those who still don’t believe me to look for other examples, themselves, rather than fixating upon the ones I’ve given; productive conversation would suffer from arguments over inane details like these.

Since the release of Warty Warthog in the early 2000s, the Ubuntu developers turned the quirky-and-barely-functional Gnome desktop into a darned good system for getting things done.  With a couple years more polish, it could have been truly competitive with GUIs by Apple and Microsoft.  But as soon as it had really come into its own–and before it became “really good”–folks decided to completely redesign a working system, producing the magnets-for-complaints we call Gnome 3 and Unity.  (When you get rid of something that works, in favor of anything at all that’s different, you WILL have complaints–some for good reason.)  I don’t at all doubt that those systems will one day be at least a little better than Gnome 2 ever was, but since in the meantime we have nothing but half-baked new systems and gutted old systems (i.e. Gnome classic and its oddly-more-faithful fork, MATE), the state of the Linux GUI has brought adoption back to a matter of just how much time a competent computer user wants to waste on learning something new, rather than sticking with a system that already works for him.  For a lot of people, the question isn’t even reasonable.  Until this trend of “fixing” things that aren’t broken (from the end user’s perspective) by inventing “shiny-yet-incomplete” things ceases to hold sway, Linux will truly never garner a solid place in the desktop market.

So, here’s the “thrust” of my dissertation: Please, developers, stick with something that works until it’s become something truly great; then when public demand requires it (or you foresee that requirement) make something new and better–but under no circumstances take away what we already use and love!!  It feels like a betrayal of the user base (those who don’t like the new system, at least–and you know there are plenty, if you read these mailing lists), and it puts users in the very awkward and problematic position of deciding whether to limp along with a broken system or just revert to a commercial offering.  I personally have a somewhat fanatical love for Linux, but for me, anyway, no amount of fanaticism can compete with a gross lack of usability (for my purposes, of course).  I beg you, the developers of this otherwise great OS and superior Linux distribution, to consider the awkward place you’ve put your (existing/potential) user base in, and allow us to install and use the FULLY-FUNCTIONAL version of what’s previously worked for those of us who don’t want the new system just yet.

I know that I’ve been wordy and dissertated at length, so if you’ve read all the above, you have my sincere gratitude.

Thanks.

–Dane Mutters

 

UPDATE (4-18-12): As it so happens, somebody’s actually found a term for a big part of the problem with the new GUIs: The Principle of Least Astonishment.  Now, if we can only get the Gnome and Unity devs to take a look…