Google Maps seems to need to learn that some streets go East AND West

I think that Google Maps is overlooking a basic function: In the real world, people sometimes go east, and sometimes go west.

Yesterday for the third time in a couple of years I relied upon Google Maps for directions and was sent to the wrong place. Caveat Emptor strikes again.

In Montreal, east-west streets that cross St. Laurent Boulevard (which, no surprise, runs roughly north-south) start their numbering in both the east and west directions from there. Hence you can have two equally valid addresses on a given street, with the proviso that one is designated “East” and the other “West”. (Hey! It’s Captain Obvious!)

Fortunately, the address I was looking for was 151. During an hour of circling the neighbourhood looking for parking around “151 Laurier” (East, as proposed by Google Maps), I found out that that address wasn’t a dépanneur selling a huge variety of microbrewery beers, and by the looks of it never had been, and finally decided to go further down the street looking for similar businesses. Then I suddenly had a V-8 moment: “Oops, what about 151 Laurier WEST?” I high-tailed it in the opposite direction and found the business in question. And to my disappointment, they were out of the particular beer I was seeking — Weizenbock, by La Brasserie Les Trois Mousquetaires, which has replaced my previous definition of ambrosia, Trois Pistoles by Unibroue.

Twice before I have had similar experiences:

About a year ago, on a business trip in completely unfamiliar territory in Western Canada, I had looked up a client’s address. Not knowing about any local east/west split in addresses along the Trans-Canada Highway, I tried to find the address, on the east end of town, that Google Maps had provided; I was about 45 minutes late by the time I finally thought to suspect that my client’s address was a “West” address and got there.

And just to quash any participant in the Peanut Gallery about to say “Aha, well, when using Google Maps you should know that in such cases they’ll always send you to the East address, so be sure to always check both!”: a couple of years ago I had looked up a local address for a client, and Google sent me to Gouin Boulevard West here in Montreal, a solid 45-minute drive away from my client’s Gouin Boulevard East address.

Now the Peanut Gallery may have a point: In the real world, people sometimes go east, and sometimes go west. And when it comes to using a free online service, you get what you paid for. As such, when looking up an address on any online service, one should think, “Hmmm, this is an east-west street which may cross such-and-such a street and as such have East addresses and West addresses; I should specify both east and west in my address search.”

But I wonder how many other people place enough faith in Google that under such circumstances — such as when they don’t know that a given street has an East and a West — they would reasonably expect, where a street has both valid East addresses and valid West addresses (and likewise for North and South addresses), that Google’s response page would come back with “Did you mean (A) 151 Laurier East, or (B) 151 Laurier West?” Certainly Google seems good enough at asking such a question when you slightly misspell a street or city name, or when it doesn’t recognize the address you supply and provides you with half a dozen options, as often spread across the country as across the city.

Cool (or mundane) computer trick impresses co-worker

I managed to impress someone at the office this week with a cool (read: mundane) computer trick.

I got a call from the secretary, who sits a few seconds’ walk from my desk, asking for a scanned version of my hand-written signature. I replied that I have one on my computer at home and could easily get it within a few minutes; she replied that it would be faster for her to just walk over with a piece of paper for me to sign, which she would then scan and play around with.

And this is where I began to impress her: By the time she got to my desk with said sheet of paper, I had already VNC’d into my home server’s desktop and was in the process of doing the same from the server to my main computer’s desktop (I’ve gotta finish giving it a static IP and setting it up so that I don’t have to go through my home server 🙂 ). I finished logging into my desktop, looked in the likely directory, and voilà! I fired up my home email client, and within a couple of minutes, she’d received my scanned signature.

Beyond the fact that the Gnome desktop comes set up to do VNC — and the fact that I installed TigerVNC instead of using the standard Gnome Remote Desktop Viewer — too bad I can’t really claim this as a cool Linux trick, since my computer at work is Windows, and you can set up Windows boxes to “pick up the phone” too ….

She was still impressed, though. And it took about as much time as the whole process of signing a piece of paper, scanning it, cropping it, etc.

PDFs, Scanning, and File Sizes

I’ve been playing around with PDFs for the past few weeks and have noticed a very interesting thing: A PDF is *not* a PDF is *not* a PDF is *not* a PDF, ad nauseam and, it would seem, ad infinitum. Part of me almost wonders if the only distinguishing feature of a PDF is the .pdf extension at the end of the file name. In “researching” this post I have learned what I knew already: PDF boils down to being simply a container format.

Lately I have been scanning some annual reports from years past for an organization I belong to, and due to the way the xsane 0.997 that comes with Fedora 12 scans pages — which, I will concede straight out of the gate, I have only explored enough to get it to do what I want and to learn how it does things “its way” — the PDF file sizes are “fairly” large.

Along the way I ran into one of the quirks in xsane 0.997: something about its settings doesn’t make it stop between pages for me to change pages; at least, I haven’t yet found where the setting is to make it pause between pages. This matters because my scanner doesn’t have an automatic page feeder. The first page of results of a Google search turns up several comments about this problem, but no solution, and at first glance the second page of results is no help either.
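For what it’s worth, the command-line side of SANE does have the pause behaviour I was looking for. A sketch, assuming SANE’s scanimage tool (this is scanimage, not xsane itself, and the options will vary by scanner; shown commented out since it needs a physical scanner attached):

```shell
# Not xsane, but SANE's own CLI can do paused batch scans: --batch-prompt
# waits for Enter before each page, which suits a scanner with no feeder.
# (Commented out: running it requires an attached scanner.)
# scanimage --batch=page-%02d.pnm --batch-prompt --resolution 75 --mode Lineart
```

The per-page output files could then be fed to the Ghostscript join described below.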

So I end up scanning pages one at a time, and then use Ghostscript to join them all up at the end into a single PDF.
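The join itself is a one-liner. A minimal sketch of the kind of command I mean, using Ghostscript’s pdfwrite device (filenames are made up; the first two commands just manufacture dummy one-page PDFs so the example is self-contained, and the whole thing is guarded so it is a no-op where Ghostscript isn’t installed):

```shell
# Guarded no-op where Ghostscript is not installed.
if command -v gs >/dev/null 2>&1; then
  # Manufacture two dummy one-page PDFs; in real life these would be
  # xsane's per-page output files.
  gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=page-01.pdf -c showpage
  gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=page-02.pdf -c showpage

  # The actual join: pdfwrite rewrites all the input pages into one PDF.
  gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=report.pdf \
     page-01.pdf page-02.pdf
fi
```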

Without having added up file sizes, it was obvious that the total size of all the scanned pages, at 75 dpi and in black and white, was noticeably larger than the single PDF with all the pages joined. This did not bother me since, again without having added things up, the difference didn’t seem *too* great, and I assumed that the savings were principally due to administrative redundancies being eliminated by having one “container” as opposed to 25 to 30 “containers”, one for each individual page.

Then this week a curious thing occurred: I scanned a six-page magazine article and, separately, another two-page magazine article, at 100 dpi and in colour, and whaddya know, the combined PDF of each set is smaller than any of the original source files. Significantly so. In fact, the largest page from the set of six is double the size of the final integrated PDF, and in the case of the set of two, each of the original pages is triple the size of the combined PDF. I’m blown away.

Discussing this with someone who knows the insides of computers way more than I do, I learned something: It would appear that xsane creates PDFs using the TIFF format internally (for image quality), whereas Ghostscript, when joining files, seems to do what it can to reduce file sizes, and in this case, I imagine, converts the TIFFs inside the PDFs into JPEGs. A bit of googling indeed appears to associate TIFFs with PDFs when it comes to xsane; indeed a check of the “multipage” settings shows three output file formats — PDF, PostScript and TIFF. And looking in Preferences/Setup/Filetype, the TIFF Zip Compression Rate is set at 6 out of 9.
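If my person in the know is right, that recompression can also be asked for explicitly. A sketch, assuming Ghostscript’s -dPDFSETTINGS presets (/ebook, for instance, downsamples images to roughly 150 dpi and typically recompresses them as JPEG; the first command just manufactures a stand-in input file):

```shell
# Guarded no-op where Ghostscript is not installed.
if command -v gs >/dev/null 2>&1; then
  # Stand-in input file; in real life this would be an xsane-produced scan.
  gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=scan-raw.pdf -c showpage

  # Rewrite with the /ebook preset to downsample and recompress the images.
  gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook \
     -sOutputFile=scan-small.pdf scan-raw.pdf
fi
```

On a real scan (unlike the blank stand-in here) the /ebook output is usually dramatically smaller than the input.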

So I googled PDF sizing, and one result led me to an explanation of the difference between the “Save” and “Save As …” options when editing a PDF: “Save” will typically append metadata on top of metadata (including *not* replacing the stale metadata in the “same” fields!); “Save As” is what you really want to do to avoid a bloated file, since everything that should be replaced will be.
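One way to see that append-on-append behaviour from the outside: each incremental “Save” leaves another %%EOF marker in the file, so counting the markers hints at how many times a PDF was saved in place. A sketch (the gs line just manufactures a fresh test file, which should carry a single marker):

```shell
# Guarded no-op where Ghostscript is not installed.
if command -v gs >/dev/null 2>&1; then
  # A freshly written PDF normally carries a single %%EOF marker.
  gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=fresh.pdf -c showpage

  # -a treats the (binary) PDF as text; a count above 1 suggests
  # incremental "Save" updates were appended over time.
  grep -ac '%%EOF' fresh.pdf
fi
```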

Another result describes (what is no doubt but a taste of) the various possible settings in a PDF file, and how, using a given PDF editing application, you can go through a PDF, remove some settings, correct others, and so on, reducing the PDF’s size by essentially eliminating redundant or situationally irrelevant — such as fields with null values — information whose presence would otherwise bloat the file unnecessarily.

I’ve known for a few years that PDFs are a funny beast by nature when it comes to size. For me the best example by far used to be the use of “non-standard fonts” in the source file, oh say, any open-source font that isn’t on the standard list of “don’t bother embedding this font, since we all know nine out of ten computers on the planet have it”. In and of itself this isn’t a problem; why not allow for file size savings when it is reasonable to presume that many text PDFs are based on a known set of fonts, and most people already have said set installed on their systems. However, when one uses a non-standard font, or is sitting at the tenth computer, and constantly creates four-to-six-page PDF text documents that are ten times the size of the source documents, frustration sets in. Having wondered whether a font substitution could be designated along the lines of “use a Roman font such as Times New Roman” when such a font is used — such as, in my case, Liberation Serif or occasionally Nimbus Roman No9 L — I asked my “person in the know”. Apparently Fedora 12’s default Ghostscript install, whose settings I have not modified, does just that.
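Whether a given font actually got embedded is easy to check with pdffonts from poppler-utils (its “emb” column says yes or no per font). A sketch, run against a throwaway file (a blank page like this one simply lists no fonts under the header):

```shell
# Guarded no-op where Ghostscript or poppler-utils is not installed.
if command -v gs >/dev/null 2>&1 && command -v pdffonts >/dev/null 2>&1; then
  gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=sample.pdf -c showpage
  # Lists each font with an "emb" (embedded) yes/no column.
  pdffonts sample.pdf
fi
```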

I guess what really gets me about this is how complicated the PDF standard must be, and how wildly variable the implementations are — surprising, at least, given that Adobe licenses PDF creation for free provided that the implementations respect the complete standard — or, more to the point, how wildly variable the assumptions and settings are across all the software that creates PDFs. I bet that were I to take the same source and change one thing, such as the equipment or the software, the results would be wildly different.

Concurrently with the above scanning project, I happened to experiment with a portable scanner — a fun challenge in and of itself to make it work, but it did, without “too much fuss”. And I found out something interesting, which I knew had nothing to do with PDFs but (I presume) rather with scanners, drivers, and xsane. I tried scanning some pages of one of the said annual reports with the portable scanner, on an identical Fedora 12 setup using xsane, and the PDFs produced were far greater in size than those scanned with my desktop flatbed scanner. The flatbed would scan the text and the area immediately surrounding it, but correctly identified the “blank” part of the page as blank and did not render those areas, thereby significantly reducing the scanned image size. The portable scanner did no such thing and created images from the whole page, blank spaces rendered, in this case, to a dull grey and all, thereby producing significantly larger PDF files from the same pages. However, as I mentioned, I assume that this is a function of the individual scanners and their drivers, and possibly of how xsane interacts with them, and in my mind is not a function per se of how xsane creates PDF files.

Another interesting lesson.

ASP and Windows-centric web pages slow

I am at another hotel on business. (Ho-hum, they have a password; I don’t care at this point to find out how long it’s been in place, though I’m sure the odds are good that it’s been a while. Shall we say that it’s named after a good ship. I suppose I could be wrong.)

Surfing is, often enough, slow. A good number of pages hang and time out. At first, I don’t notice much because the main one I visit is always slow, always hangs and occasionally times out.

At first I was wondering if it’s because I’m in a hotel using their internet — you know, bottlenecks due to lots of people using the internet hookup at the same time (what, at 6 AM?), people setting up repeaters in the bushes to steal signal because there’s an insecure password that at worst would cost them a night’s stay to figure out, etc.

I also have the company laptop with me, to do company work. (Of course, I have my own laptop to do personal stuff; the company policy on personal use of its computers is getting to be much like closed-source licences that make you wonder whether you may use the software at all, even for its apparently intended purpose.) It runs Windows on a Centrino Core 2 at something like 2.8GHz or more. For the fun of it I type in the web page du jour. It loads quite speedily, while the same page is still hanging on my F11 ACER Aspire One.

And I notice something interesting: the web page du jour is in ASP. So is the historically slow site. Last night, the site that was impossible to properly log into from my laptop — the work email server, such that I logged into my own web email to send the message to the home office — is, you got it, on a Windows server. A fourth site this morning timed out in the middle of a survey I had agreed to take; I hope it’s a Linux server, since it’s for a magazine I subscribe to on a little topic called Linux (the publishing house also has a PC magazine, so go figure.)

I’m wondering: Am I having difficulties with these pages because I’m using Firefox? Linux? A slower machine? Is it Fedora’s build of Firefox 3.5 Beta 4 or whatever? Some combination of the above? Is this an ASP compatibility problem? Or an ASP discrimination problem? Or are the pages in question themselves biased against non-Windows computers? Or non-IE browsers? (Never heard that one before!) (here’s my archive)

To be continued …

The new Google OS

Well for those of you who haven’t heard, internet darling Google announced in the past day or two that it will be releasing a new OS expected in 2010 (here’s my archive).

I had a few reactions:

– Google getting headline news should make things interesting, and they have the money and clout to be a real competitor. I heard about the new Google OS on the morning news, where one of the taglines was “Google to launch operating system”. Sorry, Ubuntu only gets headline news among gearheads like me (see below), and it’s a footnote at best when people talk about that South African space tourist.
– What will “it” be? Linux? Google-Hurd? Open-source? GPL? BSD-licence? Apache Licence?
– I wondered what it would be about. Goobuntu? Ahhh, it’ll be an internet-centric Linux distro — meaning that even though it’s obviously meant to be an MS-killer on the netbooks (with the possibility that it could be released, with appropriate changes, for the desktop too), its main comparison will be gOS. (Insert tongue firmly in cheek here. Then bite.)
– It’ll only have a chance of working if it (A) deals with the problems Fedora has out of the box (Flash, MP3, AVI, DVD, etc.) by, no doubt, including such support out of the box, and is generally AS GOOD POINT FOR POINT as MS, and then some, and (B) does something better than MS — and is something that people want.
– It’ll likely have to change the computing paradigm. Cloud computing has been touted for about seven years or more now and has only been taking off in the past year or so. Google has been slowly eroding MS with things like Gmail and Google Docs, alongside Firefox and OpenOffice.org, and generally contributing to open-source and other projects, but I wonder when the breaking point will come when suddenly EVERYBODY drops MS and goes somewhere else, or rather when the pie becomes properly split up such that what’s under the hood matters less than what goes on on the screen. Oh, and people don’t like change. Resistance to change is one of open source’s, no, scratch that, “any alternative to MS”’s biggest enemies.
– My original take on the above was that Google *would* be the people able to push things past that breaking point.
– I’m wondering if it will have to get on par with MS by pulling a Novell and integrating MS file formats nicely.
– Ahh, “machines with it will be sold starting in 2010” — that was but also wasn’t as specific as it sounds … will there be a slice in it for Google? Or will it be given away the way other distros are, but with insidious settings that encourage users, without their realizing it, to go to some web page with Google click ads? Or … what’s in it for them?

Then of course I’m listening to one of these “play to the lowest common denominator, then add 2 points of intelligence” syndicated talk radio hosts, who’s got a guest talking about this subject. To set the stage, the previous topic he discussed was a YouTube videoclip of a person using both hands to shave his head while driving, and whether there should be a law against such a thing, which he caps off with the likes of “there should be an anti-moronification law against such morons.”

To be fair, the stance he and his guest take is aimed at the many people who inexplicably (to me, anyway) have no clue that there *is* an (easy) alternative to MS on the PC besides the Mac, which he rightfully puts in a class of its own. And Linux *is* mentioned as an available alternative, but “it’s pretty much for the gearheads”.

I was so riled up; here’s what I sent him:

*****

Forget Ontario hair-shaving idiots making the roads less safe; I wonder about those on the radio who say Linux is for “gearheads”.

I suppose I’m a gearhead; I do indeed like computers for their own sake, beyond the day-to-day usefulness they present.

However, I’ve been using various versions of Linux for the past several years on my PCs and take great pleasure in overwriting any existing MS format on any new computer I get — over the past three years, that’s about 5 computers, some formatted a few times over. Some are older and more archaic than the netbooks your piece mentioned, let alone today’s top-of-the-line desktops, and I’ve been using them for desktop purposes, not server applications. On them are full OSes that are not stripped down — unless, of course, I chose one of the minimalist versions — and, interestingly, they are not all that slow.

There are several versions geared toward the “average” user. Most of the more common versions can do all of the day-to-day things mentioned in your piece and are on par with — sometimes superior to — MS. I use a version that is a cross between the “gearhead” market and day-to-day usage. To newcomers I recommend Ubuntu, which I do not use myself; virtually any MS user would be able to use Ubuntu, available at ubuntu.com, with no difficulty. It is the most popular of the Linux versions and is not aimed at the “gearheads”.

I was incredulous, listening to the show, to hear that people still think MS is the only option for their PCs. I suppose the few who have heard of Linux figure that something given away for free is worth the money paid for it. Au contraire: MS is less configurable and, as you know, more virus-prone than Linux; for the virus part, you have to pay more to get properly protected. Linux, on the other hand, is safer, faster, and free.

I found your guest informative, but I found the bias toward Linux not being a competitive alternative on the desktop — which it has been for years — compared to Windows “very interesting”.

*****

Oh, I do think that the driver in Ontario is a complete moron. 🙂

And Mr. Shuttleworth, please note that I *will* recommend Ubuntu to the general public, since its learning curve is gentler than even Fedora’s.

Printing PDFs

I’ve just had an interesting object lesson in the differences between two pieces of software that have essentially the same function.

Today I had an important PDF document to print out at home rather than at the office. For practical convenience, it was far better to print it at home and just deliver it to the office than to spend the extra 5-10 minutes at the office turning on my computer and printing it there, on a printer I knew would have no difficulty dealing with it, having printed a few dozen identically generated documents on it before.

On my pretty much stock Fedora 10 box, I use Evince Document Viewer 2.24.2 (using poppler 0.8.7, cairo backend) on the Gnome desktop to display and print PDF documents. So far, I’ve been satisfied.

The PDF’s layout had margins beyond my printer’s abilities. And of course the most important parts of the document, sitting right at the edges of the margins, were being cut off in the process of printing. A reduction in the print size was not useful, since the vital information was at the end of the document being cut off in the margins. I suppose I could have tried rotating the document to see whether the cut-off part would avoid the crucial information, which I didn’t think of at the time. Both these strategies, however, miss the point: if the original document has very narrow margins, something is going to get cut off no matter what; not exactly desirable.

I did try something that happened to involve a Windows box (ughh), mostly because it had a different printer, and you never know how things behave with different equipment.

Not surprisingly, the Windows box happens to have an Adobe viewer installed (I avoid that box as much as possible; I don’t even maintain it, that’s my brother’s job 🙂 ). I click to print the document and whaddya know, in the print dialog there’s an option to fit the document within the printable area. Document printed, convenience secured.

Now what I would like to know is how much of the print window on my desktop is governed by HPLIP, how much by Gnome, how much by CUPS, and how much by the application invoking it at the moment. So I did a little experiment: always selecting my printer, I opened a print dialog in Evince Document Viewer, OpenOffice.org (3.0.1), Firefox (3.0.7), The Gimp (2.6.5), Xpdf (3.02), which I intentionally installed for this experiment, and gedit (2.24.3) (in which I’m composing this blog). Besides Xpdf, each appears to share the same base dialog, and except for Evince Document Viewer, each also adds a function tab of its own. Xpdf, on the other hand, has its own stripped-down interface — either invoke the lpr command or print to a file.

Here’s a quick table listing the tabs in the print dialogs of five off-the-shelf standard installs of Fedora 10 software, with my printer selected, plus Xpdf, which was installed directly from the Fedora repositories without any modification of settings on my part:

OpenOffice.org*: General; Page Setup; Job; Advanced; Properties
Firefox: General; Page Setup; Job; Advanced; Options; Properties
Document Viewer: General; Page Setup; Job; Advanced
The Gimp**: General; Page Setup; Job; Advanced; Image Settings***
Xpdf: Xpdf has its own stripped-down interface
gedit : General; Page Setup; Job; Advanced; Text Editor

* There is an Options button in the “Page Setup” tab for OpenOffice.org.
** The Gimp treats my “special” PDF as an image much like any other, and automatically sizes it to the current settings, much like it would handle a .png or .jpg image.
*** The Gimp has an option to ignore the margins; see above note

None of them, besides The Gimp, has an option to fit the document within the printable range, and The Gimp only indirectly, because of the way it handles PDFs by default as images to be manipulated. And of the others, to be fair, only Document Viewer and Xpdf deal with PDFs directly — even Firefox delegates PDFs to Evince Document Viewer by default.

Then I did another little experiment: I installed Adobe Reader 9.1. (That license is interesting, pretty convoluted, and makes me wonder whether I may use the installation at all; in any case, I’ll be getting rid of it, since I really only installed it for this experiment, and I decided a while ago that having two PDF viewers above and beyond what’s available in the basic distro installation is superfluous unless there’s a particular reason for it.) And what do I see? A new print dialog that reminds me of the one I saw earlier on the Windows box. Interestingly, it has both “fit to printable area” and “shrink to printable area” options.

So my little experiment has led me to the following conclusions:

– many pieces of software, presumably not wanting to reinvent the wheel, rely on the OS or, I suspect in this case, the desktop environment for their print dialogs;
– some software authors do want to reinvent the wheel, such as to “do it their own way” or to be completely platform- and environment-independent, and therefore make their own dialogs;
– some software authors want to do extra things but don’t want to reinvent the wheel, so they write a wrapper that adds extra functionality to an existing base;
– in my documents, I shouldn’t try to stuff as much content as possible into each page, at least not by playing around with the margins.

Looks like something for the Evince authors to toss in. Assuming, of course, that resizing a PDF and/or its content to the local printer’s printable range — without fundamentally changing the document — is a really useful feature, such as for dealing with awry margins, or PDFs sized for A4 instead of letter or vice-versa 🙂 And that such non-conformities and/or their prevalence make it worth my using Adobe Reader, licensing issues aside. Or that another PDF reader out there has that functionality.
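In the meantime, there is a command-line way to get the Adobe dialog’s behaviour before a file ever reaches a print dialog: Ghostscript can rescale a PDF’s pages onto a given paper size. A sketch, letter paper assumed (the first command just manufactures an input file so the example is self-contained):

```shell
# Guarded no-op where Ghostscript is not installed.
if command -v gs >/dev/null 2>&1; then
  # Stand-in input; in real life, the awkwardly-margined PDF.
  gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=input.pdf -c showpage

  # -dFIXEDMEDIA pins the output media to letter size, and -dPDFFitPage
  # scales each page's contents to fit that media, margins and all.
  gs -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite \
     -sPAPERSIZE=letter -dFIXEDMEDIA -dPDFFitPage \
     -sOutputFile=fits-letter.pdf input.pdf
fi
```

CUPS also understands a fit-to-page option on the command line (`lpr -o fit-to-page file.pdf`), which may be the closest thing to the Adobe checkbox outside any dialog.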

Hmmm … OO.o differences, Fedora, and Ubuntu

In my post “I may just have that reason to get rid of Ubuntu” I whined about minor differences between Ubuntu’s OpenOffice.org appearance and functions and those of the “stock” versions, both the one I used straight off the OO.o website and what ships with Fedora.

This blog (here’s my archive) explains a bit of the why. It says, “Many Linux distributions ship ooo-build. … Fedora ships a modified OpenOffice.org, but Fedora does not use ooo-build.” Which means that, in keeping with Fedora’s usual policy, it ships the upstream version of the software with only the modifications reasonably required to make it work under Fedora. When I was using CentOS, I used the vanilla version directly from OO.o.

That explains a few things. It doesn’t necessarily justify my whining — nor all the changes Ubuntu or other distros (or even Fedora) make, but … why mess with a good thing? 🙂

Uggghh … I need a bar of soap

I think I’m going to be sick. 🙁

I never cared for Debian and its derivatives, because Debian never seems organized enough to get a new release out. In all honesty, though, I’ve never tried Debian. I hate Ubuntu, mostly because I’m very suspicious of anything with great marketing hype and hordes of fanboys to boot. (So much for my initial suspicion of the Stargate movie in 1994 and all of its over-hyping; I have long since wished I had overcome that and gone to see it in the theatres, and I do love SG-1 in reruns 🙂 )

Last week my brother and I were jumping through hoops again and again to get my printer working under CentOS 5.2. Last January we’d gone to a lot of trouble to get it to work under CentOS 4.6 (I finally upgraded to the 5.0 series about a month ago). No matter how many hoops we jumped through and resolved, there were still more, or another set would surface. Realize that this is a relatively new printer that must have come out at least last fall if not earlier; my brother received it as part of a “throw it in with the new laptop” kind of deal. Red Hat had therefore gone through at least one update, if not two (at least 5.2, if not also 5.1), in which to add the appropriate drivers or move to the next HPLIP version that would support the printer. To give you an idea, CentOS 5.2 comes with HPLIP 1.6.something, my printer needs at least 1.7.something, and the current version is 2.8.something.
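For the record, checking which HPLIP a box actually shipped with is quick on RPM-based systems like CentOS and Fedora, where the package name is hplip (HPLIP’s own hp-check tool gives a fuller report where it’s installed):

```shell
# Guarded so this is still safe on systems without rpm at all.
if command -v rpm >/dev/null 2>&1; then
  rpm -q hplip 2>/dev/null || echo "hplip: not installed here"
fi
# For a fuller diagnosis on a machine with HPLIP installed: hp-check
```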

Seems to me that commodity printers should be supported; it’s not as though corporate environments don’t use printers. Though they would probably say that my line of printers is too much of a commodity for an enterprise to be interested in; they presumably want high-capacity, high-quality printers, not an inkjet meant for the consumer market.

I knew that the printer worked under Ubuntu, since I had tried a live CD and it worked without saying boo. My brother was “willing” to continue trying to get it to work under CentOS but was pushing hard to switch. “You can always switch back to CentOS, you know.”

The printer was a killer. So is getting wireless working on my laptop, using an approximately four-year-old PCMCIA wireless card; under CentOS 4.6 I had a kernel under which it worked, but any time there was a kernel upgrade I would have to switch back if I wanted to use the wireless. We hadn’t done anything yet about the wireless, but we had a plan.

I still haven’t gotten the wireless to work under Ubuntu but to be fair I haven’t tried yet at all.

My first reaction was that Ubuntu was the Playskool version of Linux.

I also HATE the fact that the default user under Ubuntu is a de facto root user. The first thing I did was get rid of the annoying sudo requirement by assigning a password to root, though it’s not of much value: so far I haven’t come across anything in Ubuntu that really requires root the way it would under ANY other Linux distribution, other than the fact that it constantly asks for passwords to do anything. Also annoying is that I can’t log root into a GUI to do things that way (including to REMOVE the default user from the admin ring.)
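For anyone wanting to do the same, the commands are short. They are shown commented out here since both need interactive authentication, and “youruser” is a placeholder; also, to be clear, enabling the root account this way is my preference, not Ubuntu’s default:

```shell
# Give root a real password (Ubuntu ships with the root account locked):
# sudo passwd root
# Then, as root, drop the default user from Ubuntu's admin group:
# deluser youruser admin
```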

This may be the undoing of Ubuntu, along the lines of the way Windows is plagued with problems because most of the time the default user has admin rights and can install and run just about anything unless the admin user shuts it down. The only upside is that Ubuntu always asks for your password, but I expect that most Windows converts would find this annoying and just mindlessly enter their password to get on with things.

Once I got over the shock, the problem now is that the user experience, aside from the administration (which I’m mostly accustomed to doing from a command line rather than a GUI), is identical to CentOS. (The main Ubuntu desktop is Gnome, as is the case for CentOS.) Admittedly, the Synaptic GUI package manager, along with the extensive Ubuntu repositories versus the CentOS repositories, is as good as they say, and worth the switch. And 8.04.1 is an LTS version, meaning it’s supported for 3 years, instead of having to go through the reformat treadmill every 6 months (OK, Fedora supports each version until a month after the release of the second following release, meaning about 12-13 months.)

I hope that RHEL (and hence CentOS) shapes up and realizes that some people like using it as a desktop, and that making it at least vaguely usable without pulling teeth and hair is as important as making it stable.

I have to go now and wash my mouth out with soap.

New Desktop and Centos 5.2

So I’m going to the bank to deposit a trivial amount of pocket money (OK, not so trivial that it isn’t worthwhile to me at the moment; I’m talking $50), and I decide to walk into the used-laptop store before crossing the street.

I see this really cute Dell mini-tower. “Hyper-threading,” the guy tells me. Elsewhere I’ve heard, “no good, could be a real security flaw under Linux”. $100; I take out all my bills and about $40 in silver (I occasionally have way too much silver in my pockets!) and promise to come back with the remaining $10.

A cute little P4 2.8 with 512 megs and a 40-gig HD, but only a CD-ROM drive. I’m happy; apparently there were a couple of duds in the lot of them he received.

Got home and put on CentOS 5.1 using the CDs from a couple of months ago. Funny: the next day 5.2 comes out, and I of course immediately upgrade, but the whole thing takes several hours to download and install!

Trying to make a new server

So last December I find this old clunker in the garbage pile at work: base memory, a CD-ROM drive, and a power supply. Oh, and it weighs a ton.

I decided that I wanted a server to act as my internet gateway and was too cheap to buy a router; I bought a used 8-gig HD for $10 and burned the appropriate disks. Realize that my wonderful PIII 555 is slowing down a bit, so in order to get the most out of it I decided that it should no longer have to deal with internet sniffing, attacks, pinging and so on.

The install takes just about all night. It succeeds, though, and I get to see what the “new” (now of course old) Gnome desktop looks like, a glimpse of which I’d seen a couple of years before with FC5. (Shudder. Bad experience.)

And what happens?

My brother says that the HD I bought was a dud. Full stop.