This is a quick note (mostly to myself) to say that the computer hosting www.malak.ca — the website hosting this blog — has been switched out and replaced.
Last night, I was able to access the site normally and remotely while out to dinner at the home of some friends. This morning, when trying to ssh into the machine to do a routine manual software update, the connection kept timing out and disconnecting. Some quick diagnostics along the lines of “is the machine plugged in?” and a few reboots to watch what was happening — about as much as it would allow me to do, in fact — revealed that for reasons unknown, it was rebooting, going through the GRUB page, booting up, showing the Fedora logo, and then, after the logo disappeared but before the login prompt appeared, a BIOS message came on the screen indicating a loss of signal, and the reboot would begin again.
I tried a few past kernels in the GRUB menu, including the rescue kernel, and checked the BIOS, to no avail. Bringing up the text display of what was going on during bootup eluded me, since I was scratching my head wondering “What’s the keystroke to do that again?”; same for getting to a console. No matter, other things needed attending to in the moment, and I moved on.
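For my own future reference, a rough sketch of how I could have gotten at that information, assuming the usual Fedora defaults and that the machine stays up long enough:

    # In the GRUB menu, highlight the entry and press 'e', remove "rhgb quiet"
    # from the line beginning with "linux", then press Ctrl-x to boot with the
    # boot messages visible. Once booted far enough, a text console is usually
    # available with Ctrl-Alt-F3.
    journalctl -b -1 -p err    # afterwards: errors logged during the previous boot

None of which, of course, came to mind at the time.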
Fortunately, my brother-in-the-know was coming within the hour, and I sent him some messages about it. He offered to bring an old not-quite-junk-yet computer I had given him a while back and which he wasn’t using, at least not yet. I described the problem to him and offered my rough diagnosis: either there was a corruption somewhere in the software causing the reboots, or the boot process was invoking a physical hardware system or circuit (presumably faulty due to old age), which caused a problem leading to the reboots. He thought both, particularly the latter, may have had merit.
My brother brought the old machine. Before installing anything, he first checked the OS SSD from the server (which also contains this blog’s database) in a USB caddy, then he checked the external data drive holding the rest of the static website and my backups, again by USB. The data on both units was in good condition. We then went straight to replacing the machine by transferring the SSD and external drive to the new old machine, and here I am typing up this memo to myself.
The machine’s specs?
Dell Vostro 420 series; 8.0 GiB; Intel Core2 Quad Q9400 x 4; Mesa Intel G45/G43 (ELK) video card, with lots of USB ports, a networking card, and other things many people including myself take for granted.
And since the 240.1 GB SSD is the drive from the previous machine, it is still running the same instance of Fedora and the LAMP stack with WordPress; suffice it to say that I’m up to Fedora Linux 37 (Workstation Edition) 64-bit on it, and running up-to-date LAMP and WordPress software.
In fact, as I am finishing up this post, the machine is being updated!
This post is a translation, (somewhat of an) adaptation, and slight update of a presentation I gave in November 2021 at a meeting of my local Linux Meetup. This adaptation includes some limited extra mockups of demonstrations performed live during the presentation.
The presentation was put together using Fedora Workstation (a general purpose version of Linux, in this case specializing in being a desktop workstation), highlighting some software either installed by default, or available in the Fedora Linux and rpmfusion software repositories (“App Stores”). It is therefore not intended to be a complete exposé on all available open source / free software options for PDF, even under Fedora Linux, let alone GNU / Linux in general, or other systems.
It should be noted that the presentation’s original target audience was a French-speaking group of Linux enthusiasts, Linux professionals, and other IT enthusiasts and professionals familiar with Linux. Most of the listed software would typically be available in standard or easily accessible Linux software repositories (“App Stores”). Beyond the world of GNU / Linux, free software is generally available for use on other systems, and, barring instances of a specific package offered with paid warranty support, is usually also free of charge to download, install, and use.
In the case of the software highlighted in this post, all are either free-of-charge, or represent the free-of-charge version.
The Value of a PDF File
Context / Situation:
Take the case of the exchange of a document between two computers — such as between one running Linux, and another running Windows (or vice-versa) — and each computer is endowed with a different office suite, such as LibreOffice (cross-platform) on one, and Microsoft Office (Windows / Mac) on the other. (Of course, other possibilities exist, such as Calligra Suite (cross-platform), Pages / Numbers / Keynote / etc. (Mac), Corel Wordperfect, Google Docs, etc.)
LibreOffice, and in days gone by, OpenOffice.org, have long been touted as being “compatible” with MS Office; this purported compatibility, however, is disappointingly nowhere near as good as I and many others would like to believe.
As such, each user will open the shared document, which will be displayed according to each suite’s interpretation of the file, and may find that the content actually displayed on their screen is different — sometimes substantially so — from the intended original display of the document. Text lines may be cut off; fonts may not be available on one or more of the systems, causing font substitution; font sizes may be changed, or text size may shift as a result of a substituted font; certain symbols may not be available on some systems; table effects may not work; objects inserted into tables, such as an embedded spreadsheet, may not function or be displayed as expected.
Unfortunately, I would estimate that this disappointing lack of “complete and perfect”, “drop-in replacement” compatibility is a very common experience when comparing many well-known pieces of proprietary software and their open-source counterparts — not just LibreOffice and MS Office. Personally, as a Linux user, I have experienced this lack of complete compatibility a number of times since beginning to use OpenOffice.org in 2005 and Linux in 2006. I have also seen the incompatibility in action on a number of occasions during various presentations, under completely unrelated circumstances, in which the presentation files were produced in one suite and attempts to show them in another were met with varying degrees of disappointment, sometimes leading to complete failure.
The following four images are jpeg images of the pages of the PDF document linked to above, which I created in LibreOffice Impress. It should be noted that, for the sake of argument, the pages could have been created in another format, such as a word processor, a spreadsheet program, or a drawing program, for instance.
Page 1 — Song lyrics to be displayed for a Karaoke Night
Page 2 — Expenses list for a Luncheon
Page 3 — TV Listings
Page 4 — Flea Market Poster
The above document — represented here in jpeg format produced directly from a PDF of the document — was originally prepared in LibreOffice Impress, and therefore correctly represents the original document.
However, the following four images are jpeg images of the pages of the PDF document I created in Microsoft PowerPoint (you will need a PDF viewer), into which I imported the original LibreOffice Impress file, in order to demonstrate the relative lack of compatibility between, at least in this case, LibreOffice Impress and Microsoft PowerPoint.
Page 1 — Song lyrics to be displayed for a Karaoke Night
Changes: Text fonts and font sizes, causing text to be cut off the page
Page 2 — Expenses list for a Luncheon
Changes: Text fonts, and improper translation of symbols
Page 3 — TV Listings
Changes: Text fonts, font sizes, and lack of background colours in the various cells
Page 4 — Flea Market Poster
Changes: Text fonts, font sizes, corrupted translation of spreadsheet table in the centre of the flyer
The value of a PDF:
PDF files are generally well supported across software on multiple platforms, and will usually be displayed in a virtually identical fashion on all systems; in the case of discrepancies, they are usually inconsequential.
There exists a certain perception that, short of having Adobe Acrobat Pro (a commercial, closed-source piece of software), PDF files are difficult to edit and modify, allowing for a certain view that PDF files are more secure. This is a case of “security by obscurity”, since editing and modification may be performed by many pieces of software besides (but of course including) Adobe Acrobat Pro.
PDF files may also benefit from a perception of being less susceptible to viruses and malware, such as through macros. Suspicious files, regardless of format, should always be checked when there is reasonable doubt, particularly in certain environments.
Be careful when using some PDF software downloaded from random websites on the internet, or from websites which advertise PDF modification: they may add watermarks to the resulting file — this may be undesirable, and embarrassing, particularly if the software, website, or their output aren’t vetted prior to distributing the resulting file.
Further, websites providing PDF editing services may have terms of service which limit their responsibilities toward you. By submitting a document to an external website, you may not be able to protect personal privacy, nor obtain any guarantee that commercial or industrial secrets or confidential personal information contained in the submitted document won’t be divulged: the site may become the victim of hacking, or become the target of legal proceedings, not to mention whatever dubious or unscrupulous intentions its operators might have to begin with. Or, they may simply be unwilling to formally take on such responsibilities in the absence of a paid service contract.
This article’s objectives therefore are:
Firstly, presenting the utility of PDF as a format for distributing documents to a wide audience, without having to concern oneself with what software individual audience members may or may not have access to, and regardless of the reason(s);
Secondly, presenting safe, free-software and open-source options for using and editing PDF files;
Thirdly, beyond the general promotion of free and open-source software and PDF editing, this article is not about promoting or deriding particular OSes or software packages, nor, strictly speaking, their strengths or weaknesses.
As such, if a particular system or software package suits your needs and / or purposes, you should use it.
However, if a given preferred solution is costly software, perhaps your organization (or your family) may find it to be financially worthwhile to only purchase a minimum number of licences and only install it on a minimum number of designated computers, instead of needlessly on every computer in your organization (or family).
A simple cost / benefit analysis would be worthwhile: consider whether you wish to pay $5, $10, $15, or more, on a recurring basis (perhaps monthly), per computer on which such software would be installed. The costs, be they one-time or recurring, should be weighed against how often the software may be used — perhaps in some cases only once or twice monthly overall, let alone per individual computer, depending on your organization’s size, needs, and other considerations. Further, consider what operations are typically executed, especially if they are simple operations such as joining multiple PDFs or extracting a page or two, which can easily be performed by anyone using any of a multitude of software packages available without cost, as opposed to more technical tasks which may justify costly specialized software.
Creating PDFs from an established document
To begin with, most software which creates documents will have an option in the File menu or elsewhere to Print, Print to Document, or Export, which will offer PDF as a format:
At the risk of skipping ahead to the PDF splitting section below, note that it is common to be able to selectively output some, instead of all, pages to the resulting PDF, thereby avoiding having to later split the PDF to get only the desired page(s).
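For the command-line inclined, LibreOffice can also produce PDFs in bulk without opening the GUI; a minimal sketch, assuming LibreOffice is installed and the files (hypothetical names) are in the current directory:

    # Convert documents to PDF in the current directory, no GUI required
    libreoffice --headless --convert-to pdf report.odt
    libreoffice --headless --convert-to pdf *.docx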
Overview of PDF Software
Perhaps (or perhaps not) to the surprise of many, there are many software packages and suites which will:
Display PDF files
Combine, divide, and export PDF files, as well as reorder pages within a PDF;
Edit PDF files, such as the overall files and the file metadata, as well as the PDF file content
Import and display PDF files according to particular strengths (The Gimp, Inkscape, e-readers)
Displaying PDF files:
Here are some examples of software which will display PDF files directly:
Evince Document Viewer (Gnome Project)
Okular (KDE Project)
Firefox and Chromium (Web Browsers)
PDFSam (limited free version; there is also a commercial version with more capabilities); a version for Debian derived Linux systems is available on their website
Here is a very short list of software which will open and display PDF files and allow editing, each according to their strengths, but whose primary function is not PDF display:
LibreOffice (Office Suite)
Calligra (Office Suite)
The Gimp (Image Manipulation)
Inkscape (Vector Graphics Editor)
Evince Document Viewer
Chromium (web browser)
Software to Combine PDF files
A relatively common activity is to combine multiple PDF files into one file — such as, separately scanned pieces of paper, or PDF files produced separately, perhaps by different people.
Here are some examples of software which will combine PDF files:
PDF Mix Tool
Combining PDF files in PDFArranger
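For those comfortable at the command line, the poppler-utils package (in the standard repositories of most distributions) includes a simple joiner; a minimal sketch with hypothetical file names:

    # Concatenate the PDFs, in the order given, into a single file
    pdfunite scan-page1.pdf scan-page2.pdf scan-page3.pdf combined.pdf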
Software to Divide PDF Files / Extract Pages
Another relatively common activity is to divide a PDF File, or extract one or more pages from a PDF file.
Note that if you are the creator of the document, as shown earlier, the software you used to create the document likely allows for you to selectively export individual or multiple pages to PDF in addition to exporting the entire document.
Here are some examples of software which will divide PDF files / extract pages:
PDF Mix Tool
LibreOffice — can print and / or export one or more pages
Calligra Suite — can print and / or export one or more pages
The Gimp — can print and / or export one or more pages
Splitting a PDF File with PDFMod
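Again from the command line, qpdf and poppler-utils (both in the usual repositories) can extract pages or burst a document; a minimal sketch with hypothetical file names:

    # Extract pages 3 through 5 of report.pdf into a new file
    qpdf report.pdf --pages . 3-5 -- extract.pdf
    # Or split the document into one file per page
    pdfseparate report.pdf page-%d.pdf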
Software to Edit PDF Files
Here are some examples of software which will edit PDF files to varying degrees:
LibreOffice can create a hybrid PDF, a PDF with the .odt / .ods (word processor or spreadsheet) source embedded in it, which allows the PDF to be more easily edited by any suite able to edit .odt and .ods files: create a document with LibreOffice, and when exporting to PDF, enable the hybrid option on the General tab of the export dialog (labelled along the lines of “Hybrid PDF (embed ODF file)”).
In my personal experience, PDF editing — and the ease of doing so — can vary wildly according to what one wishes to do, as well as according to the nature of the source PDF. I have had excellent experiences editing a PDF created from a CAD drawing (presumably produced using commercial CAD software such as AutoCAD), whose individual elements could be manipulated in LibreOffice Draw. I have also used LibreOffice Draw to insert text zones, arrows, and scanned signatures into PDFs. Conversely, documents composed primarily of scanned images — including text and forms — may require more image manipulation skills to edit, modify, and manipulate individual and specific elements of the document, depending on your objectives.
What you can do will also be dictated by which software package you choose and its strengths and weaknesses.
For instance, it should be noted that the phrase “editing a PDF” can be a nebulous thing which means different things to different people. Actually editing document text directly in the PDF may be what one understands and expects, while the strengths of a given piece of software may lie elsewhere.
LibreOffice has some PDF import functions, as well as imperfect document layout functions. Depending on the source PDF document, it can be quite effective at editing text directly.
Note from the closed-source world: I once had an excellent experience with a moderately-difficult-to-edit PDF using Microsoft Word, which included being able to edit the text — and presumably save in MS Word’s native file format.
Importing and editing a PDF in LibreOffice Draw (note the imperfect import):
In the case of my example PDF, LibreOffice Draw allows for some direct editing of the text (Notice the word “MODIFIÉ” with a brick-red text colour replacing some of the text):
Importing and editing a PDF in Scribus, a desktop publishing programme:
The Gimp can insert text zones into a PDF, and those text zones can themselves be edited within The Gimp; however, its strengths lie in dealing with a PDF as an image and editing image characteristics, while editing the text as one might in a word processor may be more challenging.
Importing a PDF file into The Gimp, image manipulation software:
Adding a text zone to a PDF in The Gimp:
Exporting Text, Cut & Paste, and .odt File Creation
Depending on the source PDF and its nature, “cut & paste” may work (as opposed to not working at all), and may even “work well”, although this can vary wildly according to the source PDF document. However, even in the best case, this method will normally only copy the actual text, and some of the images, from your PDF document; it is not usually particularly useful in actually replicating the PDF document’s formatting.
As for other document and content formats, such as drawings, pictures, and text rendered into images, other sections of this post should be consulted (e.g. using LibreOffice Draw or The Gimp for drawings; optical character recognition (OCR), including OCRFeeder, etc.)
In addition to the mention of LibreOffice above, OCRFeeder is software that acts as a front end to optical character recognition software, and is able to import PDF files, and then export in HTML, plain text, OpenDocument (.odt) format, and of course PDF. Again depending on the source file, results may be variable, although the results are usable.
… and here is an image of the exported .odt file (word processor file) of the page viewed and created in OCRFeeder, then opened in my word processor (LibreOffice):
Ironically, as this case shows, the changes (or the lack of adequate recognition and / or translation of the original layout) can be as great as, or even greater than, what can occur by simply sharing documents between not-fully-compatible-though-similar software suites. However, though far from perfect, the result is arguably usable, depending, of course, on how much effort you are willing to devote to replicating the original document layout — and then making your desired changes, and finally creating a new PDF document.
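For what it’s worth, a rough command-line equivalent of that workflow can be cobbled together with poppler-utils and tesseract (both in the usual repositories); a minimal sketch, assuming a scanned, image-only PDF with a hypothetical file name:

    # Rasterize each page at 300 dpi, then run OCR on one of the resulting images
    pdftoppm -r 300 -png scanned.pdf page     # produces page-1.png, page-2.png, ... (numbering may be zero-padded)
    tesseract page-1.png page-1               # writes the recognized text to page-1.txt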
Exporting to other file formats:
As has been (indirectly) demonstrated several times throughout this post, PDF files can be imported into software that isn’t specifically dedicated to PDFs, and then allow for the resulting imported file to be exported into other formats. For example, The Gimp was used to create most of the working images for this post: In the case where PDF files were to be displayed, the PDF files were imported into The Gimp, and then exported in jpeg or png formats. This type of conversion — from PDF to another given format — can often be done by other pieces of software (to varying degrees) according to their strengths or weaknesses.
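The same sort of conversion can also be done from the command line with poppler-utils; a minimal sketch with hypothetical file names:

    pdftotext slides.pdf slides.txt            # extract the text layer, if there is one
    pdftocairo -jpeg -r 150 slides.pdf slide   # render each page to a numbered slide-*.jpg image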
Photo Editing with PDFs
The Gimp is fully functional image processing software, very similar to — but, unfortunately, not fully compatible with nor a perfect drop-in replacement of — Photoshop. Using The Gimp, you can import a PDF and edit the image(s) directly, or extract photos and other images through a variety of means, such as selecting the area of the photo, copying the selected area, and creating a new document from the clipboard.
Here is The Gimp having imported a PDF of a photo of myself on a cruise:
During the live presentation, I gave the hypothetical example — for the sake of levity — of a barber who particularly likes sideburns, and seeing mine in a PDF, decided to clip out one of my sideburns from the photo …
… and then noticing how I was starting to go grey at the time:
It goes without saying that The Gimp can be used to continue manipulating the photo at this point — such as showing how my sideburns might look after a colouring, or comparing them side-by-side against other people’s sideburns — and the result then exported as a PDF.
PDFTricks allows for resizing of images in PDFs, principally compressing and reducing the file size to the order of “large”, “medium”, “small”, and “extra-small”, as well as image exporting to .jpg / .png / .txt formats, and file merging and splitting.
During the presentation, the PDF document above, composed of the photo of myself on a trip, was run through the software’s “extreme compression” option. The following is a clip from a file manager screenshot, showing the size difference between the original file and the newly created compressed file:
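For the curious, comparable compression can often be had with Ghostscript from the command line; a minimal sketch, with the quality preset being the main knob to turn:

    # Re-write the PDF with downsampled images; /screen is the most aggressive
    # preset, /ebook and /printer are progressively gentler
    gs -sDEVICE=pdfwrite -dNOPAUSE -dBATCH -dPDFSETTINGS=/screen -sOutputFile=photo-small.pdf photo.pdf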
LibreOffice Draw allows for some image manipulation.
In this particular situation, the night sky drawing in the karaoke page of the example PDF I created was selected, and the various options directly available were shown. However, as mentioned earlier, I have imported PDF documents of building plans and modified them to include notes showing where work was performed, or to add signatures to documents.
PDF Form Creation
LibreOffice Writer and Calligra Suite are fully-featured for the creation of forms. Unfortunately, I am not particularly adept at creating forms.
Filling PDF Forms
Evince — if the PDF form was designed to be interactively filled
Okular — if the PDF form was designed to be interactively filled
The Gimp — allows for text areas to be inserted, as well as photos, drawings, and the like
LibreOffice Draw — allows for text areas to be inserted, as well as photos, drawings, and the like
Viewing / displaying PDF files : User’s choice (usually a system’s default PDF viewer is adequate, or a web browser)
Combining and splitting PDF files : PDFMixTool
Editing PDF files : User’s choice (depends on objectives and source file; The Gimp and LibreOffice Draw are good contenders)
Adjusting PDF file size : PDFTricks
Form creation : User’s choice
Form filling : User’s choice (usually a system’s default PDF viewer is adequate, or a web browser)
Exporting PDF to other formats : OCRFeeder (for .odt); LibreOffice Draw (Photos and images); The Gimp (photos and images)
Note on Linux availability of the above software:
Here are some screen shots from my system’s installed repositories (Fedora Stable; Fedora Updates; rpmfusion.org — free and non-free)
PDF software easily accessible from my computer’s software repositories (“App Stores”):
As this list suggests, there is a lot of software available with varying PDF abilities, ranging from dedicated PDF software of various kinds, to software with other principal functions whose PDF support ranges from simply importing from and exporting to the format, to being useful, within the limits of the software’s main functions, for manipulating PDF files in some way(s).
This presentation’s goals are to highlight:
how PDF files are well supported most of the time on most systems, whereas the various pieces of document-production software (typically a well-known closed-source project and its open-source counterpart) are not as compatible with each other as we may want;
free software that avoids the security risks inherent in using unknown and potentially dangerous websites, is easily available for routine tasks, and reduces costs;
the possibility of editing PDF files with various pieces of free software which are easily available in most Linux distributions’ repositories — as well as often easily available for other platforms — albeit occasionally with variable success.
Questions taken during the presentation:
A question asked midway through the presentation expressed a certain surprise that The Gimp can be used to edit PDFs. As mentioned earlier, The Gimp is able to import PDF files, and perform various functions on the file according to its strengths (image manipulation).
A participant asked at the end, during the question period, about a recommendation for software to affix signatures to documents. I replied that I was not aware of any open source official signing software with digital traceability, simply because I had not done any research on that subject; however, an image of a scanned signature can usually be inserted in a document using The Gimp or LibreOffice Draw, or as a document is being created in a word processor.
A final comment recommended the use of LibreOffice Draw, based on the commenter’s frequent use of it to perform a number of the functions listed here, to which I commented that I had asked my employer’s IT department to install LibreOffice on my work-issued Windows-based laptop in order to be able to perform some drawing-modification functions as part of my employment.
Enjoy sharing and editing PDF files!
Signing PDFs can be performed with jPDF Tweak.
jPDF Tweak can also encrypt and add passwords to a PDF.
This is just a little note to mention that malak.ca has been down for the past 28 hours or so for an upgrade only planned as of a few days ago, when the site had been hanging for anywhere from a few hours to a few days, and diagnostics suggested that the hard drive may have been on its last legs.
A backup of the blog database was created, and saved on an external drive;
The external drive, used as a backup for my other computers and the location of the static parts of my website, was separated from the machine, which was then powered down;
The old hard drive was physically removed;
The SSD was connected;
Fedora 34 Workstation, which had been previously downloaded and written to a USB key, was installed on the SSD yesterday evening (I’m currently still running F33 on my desktop, laptop, and one of my worldcommunitygrid.org nodes);
The desktop for F34, on the core 2 duo, is faster, although some of that is due to the SSD, of course;
Interesting to see the dock moved from a vertical position on the left to a horizontal position at the bottom;
I find it interesting that at bootup, the activities screen appears to be the default;
This evening, the web server was installed;
We had planned to use php-fpm to separate permissions, but since this is a single-domain box, we used a simple virtualhost (a minimal sketch of the sort of thing involved appears after this list);
MariaDB was installed;
The re-registration of my redirections for things like www.malak.ca with noip.com to account for the dynamic nature of my IP address was done;
The re-registration for my Let’s Encrypt certificate was performed;
Various linux kung-fu tricks were performed, and magical linux incantations were uttered, and the setup was complete;
The external drive was reconnected;
The blog was restored from a backup.
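For my own notes, a minimal sketch of the sort of single-domain virtualhost involved, with an illustrative document root rather than the actual configuration, followed by the Let’s Encrypt step:

    # /etc/httpd/conf.d/malak.conf -- hypothetical example
    <VirtualHost *:80>
        ServerName www.malak.ca
        ServerAlias malak.ca
        DocumentRoot /var/www/html
        ErrorLog logs/malak-error_log
        CustomLog logs/malak-access_log combined
    </VirtualHost>

    # Obtain the certificate and let certbot adjust the Apache configuration
    certbot --apache -d www.malak.ca -d malak.ca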
The system is peppy, and this blog, which is hosted on the SSD instead of the external drive (as is the rest of malak.ca), loads somewhat more quickly.
As usual, great thanks go to my brother whose herculean efforts were at the core of the setup. Thank you!
Over the past twelve or more years, I have been salvaging computers I have found on the streets on garbage day, or found in other locations where my various personal travels have taken me, to reformat into usable computers. The various finds have served as main desktop computers, secondary computers, home servers, computation nodes for the World Community Grid, gifts to my brother or the occasional friend, and the like. It has variously allowed me to indulge in a bit of tinkering, try out a new Linux distro or version of BSD, build a home server, or just pass the time while engaging in a hobby.
In the process, I’ve watched the lower bar of what is acceptable “junk that isn’t junk, at least not yet” move upwards from about P4-533 MHz 32 bit processors to dual core 2.66 GHz 64 bit processors (although single core 64 bit P4 at 3.4 GHz to 3.8 GHz range is good if you don’t want to depend on a GUI, or if you have a lot of RAM and an SSD), 512 MB of RAM to 2GB of RAM, and 20GB hard drives to 80GB hard drives. Now it seems that the next big thing will be in moving from mechanical drives to SSD drives, which I expect — when SSD drives become common in the old computers I find being thrown out — will make a revolutionary change upwards in speed in low end hardware, the way I learned the same in 2017 when I swapped out the mechanical drive in my laptop and replaced it with an SSD. (To be fair, when I bought the computer new in 2015, the hard drive was curiously a 5400 RPM model, presumably either to make it less expensive, less power hungry vis-à-vis battery life, or both.)
As an aside: My favourite brands of castoffs have been, in order, IBM / Lenovo ThinkCentres, then Dells. After that, I’ve had an excellent experience with a single used HP desktop that has been doing computations for World Community Grid running at 100% capacity, since late summer 2016. I’ve dealt with other types of computers, but the ThinkCentres and the Dells have been the ones I’ve had the most success with, or at least the most personal experience. (Since initially writing this post, I have been developing a suspicion that based on the longevity of the HP cast-off I have, HP actually might be superior to the IBM / Lenovo when it comes to cast-offs; however, since it’s the only HP cast off that I can remember ever having, it’s hard to form a proper opinion.)
But to wit: Over the past two weeks, I have tried to revive three used computers that were cast-offs.
Two of them were IBM / Lenovo ThinkCentres, which I think were new in 2006 / 2007, with 2.66 GHz 64-bit dual-core processors, 80 GB hard drives, and 2 GB of memory. The third computer was a Dell case with only the motherboard (proving to have been — see below — a 64-bit dual-core CPU running at something like 2.66 GHz) but no memory, no hard drive, no wires, no DVD player, and not even a power supply!
The two ThinkCentres were from a pile of old computers marked for disposal at a location where I happened to be in mid-2017, and I was granted permission to pick and choose what I wanted from the pile. I gave them to my brother, who at the time evaluated them and determined that neither worked, one just beeping four times and then hanging. After that, they just sat around in his apartment for whenever they might come in handy for spare parts. He had since determined that one actually worked, but he hadn’t done anything with it.
The third computer was found on the street near home a couple of months ago, and was covered with about an inch of snow by the time I’d recovered it. I brought it home, and let it sit around for several weeks just to make sure that it dried out properly. Based on the “Built for Windows XP” and “Vista Ready” stickers, I’d guess that it was new in about 2005 or 2006.
Having forgotten about the ThinkCentre computers I’d given to my brother in 2017, I casually asked him if he had the requisite spare parts to make the snow-covered computer work, since we normally share our piles of spare parts retrieved from old computers that die. To my surprise, he sent me the functional ThinkCentre. My knee-jerk reaction was “I don’t need a new-to-me computer; just the parts required to see if I can get the snow-covered computer to work.” Perversely, I didn’t actually want the results of my planned efforts to produce a functional computer; I just wanted the amusement of a small project, and more generally to see whether the Dell found on the street would work.
In parallel, my home server on which I hosted my backups and my website, another computer of the used several times over variety, worked perfectly except for mysteriously turning off on its own a couple of times recently, perhaps once a week. My brother and I decided that what was probably happening was the result of one or more thermal event(s) which shut down the computer, no doubt due to a combination of dust accumulation, the CPU fan ports in the case not having enough clearance from the computer next to it to allow for proper aspiration of ambient cooling air, and possibly high heat generation from occasional loads due to search engine bots crawling my website. Despite cleaning out the dust, removing the computer’s side panel from which the CPU fan drew air, and shifting both computers a bit in order to allow for adequate ventilation, the computer turned itself off again after about a week.
My brother and I made a swift decision to replace my server with a new installation on a “new” computer — the good ThinkCentre I initially didn’t want — because even though the existing machine was otherwise performing spectacularly well given the overall small load, we tacitly agreed that the shutdowns were a problem with a production server, though we hadn’t actually said the words. This incidentally dealt with another curious behaviour exhibited by the existing server which appeared to otherwise be completely benign, and hence perhaps beyond the scope of why we changed the physical computer.
The operational ThinkCentre was plugged in, formatted with Fedora 31, and my brother helped me install the requisite services and transfer settings to the new server in order to replicate my website. Newer installation practices were implemented, and newer choices of packages were made. For instance, the “old” machine is still being kept active for a bit as a backup, as well as to maintain some VPN services — provided by OpenVPN — for the purposes of setting up the new server and installing WireGuard for VPN on it, and generally to allow for a smooth transition period. Other things that we had to remember, as well as learn (perhaps for another time), were to install No-IP as a service, and that the drive mounts should be unmounted and re-mounted through rc.local.
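For my own future reference, a minimal sketch of what such an rc.local arrangement might look like on Fedora; the device name and mount point are hypothetical, and the file must be made executable for systemd’s rc-local service to run it:

    #!/usr/bin/bash
    # /etc/rc.d/rc.local -- chmod +x this file so rc-local.service will run it
    umount /mnt/external 2>/dev/null    # drop any automatic mount of the external drive
    mount /dev/sdb1 /mnt/external       # re-mount it where the web server expects it
    exit 0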
One of the unexpected bonuses to the upgrade is that it appears to be serving web pages and my blog a wee bit faster, for reasons unknown.
In the meantime, on the next project, I got the non-functional ThinkCentre for its spare parts. The first idea I had was that maybe this second ThinkCentre might still be good, and we looked at a YouTube video that suggested cleaning out the seats for the memory sticks with a can of clean compressed air. I was suspicious of this but let it go for a while, and I proceeded to harvest parts from the computer after deciding that the machine wouldn’t work regardless.
A power supply, cables, a hard drive, and memory sticks were placed in the Dell found on the street. It powered up, and after changing some settings in the BIOS, I was able to boot up a Fedora 31 LiveUSB. Using the settings option from the Gnome desktop, I was able to determine that there was a 64-bit dual-core CPU running at about 2.66 GHz, that the 2 GB of memory I’d inserted worked, and that the 80 GB hard drive was recognized. I looked around on the hard drive a bit with a file manager (Nautilus) and determined that the place from which I’d retrieved the ThinkCentre appeared to have done at least a basic reformatting of the drive with NTFS. I didn’t try to use or install any forensic tools to further determine whether the drive had been properly cleaned, or had merely received a quick reformat.
Suppertime came around, and the machine was left idle to wait for my instructions for about an hour or so. When I returned to the computer, I saw an interesting screen:
(If you can’t see the picture above, it’s an error screen, vaguely akin to a Windows Blue Screen of Death.) After a few reboots, all with the same “Oh no!” error screen, my brother suggested that the machine may have been thrown out for good reason, intimating that it was good luck that I’d even managed to boot it up in the first place and look around a little bit. I, on the other hand, was relieved: I’d had my evening’s entertainment, I’d gotten what I wanted in the form of working on the machine to determine whether or not the machine could be used, and I’d learned that it indeed couldn’t be used. Parts were stripped back out of the Dell, and the box was relegated to the part of the garage where I store toxic waste and old electronics for the times I have enough collected to make it worthwhile to go to an authorized disposal centre.
At this point, something was still bugging me about the second ThinkCentre. I hadn’t yet placed my finger on it, but I was suspicious of the “use compressed air to get rid of the dust in the memory bays” solution. So I placed the salvaged parts back into the ThinkCentre — having fun with which wires go where in order to make it work again — and got the four beeps again. I looked up what four beeps at start-up mean (here’s my archive of the table, which I had to recreate since a direct printing of the webpage only printed one of the tables) and found that, at least on a Lenovo ThinkCentre, it means “Clock error, timer on the system board does not work.” While I assumed that changing the BIOS battery may well fix the problem, I decided not to investigate any further.
I salvaged the parts again and placed them in my parts pile, ready for the next time I find a junker on the street or from elsewhere. The second ThinkCentre’s case was also placed beside the Dell, awaiting my next trip to an authorized disposal centre.
This means that out of the last three computers, I have one functioning computer replacing an existing computer (that I hope will continue with an industrious afterlife doing something else), one computer scavenged for spare parts and the case relegated to the disposal centre pile, and the Dell computer which was found on the street also relegated to the disposal centre pile.
Or, to paraphrase Meat Loaf, “One out of three ain’t bad …”
This page is mainly a place to post the link for my presentation this evening at the Linux Meetup Montréal on using SSH and SSHfs to access files on other systems from your Linux computer (Fedora with Gnome, in my case).
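For reference, the core of the demonstration boils down to a couple of commands; a minimal sketch with a hypothetical host and paths:

    # Mount a remote directory locally over SSH, then unmount it when done
    mkdir -p ~/remote-files
    sshfs user@example.com:/home/user ~/remote-files
    fusermount -u ~/remote-files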
I have mounted, on a volunteer basis and in a lay capacity, the annual reports for a community group to which I belong, since about 2008.
Up to that point, the group’s annual reports were individual committee reports delivered to the secretary, individually printed out as and when received, and then, when the report had to be distributed, stapled together with handwritten page numbers, an added cover page, and an extra page listing the reports and their page numbers. This did have the charm of not requiring a herculean effort and time commitment both in mounting the report and, on “printing day”, in printing literally a thousand pages or more, depending on the number of pages in the report and the number of copies to be drawn. Admittedly, this does not take into account possible collating, depending on how one might print out the reports (i.e. pages with colour drawings and photos vs. black and white, etc.).
The year I took on mounting the annual report, I believed that the annual reports should be in an electronic format such as PDF so that they could be placed on the group’s website. But that was barely the beginning of why I took on the job.
Fulfilling the technical goal of making a PDF available for download from the website was not too difficult. Two easy options would have been to either scan the report once produced the “old fashioned way” and produce a PDF from all the images, or, at least for those reports received in electronic format, create individual PDF documents (plus scans for those received on paper), then use a PDF joiner to string the PDF files together into a single document. In fact, at the time, I gathered as many previous annual reports as I could and scanned them, making them available on the website.
However, going forward, I did not consider either option to be satisfactory.
The aesthetic appearance of the annual report irked me. It wasn’t the old school printing on paper — to this day, I still print lots of paper copies for distribution. Rather, I saw an opportunity to put to the test some angst stemming from a bit over a decade earlier when the community group’s recipe book to which I’d contributed led to my having had a few ideas on improvements to the text’s basic formatting and overall layout. (The actual recipes, variety, organization, editing, and recipe testing that I learned went on behind the scenes, and the like, were beyond the scope of my interest, although one common error, separate from my angst, was a mild nuisance.) I of course wisely kept my opinions to myself, both at the time of the recipe book in the mid 1990’s, as well as at the time of initially volunteering to mount the annual report.
As can be surmised from the above, the reports came from almost as many different people as there were reports, depending on how many committee reports given individuals would take on. Each person would typically type their report on their computer and email it to the office, or perhaps print it out at home and drop it off at the group’s office personally. They used whichever word processor they had: sometimes simple text editors, or MS Write, or MS Word, presumably ranging from Word 97 through Word 2003 and Word 2007. Presumably some people had Macs with whichever word processor they might have had. I believe that the secretary, who sometimes typed up the reports which were submitted handwritten, was using a version of WordPerfect. Finally, I was submitting my reports at that point using OpenOffice.org. There may well have been other text editors or word processors used. Each instance presented a random opportunity for default settings to be different, as well as for the user to change the settings to those that suited their own personal taste.
As a result, each report predictably had formatting unique to each author, sometimes unique to each individual report, if two or more reports were submitted by the same person.
The various differences in formatting in the reports received included the following, without being an exhaustive list:
– varying text fonts and font sizes, and occasionally, more than one of either or both in a given report;
– varying line spacing;
– varying paragraph indentation, including lack thereof;
– line jumps or lack thereof between paragraphs;
– varying page margin widths;
– varying text alignment, typically either left justified, or left and right justified;
– the occasional use of italics over the whole document, beyond that which would normally be used;
– the inclusion or lack of section titles, sometimes (or not) rendered bold and/or italicised and/or underlined and/or capitalized;
– tables listing figures in formats unique to each table and report, or simple lists with varying bullet styles;
– varying spelling conventions, ie. American vs. British vs. Canadian spellings (ie. neighbor vs. neighbour, or center vs. centre);
– varying naming conventions: Sometimes full names, sometimes initialized first names with full last names, sometimes full first names with initialized last names, or sometimes very informally with only first names;
– varying honourific format conventions: sometimes honourifics, titles, and/or ranks would not be used, with persons simply named, and sometimes referred to with variations of their title such as Reverend, Rev., The Reverend, The Rev., etc.
– varying naming conventions for committee names, multi-word names, places, and the like, which were sometimes fully spelled out, and sometimes initialized, abbreviated, and / or contracted;
As such, as alluded to in a previous post, minor changes and differences in formatting between the individual reports created subtle (or, depending on the changes, more obvious) visual differences in how each report appeared compared to the others, when joined and printed on paper or read on a computer screen. Multiple permutations and combinations of the above formatting issues often led to wildly varying end results which went beyond the subtle, creating a patchwork of formatting over the multiple reports joined together into a single document. This can be jarring to the eye of some readers, particularly when it is not a subtle, unified, overarching design choice, but rather the result of a decided lack of unified design choice.
The obligatory let’s tie it all together part at the end:
When I collect the individual reports and create one document, I cut and paste all the electronic reports (and rarely, type up handwritten reports) into a single document, imposing a uniform text formatting throughout in the form of a standard font, font size, line spacing, (lack of) paragraph indentation, page margins, and standardized and / or uniform versions of the other items above. Pages are automatically numbered, and standard page headers and footers are automatically added throughout, with date codes to distinguish between earlier and later versions. Basic spelling and typing conventions are applied and made uniform. Note that I don’t dictate or edit writing style, so one report might have section headers, while another may not, nor do I edit for turns of phrase and the like.
This link shows the above hypothetical report changed (you’ll need a PDF reader) to show the same reports with some basic text formatting across the whole document made uniform, while allowing each author’s text flow (and implicitly, were each text to be unique, writing style as well) to remain relatively untouched.
Have I addressed my angst from the mid 1990’s? Yes.
Is the document formatting on the annual reports I produce every year a work in progress, with subtle improvements, changes, and the like every year? Of course.
I’d like to propose my version of a little visual puzzle I saw years ago. In the following table, the same text is repeated in each cell. In eight of the cells, an element of formatting has been changed from the appearance of the text using a basic set of formatting, while the ninth contains, in this case, the default settings on my wordprocessor on my system. The riddle is to find which cell has not been modified as compared to the other eight. (View a slightly larger version of the table here.)
A hint of sorts: What the basic formatting settings are, or which word processor I used on which system or OS, all represent red herrings to solving which is “the original”, or “vanilla”, version.
Scroll down for the solution.
The solution is B2, the cell / square in the centre of the table.
All the other cells have one thing changed from B2’s qualities.
A1) The font was changed (from a Serif font to a Sans Serif font);
A2) The font size was changed to a slightly larger point size;
A3) The cell’s background colour was changed to a light grey;
B1) The text was italicized;
B2) Standard, unchanged text using my word processor’s standard settings;
B3) The text colour was lightened from a standard black to a grey;
C1) The text was capitalized;
C2) The text was made bold;
C3) The text’s line spacing was increased.
Besides being, at its core, what I perceive to be a fun riddle, it demonstrates how subtle differences can be made to standard document formatting in a variety of ways. It also alludes to the challenges presented by receiving documents from multiple sources for integration into a single document, such as a community group’s newsletter or annual report, presenting content and / or reports from its various members, leaders, subgroups, committees, and the like. In a forthcoming post, I will further discuss basic issues of varying formatting, and the need for standard formatting in a text document, from the perspective of a layman editor of a community group’s annual report.
Yet again, I am in a hotel using their wifi. Again, after being asked during check-in if I wanted wifi access, I was curious about how their wifi password would stand up to any kind of security test as they handed me a slip of paper with the information.
Sigh, it is a terribly obvious password that would only barely pass a “security by obscurity” test, by virtue of the fact that, by and large, people don’t have wifi-guessing software loaded with dictionaries ranging from a normal library dictionary to a hacker dictionary that anyone’s 11-year-old could probably compile, certainly with the help of their friends. In fact, while there are no doubt dozens, no hundreds, no thousands of “obvious” word combinations that would meet such criteria, it is obvious that this one is intended to be very easily remembered by an overwhelming majority of people, be they a typical everyday-anyone-off-the-street person, a tech savvy person, a forgetful person, children, or “even your mom” (I am trying to delicately refer to my mother, who is both not tech savvy in the least, and very experienced in life, if you take my meaning).
Back in 2015, I was on the subject again, having been impressed at least that the wifi password given to me appeared to be auto-generated at check-in, and obviously not susceptible to simple dictionary attacks.
I started this rant on hotel passwords in 2009 during a series of business trips in which I was at a lot of hotels, and was frustrated for the innkeepers that their wifi would have been so easy to steal for the cost of a night at the hotel and a series of repeaters in the bushes.
Since then, however, I have come to realize that my concerns were a bit overrated. Firstly, the potential for signal theft in that fashion was only really useful for neighbours of the hotels. Secondly, the technical aspects of providing multiple repeaters and power cords down the street (or, as the case may be, through the woods) make the cost, both financial and in terms of maintenance, somewhat impractical beyond a few hundred feet.
This is based on some personal experience of the legitimate variety: Since about 2011, my neighbour at the cottage has had internet provided through, I believe, line-of-sight microwave service; it includes VOIP service to provide telephone service, which apparently is prioritized within the router setup. He kindly gave me the wifi password. After about a year, I installed a wifi repeater so that it could be useful within the house, since there was only about one location within the house within a usable radius of the neighbour’s router (a solid two to three hundred feet away); fortunately, I could plug in the repeater at that location. I have since also been giving him some money annually in appreciation.
What have I found?
The repeater is useful. It itself provides constant signal, although it has been susceptible to things like weather, tree foliage, and the like. And, unfortunately, the general service seems to be susceptible to the same, plus things like mountains, and probably the dozens of customers just on my lake and neighbouring lakes. (Yes, people keep on complaining, and no doubt the suppliers’ techies just shift “prioritizing” their services to each successive round of complaining customers, at the expense of the rest of their customers.)
But to wit, the quality of service, at least on the repeater we have, is only barely useful for things like YouTube and the like under the best of conditions; the speed drop from beside the router to our repeater is such that we were able to demonstrate to our neighbour that even if we were consuming such services, we could not be the source of the fluctuating service affecting his internet service (see above.) In any case, by and large we respect a request from him that we not use it to stream video and download large files, since his usage is also metered.
My brother has been wanting to improve our end of the signal for years by setting the repeater near the edge of the property, closer to our neighbour, with things like “waterproof boxes”, electrical extensions, and Ethernet cable running through the woods a bit and then hanging in the air above the clothesline. I have been responding bah humbug; it seems far too susceptible to the elements. As a former geocacher, I know that keeping a “waterproof” container left out in the woods actually waterproof is no simple feat, and even were it to remain sealed, it — and the power cable, and the Ethernet cable too — would likely become susceptible to the elements in short order, and not be worth the maintenance effort. It seems to be a challenge beyond most commoners such as myself, and even, I suspect, my brother, more along the lines of what the phone company or electric utility face on a daily basis. Remember how annoying it is when the power goes out or the telephones (landlines or cell network) don’t work? Why do they have local teams at the ready 24 hours a day to deal with this? Such outages are regular, due to trees falling, water infiltration, and the like.
Is it really worth going to all this trouble in order to have a series of repeaters going down the street for free wifi? I doubt it would be useful to any real degree except to demonstrate proof of concept to your friends for bragging rights.
So … does it really matter how easy it would be to hack a hotel’s free wifi?
Obviously, to the hotel and any costs incurred, of course. The reduction in service and inconvenience that in principle such a signal theft may cause to the hotel and its guests? Of course. And, any illegal activities in which such illicit users may be engaging (kiddie porn, spam, financial fraud, etc.), of course it matters.
But, is anyone beyond the immediate neighbours going to bother with the series of repeaters and power lines through the bushes and/or down the street, possibly spanning several blocks and neighbourhoods?
I have to say “Poppycock!”
PS The “snore fest in the title” was not meant as a pun, but realizing that it unintentionally is — well, I like dumb jokes and puns, especially the dumb ones. 🙂 So, keeping it is intentional.
It has been a sort of starting from fresh to create my personal cookbook, a project I started, I think, long before 2011 — as early as 2007-ish, as I recall. (I remember discussing the cookbook with someone somewhere around 2012, and said conversation could not have occurred before 2011.)
Several years ago, I’d put together a personal cookbook, but at a certain point during its construction, somehow the main file either got corrupted, or I had several copies which I didn’t manage properly (and presumably, in this scenario, began overwriting previously entered recipes with newer versions of other copies). However it all happened, I became disillusioned and lost interest, on a practical level, in reconstructing it all, let alone finishing it, despite a certain allure it had.
Back in December, I decided to start from scratch, doing a rather 90’s thing — or perhaps even an 80’s, or 70’s, or 60’s thing — I used a basic text editor and started retyping each recipe, sometimes using what I did still have as a reference, and in at least four cases, just reusing the recipe as I’d entered it back a few years ago, with the remnants of the original cookbook file.
In the case of some of the recipes I’ve been typing in, I’ve actually been able to tune the text based on recent memory of just having made the items in the last couple of weeks (as in, as I was making the item in question, going over to my computer to make adjustments), or up to a couple of months ago.
And, fun fun fun, today I took advantage of another day of holidays, er, waiting for the garage to call me back and say that my car, in for servicing, was ready: I went through all my recently-typed recipes and did some basic editing. Lists and sentences / semi-sentences were capitalized. Lists received dash points. Instructions which hadn’t already been fleshed out, were fleshed out. Sentences with multiple steps were broken up into discrete instruction lists. A number received sections “do this part, then while that cooks, do this part”, etc. (And then, transferring the updates to my webserver, to my laptop as a backup, and to my backup server which is also my webserver.)
Obviously, the likes of “cooking sausages” isn’t there, even though apparently when I make them for a Santa’s Breakfast, they are highly rated, beyond the fact that I’m the only volunteer who actually relishes making 200+ sausages at home in advance. And having the sausages pre-cooked, so that they only need to be reheated in the oven, is quite convenient when you’re serving 100+ people.
Eventually, if you look at the eggplant, first meatball, cheese biscuit and zucchini dish recipes, I may update them in the style of the newly retyped recipes as above, while converting the texts of the newly retyped recipes to that format (the original format for my “personal cookbook”), and take photos.
Finally! My recipes are now documented, accessible, shared, sharable, and, if I ever get around to it, ready for transfer into a “cookbook”.
I started volunteering some of my extra computers’ idle time for the World Community Grid in December, 2013. Unfortunately, the machine in question, a used computer I’d bought about five years earlier and, after having been used as a desktop for a few years, had been converted to being a server under CentOS, died from a “thermal event” nine months later. It had completed 713 results and earned 419,591 points.
In 2016, I found a P4 3.4GHz machine, installed CentOS 7 on it, and then the BOINC infrastructure. I assigned it to the World Community Grid and 100% of its capacity to the project. From when it began in September, 2016 to today, it has completed 4,540 results, and earned 2,568,590 points.
In 2017, I finally converted my old netbook (32 bit atom processor) to CentOS 6 and did the same thing. From when it began in April until today, it has completed 261 results, and earned 133,073 points. (What a difference in capacity that 3.4GHz 64 bit has as compared to 1.6GHz 32 bit!)
Over the past few months, I have been collecting up a number of old machines which have come my way, including some IBM ThinkCentres from the Windows Vista era. So far, my brother and I haven’t been able to get them running properly, and we will probably end up using them for spare parts.
In the meantime, we acquired two more computers. My brother wanted / needed a replacement computer for his aging media server, an old reclaimed IBM ThinkCentre I’d gotten for him a few years ago. I, in the meantime, wanted to add another node to the World Community Grid (of course, working at 100% of capacity.)
I chose CentOS 7 for this build, like I did for my other nodes, for what I consider to be the obvious reason that I want to pretty much forget about the computers and just relish in the numbers on the World Community Grid website — I don’t want to be re-installing every year!
The install went well enough, although it was a long enough process for the base install, compared to my laptop and desktop. I will rule out the comparison to my laptop, since its SSD and this machine’s mechanical drive don’t compare at all. As for the desktop and the node, I’ll chalk up the difference mainly to processor speed and general architecture: a 2015-era four-core i5 running at 3.4 GHz vs. a 2010-era Pentium dual-core E6500 running at 2.93 GHz (no HyperThreading).
What was really long after that was the yum update after the initial install — about 650 packages! In the process of the updates, I tried a few things like web surfing, and the gnome desktop became unstable; I ended up with a flashing text screen. I finally rebooted, and tried to downgrade to an older kernel in GRUB, to no avail. I tried the rescue kernel, no avail. Under both situations, I couldn’t pull up a terminal with Alt-Ctrl-F2. A quick check under a Fedora live environment was a waste of time, since I didn’t really know how to diagnose things; however, I was able to mount the CentOS drive.
There was some flirting with the idea of installing Fedora 27, but I don’t want the re-installation mill on this machine (or any of my other volunteer computing nodes) every year — although, seeing my brother upgrade from Fedora 25 to 27 through the GUI go as smoothly as a routine DNF upgrade is making me wonder if the point is moot. (Note that CentOS 7, based on Fedora 19, is still using YUM, while Fedora has been using DNF since version 22.)
I continued with the installation of the Fedora EPEL repository (as root “wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm”, then “rpm -ivh epel-release-latest-7.noarch.rpm”). Installing the BOINC infrastructure was easy: As root “yum install boinc*”.
I launched the BOINC manager from one of the pull down menus, and, to my surprise, it actually worked out of the box, unlike previous installations. Someone must have updated the packages. 🙂 I added the World Community Grid website information, and my account and password.
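As an aside, on a headless node the same attachment can be done without the graphical manager; a minimal sketch, assuming the boinc-client service is installed and the account key is at hand (it may need to be run as the boinc user from the data directory, /var/lib/boinc):

    systemctl enable boinc-client
    systemctl start boinc-client
    boinccmd --project_attach http://www.worldcommunitygrid.org/ YOUR_ACCOUNT_KEY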
Voilà! At 12:00 UTC the next morning, my machine had already submitted FIVE results, and earned 2,429 points! And, at 00:00 UTC as I’m completing this post, a total of EIGHT results, and 4,638 points!