In this post, a few of the better-known Linux distributions and desktop environments will be showcased.
Fedora
Fedora Linux is a general-purpose Linux distribution focusing on free software (i.e., containing no proprietary software) and on being at the leading edge of free software development. While it has many tools that developers find useful, it can also be used as a general-purpose computer desktop.
Fedora provides a variety of desktop environments; the Gnome desktop environment is the default, although other desktop environments are available in Fedora’s various spins, which cater to varying visual aesthetics, technical requirements, and usability preferences.
Fedora Linux can be downloaded from https://getfedora.org (note: do not add “www”, as it leads to an error page).
Debian
Debian GNU/Linux is a general-purpose Linux distribution that aims to be available on a large variety of computer architectures, is built on free software, and is known for its stability. Its large number of available software packages and its stability are often highlighted as strengths. Debian is used for a wide variety of purposes, including desktops and servers, and is equally capable in both roles. It is also often used as a base for other Linux distributions.
Ubuntu is a popular Linux distribution based on Debian. It releases “Long Term Support” versions every two years, which are typically supported for at least five years, as well as interim releases every six months. Ubuntu is generally considered easy to learn and use.
Linux Mint is based on Ubuntu and is known for its desktop named “Cinnamon”, which was originally based on the Gnome desktop but has since branched off into its own desktop environment, focusing on a more traditional desktop appearance and functionality.
openSUSE is the community version of SUSE Linux, a business- and server-oriented version of Linux. openSUSE is known for its use of the KDE desktop, but also offers the Gnome desktop.
openSUSE Tumbleweed is a rolling-release version that updates continuously and never requires reinstallation after a set period of time; however, it may prove more challenging for newer users, who might find openSUSE Leap more stable.
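(For what it’s worth, keeping Tumbleweed current is typically done with a full distribution upgrade rather than a routine update; a minimal sketch, assuming a stock install:)

sudo zypper dup    # "dup" = distribution upgrade, the usual way to update Tumbleweed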
Desktop Linux: Unveiled is a series of posts that show how to start using Linux.
In this post, Linux will be briefly explained and briefly compared to other common desktop computer operating systems.
First, what is an operating system?
An operating system (OS) is the software that makes a computer run, like Microsoft Windows or MacOS. It provides a way for users to operate the computer, and translates their instructions so the computer can run them. It also coordinates all the computer’s resources, such as its CPU (central processing unit), memory, hard drive, and other components, as well as coordinating the user’s programs and data.
What is Linux?
Most people understand “Linux” to be a complete operating system like Windows or MacOS. Strictly speaking, however, “Linux” is just one part of the operating system: the central part, called the kernel. In common usage, though, “Linux” informally refers to the whole operating system.
“Distributions” are (usually) complete, integrated collections of software built around the Linux kernel. Because of the way the Linux kernel and the other software usually used with it are licensed, anyone with the ability and inclination can legally build and distribute one, although most people choose to use an established distribution.
Distributions vs. Operating Systems
Linux distributions usually contain a full Linux-based operating system, as well as extra software often not traditionally included in operating systems, such as office suites, media players, graphic design software, educational software, games, various apps, and other software such as server software. Although not all of this software is installed at the same time, it is typically all easily available from central locations called “repositories”, similar to app stores on MacOS and Windows; much of it is available free of charge, too!
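(As a small illustration, assuming a Fedora system, where the command-line package manager is dnf, and using an office suite as the example package: installing software from the repositories takes a command or two.)

# Search the repositories, then install an office suite from them
dnf search office
sudo dnf install libreoffice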
Free Software vs. Proprietary Software
A lot of software available under Linux (and a growing amount under Windows and MacOS as well) is called Free Software, or sometimes Open Source Software. By contrast, a substantial amount of Windows and MacOS software is Proprietary Software.
Many people hear the expression “Free Software” and assume that it means that it is free of monetary charge. Some may even question its quality on the basis of such a lack of price.
Although free software is often (though not always) given away free of charge, and most common free software is of very high quality, the expression “Free Software” in fact refers to “freedom”, specifically various freedoms granted to the users of the software. These freedoms include the freedom to run the software for whatever purpose you wish, the freedom to study how the program works as well as make any changes that you wish, the freedom to share the software with others, and the freedom to share software you’ve modified with others.
Some of these freedoms require that the source code, or “recipes” that people can read and understand, be available to anyone and everyone.
The various licences used to allow this tend to foster cooperation between various parties, often allowing groups who might otherwise be competitors to cooperate with each other, creating common software that each group can then package and present according to their own vision. Within this cooperation, software is sometimes developed quickly, and many programming bugs are often found and corrected quickly.
Some common free software licences are the GPL and the LGPL, which specifically give the recipient of the software the above freedoms, and require the sharing of the source code, and of any changes you may have made to it, when distributing the software. Other common free software licences are the BSD licence, the MIT licence, and the Apache licence, which have very few requirements: they permit users to use, modify, and distribute the software, so long as copyright and disclaimer notices are retained.
In contrast, proprietary software is usually controlled by very restrictive licences that keep the source code hidden, don’t allow users to distribute the software to whomever they please, don’t allow users to modify it or fix bugs (even if they would be able to, were they to have access to the source code), and may even dictate how the software may or may not be used.
Next Chapter
Chapter 2 will list some popular Linux distributions that people use on their computers.
For at least the past twelve years, I have been salvaging computers found on the streets on garbage day, or in other locations where my various personal travels have taken me, and refurbishing them into usable computers. The various finds have served as main desktop computers, secondary computers, home servers, computation nodes for the World Community Grid, gifts to my brother or the occasional friend, and the like. It has variously allowed me to indulge in a bit of tinkering, try out a new Linux distro or version of BSD, build a home server, or just pass the time while engaging in a hobby.
In the process, I’ve watched the lower bar of what counts as acceptable “junk that isn’t junk, at least not yet” move upwards: from 533 MHz 32-bit P4-era processors to 2.66 GHz 64-bit dual cores (although a single-core 64-bit P4 in the 3.4 GHz to 3.8 GHz range is good if you don’t want to depend on a GUI, or if you have a lot of RAM and an SSD), from 512 MB of RAM to 2 GB, and from 20 GB hard drives to 80 GB. Now it seems the next big shift will be from mechanical drives to SSDs, which I expect (once SSDs become common in the old computers I find being thrown out) will bring a revolutionary jump in speed on low-end hardware, just as I learned in 2017 when I swapped out my laptop’s mechanical drive and replaced it with an SSD. (To be fair, when I bought that computer new in 2015, the hard drive was curiously a 5400 RPM model, presumably to make it less expensive, less power-hungry vis-à-vis battery life, or both.)
As an aside: my favourite brands of cast-offs have been, in order, IBM / Lenovo ThinkCentres, then Dells. After that, I’ve had an excellent experience with a single used HP desktop that has been doing computations for the World Community Grid, running at 100% capacity, since late summer 2016. I’ve dealt with other types of computers, but the ThinkCentres and the Dells have been the ones I’ve had the most success with, or at least the most personal experience. (Since initially writing this post, I have developed a suspicion, based on the longevity of that HP cast-off, that HP might actually be superior to IBM / Lenovo when it comes to cast-offs; however, since it’s the only HP cast-off I can remember ever having, it’s hard to form a proper opinion.)
But to the point: over the past two weeks, I have tried to revive three cast-off computers.
Two of them were IBM / Lenovo ThinkCentres, which I think were new in 2006 / 2007: 2.66 GHz 64-bit dual cores, 80 GB hard drives, and 2 GB of memory. The third computer was a Dell case containing only the motherboard (proving to have, see below, a 64-bit dual-core CPU running at something like 2.66 GHz) but no memory, no hard drive, no wires, no DVD player, and not even a power supply!
The two ThinkCentres were from a pile of old computers marked for disposal at a location where I happened to be in mid-2017, and I was granted permission to pick and choose what I wanted from the pile. I gave them to my brother, who at the time evaluated them and determined that neither worked, one just beeping four times and then hanging. After that, they just sat around in his apartment for whenever they might come in handy for spare parts. He had since determined that one actually worked, but he hadn’t done anything with it.
The third computer was found on the street near home a couple of months ago, and was covered with about an inch of snow by the time I’d recovered it. I brought it home, and let it sit around for several weeks just to make sure that it dried out properly. Based on the “Built for Windows XP” and “Vista Ready” stickers, I’d guess that it was new in about 2005 or 2006.
Having forgotten about the ThinkCentre computers I’d given to my brother in 2017, I casually asked him if he had the requisite spare parts to make the snow-covered computer work, since we normally share our piles of spare parts retrieved from old computers that die. To my surprise, he sent me the functional ThinkCentre. My knee-jerk reaction was “I don’t need a new-to-me computer; just the parts required to see if I can get the snow-covered computer to work.” Perversely, I didn’t actually want the results of my planned efforts to produce a functional computer; I just wanted the amusement of a small project, and more generally to see whether the Dell found on the street would work.
In parallel, my home server, on which I hosted my backups and my website (another computer of the used-several-times-over variety), worked perfectly except for mysteriously turning itself off a couple of times recently, perhaps once a week. My brother and I decided that the culprit was probably one or more thermal events shutting down the computer, no doubt due to a combination of dust accumulation, the CPU fan ports in the case not having enough clearance from the computer next to it to allow proper intake of ambient cooling air, and possibly high heat from occasional loads caused by search engine bots crawling my website. Despite cleaning out the dust, removing the computer’s side panel (from which the CPU fan drew air), and shifting both computers a bit to allow for adequate ventilation, the computer turned itself off again after about a week.
My brother and I made a swift decision to replace my server with a fresh installation on a “new” computer, the good ThinkCentre I initially didn’t want. Even though the existing machine was otherwise performing spectacularly well given the overall small load, we tacitly agreed, without actually saying the words, that the shutdowns were a problem for a production server. This incidentally dealt with another curious behaviour exhibited by the existing server, which appeared otherwise to be completely benign, and hence is perhaps beyond the scope of why we changed the physical computer.
The operational ThinkCentre was plugged in and formatted with Fedora 31, and my brother helped me install the requisite services and transfer settings to the new server in order to replicate my website. Newer installation practices were implemented, and newer choices of packages were made. For instance, the “old” machine is being kept active for a bit as a backup, as well as to maintain some VPN services (provided by OpenVPN) while setting up the new server and installing WireGuard for VPN on it, and generally to allow for a smooth transition period. Other things we had to remember, and in some cases learn, perhaps for another time: install No-IP as a service, and unmount and re-mount drive mounts through rc.local.
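For what it’s worth, here is a minimal sketch of that rc.local approach (the device name and mount point are hypothetical examples; on Fedora, the file must be made executable for systemd to run it at boot):

#!/bin/bash
# /etc/rc.d/rc.local -- on Fedora, systemd runs this at the end of boot,
# provided the file is executable (chmod +x /etc/rc.d/rc.local)
# Cleanly re-mount the data drive; device and mount point are hypothetical
umount /mnt/data 2>/dev/null
mount /dev/sdb1 /mnt/data
exit 0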
One of the unexpected bonuses to the upgrade is that it appears to be serving web pages and my blog a wee bit faster, for reasons unknown.
In the meantime, on to the next project: I got the non-functional ThinkCentre for its spare parts. My first thought was that this second ThinkCentre might still be good, and we watched a YouTube video that suggested cleaning out the memory stick seats with a can of clean compressed air. I was suspicious of this but let it go for a while, and proceeded to harvest parts from the computer after deciding that the machine wouldn’t work regardless.
A power supply, cables, a hard drive, and memory sticks were placed in the Dell found on the street. It powered up, and after changing some settings in the BIOS, I was able to boot a Fedora 31 LiveUSB. Using the Settings option in the Gnome desktop, I determined that there was a 64-bit dual-core CPU running at about 2.66 GHz, that the 2 GB of memory I’d inserted worked, and that the 80 GB hard drive was recognized. I looked around on the hard drive a bit with a file manager (Nautilus) and determined that the place from which I’d retrieved the ThinkCentre appeared to have done at least a basic reformatting of the drive with NTFS. I didn’t try to use or install any forensic tools to determine further whether the drive had been properly wiped, or had merely received a quick reformat.
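(For those who prefer a terminal to the Gnome Settings panel, the same checks can be made from the live session with a few standard commands:)

lscpu      # CPU model, number of cores, and 32/64-bit capability
free -h    # how much memory is installed and recognized
lsblk -f   # drives and partitions, with their filesystems (NTFS, etc.)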
Suppertime came around, and the machine was left idle to wait for my instructions for about an hour or so. When I returned to the computer, I saw an interesting screen:
(If you can’t see the picture above, it’s an error screen, vaguely akin to a Windows Blue Screen of Death.) After a few reboots, all ending at the same “Oh no!” error screen, my brother suggested that the machine may have been thrown out for good reason, intimating that it was good luck I’d even managed to boot it up in the first place and look around a little. I, on the other hand, was relieved: I’d had my evening’s entertainment, I’d gotten what I wanted (the fun of working on the machine to determine whether it could be used), and I’d learned that it indeed couldn’t. The parts were stripped back out of the Dell, and the box was relegated to the part of the garage where I store toxic waste and old electronics until I’ve collected enough to make a trip to an authorized disposal centre worthwhile.
At this point, something was still bugging me about the second ThinkCentre. I hadn’t yet put my finger on it, but I was suspicious of the “use compressed air to get rid of the dust in the memory bays” solution. So I placed the salvaged parts back into the ThinkCentre (having fun figuring out which wires go where to make it work again) and got the four beeps again. I looked up what four beeps at startup means (here’s my archive of the table, which I had to recreate, since a direct printing of the webpage only printed one of the tables) and found that, at least on a Lenovo ThinkCentre, it means “Clock error, timer on the system board does not work.” While I assumed that changing the BIOS battery might well fix the problem, I decided not to investigate any further.
I salvaged the parts again and placed them in my parts pile, ready for the next time I find a junker on the street or from elsewhere. The second ThinkCentre’s case was also placed beside the Dell, awaiting my next trip to an authorized disposal centre.
This means that out of the last three computers, I have one functioning computer replacing an existing computer (that I hope will continue with an industrious afterlife doing something else), one computer scavenged for spare parts and the case relegated to the disposal centre pile, and the Dell computer which was found on the street also relegated to the disposal centre pile.
Or, to paraphrase Meat Loaf, “One out of three ain’t bad …”
This page is mainly a place to post the link for my presentation this evening at the Linux Meetup Montréal, on using SSH and SSHfs to access files on other systems from your Linux computer (Fedora with Gnome, in my case). Essentially, I discuss how SSH and SSHfs can be used for file transfers, and how to invoke them at a basic level.
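(As a taste of the subject matter, here is a minimal sketch of each; the host name, user name, and paths are hypothetical:)

# Copy a file to another system over SSH
scp presentation.pdf user@host.example.com:/home/user/talks/

# Mount a remote directory locally with SSHfs, browse it like any folder, then unmount
mkdir -p ~/remote
sshfs user@host.example.com:/home/user ~/remote
fusermount -u ~/remote    # unmount when done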
Since about 2008, I have been mounting, on a volunteer basis and in a lay capacity, the annual reports for a community group to which I belong.
Up to that point, the group’s annual reports were individual committee reports delivered to the secretary, individually printed out as and when received, and then, when they had to be distributed, stapled together with handwritten page numbers, an added cover page, and an extra page listing the reports and their page numbers. This did have the charm of not requiring a herculean investment of effort and time, both in mounting the report and, on “printing day”, in printing literally a thousand pages or more, depending on the number of pages in the report and the number of copies to be drawn. Admittedly, that doesn’t take into account possible collating, depending on how one might print out the reports (i.e., pages with colour drawings and photos vs. black and white, etc.).
The year I took on mounting the annual report, I believed that the annual report should be in an electronic format such as PDF so that it could be placed on the group’s website. But that was barely the beginning of why I took on the job.
Fulfilling the technical goal of making a PDF available for download from the website was not too difficult. Two easy options would have been either to produce the report the “old-fashioned way”, scan it, and produce a PDF from the images; or, at least for reports received in electronic format, to create individual PDF documents (plus scans of those received on paper) and then use a PDF joiner to string the PDF files together into a single document. In fact, at the time, I gathered as many previous annual reports as I could and scanned them, making them available on the website.
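(As a sketch of that joining step, one such PDF joiner is pdfunite from the poppler-utils package; the file names here are hypothetical, and the last argument is the output file:)

# Join the individual reports, in order, into a single PDF
pdfunite cover.pdf committee1.pdf committee2.pdf financials.pdf annual-report.pdf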
However, going forward, I did not consider either option to be satisfactory.
The aesthetic appearance of the annual report irked me. It wasn’t the old-school printing on paper; to this day, I still print lots of paper copies for distribution. Rather, I saw an opportunity to address some angst stemming from a bit over a decade earlier, when contributing to the community group’s recipe book had left me with a few ideas on improvements to the text’s basic formatting and overall layout. (The actual recipes, variety, organization, editing, and the recipe testing that I learned went on behind the scenes, and the like, were beyond the scope of my interest, although one common error, separate from my angst, was a mild nuisance.) I of course wisely kept my opinions to myself, both at the time of the recipe book in the mid-1990s and when initially volunteering to mount the annual report.
As can be surmised from the above, the reports came from almost as many different people as there were reports, depending on how many committee reports a given individual took on. Each person would typically type their report on their own computer and email it to the office, or perhaps print it out at home and drop it off at the group’s office in person. They used whichever word processor they had: sometimes simple text editors, or MS Write, or various versions of MS Word. Presumably some people had Macs with whichever word processor they might have had. I believe the secretary, who sometimes typed up reports submitted handwritten, was using a version of WordPerfect. Finally, I was submitting my reports at that point using OpenOffice.org. There may well have been other text editors or word processors in use. Each instance presented a random opportunity for default settings to differ, as well as for the user to change the settings to suit their own personal taste.
As a result, each report predictably had formatting unique to each author, sometimes unique to each individual report, if two or more reports were submitted by the same person.
The differences in formatting in the reports received included the following (by no means an exhaustive list):
– varying text fonts and font sizes, and occasionally, more than one of either or both in a given report;
– varying line spacing;
– varying paragraph indentation, including lack thereof;
– line jumps or lack thereof between paragraphs;
– varying page margin widths;
– varying text alignment, typically either left justified, or left and right justified;
– the occasional use of italics over the whole document, beyond that which would normally be used;
– the inclusion or lack of section titles, sometimes (or not) rendered bold and/or italicised and/or underlined and/or capitalized;
– tables listing figures in formats unique to each table and report, or simple lists with varying bullet styles;
– varying spelling conventions, i.e., American vs. British vs. Canadian spellings (e.g., neighbor vs. neighbour, or center vs. centre);
– varying naming conventions: Sometimes full names, sometimes initialized first names with full last names, sometimes full first names with initialized last names, or sometimes very informally with only first names;
– varying honourific format conventions: sometimes honourifics, titles, and/or ranks would not be used, with persons simply named, and sometimes referred to with variations of their title such as Reverend, Rev., The Reverend, The Rev., etc.
– varying naming conventions for committee names, multi-word names, places, and the like, which were sometimes fully spelled out, and sometimes initialized, abbreviated, and / or contracted;
– etc.
As such, and as alluded to in a previous post, minor differences in formatting between the individual reports create subtle (or, depending on the differences, more obvious) visual changes in how each report appears compared to the others when they are joined and printed on paper or read on a computer screen. Multiple permutations and combinations of the above formatting issues often lead to wildly varying end results which go beyond the subtle, creating a patchwork of formatting across the multiple reports joined into a single document. This can be jarring to the eye of some readers, particularly when it is not a subtle, unified, overarching design choice, but rather the result of a decided lack of one.
This link shows a hypothetical example of how such a report could look (you’ll need a PDF reader), with the various individual reports each having a unique blend of formatting as compared to the others. Note that I intentionally use “Lorem Ipsum” text so as to highlight the formatting.
The obligatory “let’s tie it all together” part at the end:
When I collect the individual reports and create one document, I cut and paste all the electronic reports (and, rarely, type up handwritten reports) into a single document, imposing uniform text formatting throughout in the form of a standard font, font size, line spacing, (lack of) paragraph indentation, page margins, and standardized and / or uniform versions of the other items above. Pages are automatically numbered, and standard page headers and footers are automatically added throughout, with date codes to distinguish between earlier and later versions. Basic spelling and typing conventions are applied and made uniform. Note that I don’t dictate or edit writing style (so one report might have section headers while another may not), nor do I edit for turns of phrase and the like.
This link shows the above hypothetical report changed (you’ll need a PDF reader) to show the same reports with some basic text formatting across the whole document made uniform, while allowing each author’s text flow (and implicitly, were each text to be unique, writing style as well) to remain relatively untouched.
Have I addressed my angst from the mid 1990’s? Yes.
Is the document formatting on the annual reports I produce every year a work in progress, with subtle improvements, changes, and the like every year? Of course.
I’d like to propose my version of a little visual puzzle I saw years ago. In the following table, the same text is repeated in each cell. In eight of the cells, one element of formatting has been changed from a basic set of formatting, while the ninth contains, in this case, the default settings of the word processor on my system. The riddle is to find which cell has not been modified as compared to the other eight. (View a slightly larger version of the table here.)
A hint of sorts: what the basic formatting settings are, or which word processor I used on which system or OS, are all red herrings in solving which is “the original”, or “vanilla”, version.
Scroll down for the solution.
The solution is B2, the cell / square in the centre of the table.
All the other cells have one thing changed from B2’s formatting.
A1) The font was changed (from a Serif font to a Sans Serif font);
A2) The font size was changed to a slightly larger point size;
A3) The cell’s background colour was changed to a light grey;
B1) The text was italicized;
B2) Standard, unchanged text using my word processor’s standard settings;
B3) The text colour was lightened from a standard black to a grey;
C1) The text was capitalized;
C2) The text was made bold;
C3) The text’s line spacing was increased.
Besides being, at its core, what I perceive to be a fun riddle, it demonstrates how subtle changes can be made to standard document formatting in a variety of ways. It also alludes to the challenges presented by receiving documents from multiple sources for integration into a single document, such as a community group’s newsletter or annual report presenting content and / or reports from its various members, leaders, subgroups, committees, and the like. In a forthcoming post, I will further discuss basic issues of varying formatting, and the need for standard formatting in a text document, from the perspective of a layman editor of a community group’s annual report.
I have been adding my personal recipes to malak.ca since the beginning of December, 2017.
It has been a sort of starting fresh to create my personal cookbook, a project I started, I think, long before 2011; as early as 2007-ish, as I recall. (I remember discussing the cookbook with someone somewhere around 2012, and said conversation could not have occurred before 2011.)
Several years ago, I’d put together a personal cookbook, but at a certain point during its construction, the main file either got corrupted, or I had several copies which I didn’t manage properly (and presumably, in this scenario, began overwriting previously entered recipes with newer versions of other recipes). However it all happened, I became disillusioned and lost interest, on a practical level, in reconstructing it all, let alone finishing it, despite a certain allure it had.
Back in December, I decided to start from scratch, doing a rather 90s thing (or perhaps even an 80s, or 70s, or 60s thing): I used a basic text editor and started retyping each recipe, sometimes using what I still had as a reference, and in at least four cases just reusing the recipe as I’d entered it a few years ago, from the remnants of the original cookbook file.
For some of the recipes I’ve been typing in, I’ve actually been able to tune the text based on recent memory of having made the items within the last couple of weeks (as in, going over to my computer to make adjustments as I was making the item in question), or up to a couple of months ago.
And, fun fun fun: today I took advantage of another day of holidays, er, waiting for the garage to call me back and say that my car, in for servicing, was ready. I went through all my recently typed recipes and did some basic editing. Lists and sentences / semi-sentences were capitalized. Lists received dash points. Instructions which hadn’t already been fleshed out were fleshed out. Sentences with multiple steps were broken up into discrete instruction lists. A number received sections: “do this part, then while that cooks, do this part”, etc. (And then I transferred the updates to my webserver, to my laptop as a backup, and to my backup server, which is also my webserver.)
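(That last step, pushing copies of the recipe files to the various machines, is the sort of thing rsync handles in a line per destination; a sketch, with hypothetical host names and paths:)

rsync -av ~/recipes/ user@webserver.example.com:/var/www/html/recipes/   # web server
rsync -av ~/recipes/ user@laptop.local:backups/recipes/                  # laptop backup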
Obviously, the likes of “cooking sausages” isn’t there, even though, apparently, when I make them for a Santa’s Breakfast, they are highly rated, beyond the fact that I’m the only volunteer who actually relishes making 200+ sausages at home in advance. And having the sausages pre-cooked so that they only need to be reheated in the oven is quite convenient when you’re serving 100+ people.
Eventually, if you look at the eggplant, first meatball, cheese biscuit and zucchini dish recipes, I may update them in the style of the newly retyped recipes as above, while converting the texts of the newly retyped recipes to that format (the original format for my “personal cookbook”), and take photos.
Finally! My recipes are now documented, accessible, shared, sharable, and, if I ever get around to it, ready for transfer into a “cookbook”.
I started volunteering some of my extra computers’ idle time to the World Community Grid in December 2013. Unfortunately, the machine in question, a used computer I’d bought about five years earlier which, after a few years as a desktop, had been converted to a server under CentOS, died from a “thermal event” nine months later. It had completed 713 results and earned 419,591 points.
In 2016, I found a P4 3.4GHz machine, installed CentOS 7 on it, and then the BOINC infrastructure. I assigned it to the World Community Grid and 100% of its capacity to the project. From when it began in September, 2016 to today, it has completed 4,540 results, and earned 2,568,590 points.
In 2017, I finally converted my old netbook (32-bit Atom processor) to CentOS 6 and did the same thing. From when it began in April until today, it has completed 261 results and earned 133,073 points. (What a difference in capacity a 3.4 GHz 64-bit machine has compared to a 1.6 GHz 32-bit one!)
Over the past few months, I have been collecting up a number of old machines which have come my way, including some IBM ThinkCentres from the Windows Vista era. So far, my brother and I haven’t been able to get them running properly, and we will probably end up using them for spare parts.
In the meantime, we acquired two more computers. My brother wanted / needed a replacement for his aging media server, an old reclaimed IBM ThinkCentre I’d gotten for him a few years ago. I, in the meantime, wanted to add another node to the World Community Grid (working, of course, at 100% capacity).
I chose CentOS 7 for this build, as I did for my other nodes, for what I consider the obvious reason: I want to pretty much forget about the computers and just relish the numbers on the World Community Grid website. I don’t want to be re-installing every year!
The install went well enough, although the base install was long compared to my laptop and desktop. I’ll rule out the comparison to my laptop, since its SSD and a mechanical drive don’t compare at all. As for the desktop versus the node, I’ll chalk up the difference mainly to processor speed and general architecture: a 2015-era four-core i5 running at 3.4 GHz vs. a 2010-era Pentium dual-core E6500 running at 2.93 GHz (no HyperThreading).
What was really long after that was the yum update after the initial install: about 650 packages! During the updates I tried a few things like web surfing, and the Gnome desktop became unstable; I ended up with a flashing text screen. I finally rebooted and tried to boot an older kernel from GRUB, to no avail. I tried the rescue kernel: no avail. In both situations, I couldn’t pull up a terminal with Ctrl-Alt-F2. A quick check from a Fedora live environment was a waste of time, since I didn’t really know how to diagnose things; however, I was able to mount the CentOS drive.
There was some flirting with the idea of installing Fedora 27, but I don’t want the re-installation mill on this machine (or any of my other volunteer computing nodes) every year; although, seeing my brother’s upgrade from Fedora 25 to 27 through the GUI go as smoothly as a routine DNF upgrade makes me wonder if the point is moot. (Note that CentOS 7, based on Fedora 19, still uses YUM, while Fedora has used DNF since version 22.)
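(For reference, the command-line equivalent of that GUI upgrade on Fedora of that era was the DNF system-upgrade plugin; a sketch, assuming a Fedora 25 machine targeting 27:)

# Bring the current release up to date, then download and apply the new release
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=27
sudo dnf system-upgrade reboot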
Finally, I restarted the install of CentOS, this time doing a minimal text install. Things were a touch faster. Then I did a yum update, with only about half as many packages to update. After that, I installed the Gnome Desktop on the machine. (Here’s my archive.)
I continued with the installation of the Fedora EPEL repository (as root, “wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm”, then “rpm -ivh epel-release-latest-7.noarch.rpm”). Installing the BOINC infrastructure was easy: as root, “yum install boinc*”.
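(Putting those steps together, and adding the plausible extra step of enabling the BOINC client as a service so it starts at boot; the boinc-client package and service names are as packaged in EPEL:)

# As root: add the EPEL repository, then install and enable BOINC
wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm
yum install boinc-client boinc-manager
systemctl enable boinc-client
systemctl start boinc-client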
I launched the BOINC manager from one of the pull down menus, and, to my surprise, it actually worked out of the box, unlike previous installations. Someone must have updated the packages. 🙂 I added the World Community Grid website information, and my account and password.
Voilà! At 12:00 UTC the next morning, my machine had already submitted FIVE results, and earned 2,429 points! And, at 00:00 UTC as I’m completing this post, a total of EIGHT results, and 4,638 points!
Over the past two weeks, I have upgraded two computers to Fedora 27 (from Fedora 25, having skipped Fedora 26 and enjoyed roughly a year’s worth of Fedora goodness).
The two computers are:
Dell desktop (main system): Intel® Core™ i5-4460 CPU @ 3.20GHz (no Hyperthreading), 1 TB 7200 RPM hard drive, 8 GB of memory; the screen was upgraded separately to an Acer widescreen, and the old screen relegated to a “new to me” computer set up as a node on the World Community Grid.
Acer laptop (secondary system): Intel® Core™ i7-5500U CPU @ 2.40GHz (Hyperthreaded), now with a 500 GB SSD, 8 GB of memory.
Two of the equipment upgrades: the desktop’s screen, now a used Acer widescreen, and the laptop’s 5400 RPM 1 TB drive, which was upgraded to a 500 GB SSD. The laptop went from interminably slow to incredibly fast! The comment from my brother: “SSDs are one of the few things that actually live up to the hype.” In my experience, under Linux, anyway. Under a corporate-controlled Windows box? Well, I’d say that my work computer, with an SSD, needs the SSD speed just not to be unusable!
The upgrades were incredibly easy this time, and fast, the new SSD in the laptop probably being the big factor. In fact, I was able to do the basic install in about 15 minutes, and the rest of my list (made for Fedora 23, but the basic list is still valid) was easy to complete in a motel room, during off hours, on a business trip. One thing took a couple of days to realize: Fedora has had difficulty with the UEFI on this machine in the past; it would install and then not work, and I’d have to reinstall under legacy BIOS. Note that I have a BIOS password, so perhaps in the past I had just figured out how to make it persistent. As for restoring the data, once home, I easily copied all my data files from my desktop overnight.
As for the desktop, having just gone through the process a couple of days earlier with my laptop, I was able to easily update, and then re-transfer my data from the laptop overnight, as well as update my data backup on my home server.
The “big” thing this time? The hardware upgrades. The almost unnoticeable thing this time? The installs, which were incredibly easy, quick, routine, and almost easily forgotten. Sheesh, I’ve lost track of how many installs I’ve done over the years …
Since about late 2016, my website had problems with uptime: it was mostly down. In the spring of 2017, it was finally up and I did a bit of restoration work. And then … it was down again for a few months. (And, due to the circumstances of this downtime, my restoration work was lost.)
Finally, I transferred my website to an existing home server, and it is now living on a computer which I believe may be as old as 2003, running the CentOS 7.x series, in my bedroom. Having fixed a faulty telephone line (squirrels!), the line is now “not noisy” and the internet is properly back.