Over the last week, garage door company Chamberlain made a sudden move to cut off all third-party access to its cloud APIs. Chamberlain is best known for garage door openers and related controls, but in recent years began offering cloud-connected smart devices to go along with them. These smart garage door controllers were sold in big-box hardware stores like Lowe’s and were advertised with Amazon and Google compatibility at relatively low prices. (Update: the package I purchased did not mention anything about third-party systems, only that you could download the app for your phone via the App Store or Google Play.)
While we were going through a renovation project at our home, I wanted to integrate our garage door openers with a smart controller so that I could open the doors for a contractor or sub if they sent someone who didn’t have the code, or who had trouble with the external keypad. Chamberlain’s MyQ smart garage door controller seemed to have all the right options: low cost, compatibility with the major home automation systems from Apple, Google, and Amazon, and even a Home Assistant integration. It seemed like a no-brainer. Remote access to the garage door worked out great for the renovation project, and I later integrated the system into my existing Home Assistant setup.
This setup worked perfectly until this week, when Chamberlain made the decision to cut off API access to Home Assistant and any other “unauthorized” third party. Chamberlain claims that Home Assistant traffic was overwhelming at times (to the point of effectively being a DDoS attack) and used that as an excuse to shut it down. However, this doesn’t seem to be the true motivation, as they’ve also been slowly backing away from all external integrations they don’t directly control. This leaves consumers of their devices with products that may not work as advertised, controllable only via the smartphone app.
This is the danger inherent in cloud-based consumer products. They may be cheap and they might work today, but the manufacturer can change the functionality at will and there’s little to nothing you can do about it. Devices you paid good money for can become paperweights overnight when the manufacturer decides it no longer wants to support them, or wants to change the terms of service. (perhaps charging fees for access to features that were once free) This seems to be the case with Chamberlain, which is now seeking payment for API access and courting paid integrations with automotive manufacturers and security companies.
If you are looking to build a smart home system and you’re thinking about which device ecosystem to go with, I would highly recommend doing your research. Look for devices with local control via Z-Wave, Zigbee, Matter, or Thread. Wi-Fi products can work too, but make sure their core functions aren’t tied to a cloud service that could go away at any time.
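To make “local control” concrete: the pattern to look for is one where the device answers to a controller on your own network, with no vendor cloud in the loop. Here’s a minimal Python sketch of the general idea using MQTT, a common transport for locally controlled devices. This is not anything MyQ-specific; the broker address and topic names are hypothetical, and it assumes a relay-based opener (say, an ESPHome or Tasmota board) subscribed to the command topic.

```python
# Minimal sketch: commanding a garage door over a LAN-only MQTT broker.
# Assumes paho-mqtt is installed (pip install paho-mqtt) and that some
# relay firmware is subscribed to COMMAND_TOPIC. All names here are
# made up for illustration; adjust them to your own broker and device.
import paho.mqtt.publish as publish

BROKER = "192.168.1.10"                # local Mosquitto broker; traffic never leaves the LAN
COMMAND_TOPIC = "garage/door/command"  # topic the relay firmware listens on

def set_door(state: str) -> None:
    """Publish an OPEN or CLOSE command to the opener relay."""
    if state not in ("OPEN", "CLOSE"):
        raise ValueError(f"unsupported state: {state}")
    publish.single(COMMAND_TOPIC, payload=state, qos=1, hostname=BROKER)

if __name__ == "__main__":
    # Works whether or not the vendor's cloud still exists tomorrow.
    set_door("OPEN")
```

Home Assistant supports this same pattern natively through its MQTT cover integration, so a device wired up this way keeps working no matter what the manufacturer decides to do with its cloud API.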
I’ve been using Red Hat Linux in various forms for the last 26 years or so. I started back in 1997 with a boxed copy of Red Hat Linux 4.1. It wasn’t the first Linux distro I tried, but it was the first one that I really liked. I kept buying boxed copies through version 6.1, around the time the first release of RHEL dropped. (it’s hard to believe that was 23 years ago) You see, back in the olden days the typical person’s internet access was via dial-up modem, far too slow to grab entire CD images in a reasonable time. Buying a boxed copy every time a new version was released both supported the company and put a fresh set of installation media in my hands right away.
So why am I quitting Red Hat, you might ask? The recent news of Red Hat Enterprise Linux essentially going closed source (yes, I realize this is debatable) was the straw that finally broke the camel’s back. However, I’ve had this urge many times in the past due to similarly bad decisions by the company, and I’m finally just done. So where did this start? For me, it started with the last release of Red Hat Linux, version 9, being merged into the Fedora project. (as you’ll see, what’s old is new again) When RHEL was initially released to the public, it was based entirely on RHL with some add-ons that were exclusive to RHEL. As time went on, Red Hat realized that some of their customers weren’t buying RHEL and were just sticking with RHL, often grabbing the ISOs for free.
Red Hat didn’t like this and felt the existence of RHL as an upstream distribution was potentially hurting sales of RHEL, so they decided to kill it in favor of a faster-paced, shorter-lived variant that would become Fedora. At the time, I thought this was great… newer packages, more frequent releases… what’s not to like? What I didn’t know was the pain that would come from sticking with the free version. Initially, the short support cycles weren’t all that bad. You could in-place upgrade from one Fedora version to the next and be back up and running in short order. This didn’t last though, and soon it was better to just wipe and reinstall from scratch. This difficulty wasn’t just some random experience; it was by design, meant to drive IT pros into the RHEL fold.
For hobbyists or IT pros wanting to practice with a stable RHL/RHEL-style OS, there was no longer anything available until CentOS arrived on the scene in 2004. (there were others as well, but CentOS was so close to RHEL that you could use it as a drop-in replacement for nearly any application) This server ran on that distro for the last 15 years, and all was good until Red Hat, again frustrated by something it perceived as taking business away from RHEL, decided to change the game again. Red Hat planned and executed a coup that saw ownership of CentOS transferred to Red Hat, which laid the groundwork to kill off CentOS as it existed at the time. In December of 2020, Red Hat announced that CentOS would be discontinued in 2021 in favor of a new offering called CentOS Stream. This new distribution wouldn’t be a replacement for CentOS, but would instead become a Fedora-like upstream for RHEL. (fast release cycle, but unstable) Basically, Fedora is the bleeding edge; new stuff is constantly migrated from Fedora to Stream after testing, and a stable version/snapshot is occasionally cut from Stream and used to build the next point release of RHEL.
The final nail in the coffin was a blog post on 6/21/2023 by Mike McGrath, VP of Core Platforms at Red Hat, announcing that “CentOS Stream will now be the sole repository for public RHEL-related source code releases.” What this means is that access to the actual source code for RHEL is now locked behind a Red Hat subscription. This is a direct attempt to kill off successor distributions to CentOS such as Rocky Linux and AlmaLinux. Rocky Linux did state on their blog that they “[remain] confident in [their] ability to continue as a bug-for-bug compatible and freely available alternative to Red Hat Enterprise Linux (RHEL), despite changes in accessibility.” However, the writing is on the wall: Red Hat intends to stamp out copycats of RHEL for good. In a follow-up blog post, McGrath tries to explain that Red Hat isn’t evil, then rails against what he sees as essentially freeloaders, stating, “I feel that much of the anger from our recent decision around the downstream sources comes from either those who do not want to pay for the time, effort and resources going into RHEL or those who want to repackage it for their own profit. This demand for RHEL code is disingenuous.” Nobody is profiting from the repackaging, but I’m pretty sure this is a veiled reference to Oracle. (Oracle doesn’t charge for their RHEL-derived distribution, but they do sell service and support)
What’s actually disingenuous is claiming that everyone who wants to use your source code without paying is a freeloader. Let’s not forget that Red Hat wouldn’t exist today without the free contributions of thousands of open source coders over the last three decades. Red Hat stands on the backs of these members of the open source community, who receive little to no compensation, yet feels aggrieved by those who want to exercise their rights under the GPL and use the code for free. The backporting and support effort McGrath references in his post are self-imposed, and they are the very reason companies pay for the product. Let’s be clear: those who use “rebuilder” distros like CentOS of old, Rocky, or Alma are not Red Hat’s customers. If they had a need for Red Hat’s support services and could afford them, they would be, but they’re not.
As for myself, I’ve been hanging onto this RHEL-compatible distro and its ecosystem mostly out of nostalgia, experience and, let’s be honest… laziness. When the initial shutdown of CentOS was announced, I created a replacement VM based on Debian/Ubuntu and planned to migrate, but never did. Instead I stayed on CentOS until the updates dried up and then migrated to Rocky. (big thanks to the Rocky Linux team for making that a mostly painless process) However, with this latest chapter unfolding, I don’t see the point in staying in the RHEL/Red Hat ecosystem. (at least not for anything I care about) Red Hat’s senior leadership has made it clear that they are going to make it as hard as possible for anyone to duplicate RHEL going forward. I feel bad for Rocky and Alma, but I’m done being a victim of the chaos caused by Red Hat’s periodic jealousy. Today, I built a fresh Debian 12 VM and migrated this blog to it. I already feel better: my packages are more up to date but equally stable, and the move didn’t even take that long. Farewell, Red Hat. I hope this path works out for you (I honestly do), but if it doesn’t, you’ll know exactly why.
Having recently restored my old 486SLC board, I was curious how it stacked up against other 486s of the era. I actually have a fairly decent collection of chips from this time period and a few motherboards that accept them.
For the majority of this test, I used a PerComp-branded board made by PC Chips (model M912) that has fairly broad support for these CPUs and is based on the PC Chips “Chip 16”/“Chip 18” chipset. The board is configured with 256KB of L2 cache and 16MB of 30-pin SIMM memory, and has seven 16-bit ISA expansion slots, three of which also accept 32-bit VLB cards.
One funny thing about this board is the branding on the chipset. At the time, PC Chips didn’t have a great reputation for performance. In an attempt to boost sales, they often placed stickers on their chips bearing the model numbers of other, more popular chipsets.
Alright, enough of the history… let’s get into the testing! The chips I selected for the test were three Intel CPUs, two Cyrix CPUs and a single AMD CPU: an i486SX-25 (overclocked to 33MHz), an i486DX-33, an i486DX2-66, a Cx486DX2-66, an Am486DX4-100, and a Cx5x86-100 (overclocked to 120MHz).
The five 486 CPUs shown above all contain 8KB of L1 cache on-die. They were all tested on the same motherboard mentioned above, configured with 256KB of L2 cache.
I’ve also included the 486SLC numbers from a previous test. The SLC and the 5x86 aren’t a good direct comparison, since the motherboard and configurations are quite different. Unfortunately, I managed to kill the Cyrix 486DX2-66 when I tested it a second time and wasn’t able to get cache and system performance numbers, but an image of the last screen it produced is included below.
What was most interesting to me were the results for the Intel SX and DX chips clocked at 33MHz. All of the results from these two chips were identical except for the performance index. (which is lower on the SX, as it lacks an FPU) What this tells me is that unlike the third generation, Intel’s fourth-generation SX chips were fully 32-bit externally. The i386SX was 16-bit externally and was dramatically slower as a result. Likewise, the Cyrix 486SLC suffered in this test due to its 16-bit external bus. I suspect Intel’s fourth-generation SX chips were simply lower-binned parts from the DX production line that had defects in the FPU section of the die. Disabling the FPU by cutting or burning the traces between it and the CPU section of the die would allow these otherwise defective chips to be sold, albeit as a lower-end model.
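A quick back-of-envelope calculation shows why the external bus width matters so much. The two-clocks-per-transfer figure below is an assumption for illustration (roughly what a non-burst memory cycle cost in this era), not something I measured:

\[
\text{peak bandwidth} \approx \frac{\text{bus width (bytes)} \times f_{clk}}{\text{clocks per transfer}}, \qquad
\text{32-bit: } \frac{4 \times 33\,\text{MHz}}{2} \approx 66\,\text{MB/s}, \quad
\text{16-bit: } \frac{2 \times 33\,\text{MHz}}{2} \approx 33\,\text{MB/s}
\]

Every 32-bit fetch on a 16-bit bus takes two transfers instead of one, so a chip like the 386SX or 486SLC gives up roughly half its memory bandwidth at the same clock, which is consistent with how badly those chips suffered here.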
It’s a shame the Cyrix DX2-66 gave up the ghost during the test. However, it’s not a chip I was likely to put back into use in any of these systems. What data I did get from it confirmed my recollection of the CPU: it was slightly faster than the Intel DX2-66 in integer ops, but a bit slower in floating point due to its weaker FPU design.