Upgraded my home server/desktop machine yesterday and all seemed ok. When I checked this morning I discovered that output from overnight cron jobs had gone to the journal log rather than being emailed to me. Also logwatch had failed to run.
I quickly checked logwatch first by manually running the logwatch service, and it complained that /usr/sbin/sendmail did not exist. In fact, sendmail was not even installed. The system uses postfix. Honestly, I don't recall if it was installed previously or not. I got new hardware last summer, did a fresh install of Fedora 39 at that time, and upgraded to 40, 41 and now 42.
In the following, -> indicates a symlink.
Checking my Fedora 41 backups, I see the following chain
/usr/sbin/sendmail -> /etc/alternatives/mta -> /usr/sbin/sendmail.postfix (which is an executable file)
I do not know what package owns /usr/sbin/sendmail.postfix, unless someone can tell me how to point rpm at a different rpm database than the default one.
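(I believe rpm's --dbpath option lets you point it at another database; something along these lines, run against the mounted backup, should answer the ownership question. Untested on my part, and the mount point below is just an example.)

    # Who owns it according to the running F42 system:
    rpm -qf /usr/sbin/sendmail.postfix

    # Same question against the F41 backup's database instead.
    # /mnt/backup is an example mount point; recent Fedora keeps the
    # database under usr/lib/sysimage/rpm, older releases used var/lib/rpm:
    rpm --dbpath /mnt/backup/usr/lib/sysimage/rpm -qf /usr/sbin/sendmail.postfix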
Under Fedora 42, there was no /usr/sbin/sendmail. However, there was a /usr/sbin/sendmail.postfix that was a symlink to /usr/bin/sendmail.postfix.
I then installed sendmail and tested logwatch again; it failed with the same error about a missing /usr/sbin/sendmail. I uninstalled sendmail, and this time logwatch worked, because /usr/sbin/sendmail now existed. The removal of sendmail left behind the following chain of symlinks:
/usr/sbin/sendmail -> /etc/alternatives/mta -> /usr/sbin/sendmail.postfix -> /usr/bin/sendmail.postfix
Apparently, removing sendmail left behind /usr/sbin/sendmail pointing to (eventually) /usr/bin/sendmail.postfix. /usr/bin/sendmail.postfix is owned by the postfix package. /usr/sbin/sendmail.postfix is not owned by any package. Probably created by the install/remove script for sendmail.
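For anyone who wants to check the same thing on their own box, walking the chain and asking rpm about each link is enough (namei is part of util-linux):

    # Show every component of the path and flag the symlinks:
    namei -l /usr/sbin/sendmail

    # Ask who owns each piece; alternatives-managed and leftover links
    # show up as "not owned by any package":
    rpm -qf /usr/sbin/sendmail /usr/sbin/sendmail.postfix /usr/bin/sendmail.postfix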
After all this, in order to get cron to email output, I just had to restart the crond service so it could see that /usr/sbin/sendmail now existed.
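The restart itself was just the usual:

    # crond apparently only checks for sendmail at startup, so once
    # /usr/sbin/sendmail was back in place:
    systemctl restart crond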
Charlie
On 4/18/25 9:36 AM, Charles Dennett wrote:
Upgraded my home server/desktop machine yesterday and all seemed ok. When I checked this morning I discovered that output from overnight cron jobs had gone to the journal log rather than being emailed to me. Also logwatch had failed to run.
I quickly checked logwatch first by manually running the logwatch service, and it complained that /usr/sbin/sendmail did not exist. In fact, sendmail was not even installed. The system uses postfix. Honestly, I don't recall if it was installed previously or not. I got new hardware last summer, did a fresh install of Fedora 39 at that time, and upgraded to 40, 41 and now 42.
Just wanted to add that I found a bugzilla report:
Charles Dennett writes:
Just wanted to add that I found a bugzilla report:
I predict there's going to be a lot of this, for a year or so. I forget what the actual, technical reasons for collapsing bin and sbin were, except for "other distributions did it too". But the deed is done, and one just has to deal with the aftermath:
- New F42 installs have sbin symlinked to bin
- Systems updated to F42 apparently still have separate bin and sbin directories, and something is installing symlinks for the individual files from one to the other. But not all of them.
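(A quick way to tell which of those two states a given machine ended up in, using nothing but ls:)

    # Fresh F42 install: the directory itself is the symlink
    ls -ld /usr/sbin          # shows something like: /usr/sbin -> bin

    # Upgraded system: still a real directory, with per-file symlinks
    # scattered into it; count them
    ls -l /usr/sbin | grep -c -- '->'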
Someone said this better than me: https://www.youtube.com/watch?v=r13riaRKGo0
I am spending more time than I should, dealing with the resulting clusterfark. GitHub's workflow servers are using F42's official Docker container. Its root's PATH has sbin before bin, which was never a problem. But now, autoconf's AC_PATH_PROG discovers that perl now lives at /usr/sbin/perl, and all the other hamsters carry it into everyone's hashbang. I think rpmbuild has something that tries to patch it up, ex post facto, but this doesn't cover all the use cases, and all sorts of stuff is now broken…
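One blunt workaround (just a suggestion, not anything rpmbuild or autoconf blesses) is to push bin ahead of sbin in PATH before configure runs, so AC_PATH_PROG records the old spellings:

    # With the merge, /usr/sbin/perl exists, and because root's PATH in
    # the container lists sbin first, configure now reports:
    #   checking for perl... /usr/sbin/perl
    # Reordering PATH (example values only) before configure avoids that:
    export PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin
    ./configure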
On Fri, 2025-04-18 at 18:38 -0400, Sam Varshavchik wrote:
I forget what the actual, technical reasons for collapsing bin and sbin were, except for "other distributions did it too". But the deed is done, and one just has to deal with the aftermath:
The artificial idiot listed these summaries:
- Simpler filesystem: Reduces unnecessary hierarchy and simplifies finding executable files.
- Improved interoperability: Ensures scripts written for one distribution run correctly on others, as the location of commands becomes consistent.
- Easier maintenance: Makes it easier to manage system binaries.
But I'd argue that the hierarchies were there for a good reason. *Simple* no-access to some things for some people/software. *Simple* more privileged access to things in /sbin to those who had it in their path, and lesser privileged versions of a command with the same name to other people/things. Although another definition of sbin was not more privileged commands, but static binaries.
We have *paths* so appropriate things can actually find commands. We shouldn't be hard-coding paths into other things. I don't think I've ever typed /bin/ls to run ls. And there's a whole mess of reasons things written for one distro won't work on another, a really big one is the libraries that were compiled on one with a different compiler, or you simply have a different version of the library. I think most big programs are probably far more dependent on libraries than individual commands.
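To put that concretely (a generic illustration, not aimed at any particular package):

    # Let $PATH do the work at run time instead of baking in a location:
    command -v ls                    # prints wherever ls actually lives
    "$(command -v perl)" --version   # and run it from the resolved path

    # Same idea in a script's first line:
    #   #!/usr/bin/env perl
    # rather than pinning /usr/bin/perl (or now /usr/sbin/perl) by hand.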
If you can't manage to maintain the binaries in /sbin and /bin (I'm including scripts, not just precompiled binaries), which people have managed for decades, or just understand why what's where, what hope in hell do you have of managing a program with 10,000 lines of code in it?
Hell, why don't we just dump *everything* into one huge directory? That'd make it really easy to manage (not). I get the impression that there's too many un-trained programmers in the world, and much of what they've learned has come from bad examples.
This malarkey is up there with we can't have /usr in a separate mount point, any more, because we've put things in there that we need at boot time. Well don't bloody do that!
Tim via users writes:
Hell, why don't we just dump *everything* into one huge directory? That'd make it really easy to manage (not). I get the impression that there's too many un-trained programmers in the world, and much of what they've learned has come from bad examples.
This malarkey is up there with we can't have /usr in a separate mount point, any more, because we've put things in there that we need at boot time. Well don't bloody do that!
It's time, once again, to rewatch Futurama, Season 1, Episode 2, "The Series Has Landed" and commiserate with Bender.
On Fri, Apr 18, 2025 at 10:32 PM Tim via users < users@lists.fedoraproject.org> wrote:
On Fri, 2025-04-18 at 18:38 -0400, Sam Varshavchik wrote:
I forget what the actual, technical reasons for collapsing bin and sbin were, except for "other distributions did it too". But the deed is done, and one just has to deal with the aftermath:
The artificial idiot listed these summaries:
- Simpler filesystem: Reduces unnecessary hierarchy and simplifies finding executable files.
- Improved interoperability: Ensures scripts written for one distribution run correctly on others, as the location of commands becomes consistent.
- Easier maintenance: Makes it easier to manage system binaries
But I'd argue that the hierarchies were there for a good reason. *Simple* no-access to some things for some people/software. *Simple* more privileged access to things in /sbin to those who had it in their path, and lesser privileged versions of a command with the same name to other people/things. Although another definition of sbin was not more privileged commands, but static binaries.
This points to a failure of the Linux community to carry through with past initiatives to standardize the file hierarchy.
We have *paths* so appropriate things can actually find commands. We shouldn't be hard-coding paths into other things. I don't think I've ever typed /bin/ls to run ls. And there's a whole mess of reasons things written for one distro won't work on another, a really big one is the libraries that were compiled on one with a different compiler, or you simply have a different version of the library. I think most big programs are probably far more dependent on libraries than individual commands.
If you can't manage to maintain the binaries in /sbin and /bin (I'm including scripts, not just precompiled binaries), which people have managed for decades, or just understand why what's where, what hope in hell do you have of managing a program with 10,000 lines of code in it?
Before retiring I worked in a scientific research institute. I'm a mathematician, but my role was helping scientists get their sums right. We had constant turnover of students and visitors, and many projects with participants from institutions scattered around the globe with widely varying levels of IT support. This meant dealing with many different Linux distros, and constant issues with differences in where tools were located and which versions were installed, so users often needed to install versions of tools that were not present or not configured to suit their tasks. A lot of my time was spent understanding why the same binary program crashed or gave different results when run on different systems.
There were many large "mission critical" applications provided by the large space agencies. Two I was familiar with were a Java application from ESA and a package of command-line tools from NASA. Both relied on open source libraries, but distro packages often omitted optional support (e.g., gdal support for obscure formats), so NASA built most of the 3rd party libraries they used. The NASA package included a script that adjusted paths to ensure that the correct tools were found.
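Conceptually the script was no more than this (a from-memory sketch with made-up names, not the actual NASA code):

    # Put the package's own tools and its builds of the 3rd-party
    # libraries ahead of whatever the distro shipped:
    PKG_ROOT=/opt/agency-tools        # hypothetical install prefix
    export PATH="$PKG_ROOT/bin:$PATH"
    export LD_LIBRARY_PATH="$PKG_ROOT/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"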
Hell, why don't we just dump *everything* into one huge directory? That'd make it really easy to manage (not). I get the impression that there's too many un-trained programmers in the world, and much of what they've learned has come from bad examples.
And, with AI, some no longer even attempt to learn.
This malarkey is up there with we can't have /usr in a separate mount point, any more, because we've put things in there that we need at boot time. Well don't bloody do that!
I think we are headed towards having a minimal set of scripts and binaries installed in a system with most applications running in isolated containers with their own scripts and libraries so they give the same results across multiple distros.
On Sat, 19 Apr 2025 09:11:16 -0300 "George N. White III" gnwiii@gmail.com wrote:
On Fri, Apr 18, 2025 at 10:32 PM Tim via users <users@lists.fedoraproject.org> wrote:
On Fri, 2025-04-18 at 18:38 -0400, Sam Varshavchik wrote:
This malarkey is up there with we can't have /usr in a separate mount point, any more, because we've put things in there that we need at boot time. Well don't bloody do that!
...
I think we are headed towards having a minimal set of scripts and binaries installed in a system with most applications running in isolated containers with their own scripts and libraries so they give the same results across multiple distros.
-- George N. White III
This is exactly what snaps do today.
https://en.wikipedia.org/wiki/Snap_(software)
It's not exactly my first choice, but it sometimes turns out well. The Telegram application, for example.
On Sat, 2025-04-19 at 17:07 +0000, Bob Marčan via users wrote:
On Sat, 19 Apr 2025 09:11:16 -0300 "George N. White III" gnwiii@gmail.com wrote:
On Fri, Apr 18, 2025 at 10:32 PM Tim via users <users@lists.fedoraproject.org> wrote:
On Fri, 2025-04-18 at 18:38 -0400, Sam Varshavchik wrote:
This malarkey is up there with we can't have /usr in a separate mount point, any more, because we've put things in there that we need at boot time. Well don't bloody do that!
...
I think we are headed towards having a minimal set of scripts and binaries installed in a system with most applications running in isolated containers with their own scripts and libraries so they give the same results across multiple distros.
-- George N. White III
This is exactly what snaps do today.
https://en.wikipedia.org/wiki/Snap_(software)
It's not exactly my first choice, but it sometimes turns out well. The Telegram application, for example.
In the Red Hat/Fedora world the preferred option is Flatpaks, which have a number of advantages over snaps.
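For the Telegram example that would be something like the following (assuming the usual Flathub ID; check flathub.org if it differs):

    flatpak install flathub org.telegram.desktop
    flatpak run org.telegram.desktop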
poc
Tim:
But I'd argue that the hierarchies were there for a good reason. *Simple* no-access to some things for some people/software. *Simple* more privileged access to things in /sbin to those who had it in their path, and lesser privileged versions of a command with the same name to other people/things. Although another definition of sbin was not more privileged commands, but static binaries.
George N. White III:
This points to a failure of the linux community to carry through with past initiatives to standardize the file hierarchy.
Did you ever dabble with the Amiga?
That had the system organised quite well. All commands went in C: (an assign to a C directory in the root), all handlers in L: (another assign, like with C:), all libraries in LIBS:, all fonts in FONTS:, all scripts in S:, et cetera. 'twas quite neat and easily understandable. Inspired by TripOS.
Trouble is, they didn't follow through with the neatness when it came to your applications. They just went in some folder on your drive, often selected by your own choice, and in isolation (no execution /path/ to a binary to follow). To start one, you had to drill through folders to find its icon, or leave it out on the desktop, or find a third-party program to launch things (menus, docks, whatever), which had another problem (programs started via the command line weren't the same as their icon being double-clicked). And if program A wanted to interact with program B, either they had to be configured with each other's locations, or both programs had to be already running.
Someone always cuts corners when it comes to designing things.
Hell, why don't we just dump *everything* into one huge directory? That'd make it really easy to manage (not). I get the impression that there's too many un-trained programmers in the world, and much of what they've learned has come from bad examples.
And, with AI, some no longer even attempt to learn.
I think the writing was on the wall with that one. I know people who'd rather trawl through 20 minutes of some (bad) YouTube tutorial on how to do something than spend 5 minutes reading text and actually learning how something works so they could figure it out for themselves.
There are people actually proud of being idiots (I'd make a voting joke here, but it made itself), and pour scorn on anyone smarter than they are (there's another joke that just writes itself, here, too).
This malarkey is up there with we can't have /usr in a separate mount point, any more, because we've put things in there that we need at boot time. Well don't bloody do that!
I think we are headed towards having a minimal set of scripts and binaries installed in a system with most applications running in isolated containers with their own scripts and libraries so they give the same results across multiple distros.
There's sense in that, at least in achieving a standardised organisation. The trouble is we're getting flatpaks and appimages that I find about as on-par as backyard programmers with their C64. Every app looks different, doesn't fit into your desktop, has poor feature support, and only concentrates on its own thing (ignorant that a PC is a multi-tasking device, and their program isn't the only thing I'm running).
1) They aren't as universal as claimed.
(a) They still need to be compiled with something compatible with your OS (there are some apps I can't install and run for that reason).
(b) They only have minimal compatibility with your OS, so you get minimum functionality compared to native apps (the lowest common denominator is all they support; it's too much effort to do more).
(c) Thanks to (b) more functionality has to be put into itself to do the same jobs.
2) We get huge apps, because they have to replicate much of the OS inside themselves (or the additional, beyond-the-OS support that used to be installed for common use).
3) Thanks to sandboxing, or just plain lack of functionality, we get apps that can't print, for instance.
I've got ones that can't: I have to print to PDF, then find something else to print that PDF (which will fail when they eventually appimage the whatever that prints PDF). If you attempt to print from the app, it goes through the motions without error messages but does nothing. Or, it cannot print to/through CUPS; it has to find the printer directly using ZeroConf (hostname.local) instead of using the fully-functional DNS system on the LAN, and will only print that way.
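(The "something else", for what it's worth, can be as basic as handing the exported PDF straight to CUPS from a terminal; the queue name and file name below are just placeholders:)

    lpstat -p                     # list the queues CUPS knows about
    lp -d My_Printer report.pdf   # print the exported PDF on one of them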
Tim via users writes:
- Thanks to sandboxing, or just plain lack of functionality, we get apps that can't print, for instance.
I've got ones that can't: I have to print to PDF, then find something else to print that PDF (which will fail when they eventually appimage the whatever that prints PDF). If you attempt to print from the app, it goes through the motions without error messages but does nothing. Or, it cannot print to/through CUPS; it has to find the printer directly using ZeroConf (hostname.local) instead of using the fully-functional DNS system on the LAN, and will only print that way.
The Firefox snap in Ubuntu doesn't even start in a VNC session. Everyone appears to be ok with fiddling with environment variables, in order to do that. Nobody appears to believe that there's something wrong with this state of affairs. I haven't tried printing, yet, from Firefox, to see how much fun that is.
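From what I've seen reported, the fiddling in question is along these lines (hearsay on my part, I haven't verified it):

    # The confined snap reportedly can't read an Xauthority file that the
    # VNC session leaves outside the home directory, so people point it
    # back inside before launching:
    export XAUTHORITY=$HOME/.Xauthority
    firefox &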
I think what this is, overall, is watching idiocracy evolve, in realtime.
On Sun, 2025-04-20 at 08:14 -0400, Sam Varshavchik wrote:
Tim via users writes:
- Thanks to sandboxing, or just plain lack of functionality, we get apps that can't print, for instance.
I've got ones that can't: I have to print to PDF, then find something else to print that PDF (which will fail when they eventually appimage the whatever that prints PDF). If you attempt to print from the app, it goes through the motions without error messages but does nothing. Or, it cannot print to/through CUPS; it has to find the printer directly using ZeroConf (hostname.local) instead of using the fully-functional DNS system on the LAN, and will only print that way.
The Firefox snap in Ubuntu doesn't even start in a VNC session. Everyone appears to be ok with fiddling with environment variables, in order to do that. Nobody appears to believe that there's something wrong with this state of affairs. I haven't tried printing, yet, from Firefox, to see how much fun that is.
I think what this is, overall, is watching idiocracy evolve, in realtime.
One of the big problems with containers (including Flatpaks) is that they don't integrate well with the desktop environment. When the app relies on the DE to (say) print things, there's usually some jumping through hoops to be done.
poc
Patrick O'Callaghan writes:
On Sun, 2025-04-20 at 08:14 -0400, Sam Varshavchik wrote:
The Firefox snap in Ubuntu doesn't even start in a VNC session. Everyone appears to be ok with fiddling with environment variables, in order to do that. Nobody appears to believe that there's something wrong with this state of affairs. I haven't tried printing, yet, from Firefox, to see how much fun that is.
I think what this is, overall, is watching idiocracy evolve, in realtime.
One of the big problems with containers (including Flatpaks) is that they don't integrate well with the desktop environment. When the app relies on the DE to (say) print things, there's usually some jumping through hoops to be done.
I routinely run apps on my server that have no issues, whatsoever, with the desktop I'm running on my laptop (the one that's logged in to the server).
Of course, they are X11 apps, and they don't seem to pay much attention that they're using my ssh-tunneled X11 connection. It just works.
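The whole arrangement is nothing more exotic than this (or the equivalent ForwardX11 option in ssh_config; the host and app names are placeholders):

    ssh -X me@server        # -X (or -Y for trusted forwarding) tunnels X11
    some-x11-app &          # run in the remote shell; the window shows up locally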
I estimate that I'll be able to use my setup for no more than 2-3 years, max, before X11 is sacrificed on the altar of progress, and latest and greatest.
Fortunately, I'm not going to be responsible for figuring out how to integrate containers with Wayland. This is going to be someone else's problem. You want to cram Wayland down everyone's throats? You figure out how to make all this crap work together. I'll just shrug my shoulders.
On Sun, 2025-04-20 at 17:45 -0400, Sam Varshavchik wrote:
One of the big problems with containers (including Flatpaks) is that they don't integrate well with the desktop environment. When the app relies on the DE to (say) print things, there's usually some jumping through hoops to be done.
I routinely run apps on my server that have no issues, whatsoever, with the desktop I'm running on my laptop (the one that's logged in to the server).
That's not at all what I meant. A DE includes a layer of inter-app communication between its components which doesn't work well when the apps are sandboxed from each other. A typical example: a Flatpak-based Email MUA doesn't know what other apps you have installed, so when you click on something in a message it can only offer you a picker to decide which app has to process it rather than using your system default.
Running apps on a server and communicating with a client desktop is a different situation.
Of course, they are X11 apps, and they don't seem to pay much attention that they're using my ssh-tunneled X11 connection. It just works.
I estimate that I'll be able to use my setup for no more than 2-3 years, max, before X11 is sacrificed on the altar of progress, and latest and greatest.
The issue is not specific to one windowing system. Both X11 and Wayland based DEs have the same problem in this respect.
poc
Patrick O'Callaghan writes:
On Sun, 2025-04-20 at 17:45 -0400, Sam Varshavchik wrote:
One of the big problems with containers (including Flatpaks) is that they don't integrate well with the desktop environment. When the app relies on the DE to (say) print things, there's usually some jumping through hoops to be done.
I routinely run apps on my server that have no issues, whatsoever, with the desktop I'm running on my laptop (the one that's logged in to the server).
That's not at all what I meant. A DE includes a layer of inter-app communication between its components which doesn't work well when the apps are sandboxed from each other. A typical example: a Flatpak-based Email MUA doesn't know what other apps you have installed, so when you click on something in a message it can only offer you a picker to decide which app has to process it rather than using your system default.
I know what you meant. What I was saying is that X also provides all the necessary glue itself. Now, it may very well be that most kinds of inter-app communications ended up getting built on top of other scaffolding. But not all of them did, and it doesn't mean that it doesn't exist.
You might be surprised to learn that you've been using one version of it forever. It's called "cut-n-paste". Content cut or copied in one app gets pasted into another app. The apps don't really know anything about each other directly; all of that is handled as inter-app communication through X.
Running apps on a server and communicating with a client desktop is a different situation.
Not really, it's all part of the same basic scaffolding. You can run an app on the server, cut some content out of it, and paste it into a different app running on the client. X arranges the breadcrumbs by which the clients' windows find each other and grok the same language.
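You can poke at that machinery from a shell, if xclip happens to be installed (xsel works similarly):

    # On the server, over the forwarded X connection, put something on
    # the clipboard selection:
    echo "hello from the server" | xclip -selection clipboard

    # Paste it into any app on the client side, or read it back:
    xclip -selection clipboard -o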
I rooted through all of the gory details, icccm, ewmh, and X11 primitives, some time ago. It's all there. The fly in the ointment is that the nuts and bolts of it are difficult to use, cumbersome, and lack some convenient features. This should've been addressed – and could've been addressed – a long time ago. And because it wasn't (or, at least, that's one of the reasons), the whole thing is being dumped into a trash bin. Sad.
On Sun, 2025-04-20 at 20:51 -0400, Sam Varshavchik wrote:
I rooted through all of the gory details, icccm, ewmh, and X11 primitives, some time ago. It's all there. The fly in the ointment is that the nuts and bolts of it are difficult to use, cumbersome, and lack some convenient features. This should've been addressed – and could've been addressed – a long time ago. And because it wasn't (or, at least, that's one of the reasons), the whole thing is being dumped into a trash bin. Sad.
It's an on-going problem. As things roll on, someone decides it'll be easier to just start again with another project to replace it, rather than fix the problems. Someone will decide there's too many options to support, and drop all the ones that they don't use themselves. Then there'll be arguments about putting them back in, no they're not needed, you can make do with a screen with one window, one icon, and one button, none of which are user-configurable. Then they get put back in, and someone decides the project is too cumbersome, and needs starting over. And you're back to square one.
I miss the little things that have gone over the years.
It used to be *easy* to customise the logon screen. Want your corporate logo there, just click this button and change the background image. Now you have to hand-edit some XML file, hoping that an update won't undo it.
Want to change the colours of your desktop, click this button to change the window title bar of the active window to this, the inactive window to that. Now you just get a choice of premade themes (and no theme editor anywhere to be found, so they must be hand-coding them). Most of which I hate for various reasons, but at the top of the list is being unable to tell which window is the active one!
The on-screen keyboard (on the login screen, on the logged-in screen). Have you ever had to repair computers? Do you not realise how useful it is to be able to do something when there's no spare keyboard, or it's faulty?
What I don't miss is things like Gnome growing into a behemoth that needs an expensive graphics card to carry the load (and they cost more than the motherboard and CPU together, these days), with an interface that's more aligned with a touchscreen tablet than a desktop PC.
On Sunday, 20 April 2025 17:45:44 EDT Sam Varshavchik wrote:
I estimate that I'll be able to use my setup for no more than 2-3 years, max, before X11 is sacrificed on the altar of progress, and latest and greatest.
I just updated a workstation to f41 and didn't install the x11 KDE components -- I just accepted Wayland. Now Wayland is still broken as far as I am concerned when it comes to session restore stuff and new window placement. But I notice that I can ssh to that system and run x11 apps (over the local network) no problem. Wayland supplies an x11 server for compatibility.
On Sun, 2025-04-20 at 22:16 -0400, Garry T. Williams wrote:
Wayland is still broken as far as I am concerned when it comes to session restore stuff and new window placement. But I notice that I can ssh to that system and run x11 apps (over the local network) no problem. Wayland supplies an x11 server for compatibility.
The irony... it can't do it itself.
How many years has Wayland been out, so far? By the time they get it functional to the point it could replace X properly, someone will have obsoleted it and started inventing the wheel again with another system.
On 4/20/25 10:40 PM, Tim via users wrote:
On Sun, 2025-04-20 at 22:16 -0400, Garry T. Williams wrote:
Wayland is still broken as far as I am concerned when it comes to session restore stuff and new window placement. But I notice that I can ssh to that system and run x11 apps (over the local network) no problem. Wayland supplies an x11 server for compatibility.
The irony... it can't do it itself.
Actually, it can. Try out "waypipe". I just tested it and I will definitely be using it from now on. It absolutely blows away X11 forwarding.
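Basic usage, for anyone curious (waypipe wraps the ssh invocation; weston-terminal is just an example client):

    # Run a Wayland app on the remote machine with its window shown
    # locally, tunneled over ssh:
    waypipe ssh user@server weston-terminal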
On Sun, 2025-04-20 at 22:16 -0400, Garry T. Williams wrote:
On Sunday, 20 April 2025 17:45:44 EDT Sam Varshavchik wrote:
I estimate that I'll be able to use my setup for no more than 2-3 years, max, before X11 is sacrificed on the altar of progress, and latest and greatest.
I just updated a workstation to f41 and didn't install the x11 KDE components -- I just accepted Wayland. Now Wayland is still broken as far as I am concerned when it comes to session restore stuff and new window placement. But I notice that I can ssh to that system and run x11 apps (over the local network) no problem. Wayland supplies an x11 server for compatibility.
Yes, I bit the bullet when F41 came out and have managed reasonably since then. As you say, the big missing functionality is proper KDE session restore (though that was never perfect on X11 either). I wrote a couple of scripts which kind of work most of the time, at least for me (see: https://gist.github.com/pjoc/a5059b93321ee67d42f4af8d45b7276f), and opened a bug report on it a while ago. Apparently KDE 6.4 will have some progress in this direction:
https://bugs.kde.org/show_bug.cgi?id=436318
poc
Garry T. Williams writes:
On Sunday, 20 April 2025 17:45:44 EDT Sam Varshavchik wrote:
I estimate that I'll be able to use my setup for no more than 2-3 years, max, before X11 is sacrificed on the altar of progress, and latest and greatest.
I just updated a workstation to f41 and didn't install the x11 KDE components -- I just accepted Wayland. Now Wayland is still broken as far as I am concerned when it comes to session restore stuff and new window placement. But I notice that I can ssh to that system and run x11 apps (over the local network) no problem. Wayland supplies an x11 server for compatibility.
If you ssh to a server that has wayland installed, but you're running an X app that displays on your screen, wayland is completely out of the picture. Your X connection is tunneled over ssh ASAP. ssh handles all the X forwarding stuff. There might be a few bits that the server needs to provide, WRT .Xauthority fiddling.
On 04/20/2025 06:14 AM, Sam Varshavchik wrote:
I think what this is, overall, is watching idiocracy evolve, in realtime.
Personally, I've always considered Ubuntu to be designed for Windows refugees. They want to get away from the built in problems of Windows but don't want to learn how to do things properly.
On Sun, 2025-04-20 at 11:54 -0600, Joe Zeff wrote:
Personally, I've always considered Ubuntu to be designed for Windows refugees. They want to get away from the built in problems of Windows but don't want to learn how to do things properly.
I'd come to a similar conclusion. Windows refugees that didn't actually want to stop using Windows...
Long ago, I came to loathe their forums coming up as search results for solving some problem. It was the blind leading the blind.
It's a shame, since Debian (that it forked off) had a very good reputation.
Joe Zeff writes:
On 04/20/2025 06:14 AM, Sam Varshavchik wrote:
I think what this is, overall, is watching idiocracy evolve, in realtime.
Personally, I've always considered Ubuntu to be designed for Windows refugees. They want to get away from the built in problems of Windows but don't want to learn how to do things properly.
I don't get this impression, but I've only been dithering with Ubuntu for a few years. I don't get any Windows vibes from Ubuntu. My take is that it's Debian, without solving any of Debian's problems, but instead creating more Ubuntu-specific problems.