  • Until a couple of weeks ago I used Fedora Silverblue.

    Then, after mostly using GNOME Shell for about a decade, I (reluctantly) tried KDE Plasma 5.27 on my desktop for its variable refresh rate support, and since then I have fallen in love with KDE Plasma for the first time (in retrospect, I couldn’t stand it from version 4 until around 5.20).

    Now I am using Fedora 39 Kinoite on two of my three devices and Fedora 39 KDE on a 2-in-1 laptop that requires custom DKMS modules for the speakers (not possible on atomic Fedora spins).

    Personally I try to use containers (Flatpaks on the desktop and OCI images on my home server) whenever possible. I love that I can easily restrict or expand permissions (e.g. I have a global nosocket=x11 override) and that my documentation stays valid across most distributions, since Flatpak always behaves the same.
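
    For reference, that kind of global override is set with Flatpak’s built-in override mechanism; roughly like this (see flatpak-override(1) for the exact permission names — omitting the app ID makes an override global, and the app ID below is just a placeholder):

        # deny X11 socket access to all Flatpak apps by default
        flatpak override --user --nosocket=x11

        # grant it back to a single app that genuinely needs it (placeholder ID)
        flatpak override --user --socket=x11 org.example.LegacyApp

        # inspect the resulting global override file
        flatpak override --user --show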

    I like using Fedora, since it isn’t a rolling release but its software is still up to date, and it has always (the first version I used was Fedora 15) given me a clean, stable and relatively bug-free experience.

    In my opinion Ubuntu actually has the perfect release cycle, but Canonical lost me with their flawed-by-design snap packages and their new installers with incredibly limited manual partitioning options (no encryption without LVM, etc.).


  • In my opinion Plasma has gotten much better over the last couple of releases. Around 5.21 the defaults became pretty good, and since 5.24 Wayland support has been quite solid, on par with GNOME in my opinion.

    After using GNOME Shell for a decade I have recently switched to Plasma 5.27 on my desktop due to its VRR support (I have two 170 Hz QHD monitors). A couple of weeks later I also moved my laptops to Plasma, even though I wanted to keep GNOME on them, since Plasma has gotten so nice!

    Just wanted to give a heads-up in case you haven’t tried Plasma in the last couple of years. ;) That said, if you rely on dynamic workspaces and don’t want to adapt your workflow (like I did when I switched to Plasma), there’s just no alternative to GNOME, and it has become really polished and nice as well.


  • This has always been the case with Ubuntu: it has only ever supported its main repository with security updates. Now they additionally offer (paid) security updates for the universe repository, which is a bonus for Ubuntu users, as they now have a greater selection of packages with security updates.

    If you don’t opt in to Ubuntu Pro, nothing changes and Ubuntu will be as secure (or insecure) as it has always been. If you disable universe and multiverse, you have an Ubuntu system where all packages receive guaranteed security updates for free.
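
    Disabling those components is quick, assuming the usual add-apt-repository tool; something like:

        # remove the community-maintained components from the package sources
        sudo add-apt-repository --remove universe
        sudo add-apt-repository --remove multiverse
        sudo apt update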

    Please note: I still don’t recommend Ubuntu due to snapd not supporting third-party repositories, but that’s no reason not to get the facts right.


    Debian has always been the better choice if you required security updates for the complete package repository.

    Personally I have my doubts whether Debian actually manages to reliably backport security updates for all its packages. After all, Eclipse was stuck on version 3.8 for multiple Debian releases due to the lack of a maintainer …


  • There are plenty of reasons to get rid of Ubuntu, but this isn’t one of them.

    Before Ubuntu Pro, packages in universe (and multiverse) did not receive (security) updates at all, unless someone from the community stepped up and maintained the package. Now Canonical provides security updates for universe, for the first time since Ubuntu was introduced, via Ubuntu Pro, which is free for up to five personal devices and paid for all other use cases.
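
    For completeness, those universe updates are enabled through the pro client (ubuntu-advantage-tools); roughly:

        sudo pro attach <token>    # token from ubuntu.com/pro, free for personal use
        sudo pro enable esm-apps   # security updates for universe/multiverse
        pro status                 # shows which services are active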

    Debian is actually not that different (anymore). If you read the release notes of Debian 12, you’ll notice that quite a few package groups are excluded from guaranteed security updates, just like packages in universe are in Ubuntu. Unlike Ubuntu, Debian doesn’t split its package repository by security support though.


  • how would that impact your configuration?

    It impacts my documentation. If, for example, gsettings set org.gnome.software allow-update false no longer works because the key was renamed from allow-update to updates-allowed, then my documentation no longer works correctly. The same applies when new technology is introduced, e.g. a switch from PulseAudio to PipeWire. With a rolling release distribution these changes can happen at any time, whereas with a fixed release they only occur when a new release of the distribution is published and I upgrade to it.

    I don’t have the time to continuously track these changes and modify my documentation accordingly. Therefore I appreciate it when people bundle all those changes into one single distribution upgrade and write release notes with a changelog. Then I can spend a day reading the release notes, adjusting the documentation and applying the upgrade on all devices, and then move on for the next couple of months or years.

    which has nothing to do with rolling release or distributions.

    I tried to explain why I dislike rolling release distributions, which is why I gave you one example where a fixed release distribution is, in my opinion, more suitable.

    I understand that these things might not matter to you if you only have one computer (or so) to maintain at home, or if maintaining home computers is your hobby. But I have four personal computers plus several family devices to maintain, and system administration is no longer my hobby …


  • I’ve used Arch Linux and openSUSE Tumbleweed in the past and I have been using Linux for over 10 years …

    With each new version of an application there’s the chance that configuration files or functionality change. Packages might even get replaced with others.

    You would be surprised how much changes between Ubuntu LTS versions … My archived Ubuntu installation script had lots of if-statements for different versions of Ubuntu, since stuff got moved around. Such things can be as simple as gsettings schemas (keys might get renamed), but even these minor changes make documentation, and therefore reproducible reinstallations, troublesome.
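
    To illustrate, those if-statements looked roughly like this (the version number and the renamed key are made up for the example):

        # pick the right gsettings key depending on the installed release
        . /etc/os-release
        if dpkg --compare-versions "$VERSION_ID" ge 24.04; then
            gsettings set org.gnome.software updates-allowed false  # hypothetical new key
        else
            gsettings set org.gnome.software allow-update false     # hypothetical old key
        fi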

    With a fixed release all these changes are nicely bundled in one large upgrade every couple of months/years, which makes it easy to document and to plan when to do the upgrade.


  • I can’t stand rolling releases (for personal use) and I never recommend them to anyone. To me it feels like standing on quicksand.

    I need fixed releases to test my documentation (shell scripts) against something. With a rolling release those scripts can break at any time, unless you read the changelog of every package update.
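
    That testing can be as simple as running the script against a pinned container image of the fixed release; a sketch, assuming Podman and a setup.sh that stands in for the documentation:

        # run the documented setup against a fixed, known release
        podman run --rm -v "$PWD/setup.sh:/setup.sh:ro,Z" \
            registry.fedoraproject.org/fedora:39 bash /setup.sh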

    But I also want and use fully automatic updates, so reading changelogs for every update would be the direct opposite of what I am looking for in an OS. I am ok with reading release notes every couple of months for a distribution upgrade though.

    I want my systems to be reproducible, and that’s impossible with quicksand-like rolling releases. In my opinion Fedora and Ubuntu have a decent release cycle; I would never consider Arch, Tumbleweed or Solus.


  • Innovation or regression?

    Innovation doesn’t necessarily mean that all past functionality needs to be carried over. Actually innovation often means that past technology becomes obsolete and gets replaced with something new.

    Gnome used to have optional desktop icons. They removed them.

    They removed them because with GNOME Shell those icons no longer made sense. There was no longer a concept of dragging apps from a panel menu to a desktop, instead apps were now pinned from the fullscreen app overview to the dash.

    Since the code was no longer used by the default GNOME experience, it became unmaintained and eventually got removed.


  • Because GNOME is the only DE with some potential, and by not having 2 or 3 simple optional features it isn’t getting more traction.

    But everyone has different requirements, and my “2 or 3 simple optional features” that are missing are completely different from what you think is missing. I couldn’t care less about desktop icons or system trays. I even prefer not having a system tray, as this functionality should, in my opinion, be provided via notifications and regular application shortcuts.

    But in the end a software project only has a limited amount of resources available, and developers have to decide what they want to focus on. GNOME chose not to focus on desktop icons:

    GNOME had icons, v3.28 discontinued them

    Because the code was “old and unmaintained” and probably no one was willing to modernise and maintain it. Desktop icons were already disabled by default before 3.28, so GNOME didn’t “re-invent” this feature by removing the code from Nautilus.

    Using another DE doesn’t make much sense, as you’ll inevitably run into GTK and parts of GNOME and have to mix and match to get a working desktop experience.

    I use GNOME and KDE and use the same applications (as Flatpaks) on both desktops: I use GNOME Calculator on KDE, because I dislike both KDE calculators, and I use Ark on GNOME with a Nautilus script, as File Roller doesn’t allow me to set the compression ratio (I need to create zip files with 0 compression for modding games). So for me it has become the norm to mix applications created with different toolkits. Thanks to Flatpak I still have a “clean” base system though.
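
    The Nautilus script itself is tiny; mine is essentially the following, dropped into ~/.local/share/nautilus/scripts/ (the exact zip invocation is a sketch):

        #!/bin/bash
        # Nautilus passes the selected files in this variable, one absolute path per line
        while IFS= read -r path; do
            [ -n "$path" ] || continue
            # -0 = store only (no compression), -r = recurse into directories
            zip -0 -r "${path}.zip" "$path"
        done <<< "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS"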

    By the way, I am getting tired of these recurring complaints that GNOME works differently than other desktops. I don’t constantly complain about the features that KDE, in my opinion, is missing either (e.g. dynamic workspaces, the same wallpaper and desktop configuration across all existing and new monitors, online account integration, a command line config tool, etc.). Instead I accept that this is how it is at the moment and either use KDE the way it is (like I do on my desktop PC) or use something that better suits my needs (like I do on all my laptops).


  • Because it takes manpower to develop and maintain these features?

    Desktop icons in particular are difficult to get right (see workarounds like “ReIcon” on Windows), e.g. keeping icon positions across multiple monitors with varying resolutions and displays that can be unplugged at any time. They can also be a privacy issue, e.g. when giving a presentation.

    But most importantly: GNOME doesn’t want to be a traditional (Windows-like) desktop, so why would they implement features that don’t align with their ideas for a desktop experience?

    There are lots of other desktops, like Cinnamon, that offer a traditional desktop experience within the GTK ecosystem. There is also plenty of room for desktops, like GNOME, that have a different philosophy and feature set.

    In my opinion it would be boring if every desktop tried to do the same thing, and there wouldn’t be any innovation if no one tried to do things differently.


  • I’ve tried to combat this a bit with a global Flatpak override that takes unnecessarily broad permissions, like filesystem=home, away by default, but apps can easily circumvent it by requesting permissions for specific subdirectories. This cat-and-mouse game could be ended by allowing a recursive override, such as nofilesystem=home/*.
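
    For anyone who wants to replicate this, the override side looks like the following (the app ID is a placeholder, and the recursive nofilesystem=home/* above is wishful syntax, not something Flatpak currently accepts):

        # take blanket home directory access away from all apps
        flatpak override --user --nofilesystem=home

        # a specific subdirectory can still be granted back (or requested by
        # the app itself), which is exactly the loophole described above
        flatpak override --user --filesystem=~/Downloads org.example.SomeApp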

    But even then, there is still the issue with D-Bus access, which is even more difficult to control …

    I think it is sad that Flatpak finally provides the tooling to restrict desktop apps the same way mobile apps have been restricted for a decade, but the implementation chooses to be insecure by default and only provides limited options to change that.


  • Actually that’s one of the main reasons I use Syncthing: it doesn’t need a server, as it uses a peer-to-peer architecture. Unlike with a centralised solution (cloud storage, Nextcloud, etc.), devices sync directly with each other. If they are on the same local network, you get the full bandwidth of your local network. If they need to sync over a long distance over the internet, you are limited by the upload and download speeds of your internet provider, just like with centralised storage.

    I have a server that acts as an introducer, so I don’t have to connect each device to every other device manually. But the server doesn’t need to be available once all devices are connected to each other.
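
    Marking a device as an introducer is a single flag: a checkbox in the web GUI’s device settings, which ends up in config.xml roughly like this (device ID shortened, name my own):

        <device id="ABCD123-…" name="homeserver" introducer="true">
            ...
        </device>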

    Syncing continues to work without it for as long as I don’t reinstall any of the other devices. And even if I did reinstall a device, I could promote any other device to introducer or connect the devices manually. It really is quite robust and fail-safe.