I want to create a backup of my Linux system, including user files, from the command line. I tried using Timeshift but it doesn’t have a CLI argument to include a folder.

I found a guide on dev.to that explains how to use Timeshift from the command line, but it doesn’t mention how to include user files. According to ItsFOSS, Timeshift is designed to protect system files and settings, not user data, so user home directories are excluded by default.

I came across a list of backup programs for Linux on Slant, and BackInTime appears to be the best.

Has anyone used BackInTime to back up the whole system, including user files? Are there any other tools you would recommend?

Edit: it would also be nice if it had features similar to Timeshift, like incremental snapshots, weekly snapshots, and the ability to list, restore, and delete snapshots, etc.

    • tal@lemmy.today

      I use rdiff-backup run by backupninja. Rdiff-backup is based on rsync.

      Rsync alone only maintains one backup copy. If you want to roll back by, say, a month, because you did something that wiped out something you wanted a month back, you can’t. For most use cases for backup, you want multiple incremental backups.
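
      For reference, here’s a minimal sketch of what driving rdiff-backup by hand can look like (paths and the retention window are placeholders; this is the classic CLI, and backupninja normally runs it from its own config):

          # Back up /home into a repository; each run adds a new increment
          rdiff-backup /home /mnt/backup/home
          # Trim increments older than six months
          rdiff-backup --remove-older-than 6M /mnt/backup/home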

      There are two potential drawbacks to rdiff-backup. They aren’t an issue for my use case, but I mention them because some people may need those features.

      • It doesn’t encrypt the backup copy. I do an on-site backup, which will deal with a drive failure, but not a fire. I don’t need encryption. But if you’re backing up to a remote service that you don’t control, encryption may be important to you. There’s a program similar to rdiff-backup that adds encryption to the mix, duplicity, which I have used before, and would use again if I needed encryption. If you want this with rdiff-backup, you’ll have to do it on the underlying storage media.

      • No deduplication. A backup system can be designed such that it identifies identical files on the backed-up system and stores a copy only once. A more elaborate backup system can also deduplicate at non-fixed block offsets. Rdiff-backup doesn’t do that. It will efficiently store changes to a given file, so that if you modify a large file, it won’t store a whole new copy. But it doesn’t dedup across files. This isn’t something that comes up much for me; the main time I see it matter is with renames.

      EDIT: Well, one more caveat. If you’re having a system use it to push updates to backup storage, it can’t use immutable backup storage (that is, storage where the system being backed up is not authorized to delete or modify older backups), which can be valuable if you’re worried about someone breaking into your system and deleting or modifying backups. That doesn’t really affect me, because I use local storage for my backup. But if you’re using an off-site backup target, or have a dedicated backup system that only lets the client push new backups, not delete or modify existing ones, that could be important for some use cases. It looks like restic, which someone else mentioned, uses rclone to push to storage, and while I only took a very brief glance at rclone, it looks like it can support immutable storage, if you have immutable storage rigged up.

      • Eikichi [Any] ⏚ 🇵🇸@lemmy.ml

        Maybe I’m making a big mistake, but if you use “date” within a bash script, you can do multiple backups and remove the ones that are too old.

        For example, you can make a first archive, and then have a bash script make a copy of the first archive and increment the copy with rsync.

        Over time, let’s say after two months, your bash script can delete files older than one month. That’s just an example; you could do the same over a year.
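
        A minimal sketch of that idea in bash (SRC, DEST, and the 30-day retention are placeholder assumptions):

            #!/bin/bash
            # Sketch of the idea above: one full dated copy per run, pruned by age.
            shopt -s nullglob
            SRC=/home
            DEST=/mnt/backup
            # Copy the source into a directory named after today's date (YYYY-MM-DD)
            rsync -a "$SRC/" "$DEST/$(date +%F)/"
            # Delete dated copies older than 30 days; ISO dates compare correctly as strings
            cutoff=$(date -d '30 days ago' +%F)
            for d in "$DEST"/20??-??-??; do
                [[ "${d##*/}" < "$cutoff" ]] && rm -rf "$d"
            done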

        • tal@lemmy.today

          Yes, though then you won’t have incremental backups. That is, if you want 20 copies, you’ll require 20x the storage unless you’re using some kind of copy-on-write underlying storage on the backup server side and your copy mechanism is rigged up to leverage that.

          • Eikichi [Any] ⏚ 🇵🇸@lemmy.ml

            To be honest, I should check how rdiff-backup works; I don’t know it. I would have guessed it does that too, since you said it’s working on rsync.

            And isn’t tar+rsync an option?

            • tal@lemmy.today

              working on rsync

              Well, strictly speaking, it uses librsync, which is the core of rsync, rather than the rsync command itself. The other backup utility I mentioned, duplicity, which unlike rdiff-backup does encryption, also uses librsync.

              I would have guessed it does that too

              No, it only stores a full copy of the most-recent backup; the rest are incrementals, which only store a set of changes. If you want to pull from an older one, then you need to use the rdiff-backup command to generate that older version. If you just want the most-recent one, you can just copy the files directly. I take a nightly backup, and it’ll go back…I haven’t actually looked whether I have any bound on it, but something over six months.
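
              As a sketch (the repository path is a placeholder), pulling an older version looks something like:

                  # List the increments available in the repository
                  rdiff-backup --list-increments /mnt/backup/home
                  # Restore the state from one month ago into a separate directory
                  rdiff-backup -r 1M /mnt/backup/home /tmp/home-restored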

              And isn’t tar+rsync an option?

              Tar won’t give you incrementals for free. You could store full backups and gzip them or something, and that might save some space if your data is compressible, but it’s still a full backup rather than an incremental.

              I mean, I’m sure that, given enough effort, you can set up some job using a shell script that uses rsync internally and generates incremental backups, the same way rdiff-backup does, but then you’re basically heading down the path of reimplementing rdiff-backup. Someone else has already done the work, so…shrugs

              What you’ll wind up with from rdiff-backup is functionally what you’ll get with rsync: a mirror of files, with replicated metadata, on the destination. But in addition, you get a history of incrementals. In general, one would probably prefer to have those incrementals. I’m not saying that that is absolutely true for everyone; maybe some use cases legitimately will never have use for any backup prior to the most-recent one. But I think that in the common case, people would prefer to have them available.

              • Eikichi [Any] ⏚ 🇵🇸@lemmy.ml

                I will really go take a look at it XD.
                You said it:

                I mean, I’m sure that, given enough effort, you can set up some job using a shell script that uses rsync internally and generates incremental backups, the same way rdiff-backup does, but then you’re basically heading down the path of reimplementing rdiff-backup.

                Then you could just use gpg on your rsync backups? Like you said, your solution is the best.

                Personally, I’m not in IT for a living; I just use rsync+tar, depending on the context. Regex can be cool, but hey, tools exist.

                And thank you then ^^"

                • tal@lemmy.today

                  Then you could just use gpg on your rsync backups?

                  Yes, if you wanted to implement encryption, you could do it using gpg. That’s what duplicity does.
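
                  A rough sketch of that approach (filenames are placeholders):

                      # Create an archive and symmetrically encrypt it with gpg
                      tar -czf - /home | gpg --symmetric --cipher-algo AES256 -o /mnt/backup/home.tar.gz.gpg
                      # Decrypt and unpack later
                      gpg -d /mnt/backup/home.tar.gz.gpg | tar -xzf -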

                  If you wanted to try implementing deltas for incrementals yourself in shell, I guess you could try doing so using xdelta, though I have no idea whether it’s possible to make it reasonable on performance for this kind of workload. It’s just that, I mean, all of this stuff takes time and testing, and people have already built backup systems atop rsync in the form of rsnapshot or rdiff-backup or duplicity and such.

                  I’ve got nothing against rsync. I use it to replicate file trees all the time, and it’s fine for what it was built for. It’s just that as a tool, it’s aimed at generating a replicated filetree…but generally, backups can benefit from more than just a replicated tree of files.

                  I’m not specifically saying that rdiff-backup is the end-all be-all backup system, just that it’s what I’ve found to be useful, and that I’m familiar with rsync. If it kept an index of hashes of files, it could dedup whole files and make renames space-efficient, which would be kind of nice. If it retained copies of inode numbers, it could cheaply detect renames. There are algorithms to detect non-aligned duplicate chunks, which could cram the size down further. It needs a filesystem as the target, not a blob store the way it looks like restic+rclone can use, which might matter for some users, say those who want to use Amazon S3 as their backing storage to get offsite backups. It can’t leverage information stored in something like btrfs to rapidly detect that a file has changed; like rsync, it can use mtime as a quick check, without hashing the file. It isn’t (itself) designed for things like backing up live SQL databases. But for my use case, and I think for most people, it probably covers the stuff that they’re gonna want in a backup tool.

    • Papamousse@beehaw.org

      I agree, a good old rsync will do the job.

      You can do a “rsync -avn source destination” to see what will be done, and when you’re confident, just remove the “n”. I also add the “--delete” option when I want to clean up the destination drive.
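
      For example (paths are placeholders):

          # Dry run: "-n" shows what would be transferred without changing anything
          rsync -avn /home/user/ /mnt/backup/home/
          # Real run; "--delete" removes destination files that no longer exist in the source
          rsync -av --delete /home/user/ /mnt/backup/home/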

    • PumpkinDrama@reddthat.comOP

      Do you know where I can find a guide explaining how to set up periodic snapshots using rsync and how to restore to a previous snapshot?

      • Dran@lemmy.world

        Rsync is more “copy on steroids” than “backup utility”. Many people use it as a backup tool because it allows very lightweight syncs between a source and a destination. It has no concept of snapshots or restores; it’s just copying files. You’d have to build a snapshot system around rsync. It’s not the solution you think you’re looking for, but by the time you figure out how to use it, it’s the solution you probably always wanted, if that makes any sense.

      • Eikichi [Any] ⏚ 🇵🇸@lemmy.ml

        For snapshots, depending on your filesystem, the built-in functions for managing them can be better.

        I agree with Dran’s comment:
        use rsync first to sync your home folder, or your wallpapers,
        and you will figure out how it works and which options are good.

        Then you can write a script or a crontab entry for yourself, depending on what you want to save, how, and how often.
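
        As a sketch (the schedule and paths here are placeholder assumptions), a crontab entry could look like:

            # min hour day month weekday: nightly rsync of /home at 02:00
            0 2 * * * rsync -a --delete /home/ /mnt/backup/home/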

        Rsync can restore files and delete files; basically it’s a file manager XD
        For /etc/ you may want to replace files, making a copy beforehand if they have the same name, but not for your pictures folder.

        Learn and read the rsync docs, use it on low-importance files, and you will manage according to your needs :).

        Of course you can ask for specific help;
        I just don’t want to tell you how to do something if I’m not sure it’s good for your needs.

        Edit: Maybe check “tar” too.

  • restic, and there are a bunch of 3rd-party utilities to help with things, including multiple GUIs. Restic has built-in support for several cloud storage providers (including the most excellent Backblaze), and encrypts data so you can feel safe using them.

    What I like most about restic is that you can mount your backups and browse them like a filesystem; it allows you to easily pull out a single file from a filesystem backup, or view different versions of one, without having to remember a path or restore.
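
    As a sketch (the repository path is a placeholder; restic prompts for the repository password):

        restic -r /mnt/backup/restic-repo init               # create the repository once
        restic -r /mnt/backup/restic-repo backup /home       # take a snapshot
        restic -r /mnt/backup/restic-repo mount /mnt/restic  # browse snapshots as a filesystem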

  • Dataprolet@lemmy.dbzer0.com

    BackInTime or Borg.
    BackInTime should be easier to set up; Borg is more feature-rich and flexible. If you have any questions, feel free to ask. I have used both for local and remote backups for years.

  • Captain Aggravated@sh.itjust.works

    I use BackInTime to back up my user files. I kinda don’t bother backing up my system files; both times I’ve had a major issue of the kind that took the OS down, it A. was my fault and B. gave me an excuse to do an OS version upgrade.

    I like BackInTime quite a bit; it works well, and AFAIK it’s a front-end for rsync. BackInTime does incremental backups using hardlinks, and you can set how often it takes backups of which directories and how long to keep them before autodeletion, and you can also delete them manually.

  • Maoo [none/use name]@hexbear.net

    If it’s a desktop/laptop, I recommend Pika, which is just a nice frontend and scheduler for borg backup. If it’s a server, I recommend borgmatic.

    The nice thing about borg is that it does all of the things people usually want from backups but that are kind of frustrating to do with scripts:

    • Encryption so they’re private and can be uploaded to cloud storage safely.
    • Compression so they aren’t too big.
    • Uses snapshots with deduplication so that they don’t take up too much space.
    • Snapshots happen on a schedule.
    • There’s a retention policy of how many snapshots to keep and at what interval (1 snapshot per year for the last 4 years and 1 per month for 12 months, for example).
    • You can browse through old snapshots to retrieve files.
    • You can restore from a snapshot.
    • You can ignore certain files, directories, and patterns.

    It is surprisingly difficult to get all of that in one solution, but the borg-based tools will do all of the above.
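
    For a rough idea (paths, archive naming, and retention values are placeholder assumptions; Pika and borgmatic wrap these steps for you):

        borg init --encryption=repokey /mnt/backup/borg-repo                 # one-time setup; sets the passphrase
        borg create --compression lz4 /mnt/backup/borg-repo::'{now}' /home   # encrypted, deduplicated snapshot
        borg prune --keep-monthly 12 --keep-yearly 4 /mnt/backup/borg-repo   # apply the retention policy
        borg mount /mnt/backup/borg-repo /mnt/borg                           # browse old snapshots as a filesystem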

  • overkill@feddit.de

    I use btrbk (on a btrfs filesystem) and I’ve never been happier. It fits my workflow perfectly: Frequent automated local snapshots with the occasional incremental backup to one of several encrypted external drives. It’s fast and reproducible since it’s all in a single conf file.
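
    As a rough sketch of such a conf file (the subvolume names, paths, and retention values here are placeholder assumptions; check the btrbk docs for the exact directives):

        # /etc/btrbk/btrbk.conf
        snapshot_preserve_min   2d
        snapshot_preserve       14d
        target_preserve         20d 10w

        volume /mnt/btr_pool
          snapshot_dir btrbk_snapshots
          subvolume home
            target /mnt/external/btrbk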

  • I mostly use DejaDup, the GUI tool that comes with many Gnome distros. Easy to use and supports a variety of backends.

    I should switch over to something that can take advantage of snapshotting a drive and transferring the deltas using btrfs send, but I haven’t found something quite as simple yet.

  • SmoothIsFast@citizensgaming.com

    I have always used Clonezilla to just make a complete system image, if that’s what you are looking for. The downside is that it’s not a tool you just run from the command line; you need to install it to some bootable media and select each drive to back up. But if a full system image backup is your goal, I’d say give Clonezilla a go.

        • NaN@lemmy.sdf.org

          I only discovered the btrfs issues recently, thankfully when trying to move stuff to a new machine. Direct copy, imaging, using dd mode: it didn’t matter, the destination filesystem was corrupted. The only part that actually annoyed me is that everything looked fine until I attempted to use it, so there was no warning.

  • Kongar@lemmy.dbzer0.com

    I like separating backups and snapshots, as Timeshift recommends. Backups are better handled by a different process copying your files to a remote location (PC failure, house fire, etc.). Lastly, backups are personal, so you gotta do what works for you; whatever makes them happen is good enough in my opinion ;)

    My setup (not perfect, but it works for me): I keep one snapshot only, but it is the entire drive, including the home folder. It’s really close to a disk image, minus the mount folders. This is done to a second local disk via rsync. The Arch wiki entry on rsync spells out the full rsync command for this operation. I run this right before a system update.
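
    From memory, the command on that wiki page looks something like the following (check the wiki for the current version):

        # Clone the whole system, skipping pseudo-filesystems and mount points
        rsync -aAXHv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} / /mnt/backupdrive/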

    Backups go to my NAS, a Synology in my case. They have a cloud software package like iCloud, OneDrive, etc., except I run it on the NAS and I’m only limited on storage by what drives I throw into it. That software scoops up my user folders on all my PCs, and I set it to keep the 10 latest versions.

    Then, since my NAS is inside my house, I back the entire NAS up to an external HDD and sneakernet it to work, where I keep it in my office drawer. This protects me from fires and whatnot. I do this monthly; it is a completely manual process.

    Some people have accused me of insanity, but it’s really not that hard. I don’t worry about losing pictures of my kids, and it’s aged well with my family (for example, my daughter doesn’t worry about losing stuff while she’s in college; if she writes a paper, 10 copies are kept here at home on the NAS automatically). And none of it was hard to set up, maybe just a bit pricey for the NAS (but it’s got a lot of other super useful things going for it).

    So ya, I’d recommend letting Timeshift do its thing for snapshots, and I’d rethink what you’re trying to do for backups. I strongly believe they are two different things.

  • TCB13@lemmy.world

    Desktop or servers? Syncthing and rsync, respectively. However, I use Syncthing on servers for specific things and it’s just perfect.