rsync without retransmitting moved files

i’m using rsync a lot; both at work [ backups, replication of content to various servers, ad-hoc copying ] and privately. it’s smart enough to avoid re-sending the whole file if it has grown a bit [ like logs like to do ] or if only a few bytes changed in source or destination. out-of-the-box rsync is ...
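
the excerpt stops mid-sentence, so just as a sketch of one common approach [ not necessarily the one the post settles on, paths are made up ]: rsync’s --fuzzy option lets the receiver pick a similarly named file in the destination directory as a delta basis, so a renamed file gets mostly reconstructed locally instead of retransmitted.

    # sketch only – --fuzzy makes the receiver look for a similarly named file
    # in the destination dir and use it as a basis for the delta transfer;
    # --delete-after so deletions don't remove a potential basis file too early
    rsync -a --fuzzy --delete-after /data/ backuphost:/backup/data/

with rsync 3.1 and newer, giving --fuzzy twice should also make it search any --compare-dest / --copy-dest / --link-dest directories for a basis, which helps when files move between directories rather than just being renamed.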

Ideas: rsync-like tool parallelizing transfer of large files

i use rsync heavily for backups, batch jobs and one-off tasks. every day it moves terabytes of data for me. while it’s great – it saves time and bandwidth – it could make even better use of modern hardware: multicore, and often with plenty of unused storage IOps. one day, when i have some spare ...

parallel rsync

rsync remains my main tool for transferring backups or just moving data between servers. but it has some pain points – e.g. rsync’s checksum calculation, or the ssh connection the data is piped over, can easily saturate a single CPU core before i run out of storage I/O or network bandwidth. how to parallelize it – based on ...
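
a rough sketch of the kind of thing i mean [ made-up paths, four jobs picked arbitrarily, not the post’s exact recipe ] – one rsync + ssh pipeline per top-level directory, so the checksum and ssh crypto load spreads over several cores:

    # up to 4 rsync processes in parallel, one per top-level directory;
    # each gets its own ssh connection, so no single core is the bottleneck
    # [ assumes /backup/data already exists on the receiver and no whitespace in names ]
    ls /data | xargs -P4 -I{} rsync -a /data/{}/ backuphost:/backup/data/{}/

this only balances well when the top-level directories are of comparable size – one huge directory still ends up as a single-core job.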

rsync to exfat mount

exfat does not carry information about user/group ownership of files/directories and has less precise timestamps. to make rsync stop complaining about it i’m using:
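
[ the excerpt ends here; the invocation below is a reconstruction of the usual set of flags, not necessarily the post’s exact command – paths and the --modify-window value are assumptions ]

    # don't try to sync ownership/permissions exfat can't store,
    # and tolerate coarse timestamps on the destination
    rsync -av --no-perms --no-owner --no-group --modify-window=1 \
        /home/user/docs/ /mnt/exfat/docs/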

rsync with more efficient compression, hash algorithm

rsync 3.2.0 and newer support more compression and hash algorithms. zstd compression is well suited for slower network connections [ tens of Mbit/s ], lz4 – for faster ones. the xxh3 hash is worth using regardless of the network speed. syntax:
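
a sketch of what that syntax looks like [ host and paths are placeholders; both ends need a new enough rsync for these choices to negotiate ]:

    # zstd compression + xxh3 checksums; swap zstd for lz4 on fast links
    # [ --zc and --cc are the short forms of the two --*-choice options ]
    rsync -a --compress --compress-choice=zstd --checksum-choice=xxh3 \
        /data/ backuphost:/backup/data/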

pigz --rsyncable, rdiff

i’m backing up ~90GB of mysqldumps in total each night. the more data, the bigger the pain.
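
for context, the combination from the title looks roughly like this [ database name and path are placeholders, not from the post ]: pigz’s --rsyncable option resets the compressor at regular, content-determined points, so a small change in the dump doesn’t reshuffle the whole .gz and rsync / rdiff can still find matching blocks.

    # compress the dump so rsync can still delta-transfer it the next night
    mysqldump --single-transaction somedb | pigz --rsyncable > /backup/somedb.sql.gz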

samba, rsync, rdiff

sometimes my backups are not consistent. is rsync / samba to blame?