Have ordered Samsung PCIe SSD and the appropriate card for the X1 slot on my motherboard. The idea is to run everything that isn't /home from there. Writes would therefore be relatively few. It's 1tb, so there are wide open spaces for wear leveling. Should greatly reduce drive-caused slowness.
All that isn't /home is on a 75gb partition on sda, and I thought I'd just dd it over. But then it occurred to me: wouldn't that create the problem of two drives with the same UUID?
The second issue is keeping the contents of the backup boot partition, a partition on sda, in sync with the SSD. I looked at RAID 1, but I believe that requires starting with two or more blank disks, not something to be tacked on later, so that's out. Is there a fairly uncomplicated way of keeping their contents identical? Nice if automatic, but if I need to do it manually every week or two, so be it.
dep Pictures: http://www.ipernity.com/doc/depscribe/album Column: https://ofb.biz/author/dep/
On Thursday 26 September 2024 00.59:40 dep via tde-users wrote:
All that isn't /home is on a 75gb partition on sda, and I thought I'd just dd it over. But then it occurred to me: wouldn't that create the problem of two drives with the same UUID?
You can dd then change the UUID
dep wrote:
All that isn't /home is on a 75gb partition on sda, and I thought I'd just dd it over. But then it occurred to me: wouldn't that create the problem of two drives with the same UUID?
e2fsck -f <targetdevicename> tune2fs -U random -L mylabel <targetdevicename>
is normal procedure here following cloning from partitions.
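That procedure, collected into a sketch. The device names are hypothetical placeholders (check yours with lsblk), and it should be run from a rescue system with both filesystems unmounted:

```shell
# Clone an ext2/3/4 filesystem, then give the copy a fresh UUID and
# label so it no longer collides with the source. src/dst may be block
# devices or, for a dry run, plain filesystem image files.
clone_and_reuuid() {
    src=$1 dst=$2 label=$3
    dd if="$src" of="$dst" bs=4M conv=fsync status=none  # raw block copy
    e2fsck -f -p "$dst"                  # verify the copy non-interactively
    tune2fs -U random -L "$label" "$dst" # new random UUID + distinct label
}

# Hypothetical invocation -- substitute your real device names:
#   clone_and_reuuid /dev/sda2 /dev/nvme0n1p1 nvme-root
```

Running blkid on both devices afterward should show two different UUIDs.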
said Felix Miata via tde-users:
| dep wrote:
| > All that isn't /home is on a 75gb partition on sda, and I thought I'd
| > just dd it over. But then it occurred to me: wouldn't that create the
| > problem of two drives with the same UUID?
|
| e2fsck -f <targetdevicename>
| tune2fs -U random -L mylabel <targetdevicename>
|
| is normal procedure here following cloning from partitions.
Thanks. Forgive my denseness, but is this before or after I dd it over?
Anno domini 2024 Thu, 26 Sep 17:38:07 +0000 dep via tde-users scripsit:
Thanks. Forgive my denseness, but is this before or after i dd it over?
After ... but before rebooting :)
-- Please do not email me anything that you are not comfortable also sharing with the NSA, CIA ...
Anno domini 2024 Thu, 26 Sep 19:49:04 +0200 Dr. Nikolaus Klepp via tde-users scripsit:
Anno domini 2024 Thu, 26 Sep 17:38:07 +0000 dep via tde-users scripsit:
Thanks. Forgive my denseness, but is this before or after i dd it over?
After ... but before rebooting :)
oh, I forgot: when this is a partition of your boot drive then you'll have to adjust /etc/fstab, too
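That adjustment is a one-line substitution once both UUIDs are known. A sketch, with made-up UUIDs and hypothetical paths (read the real values off blkid):

```shell
# Replace the old root UUID with the new one in the clone's /etc/fstab.
update_fstab_uuid() {
    fstab=$1 old=$2 new=$3
    sed -i "s/UUID=$old/UUID=$new/g" "$fstab"
}

# e.g., with the clone mounted at /mnt/newroot (hypothetical paths):
#   update_fstab_uuid /mnt/newroot/etc/fstab \
#       "$(blkid -s UUID -o value /dev/sda2)" \
#       "$(blkid -s UUID -o value /dev/nvme0n1p1)"
```

grub.cfg needs the same treatment; on Debian the simpler route is re-running update-grub once booted into the clone.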
said Dr. Nikolaus Klepp via tde-users:
| Anno domini 2024 Thu, 26 Sep 17:38:07 +0000 dep via tde-users scripsit:
| > said Felix Miata via tde-users:
| > | dep wrote:
| > | > All that isn't /home is on a 75gb partition on sda, and I thought
| > | > I'd just dd it over. But then it occurred to me: wouldn't that
| > | > create the problem of two drives with the same UUID?
| > |
| > | e2fsck -f <targetdevicename>
| > | tune2fs -U random -L mylabel <targetdevicename>
| > |
| > | is normal procedure here following cloning from partitions.
| >
| > Thanks. Forgive my denseness, but is this before or after i dd it over?
|
| After ... but before rebooting :)
Thanks!
dep composed on 2024-09-26 17:38 (UTC):
said Felix Miata via tde-users:
| dep wrote:
| > All that isn't /home is on a 75gb partition on sda, and I thought I'd
| > just dd it over. But then it occurred to me: wouldn't that create the
| > problem of two drives with the same UUID?
|
| e2fsck -f <targetdevicename>
| tune2fs -U random -L mylabel <targetdevicename>
|
| is normal procedure here following cloning from partitions.
Thanks. Forgive my denseness, but is this before or after i dd it over?
following == after
You could do it before, or any time, as long as you account for the change in appropriate fstab(s) and grub.cfg(s), but you'd still need to do it after.
said Thierry de Coulon via tde-users:
| On Thursday 26 September 2024 00.59:40 dep via tde-users wrote:
| > All that isn't /home is on a 75gb partition on sda, and I thought I'd
| > just dd it over. But then it occurred to me: wouldn't that create the
| > problem of two drives with the same UUID?
|
| You can dd then change the UUID
Thanks. I remember doing this once before and it being a pain, but I think that was when I was trying to *preserve* a UUID.
On 9/25/24 5:59 PM, dep via tde-users wrote:
Have ordered Samsung PCIe SSD and the appropriate card for the X1 slot on my motherboard. The idea is to run everything that isn't /home from there. Writes would therefore be relatively few. It's 1tb, so there are wide open spaces for wear leveling. Should greatly reduce drive-caused slowness.
Side note:
If you are worried about the SSD as indicated by your "wear leveling" comment, if you haven't run a SSD before, the concern is now very close to a non-issue. The MTBF data and drive lifetimes are based on writing 70% of the drive every 24 hours (in your case with a 1TB drive, you would be reading/writing/deleting 700GB every day). If you are replicating large databases all day long, maybe, but for a normal user - never happens.
The performance increase (and responsiveness feel) is 4X to 5X better than old spinning drives. (An SSD will even make that old Core2 Duo box in the bone-pile run TDE blisteringly fast.)
I've beaten the tar out of SSDs probably much more than most compiling big projects almost daily (like building PHP from source, etc..) for years and never had any issue with wear.
So when I started with SSD I had concerns and was worried they may fail before the normal 5-10 years I get from good spinning drives. I've now put 5-10 years of abuse on a dozen SSDs, and today's drives are every bit as robust as the best rotating kind -- and much better today than when first introduced.
Lesson: throw the SSD in the box and then drive it like you stole it -- you won't have any problems...
said David C. Rankin via tde-users:
| If you are worried about the SSD as indicated by your "wear leveling"
| comment, if you haven't run a SSD before, the concern is now very close
| to a non-issue. The MTBF data and drive lifetimes are based on writing
| 70% of the drive every 24 hours (in your case with a 1TB drive, you
| would be reading/writing/deleting 700GB every day). If you are
| replicating large databases all day long, maybe, but for a normal user -
| never happens.
Thanks; alas, my experience varies from yours. I've dealt with four SSDs. A couple of years ago I tried to do what I'm trying to do here, with a WD 500gb SATA SSD and it worked perfectly for about three weeks before it didn't work anymore. It was very fast to boot but also very fast to die. This spring I tried to use a 1tb Crucial PCIe M.2 NVME in a Raspberry Pi and it died almost instantly. (To their credit, Crucial replaced it quickly and without complaint, and the replacement continues to work well.) So there are circumstances under which they fail without so much as a by-your-leave, and there are no symptoms ahead of time. Indeed, they seem pretty delicate under those circumstances, so I'm trying to identify and avoid those.
| I've beaten the tar out of SSDs probably much more than most compiling
| big projects almost daily (like building PHP from source, etc..) for
| years and never had any issue with wear.
Glad to hear it. Still, I hope to avoid even theoretical problems; at this point I've experienced a 50 percent failure rate under light use. So I'm trying to minimize writes to the thing and, where they are necessary, to leave lots of space so that for the foreseeable future every write will be to virgin space. The drive is rated for 600 full-drive writes (600 TBW), and I expect it to be years before it has to rewrite even once, hence a terabyte for less than 60gb of data. My little script that writes every three seconds won't go there. Nor is there any reason for it to do so -- in that case, the write speed of an old MFM drive would be sufficient.
| So when I started with SSD I had concerns and was worried they may fail
| before the normal 5-10 years I get from good spinning drives. I've now
| put 5-10 years of abuse on a dozen SSDs, and today's drives are every
| bit as robust as the best rotating kind -- and much better today than
| when first introduced.
Good to know. I'm not exactly risk-averse; the archives of this list would demonstrate the new and imaginative ways I've found to break things. But if I can prevent unrecoverable failure at little or no cost, I'd just as soon do it.
| Lesson: throw the SSD in the box and then drive it like you stole it --
| you won't have any problems...
Thanks for the encouragement.
On 9/26/24 12:57 PM, dep via tde-users wrote:
Thanks; alas, my experience varies from yours. I've dealt with four SSDs. A couple of years ago I tried to do what I'm trying to do here, with a WD 500gb SATA SSD and it worked perfectly for about three weeks before it didn't work anymore. It was very fast to boot but also very fast to die. This spring I tried to use a 1tb Crucial PCIe M.2 NVME in a Raspberry Pi and it died almost instantly. (To their credit, Crucial replaced it quickly and without complaint, and the replacement continues to work well.) So there are circumstances under which they fail without so much as a by-your-leave, and there are no symptoms ahead of time. Indeed, they seem pretty delicate under those circumstances, so I'm trying to identify and avoid those.
I've had flawless luck with Samsung EVO 960, 970 and 980 drives. Of course the manufacturers always find ways to cheapen the drives each iteration (ATA/SATA instruction set removal of some commands, etc.), but on balance, the drive media has always been good, from 3D NAND on. I've also run Crucial, HP and other manufacturer drives that came in whatever box I bought. Same result - no issues.
But, drives are drives, so it may just be the luck of the dice. I've had supposed "Top Quality" Ironwolf server drives die in a matter of weeks, while I've had white-label remans run for 12+ years.
So "good luck" on the next drive you get.
said David C. Rankin via tde-users:
| I've had flawless luck with Samsung EVO 960, 970 and 980 drives. Of
| course the manufacturers always find ways to cheapen the drives each
| iteration (ATA/SATA instruction set removal of some commands, etc.),
| but on balance, the drive media has always been good, from 3D NAND on.
| I've also run Crucial, HP and other manufacturer drives that came in
| whatever box I bought. Same result - no issues.
This one is a 1tb 980. I am hoping the PCIe motherboard adapters are pretty much the same -- they're almost frighteningly cheap -- and reliable.
| But, drives are drives, so it may just be the luck of the dice. I've had
| supposed "Top Quality" Ironwolf server drives die in a matter of weeks,
| while I've had white-label remans run for 12+ years.
I've had one Seagate spinning disk DOA, and in the olden days I managed to blow up a nice Conner 120mb drive while learning that no, they couldn't be hot swapped. (It contained the contents of a book, due in days. Fortunately, I had a backup on floppy. Unfortunately, I decided I'd better back up the floppy, and while the new disk was supposedly formatting I looked down to see I was holding the new disk in my hand -- while the book was being formatted away. That mistake cost me $1200 to a data recovery company, plus express shipping. I've tried to be more careful since then.)
| So "good luck" on the next drive you get.
Thanks. It arrived today. Rehearsing the procedure I hope to employ tomorrow when I install it. The Raspberry Pi 5 devices have spoiled me for very quick boots, and the Debian boot seems very long, probably because so much of it is blank screen, just long enough to seem as if something has gone wrong.
On 9/27/24 11:07 PM, dep via tde-users wrote:
Thanks. It arrived today. Rehearsing the procedure I hope to employ tomorrow when I install it. The Raspberry Pi 5 devices have spoiled me for very quick boots, and the Debian boot seems very long, probably because so much of it is blank screen, just long enough to seem as if something has gone wrong.
Raspberry Pi boards do tend to spoil you. I've not played with the latest, but have probably a 1/2 dozen Pico/Pico W boards (RP2040) that are pretty amazing. I've got 3b and 3b+ and a couple of Zero 2 Ws (which are equally amazing). Using the PiOS installer makes preparing the SD cards easy (and configures the wireless so it works out of the box). The Zero 2 Ws are still on bullseye (one 64-bit, one 32-bit install). If you get a Zero 2 W, stick with 32-bit -- it's about 2X as fast, though you lose Arm8 assembly to play with.
The Pi 4 and 5 are on my get-to list. I've also used a number of TI boards and the Milkv-Duo, which runs busybox as the OS (RISC-based). It is the same format/size as the Pico but with 64M DDR2 RAM (versus 2M for the Pico). The Zero 2 W comes with 512M and runs full Debian fine (don't load firefox or chromium -- they run, but s l o w l y...). The LXDE (or LXQt - whatever it is) desktop also works fine -- but I run mine headless.
I look forward to your book -- we've all held the wrong drive in the hand once or twice. I was fortunate not to have to worry about hot-swapping in the early days; nothing I had supported it. I've got two SuperMicro boxes (a 4U and 2U) that do have hot-swap back-planes for SAS/SATA drives -- that will spoil you. Simply grab the caddy and yank -- while the box is running. That indeed is a neat trick :)
said David C. Rankin via tde-users:
| The Pi 4 and 5 are on my get-to list. I've also used a number of TI
| boards and the Milkv-Duo, which runs busybox as the OS (RISC-based). It
| is the same format/size as the Pico but with 64M DDR2 RAM (versus 2M for
| the Pico). The Zero 2 W comes with 512M and runs full Debian fine (don't
| load firefox or chromium -- they run, but s l o w l y...). The LXDE (or
| LXQt - whatever it is) desktop also works fine -- but I run mine
| headless.
I got two RPi boards when Roku really pissed me off -- I read their terms of data gathering -- so I wanted to make my own TV boxen, using the big, nice TCL TV as a dumb terminal but for its HDMI switching, which is excellent and can be controlled by the remote on the hifi. It has been a hoot, in many ways. First, it's one click to be rid of Wayland and back to X11. Second, it runs TDE perfectly, even though I'm not using most of its features (though they're handy to have). Third, the latest version of ProtonVPN is architecture agnostic as far as I can tell. This means that if I'm running the very cool IPTVnator and want to watch something that's geoblocked, no problem, I just emerge in a favored country. This is especially useful when the world is exploding. Fourth, though I had to make a couple of little changes, principally yanking out pipewire and stomping it to atoms, the Hauppauge dongle works just fine with Kaffeine, so I have all the local stations. And all without a bit of it going back to Roku (which I assured by taking away all its network privileges and, to be sure, changing the router password).
Upstairs, I got a $150 32-inch ONN computer monitor that works better than I expected with the RPi. Configuring it required taking the micro SD card from the one downstairs and using the built-in utility to copy it to the SSD on that Pi. The one annoyance was/is that it has two HDMI ports; one goes just fine to the monitor, and I figured I could get sound by hooking the other to a soundbar. Nope, silence from the soundbar. So I've had to use Bluetooth, which I don't like for philosophical reasons -- why use wireless for a distance of three feet? -- and practical ones -- the first Bluetooth channel is really noisy, so upon reboot I have to disconnect and reconnect to get onto the second one, which is okay. (The monitor speakers have worse audio quality than some cat's whisker crystal radios I've built.)
Even so, they are far and away the best TVs I've ever owned, when controlled by the RPis.
| I look forward to your book -- we've all held the wrong drive in the
| hand once or twice. I was fortunate not to have to worry about
| hot-swapping in the early days; nothing I had supported it. I've got two
| SuperMicro boxes (a 4U and 2U) that do have hot-swap back-planes for
| SAS/SATA drives -- that will spoil you. Simply grab the caddy and yank
| -- while the box is running. That indeed is a neat trick :)
The book was a ghost job for a fairly famous author, and appeared in 1990 (and disappeared quickly thereafter). Mine weren't the only disasters involved, which included the titular author dying right after it was released. But it was a book, and my name is inside it in the acks, so I'm content.
You're definitely running a far more elaborate rig than mine. The closest thing to networking here is everything running off the same router.
said Greg Madden via tde-users:
| I used rsync to move files around. Flexible, run with a cron job,
Thanks very much. I'm hoping it works with a mounted drive?
said Greg Madden via tde-users:
| rsync only works on mounted drives. cli goodness.
This maketh me happy. Thanks!
On 9/27/24 8:56 PM, dep via tde-users wrote:
said Greg Madden via tde-users:
| I used rsync to move files around. Flexible, run with a cron job,

Thanks very much. I'm hoping it works with a mounted drive?
The problem with using rsync to back up a mounted boot partition is, it's not instant. So while rsync is backing up one part, some other part is probably being changed. So if you're lucky, you might end up with a functional bootable backup, but you'll have some unknown bit(s) of corruption that may or may not be fatal.
If you want to get a good bootable backup, you'd have to boot from something else before backing up. Then, if you want it to be bootable should you ever have to restore and use it, fsarchiver would be a better bet than rsync. An image backup with dd would also work, but the boot partition would have to be smaller than or equal to the backup location.
Anno domini 2024 Sat, 28 Sep 08:27:30 -0700 Dan Youngquist via tde-users scripsit:
On 9/27/24 8:56 PM, dep via tde-users wrote:
said Greg Madden via tde-users:
| I used rsync to move files around. Flexible, run with a cron job,

Thanks very much. I'm hoping it works with a mounted drive?
The problem with using rsync to back up a mounted boot partition is, it's not instant. So while rsync is backing up one part, some other part is probably being changed. So if you're lucky, you might end up with a functional bootable backup, but you'll have some unknown bit(s) of corruption that may or may not be fatal.
If you want to get a good bootable backup, you'd have to boot from something else before backing up. Then, if you want it to be bootable should you ever have to restore and use it, fsarchiver would be a better bet than rsync. An image backup with dd would also work, but the boot partition would have to be smaller or equal to the backup location.
I intend to object: filesystem based backup systems do not have the risk of saving a corrupt filesystem as block-based backup systems have when done on a mounted filesystem. The filesystem (as long as it is sane) is always in a consistent state, while the block device (as long as mounted) is not. That's why no sane person uses dump/restore anymore.
As long as you do not run "apt dist-upgrade" at the same time as you rsync you are fine (in respect of a bootable backup). Nothing changes kernel + grub + modules + /bin ... under normal conditions, so your copy will be able to boot - that is, if you got UUID and GRUB/EFI stuff right in the first place. What gets busted are logfiles, open databases, files that have just been written. So if you use some brain cells you can shut down whatever is not essential, close your kmail + editors + firefox and just make the sync. Snapshots (ZFS) would be better, but you take what you get :)
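That sync boils down to one rsync invocation. A minimal sketch, assuming the backup root filesystem is mounted at a hypothetical /mnt/backup, with the pseudo-filesystems and the separate /home left out:

```shell
# One-way sync of a root tree into a mounted backup partition.
# -a: permissions/ownership/times, -H: preserve hardlinks,
# -x: don't cross filesystem boundaries, --delete: mirror deletions.
sync_root() {
    src=$1 dst=$2
    rsync -aHx --delete \
          --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/run \
          --exclude=/tmp --exclude=/mnt --exclude=/media --exclude=/home \
          "$src"/ "$dst"/
}

# Typical use, as root, with updates and big writers quiesced:
#   sync_root / /mnt/backup
```

The excludes are anchored to the transfer root, so /tmp is skipped but a file named tmp deeper in the tree is not.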
Anyway,
Nik
On 9/28/24 9:18 AM, Dr. Nikolaus Klepp via tde-users wrote:
I intend to object: filesystem based backup systems do not have the risk of saving a corrupt filesystem as block-based backup systems have when done on a mounted filesystem. The filesystem (as long as it is sane) is always in a consistent state, while the block device (as long as mounted) is not. That's why no sane person uses dump/restore anymore.
As long as you do not run "apt dist-upgrade" at the same time as you rsync you are fine (in respect of a bootable backup). Nothing changes kernel + grub + modules + /bin ... under normal conditions, so your copy will be able to boot - that is, if you got UUID and GRUB/EFI stuff right in the first place. What gets busted are logfiles, open databases, files that have just been written. So if you use some brain cells you can shut down whatever is not essential, close your kmail + editors + firefox and just make the sync. Snapshots (ZFS) would be better, but you take what you get :)
You're probably right; I've never backed up a running boot partition with rsync. But if I were going to depend on it, I'd want to test it a time or two first.
On 9/28/24 9:51 AM, dep via tde-users wrote:
The purpose, besides the obvious, is to keep the second drive updated as to security and other updates and any additional software I might install. If there were a way to do the usual update-upgrade to a non-booted drive, and to install applications to the second drive, that would be fine.
Is it really necessary to backup after every single change? Should you ever need to use the backup, updates and other software can always be quickly & easily reinstalled. User configuration settings will still be in /home, since it's on a separate partition. So maybe a few backups a year would be sufficient.
A RAID 1 seemed a good idea, but I believe that this cannot be added to a drive after the fact -- both must be blank to start with. And I think the speed would then be determined, at least to some extent, by the slower drive.
I know very little about RAID, but would it be possible to backup the existing drive, make the RAID 1, then restore the backup to it? Or would that not work for some reason?
re: speed, is it possible to make the RAID default to the faster drive, then update the slower drive in the background? Or maybe it does that anyway?
said Dan Youngquist via tde-users:
| On 9/28/24 9:18 AM, Dr. Nikolaus Klepp via tde-users wrote:
| > I intend to object: filesystem based backup systems do not have the
| > risk of saving a corrupt filesystem as block-based backup systems
| > have when done on a mounted filesystem. The filesystem (as long as it
| > is sane) is always in a consistent state, while the block device (as
| > long as mounted) is not. That's why no sane person uses dump/restore
| > anymore.
| >
| > As long as you do not run "apt dist-upgrade" at the same time as you
| > rsync you are fine (in respect of a bootable backup). Nothing changes
| > kernel + grub + modules + /bin ... under normal conditions, so your
| > copy will be able to boot - that is, if you got UUID and GRUB/EFI
| > stuff right in the first place. What gets busted are logfiles, open
| > databases, files that have just been written. So if you use some brain
| > cells you can shut down whatever is not essential, close your kmail +
| > editors + firefox and just make the sync. Snapshots (ZFS) would be
| > better, but you take what you get :)
So, basically, it would be simply to do nothing while the sync is made, yes? Is this a fairly quick function or a long, complicated one?
I've actually had that question about the copy function in, for instance, Konqueror, for decades. If I'm copying a directory that contains different-sized files with the same name, will it pick up more than the filename when asking if I want to overwrite? Would be nice to see a comparison and possibility of rename. (Not in this particular case, but it would be a big help in, say, backing up my 8tb of pictures. I'd like to be able to use autoskip, but not at the cost of losing edits.)
| You're probably right; I've never backed up a running boot partition
| with rsync. But if I were going to depend on it, I'd want to test it a
| time or two first.
What is regularly written in / besides log files?
| Is it really necessary to backup after every single change? Should you
| ever need to use the backup, updates and other software can always be
| quickly & easily reinstalled. User configuration settings will still be
| in /home, since it's on a separate partition. So maybe a few backups a
| year would be sufficient.
For that matter, I could just boot into the other drive and do the update/upgrade thing. Which would cover a lot but probably not everything. I was hoping to avoid this, but it looks increasingly as if that's what it will have to be.
| > A RAID 1 seemed a good idea, but I believe that this cannot be added
| > to a drive after the fact -- both must be blank to start with. And I
| > think the speed would then be determined, at least to some extent, by
| > the slower drive.
|
| I know very little about RAID, but would it be possible to backup the
| existing drive, make the RAID 1, then restore the backup to it? Or
| would that not work for some reason?
Someone more skilled than I am could probably do it. But I'm not utterly familiar with the new bios-related stuff beyond having learned it is deceptively easy now to make a drive unbootable. I do not know what establishing the software RAID would write that restoring from backup might overwrite.
| re: speed, is it possible to make the RAID default to the faster drive,
| then update the slower drive in the background? Or maybe it does that
| anyway?
There must be some mechanism for this, because otherwise a main reason for a RAID would be removed.
There is no doubt out there an application that does what I'm looking for, though I thought there was no doubt an application that would ping every x seconds and log the results. If there was one, I didn't find it.
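For what it's worth, the ping-and-log tool is small enough to sketch inline; host, interval, and log path are whatever you like:

```shell
# log_ping HOST: emit one timestamped up/down probe result for HOST.
log_ping() {
    if ping -c 1 -W 2 "$1" >/dev/null 2>&1; then s=up; else s=down; fi
    printf '%s %s %s\n' "$(date -Is)" "$1" "$s"
}

# Every three seconds, forever:
#   while true; do log_ping example.com >> ping.log; sleep 3; done
```

Dropped into cron (minimum granularity one minute) or a systemd timer it needs no loop at all.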
Anno domini 2024 Sat, 28 Sep 21:23:36 +0000 dep via tde-users scripsit:
So, basically, it would be simply to do nothing while the sync is made, yes? Is this a fairly quick function or a long, complicated one?
I've actually had that question about the copy function in, for instance, Konqueror, for decades. If I'm copying a directory that contains different-sized files with the same name, will it pick up more than the filename when asking if I want to overwrite? Would be nice to see a comparison and possibility of rename. (Not in this particular case, but it would be a big help in, say, backing up my 8tb of pictures. I'd like to be able to use autoskip, but not at the cost of losing edits.)
| You're probably right; I've never backed up a running boot partition
| with rsync. But if I were going to depend on it, I'd want to test it a
| time or two first.
What is regularly written in / besides log files?
Depends on what's mounted under / - usually only logfiles that are of less interest when restoring.
| Is it really necessary to backup after every single change? Should you
| ever need to use the backup, updates and other software can always be
| quickly & easily reinstalled. User configuration settings will still be
| in /home, since it's on a separate partition. So maybe a few backups a
| year would be sufficient.
For that matter, I could just boot into the other drive and do the update/upgrade thing. Which would cover a lot but probably not everything. I was hoping to avoid this, but it looks increasingly as if that's what it will have to be.
But you'd need a copy of the EFI boot partition on both drives -- different UUIDs, but the same content, kept in sync.
| > A RAID 1 seemed a good idea, but I believe that this cannot be added
| > to a drive after the fact -- both must be blank to start with. And I
| > think the speed would then be determined, at least to some extent, by
| > the slower drive.
|
| I know very little about RAID, but would it be possible to backup the
| existing drive, make the RAID 1, then restore the backup to it? Or
| would that not work for some reason?
Someone more skilled than I am could probably do it. But I'm not utterly familiar with the new bios-related stuff beyond having learned it is deceptively easy now to make a drive unbootable. I do not know what establishing the software RAID would write that restoring from backup might overwrite.
RAID volumes: the filesystem lives on top, so it's not affected. But RAID only checksums writes, not reads, so when your drive silently zeroes blocks on reading it's no use. ZFS: magic and just works.
| re: speed, is it possible to make the RAID default to the faster drive,
| then update the slower drive in the background? Or maybe it does that
| anyway?
There must be some mechanism for this, because otherwise a main reason for a RAID would be removed.
Speed is not the intention of RAID. Resilvering is done in the background - that's the task where most drives fail, so keep an eye on the log.
Nik
https://www.system-rescue.org/lvm-guide-en/Making-consistent-backups-with-LVM/ does most/all of your needs. Snapshots... way cool.
rsync and dd are as far as I went in my day. I backed up /home and /usr/local, which were on their own partitions.
said Greg Madden via tde-users:
| https://www.system-rescue.org/lvm-guide-en/Making-consistent-backups-with-LVM/
| does most/all of your needs. Snapshots..way cool.
Okay, I'm about half sold. Have spent a while doing some reading on LVM, which provided some information but not much about what I sought to know. (Has anyone else noticed the rubbish the search engines have become? When I search "adding LVM to an existing system," I get pages of description about adding a drive to an existing LVM.)
So: can LVM be added to an existing system, or is it like RAID, which needs to be installed from the get go?
Also, while a nifty backup method is way cool, my goal is having two identical boot drives and the ability to boot from one in the event of the other's failure. That is the beginning, middle, and end of it.
At this point, the most straightforward way seems to be installing the NVMe SSD, installing Linux on it from scratch, telling it that /home lives on the other drive, and once all that's done, copying the system from the existing drive to the SSD. This would handle UUIDs, all the stuff the BIOS wants, and so on. Then, once every week or two, booting into the secondary drive to apply such updates as were applied to the primary one during the intervening time.
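If booting the secondary just to re-run updates proves tedious, a live-system rsync of everything-but-/home is another option. A sketch, with /dev/sda2 as the standby root purely for illustration; note the caveat raised elsewhere in this thread that files changing mid-copy can leave the copy slightly inconsistent:

```shell
# Sketch: mirror the running root filesystem onto the standby drive.
# /dev/sda2 as the standby root is an illustrative assumption.
sync_standby_root() {
    standby_dev="${1:-/dev/sda2}"
    mnt="/mnt/standby-root"
    mkdir -p "$mnt"
    mount "$standby_dev" "$mnt" || return 1
    # Exclude /home (it lives elsewhere) and pseudo-filesystems that
    # only exist at runtime.
    rsync -aAXH --delete \
        --exclude=/home/ --exclude=/proc/ --exclude=/sys/ \
        --exclude=/dev/ --exclude=/run/ --exclude=/tmp/ \
        --exclude=/mnt/ --exclude=/media/ \
        / "$mnt"/
    umount "$mnt"
}
```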
On Sat September 28 2024 18:18:19 dep via tde-users wrote:
So: can LVM be added to an existing system, or is it like RAID, which needs to be installed from the get go?
LVM uses hard drive partitions. They can be of any size. An LVM partition can contain many filesystems. A single filesystem or even a single file can be bigger than a single LVM partition. (I am deliberately deferring LVM terminology until the last paragraph below.) But you can't normally share a single partition between both LVM and regular storage.
If you have a spare partition or can make a spare partition then you can give that partition to LVM and start moving stuff into LVM. Basically the more partitions you have and the more free space you have the easier things will be. I generally divide my hard drives into four to eight partitions for flexibility.
But if your hard drive is nearly full you're going to find it hard to make a spare partition to add LVM to an existing system.
If I need some software RAID-1 I generally make the RAID out of hard drive partitions and then give the RAID to LVM, rather than giving those partitions to LVM and then trying to make a RAID inside LVM. I may for example have the source code I'm working on in a RAID but my steam downloads not in a RAID.
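The RAID-then-LVM ordering described above might look like this; the device names and volume-group name are placeholders, not anything from the thread:

```shell
# Sketch: build a RAID-1 from two partitions, then hand the array to LVM.
# /dev/sda5, /dev/sdb5 and vg_raid are illustrative names.
make_raid_backed_vg() {
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda5 /dev/sdb5
    pvcreate /dev/md0           # format the array as an LVM physical volume
    vgcreate vg_raid /dev/md0   # volume group living on top of the RAID
}
```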
Ideally you'd want your root filesystem inside LVM but I often end up with my root outside LVM for $reasons - I forgot to do it during the original setup, or it would have been a pain to do it while converting a mixed Windows and Linux laptop to all Linux, or else the configuration tool for a cheap VPS didn't want to play ball.
LVM terminology is that a hard drive partition is formatted for LVM making it a "physical volume", one or more physical volumes make a "volume group", and you create filesystems as "logical volumes" in a particular volume group. For example I may make volume groups for RAID or non-RAID storage, or for slow spinning rust and fast SSD storage. On my main backup server I have separate volume groups for each of three physical hard drives so I can control which of my backup logical volumes resides on which physical hard drive.
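The terminology above maps onto commands almost one-to-one. A sketch with illustrative names (partition, group and volume names are assumptions):

```shell
# Sketch: partition -> physical volume -> volume group -> logical volume.
# /dev/sda6, vg_fast and lv_root are illustrative names.
setup_lvm() {
    pvcreate /dev/sda6                  # make the partition a physical volume
    vgcreate vg_fast /dev/sda6          # group one or more PVs
    lvcreate -L 20G -n lv_root vg_fast  # carve out a logical volume
    mkfs.ext4 /dev/vg_fast/lv_root      # filesystem on top of it
}
```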
--Mike
said Mike Bird via tde-users:
| LVM uses hard drive partitions. They can be of any size. An LVM
| partition can contain many filesystems. A single filesystem or even a
| single file can be bigger than a single LVM partition. (I am
| deliberately deferring LVM terminology until the last paragraph below.)
| But you can't normally share a single partition between both LVM and
| regular storage.
[much deleted]
Thanks very much for the description. What I still don't know is what it would do that I would want done. Which suggests to me that I probably don't need it.
Anno domini 2024 Sun, 29 Sep 03:32:01 +0000 dep via tde-users scripsit:
| said Mike Bird via tde-users:
|
| | LVM uses hard drive partitions. They can be of any size. An LVM
| | partition can contain many filesystems. A single filesystem or even a
| | single file can be bigger than a single LVM partition. (I am
| | deliberately deferring LVM terminology until the last paragraph below.)
| | But you can't normally share a single partition between both LVM and
| | regular storage.
|
| [much deleted]
|
| Thanks very much for the description. What I still don't know is what it
| would do that I would want done. Which suggests to me that I probably
| don't need it.
Please don't forget that anything LVM (and RAID) provides works at the block-device level, below the filesystem --> LVM snapshots will contain an inconsistent filesystem, just like after a power loss.
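One common mitigation, offered here as a sketch rather than anything from the thread: freeze the filesystem for the instant the snapshot is created, so the snapshot is clean rather than powerloss-like. Assumes an LVM volume vg0/lv_data mounted at /data (illustrative names; freezing the root filesystem itself is riskier):

```shell
# Sketch: take a clean LVM snapshot by freezing the filesystem first.
# vg0/lv_data and /data are illustrative assumptions.
snapshot_data() {
    fsfreeze -f /data                           # flush and block writes
    lvcreate -s -L 5G -n data_snap vg0/lv_data  # snapshot while frozen
    fsfreeze -u /data                           # resume writes
}
```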
Nik
said Dan Youngquist via tde-users:
| The problem with using rsync to back up a mounted boot partition is,
| it's not instant. So while rsync is backing up one part, some other
| part is probably being changed. So if you're lucky, you might end up
| with a functional bootable backup, but you'll have some unknown bit(s)
| of corruption that may or may not be fatal.
Maybe it would help if I described the goal I'm trying to achieve.
It is to run an SSD boot, while keeping a conventional hard drive that is identical to the SSD and on the GRUB menu to use if the SSD fails. Not unlike a RAID 1, but with a couple of differences: the HD copy would not be running all the time, and would not automatically switch over in case of reboot.
The purpose, besides the obvious, is to keep the second drive updated as to security and other updates and any additional software I might install. If there were a way to do the usual update-upgrade to a non-booted drive, and to install applications to the second drive, that would be fine.
I'd just as soon not have to boot from a USB drive every time I wanted to sync the drives.
And if it is not perpetual and automatic, something I could do manually, say once a week, preferably a little more elegant than booting to the second drive and doing the update/upgrade there, plus adding anything new I've installed.
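For the "update-upgrade a non-booted drive" wish, mounting the standby root and chrooting into it is one possibility. A sketch, assuming a Debian-style system with apt and /dev/sda2 as the standby root (both are assumptions, not details from the thread):

```shell
# Sketch: apply package updates to the non-booted standby root via chroot.
# /dev/sda2 is an illustrative device name; assumes apt-based system.
update_standby() {
    dev="${1:-/dev/sda2}"
    mnt="/mnt/standby"
    mkdir -p "$mnt"
    mount "$dev" "$mnt" || return 1
    # Bind-mount the pseudo-filesystems the package tools expect.
    for fs in proc sys dev; do mount --bind "/$fs" "$mnt/$fs"; done
    chroot "$mnt" sh -c 'apt-get update && apt-get -y upgrade'
    for fs in dev sys proc; do umount "$mnt/$fs"; done
    umount "$mnt"
}
```

This updates packages but won't capture manual configuration changes made on the primary, so it covers the "security and other updates" part rather than full drive identity.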
A RAID 1 seemed a good idea, but I believe that this cannot be added to a drive after the fact -- both must be blank to start with. And I think the speed would then be determined, at least to some extent, by the slower drive.
| If you want to get a good bootable backup, you'd have to boot from
| something else before backing up. Then, if you want it to be bootable
| should you ever have to restore and use it, fsarchiver would be a better
| bet than rsync. An image backup with dd would also work, but the boot
| partition would have to be smaller or equal to the backup location.
But dd would also result in a corrupt volume for the same reason rsync would, no? Or at least suffer from the same shortcoming: that neither drive should be mounted at the time.
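For reference, the clone-then-retag procedure given earlier in the thread (dd, then e2fsck and tune2fs -U random) could be scripted roughly like this, run from live media with both partitions unmounted. The device names are examples only:

```shell
# Sketch: clone a partition with dd, then give the copy a new UUID and
# label so it cannot clash with the original. /dev/sda2 and
# /dev/nvme0n1p2 are illustrative; both must be unmounted (e.g. boot
# from live media first).
clone_partition() {
    src="${1:-/dev/sda2}"
    dst="${2:-/dev/nvme0n1p2}"
    dd if="$src" of="$dst" bs=4M conv=fsync status=progress
    e2fsck -f "$dst"                   # check the clone before retagging
    tune2fs -U random -L backup "$dst" # new random UUID, new label
}
```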
said dep via tde-users:
| said Dan Youngquist via tde-users:
| | The problem with using rsync to back up a mounted boot partition is,
| | it's not instant. So while rsync is backing up one part, some other
| | part is probably being changed. So if you're lucky, you might end up
| | with a functional bootable backup, but you'll have some unknown bit(s)
| | of corruption that may or may not be fatal.
|
| Maybe it would help if I described the goal I'm trying to achieve.
|
| It is to run an SSD boot, while keeping a conventional hard drive that
| is identical to the SSD and on the GRUB menu to use if the SSD fails.
| Not unlike a RAID 1, but with a couple of differences: the HD copy would
| not be running all the time, and would not automatically switch over in
| case of reboot.
Oops. Meant SSD failure, not reboot.
| The purpose, besides the obvious, is to keep the second drive updated as
| to security and other updates and any additional software I might
| install. If there were a way to do the usual update-upgrade to a
| non-booted drive, and to install applications to the second drive, that
| would be fine.
|
| I'd just as soon not have to boot from a USB drive every time I wanted
| to sync the drives.
|
| And if it is not perpetual and automatic, something I could do manually,
| say once a week, preferably a little more elegant than booting to the
| second drive and doing the update/upgrade there, plus adding anything
| new I've installed.
|
| A RAID 1 seemed a good idea, but I believe that this cannot be added to
| a drive after the fact -- both must be blank to start with. And I think
| the speed would then be determined, at least to some extent, by the
| slower drive.
|
| | If you want to get a good bootable backup, you'd have to boot from
| | something else before backing up. Then, if you want it to be bootable
| | should you ever have to restore and use it, fsarchiver would be a
| | better bet than rsync. An image backup with dd would also work, but
| | the boot partition would have to be smaller or equal to the backup
| | location.
|
| But dd would also result in a corrupt volume for the same reason rsync
| would, no? Or at least suffer from the shortcoming that neither drive
| should be mounted at the time.